Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel
2018-03-01
Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now increasingly advocated by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria, and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study, of a large-scale multisite general practice organisation, showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
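The "simpler models" contrasted with GCMs above include zero-dimensional energy-balance models. A minimal sketch (our illustration, not from the report; the solar constant, albedo, and effective emissivity values are conventional but illustrative) of such a model:

```python
# Zero-dimensional energy-balance model: find the surface temperature at
# which absorbed solar radiation balances emitted infrared radiation.
# Parameter values are illustrative, not taken from the report above.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_T(emissivity=0.61):
    """Temperature balancing S0*(1-albedo)/4 against eps*sigma*T**4."""
    return (S0 * (1 - ALBEDO) / (4 * SIGMA * emissivity)) ** 0.25

print(round(equilibrium_T(), 1))  # roughly Earth-like surface temperature, K
```

With an effective emissivity near 0.61 (a crude stand-in for the greenhouse effect), the balance lands near the observed global-mean surface temperature, which is why such models remain useful baselines despite having no explicit heat transport at all.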
Caldwell, Robert R
2011-12-28
The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data come very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.
Transition from large-scale to small-scale dynamo.
Ponty, Y; Plunian, F
2011-04-15
The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm^(-1)), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo action occurs even if the large-scale field is artificially suppressed.
Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md.; Fletcher, Christine
2016-01-01
General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months previously, have rarely been observed across large regional scales due to a lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings, and other results consistent with previous research, we clarified complicated environmental correlates of the GF phenomenon. PMID:27561887
On large-scale dynamo action at high magnetic Reynolds number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk
2014-07-01
We consider the generation of magnetic activity (dynamo waves) in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle': large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.
Polymer Physics of the Large-Scale Structure of Chromatin.
Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2016-01-01
We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.
Christopher P. Bloch; Michael R. Willi
2006-01-01
Large-scale natural disturbances, such as hurricanes, can have profound effects on animal populations. Nonetheless, generalizations about the effects of disturbance are elusive, and few studies consider long-term responses of a single population or community to multiple large-scale disturbance events. In the last 20 y, two major hurricanes (Hugo and Georges) have struck...
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power relative to ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and at smaller redshifts due to different GR effects. But on smaller scales and at larger redshifts, the difference is small and is mainly due to the difference in background expansion.
NASA Technical Reports Server (NTRS)
Boyle, A. R.; Dangermond, J.; Marble, D.; Simonett, D. S.; Tomlinson, R. F.
1983-01-01
Problems and directions for large scale geographic information system development were reviewed and the general problems associated with automated geographic information systems and spatial data handling were addressed.
a Model Study of Small-Scale World Map Generalization
NASA Astrophysics Data System (ADS)
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand all over the world for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved model (with the map and data separated) for map generalization, which separates geographic data from mapping data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, map symbols and the corresponding physical features in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic maps and thematic maps, we take world map generalization at small scales as an example. After the map generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization at various scales.
Confirmation of general relativity on large scales from weak lensing and galaxy velocities.
Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E; Lombriser, Lucas; Smith, Robert E
2010-03-11
Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E(G), that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E(G) different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E(G) = 0.39 +/- 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E(G) approximately 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
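As a back-of-envelope check (ours, not the paper's measurement pipeline) that the quoted GR prediction E(G) ≈ 0.4 is plausible: in linear theory the GR expectation is commonly approximated as E_G ≈ Ω_m,0 / f(z), with growth rate f(z) ≈ Ω_m(z)^0.55. The values Ω_m,0 = 0.25 and effective redshift z ≈ 0.32 below are assumptions for illustration:

```python
# Illustrative check of the GR prediction E_G ~ 0.4 from the linear-theory
# approximation E_G ≈ Omega_m,0 / f(z), f(z) ≈ Omega_m(z)**0.55.
# Omega_m,0 = 0.25 and z = 0.32 are assumed, illustrative values.

def omega_m(z, om0=0.25):
    """Matter density parameter at redshift z for a flat LCDM background."""
    a = 1.0 / (1.0 + z)
    return om0 / (om0 + (1.0 - om0) * a**3)

def growth_rate(z, om0=0.25, gamma=0.55):
    """Linear growth rate f(z) in the standard gamma approximation."""
    return omega_m(z, om0) ** gamma

def e_g(z, om0=0.25):
    return om0 / growth_rate(z, om0)

print(round(e_g(0.32), 2))
```

The result sits near 0.4, consistent with the measured E(G) = 0.39 +/- 0.06 quoted above.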
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. 
If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
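The non-commutation described above is easy to demonstrate numerically. The following sketch (our toy example, not from the report) compares d/dx of a filtered field against the filter applied to du/dx, for a discrete Gaussian filter with uniform versus position-dependent width:

```python
import numpy as np

# Toy demonstration: a filter with position-dependent width delta(x) does
# not commute with d/dx, while a uniform-width filter (away from the
# domain boundaries) does. All fields and widths here are illustrative.

N = 2000
x = np.linspace(0.0, 2.0 * np.pi, N)
dx = x[1] - x[0]
u = np.sin(3 * x) + 0.5 * np.cos(7 * x)   # smooth test field

def gaussian_filter(f, x, sigma):
    """Discrete Gaussian filter with position-dependent width sigma(x)."""
    out = np.empty_like(f)
    for i in range(x.size):
        w = np.exp(-0.5 * ((x - x[i]) / sigma[i]) ** 2)
        out[i] = np.sum(w * f) / np.sum(w)
    return out

dudx = np.gradient(u, dx)
interior = (x > 1.0) & (x < 2.0 * np.pi - 1.0)  # avoid boundary truncation

def commutation_error(sigma):
    a = np.gradient(gaussian_filter(u, x, sigma), dx)  # d/dx of filtered u
    b = gaussian_filter(dudx, x, sigma)                # filter of du/dx
    return np.max(np.abs(a - b)[interior])

err_uniform = commutation_error(np.full_like(x, 0.1))
err_nonuniform = commutation_error(0.1 + 0.05 * np.sin(x))
print(err_uniform, err_nonuniform)
```

With a uniform width the two operations agree to numerical precision in the interior; with the varying width, an O(dδ/dx) commutation error appears, which is exactly the extra term that complicates the large eddy equations in complex geometry.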
ESRI applications of GIS technology: Mineral resource development
NASA Technical Reports Server (NTRS)
Derrenbacher, W.
1981-01-01
The application of geographic information systems technology to large scale regional assessment related to mineral resource development, identifying candidate sites for related industry, and evaluating sites for waste disposal is discussed. Efforts to develop data bases were conducted at scales ranging from 1:3,000,000 to 1:25,000. In several instances, broad screening was conducted for large areas at a very general scale with more detailed studies subsequently undertaken in promising areas windowed out of the generalized data base. Increasingly, the systems which are developed are structured as the spatial framework for the long-term collection, storage, referencing, and retrieval of vast amounts of data about large regions. Typically, the reconnaissance data base for a large region is structured at 1:250,000 scale, data bases for smaller areas being structured at 1:25,000, 1:50,000 or 1:63,360. An integrated data base for the coterminous US was implemented at a scale of 1:3,000,000 for two separate efforts.
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is proven that great improvements in focus depth and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Seogjoo; Hoyer, Stephan; Fleming, Graham
2014-10-31
A generalized master equation (GME) governing quantum evolution of modular exciton density (MED) is derived for large scale light harvesting systems composed of weakly interacting modules of multiple chromophores. The GME-MED offers a practical framework to incorporate real time coherent quantum dynamics calculations of small length scales into dynamics over large length scales, and also provides a non-Markovian generalization and rigorous derivation of the Pauli master equation employing multichromophoric Förster resonance energy transfer rates. A test of the GME-MED for four sites of the Fenna-Matthews-Olson complex demonstrates how coherent dynamics of excitonic populations over coupled chromophores can be accurately described by transitions between subgroups (modules) of delocalized excitons. Application of the GME-MED to the exciton dynamics between a pair of light harvesting complexes in purple bacteria demonstrates its promise as a computationally efficient tool to investigate large scale exciton dynamics in complex environments.
48 CFR 852.236-71 - Specifications and drawings for construction.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...
Generalization of Turbulent Pair Dispersion to Large Initial Separations
NASA Astrophysics Data System (ADS)
Shnapp, Ron; Liberzon, Alex; International Collaboration for Turbulence Research
2018-06-01
We present a generalization of turbulent pair dispersion to large initial separations (η
ERIC Educational Resources Information Center
Harris, Alma; Jones, Michelle
2017-01-01
The challenges of securing educational change and transformation, at scale, remain considerable. While sustained progress has been made in some education systems (Fullan, 2009; Hargreaves & Shirley, 2009) generally, it remains the case that the pathway to large-scale, system improvement is far from easy or straightforward. While large-scale…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Raccanelli, Alvise
We extend previous analyses of wide-angle correlations in the galaxy power spectrum in redshift space to include all general relativistic effects. These general relativistic corrections to the standard approach become important on large scales and at high redshifts, and they lead to new terms in the wide-angle correlations. We show that in principle the new terms can produce corrections of nearly 10% on Gpc scales over the usual Newtonian approximation. General relativistic corrections will be important for future large-volume surveys such as SKA and Euclid, although the problem of cosmic variance will present a challenge in observing this.
CHARACTERIZATION OF SMALL ESTUARIES AS A COMPONENT OF A REGIONAL-SCALE MONITORING PROGRAM
Large-scale environmental monitoring programs, such as EPA's Environmental Monitoring and Assessment Program (EMAP), by nature focus on estimating the ecological condition of large geographic areas. Generally missing is the ability to provide estimates of condition of individual ...
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.
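The fixed-effects ("within") estimator mentioned in the study can be sketched on synthetic data (our illustration; the data-generating process and all numbers are invented, not the study's dataset): demean the outcome and regressor within each country of an unbalanced panel, then estimate the slope by pooled OLS on the demeaned data.

```python
import numpy as np

# Sketch of a fixed-effects (within) estimator on a synthetic unbalanced
# panel. Country effects are correlated with the regressor, so pooled OLS
# would be biased; within-demeaning removes the country effects.
# All data here are synthetic and illustrative.

rng = np.random.default_rng(0)
countries = np.repeat(np.arange(30), rng.integers(2, 6, size=30))  # unbalanced
alpha = rng.normal(size=30)[countries]              # country fixed effects
x = rng.normal(size=countries.size) + 0.5 * alpha   # regressor, correlated with FE
y = 2.0 * x + alpha + 0.1 * rng.normal(size=countries.size)  # true slope = 2.0

def within_ols(y, x, groups):
    """Demean y and x within each group, then estimate the OLS slope."""
    yd, xd = y.copy(), x.copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= yd[m].mean()
        xd[m] -= xd[m].mean()
    return np.sum(xd * yd) / np.sum(xd * xd)

print(within_ols(y, x, countries))  # recovers a slope close to 2.0
```

The same demeaning logic generalizes to multiple regressors; random-effects and instrumental-variables estimators, as used in the study, require additional structure not shown here.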
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Large-scale microwave anisotropy from gravitating seeds
NASA Technical Reports Server (NTRS)
Veeraraghavan, Shoba; Stebbins, Albert
1992-01-01
Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.
Phase-relationships between scales in the perturbed turbulent boundary layer
NASA Astrophysics Data System (ADS)
Jacobi, I.; McKeon, B. J.
2017-12-01
The phase-relationship between large-scale motions and small-scale fluctuations in a non-equilibrium turbulent boundary layer was investigated. A zero-pressure-gradient flat plate turbulent boundary layer was perturbed by a short array of two-dimensional roughness elements, both statically, and under dynamic actuation. Within the compound, dynamic perturbation, the forcing generated a synthetic very-large-scale motion (VLSM) within the flow. The flow was decomposed by phase-locking the flow measurements to the roughness forcing, and the phase-relationship between the synthetic VLSM and remaining fluctuating scales was explored by correlation techniques. The general relationship between large- and small-scale motions in the perturbed flow, without phase-locking, was also examined. The synthetic large scale cohered with smaller scales in the flow via a phase-relationship that is similar to that of natural large scales in an unperturbed flow, but with a much stronger organizing effect. Cospectral techniques were employed to describe the physical implications of the perturbation on the relative orientation of large- and small-scale structures in the flow. The correlation and cospectral techniques provide tools for designing more efficient control strategies that can indirectly control small-scale motions via the large scales.
Thomson, William Murray; Malden, Penelope Elizabeth
2011-09-01
To examine the properties, validity and responsiveness of the Family Impact Scale in a consecutive clinical sample of patients undergoing dental treatment under general anaesthesia. A consecutive clinical sample of parents/caregivers of children receiving dental treatment under general anaesthesia provided data using the Family Impact Scale (FIS) component of the COHQOL(©) Questionnaire. The first questionnaire was completed before treatment, the follow-up questionnaire 1-4 weeks afterward. Treatment-associated changes in the FIS and its components were determined by comparing baseline and follow-up data. Baseline and follow-up data were obtained for 202 and 130 participants, respectively (64.4% follow-up). All FIS items showed large relative decreases in prevalence, the greatest seen in those relating to having sleep disrupted, blaming others, being upset, the child requiring more attention, financial difficulties and having to take time off work. Factor analysis largely confirmed the underlying factor structure, with three sub-scales (parental/family, parental emotions and family conflict) identified. The parental/family and parental emotions sub-scales showed the greatest treatment-associated improvement, with large effect sizes. There was a moderate improvement in scores on the family conflict sub-scale. The overall FIS showed a large improvement. Treating children with severe caries under general anaesthesia results in OHRQoL improvements for the family. Severe dental caries is not merely a restorative and preventive challenge for those who treat children; it has far-reaching effects on those who share the household and care for the affected child.
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore whether and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211
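The fixed-effects (within) estimator used in panel analyses like this one can be sketched on a synthetic unbalanced panel (all data simulated; the true slope is 0.3 and the country effects are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic panel: 500 observations over 20 countries, with country-level
# fixed effects correlated with the regressor; true slope is 0.3.
g = rng.integers(0, 20, size=500)
fe = rng.normal(scale=2.0, size=20)
x = rng.normal(size=500) + fe[g]
y = 0.3 * x + fe[g] + rng.normal(scale=0.5, size=500)

def demean(v, groups):
    """Subtract each group's mean (the 'within' transformation)."""
    sums = np.bincount(groups, weights=v)
    counts = np.bincount(groups)
    return v - (sums / counts)[groups]

xd, yd = demean(x, g), demean(y, g)
slope = (xd @ yd) / (xd @ xd)       # fixed-effects (within) estimate
```

Pooled OLS on these data would be biased upward because the country effects enter both x and y; demeaning within countries removes them.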
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to the absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^{-1/2} k^{-1} scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ~ H^{-1/2} k^{-1/2}. This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^{-1/2} k^{-1} scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
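The correlation times compared above can be estimated from the autocorrelation function of a mode amplitude. The sketch below does this for an Ornstein-Uhlenbeck surrogate with known correlation time (an illustrative stand-in, not the paper's flow data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck surrogate for one mode amplitude: its correlation
# time is tau by construction, via the exact discrete update.
tau, dt, n = 5.0, 0.01, 500_000
a = np.exp(-dt / tau)
s = np.sqrt(1 - a * a)              # keeps unit stationary variance
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.normal()

# Autocorrelation via FFT (zero-padded), then the integral correlation
# time: tau_est = (1/C(0)) * integral of C(t) dt up to the first zero crossing.
xf = np.fft.rfft(x - x.mean(), 2 * n)
acf = np.fft.irfft(xf * np.conj(xf))[:n].real
acf /= acf[0]
cut = np.argmax(acf < 0) or n
tau_est = np.sum(acf[:cut]) * dt
```

Applied per wavenumber shell, the same estimator yields the τ(k) curves whose scaling with E, H, and k is the subject of the abstract.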
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Mechanisation of large-scale agricultural fields in developing countries - a review.
Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila
2016-09-01
Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well-developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high production costs, among other problems. Therefore, this paper attempts to evaluate the application of present-day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present-day technology, its management and application to large-scale agricultural fields. This review also emphasises a future outlook that would enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra, and that the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
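An odd-even (parity) asymmetry statistic of the kind referred to above can be sketched as a ratio of odd- to even-multipole band power (a common convention, assumed here; the paper's exact estimator may differ):

```python
import numpy as np

def parity_asymmetry(cl, lmax):
    """Odd/even multipole band-power ratio up to lmax, using the
    conventional flat weighting D_l = l(l+1) C_l / (2 pi)."""
    ls = np.arange(2, lmax + 1)
    d = ls * (ls + 1) * cl[2:lmax + 1] / (2 * np.pi)
    odd = d[ls % 2 == 1].sum()
    even = d[ls % 2 == 0].sum()
    return odd / even

# Sanity check on a parity-symmetric spectrum C_l = 2 pi / (l(l+1)):
# every band power D_l equals one, so the ratio just counts multipoles
# (14 odd vs 15 even for lmax = 30).
lmax = 30
cl = np.zeros(lmax + 1)
ls = np.arange(2, lmax + 1)
cl[2:] = 2 * np.pi / (ls * (ls + 1))
g = parity_asymmetry(cl, lmax)
```

Values of the ratio well below one on a measured spectrum indicate the claimed even-multipole excess; quantifying its significance requires simulations that account for the look-elsewhere effect.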
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
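Web-enabling a robot's sensors, as described above, reduces to publishing observations over HTTP. A minimal sketch using only the standard library (the endpoint name and JSON shape are illustrative, not Sensorpedia's API):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Latest observation from a (simulated) surveillance robot; field names
# are illustrative placeholders.
STATE = {"sonar_m": 1.25, "heading_deg": 90.0}

class RobotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any GET returns the current observation as JSON.
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

# Serve on an ephemeral port and query it like any web client would.
server = HTTPServer(("127.0.0.1", 0), RobotHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/observations"
obs = json.loads(urllib.request.urlopen(url, timeout=5).read())
server.shutdown()
```

Once observations are plain HTTP+JSON, any web application can consume them without robotics-specific middleware, which is the integration point the abstract argues for.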
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
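The "simple algebraic criterion" mentioned above is typically a test on an aggregate comparison matrix built from subsystem decay rates and interconnection bounds. A minimal sketch, assuming a Hurwitz (M-matrix) test on that aggregate matrix rather than the paper's exact criterion:

```python
import numpy as np

def composite_stable(decay, gains):
    """Algebraic test in the spirit of vector Lyapunov comparison systems:
    form W with W[i,i] = -decay[i] (subsystem self-decay rates) and
    W[i,j] = gains[i][j] >= 0 (interconnection bounds), and require W to
    be Hurwitz, i.e. -W an M-matrix. A sketch, not the paper's criterion."""
    W = np.array(gains, dtype=float)
    np.fill_diagonal(W, -np.asarray(decay, dtype=float))
    return bool(np.all(np.linalg.eigvals(W).real < 0))

# Two stable subsystems with weak coupling pass the test ...
ok = composite_stable([2.0, 3.0], [[0, 0.5], [0.4, 0]])
# ... but strong coupling destroys the guarantee.
bad = composite_stable([1.0, 1.0], [[0, 2.0], [2.0, 0]])
```

Replacing entries of the gain matrix with zeros models structural perturbations (severed interconnections), which is how connective stability can be checked with the same machinery.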
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resulting closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul
2010-10-15
We extend the general relativistic description of galaxy clustering developed in Yoo, Fitzpatrick, and Zaldarriaga (2009). For the first time we provide a fully general relativistic description of the observed matter power spectrum and the observed galaxy power spectrum with the linear bias ansatz. It is significantly different from the standard Newtonian description on large scales, and in particular its measurements on large scales can be misinterpreted as a detection of primordial non-Gaussianity even in the absence thereof. The key difference in the observed galaxy power spectrum arises from the real-space matter fluctuation defined as the matter fluctuation at the hypersurface of the observed redshift. As opposed to the standard description, the shape of the observed galaxy power spectrum evolves in redshift, providing additional cosmological information. While the systematic errors in the standard Newtonian description are negligible in the current galaxy surveys at low redshift, a correct general relativistic description is essential for understanding galaxy power spectrum measurements on large scales in future surveys with redshift depth z ≥ 3. We discuss ways to improve the detection significance in the current galaxy surveys and comment on applications of our general relativistic formalism in future surveys.
Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing
2017-07-01
At present, causal relationships between the bioavailability of heavy metals and environmental factors are generally obtained from field experiments at local scales, and lack sufficient evidence at large scales. However, inferring causation between the bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessment across large-scale regions can, at the expense of actual causation, result in spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors in field experiments is consistent with that at the large scale. IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management. Copyright © 2017 Elsevier Ltd. All rights reserved.
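Backdoor adjustment, the ingredient the IDA framework is combined with above, can be illustrated on synthetic data where a hypothetical confounder (a "soil pH" stand-in) drives both exposure and outcome:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical confounder drives both the exposure (extractable metal, x)
# and the outcome (plant uptake, y); the true causal effect is 0.5.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.5 * x + 1.2 * z + rng.normal(size=n)

# Naive regression of y on x is confounded upward ...
naive = np.polyfit(x, y, 1)[0]

# ... while adjusting for the backdoor set {z} recovers the causal effect.
X = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]
```

This is the linear special case; IDA generalizes it by estimating a DAG from data first and then enumerating the adjustment-based effects that are consistent with it.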
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. 
Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
Stability of Rasch Scales over Time
ERIC Educational Resources Information Center
Taylor, Catherine S.; Lee, Yoonsun
2010-01-01
Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…
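The dichotomous Rasch model underlying such scales can be sketched directly (the standard formulation, with person ability theta and item difficulty b on the same logit scale):

```python
import math

def rasch_p(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b)) in the
    dichotomous Rasch model (theta: person ability, b: item difficulty)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Ability equal to difficulty gives P = 0.5, and P rises with ability;
# the difference theta - b is what makes the scale sample-invariant.
p_equal = rasch_p(0.0, 0.0)
p_above = rasch_p(1.0, 0.0)
```

Scale stability over time amounts to checking that the estimated b values (and their polytomous generalizations) stay invariant across administrations.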
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng
2013-01-01
Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component large-scale three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess a large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large exposed surface area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and of 3D mesoporous networks loaded with noble metal nanoparticles. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks enable high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system.
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Hubbard, Alexander
2014-08-01
Blackman and Brandenburg argued that magnetic helicity conservation in dynamo theory can in principle be captured by diagrams of mean field dynamos when the magnetic fields are represented by ribbons or tubes, but not by lines. Here, we present such a schematic ribbon diagram for the α2 dynamo that tracks magnetic helicity and provides distinct scales of large-scale magnetic helicity, small-scale magnetic helicity, and kinetic helicity involved in the process. This also motivates our construction of a new '2.5 scale' minimalist generalization of the helicity-evolving equations for the α2 dynamo that separately allows for these three distinct length-scales while keeping only two dynamical equations. We solve these equations and, as in previous studies, find that the large-scale field first grows at a rate independent of the magnetic Reynolds number RM before quenching to an RM-dependent regime. But we also show that the larger the ratio of the wavenumber where the small-scale current helicity resides to that of the forcing scale, the earlier the non-linear dynamo quenching occurs, and the weaker the large-scale field is at the turnoff from linear growth. The harmony between the theory and the schematic diagram exemplifies a general lesson that magnetic fields in magnetohydrodynamics are better visualized as two-dimensional ribbons (or pairs of lines) rather than single lines.
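A toy helicity-constrained two-scale α2 model can reproduce the qualitative behaviour described above: RM-independent early growth followed by RM-dependent quenching. The equations and coefficients below are a hypothetical caricature, not the paper's 2.5-scale system:

```python
import numpy as np

def run(rm, alpha_k=1.0, k1=1.0, kf=5.0, dt=1e-3, tmax=200.0):
    """Toy two-scale helicity-constrained alpha^2 dynamo (hypothetical
    coefficients): large-scale helicity h1 grows at a rate set by an
    alpha quenched by the oppositely signed small-scale helicity h2,
    which builds up to conserve total magnetic helicity."""
    h1, h2 = 1e-8, 0.0
    hist = []
    for _ in range(int(tmax / dt)):
        alpha = alpha_k + kf * h2                   # current-helicity correction
        growth = 2 * (alpha * k1 - k1**2 / rm) * h1
        h1 += dt * growth
        h2 += dt * (-growth - 2 * kf**2 * h2 / rm)  # conservation + resistive loss
        hist.append(h1)
    return np.array(hist)

lo, hi = run(rm=100.0), run(rm=1000.0)
# Early (kinematic) growth is nearly rm-independent; the late, quenched
# phase is resistively limited, so the lower-rm run ends up larger here.
```

The quenching onset in this caricature is controlled by when |h2| grows to order alpha_k/kf, consistent with the abstract's point that a larger kf-to-forcing-scale ratio brings quenching earlier.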
ERIC Educational Resources Information Center
Tindal, Gerald; Lee, Daesik; Geller, Leanne Ketterlin
2008-01-01
In this paper we review different methods for teachers to recommend accommodations in large scale tests. Then we present data on the stability of their judgments on variables relevant to this decision-making process. The outcomes from the judgments support the need for a more explicit model. Four general categories are presented: student…
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
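The pipeline described above (feature representation, then indexing, then search) can be sketched with random surrogate features and exact cosine-similarity search; real systems substitute learned descriptors and an approximate index:

```python
import numpy as np

rng = np.random.default_rng(3)

# Feature representation: one L2-normalized vector per image (random
# surrogates here stand in for learned descriptors).
n, d = 10_000, 128
feats = rng.normal(size=(n, d))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

def search(query, k=5):
    """Exact top-k retrieval by cosine similarity; large-scale systems
    swap this brute-force scan for hashing or inverted-file indexes."""
    q = query / np.linalg.norm(query)
    sims = feats @ q
    top = np.argpartition(-sims, k)[:k]
    return top[np.argsort(-sims[top])]

# Querying with a stored feature must return that item first.
hits = search(feats[42])
```

Evaluation protocols for such systems then measure precision/recall of the returned ranked list against clinically annotated relevance labels.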
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans
NASA Astrophysics Data System (ADS)
Grooms, I.; Julien, K. A.; Fox-Kemper, B.
2011-12-01
Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.
Top Five Large-Scale Solar Myths
Discussions of large-scale photovoltaic (PV) facilities, or solar farms, tend to include a myriad of misperceptions. Although some solar technologies do use mirrors, which can cause glare, most solar farms use PV modules to convert solar energy into electricity, and PV modules are generally less reflective than…
Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”
Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; ...
2015-04-17
Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.
Landscape-Scale Research In The Ouachita Mountains Of West-Central Arkansas: General Study Design
James M. Guldin
2004-01-01
Abstract A landscape-scale study on forest ecology and management began in 1995 in the eastern Ouachita Mountains. Of four large watersheds, three were within the Winona Ranger District of the Ouachita National Forest, and a major forest industry landowner largely owned and managed the fourth. These watersheds vary from 3,700 to 9,800 acres. At this...
The Modified Abbreviated Math Anxiety Scale: A Valid and Reliable Instrument for Use with Children.
Carey, Emma; Hill, Francesca; Devine, Amy; Szűcs, Dénes
2017-01-01
Mathematics anxiety (MA) can be observed in children from primary school age into the teenage years and adulthood, but many MA rating scales are only suitable for use with adults or older adolescents. We have adapted one such rating scale, the Abbreviated Math Anxiety Scale (AMAS), to be used with British children aged 8-13. In this study, we assess the scale's reliability, factor structure, and divergent validity. The modified AMAS (mAMAS) was administered to a very large (n = 1746) cohort of British children and adolescents. This large sample size meant that as well as conducting confirmatory factor analysis on the scale itself, we were also able to split the sample to conduct exploratory and confirmatory factor analysis of items from the mAMAS alongside items from child test anxiety and general anxiety rating scales. Factor analysis of the mAMAS confirmed that it has the same underlying factor structure as the original AMAS, with subscales measuring anxiety about Learning and Evaluation in math. Furthermore, both exploratory and confirmatory factor analysis of the mAMAS alongside scales measuring test anxiety and general anxiety showed that mAMAS items cluster onto one factor (perceived to represent MA). The mAMAS provides a valid and reliable scale for measuring MA in children and adolescents, from a younger age than is possible with the original AMAS. Results from this study also suggest that MA is truly a unique construct, separate from both test anxiety and general anxiety, even in childhood.
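Scale reliability of the kind assessed here is conventionally summarized with Cronbach's alpha. A minimal sketch follows; the item scores are fabricated for illustration and are not data from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly parallel items (each respondent answers every item identically)
# give the theoretical maximum alpha of 1.0.
scores = np.tile(np.array([[1.0], [2.0], [3.0], [4.0]]), (1, 5))
alpha = cronbach_alpha(scores)  # -> 1.0
```

Alpha near 1 for perfectly correlated items and lower values as items decorrelate is the behavior reliability analyses rely on.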
New Probe of Departures from General Relativity Using Minkowski Functionals.
Fang, Wenjuan; Li, Baojiu; Zhao, Gong-Bo
2017-05-05
The morphological properties of the large scale structure of the Universe can be fully described by four Minkowski functionals (MFs), which provide important complementary information to other statistical observables such as the widely used 2-point statistics in configuration and Fourier spaces. In this work, for the first time, we present the differences in the morphology of the large scale structure caused by modifications to general relativity (to address the cosmic acceleration problem), by measuring the MFs from N-body simulations of modified gravity and general relativity. We find strong statistical power when using the MFs to constrain modified theories of gravity: with a galaxy survey that has survey volume ∼0.125(h^{-1} Gpc)^{3} and galaxy number density ∼1/(h^{-1} Mpc)^{3}, the two normal-branch Dvali-Gabadadze-Porrati models and the F5 f(R) model that we simulated can be discriminated from the ΛCDM model at a significance level ≳5σ with an individual MF measurement. Therefore, the MF of the large scale structure is potentially a powerful probe of gravity, and its application to real data deserves active exploration.
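For reference, the four MFs of a three-dimensional excursion set $\Sigma$ (the region above a density threshold) are, up to normalization conventions that vary between papers, the standard geometric quantities

```latex
V_0 \propto \int_{\Sigma} \mathrm{d}V, \qquad
V_1 \propto \int_{\partial\Sigma} \mathrm{d}A, \qquad
V_2 \propto \int_{\partial\Sigma} (\kappa_1 + \kappa_2)\, \mathrm{d}A, \qquad
V_3 \propto \int_{\partial\Sigma} \kappa_1 \kappa_2\, \mathrm{d}A,
```

where $\kappa_1$ and $\kappa_2$ are the principal curvatures of the boundary $\partial\Sigma$: $V_0$ is the volume, $V_1$ the surface area, $V_2$ the integrated mean curvature, and $V_3$ is proportional to the Euler characteristic via the Gauss-Bonnet theorem. These are textbook definitions quoted for context, not the specific estimator used in the paper.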
On the large scale structure of X-ray background sources
NASA Technical Reports Server (NTRS)
Bi, H. G.; Meszaros, A.; Meszaros, P.
1991-01-01
The large scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large scale structures at redshifts between 0.5 and 4.0.
Triangles bridge the scales: Quantifying cellular contributions to tissue deformation
NASA Astrophysics Data System (ADS)
Merkel, Matthias; Etournay, Raphaël; Popović, Marko; Salbreux, Guillaume; Eaton, Suzanne; Jülicher, Frank
2017-03-01
In this article, we propose a general framework to study the dynamics and topology of cellular networks that capture the geometry of cell packings in two-dimensional tissues. Such epithelia undergo large-scale deformation during morphogenesis of a multicellular organism. Large-scale deformations emerge from many individual cellular events such as cell shape changes, cell rearrangements, cell divisions, and cell extrusions. Using a triangle-based representation of cellular network geometry, we obtain an exact decomposition of large-scale material deformation. Interestingly, our approach reveals contributions of correlations between cellular rotations and elongation as well as cellular growth and elongation to tissue deformation. Using this triangle method, we discuss tissue remodeling in the developing pupal wing of the fly Drosophila melanogaster.
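The core of a triangle-based decomposition is the affine map carrying each reference triangle onto its deformed image; its determinant gives the local area change and its anisotropic part the shear. A minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def triangle_deformation(ref: np.ndarray, cur: np.ndarray) -> np.ndarray:
    """Deformation gradient F mapping reference edge vectors to current ones.

    ref, cur: (3, 2) arrays of triangle vertex positions.
    Solves F @ E_ref = E_cur for the 2x2 tensor F.
    """
    e_ref = np.column_stack([ref[1] - ref[0], ref[2] - ref[0]])
    e_cur = np.column_stack([cur[1] - cur[0], cur[2] - cur[0]])
    return e_cur @ np.linalg.inv(e_ref)

ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cur = 2.0 * ref                       # isotropic growth by a factor of 2
F = triangle_deformation(ref, cur)
area_change = np.linalg.det(F)        # area scales as det F -> 4.0
```

Summing such per-triangle tensors with area weights is how local cellular events can be aggregated into a large-scale tissue deformation.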
Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model
NASA Technical Reports Server (NTRS)
Tenenbaum, J.
1983-01-01
Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
Soini, Jaakko; Ukkonen, Kaisa; Neubauer, Peter
2008-01-01
Background For the cultivation of Escherichia coli in bioreactors, trace element solutions are generally designed for optimal growth under aerobic conditions. They normally do not contain selenium and nickel, and molybdenum is contained in only a few of them. These elements are part of the formate hydrogen lyase (FHL) complex, which is induced under anaerobic conditions. As it is generally known that oxygen limitation appears in shake flask cultures and locally in large-scale bioreactors, the function of the FHL complex may influence process behaviour. Formate has been described to accumulate in large-scale cultures and may have toxic effects on E. coli. Although the anaerobic metabolism of E. coli is well studied, reference data that estimate the impact of the FHL complex on oxygen-limited E. coli bioprocesses have so far not been published, but are important for a better process understanding. Results Two sets of fed-batch cultures with conditions triggering oxygen limitation and formate accumulation were performed. Permanent oxygen limitation, which is typical for shake flask cultures, was caused in a bioreactor by reducing the agitation rate. Transient oxygen limitation, which has been described to occur in the feed zone of large-scale bioreactors, was mimicked in a two-compartment scale-down bioreactor consisting of a stirred tank reactor and a plug flow reactor (PFR) with continuous glucose feeding into the PFR. In both models formate accumulated up to about 20 mM in the culture medium without addition of selenium, molybdenum and nickel. By addition of these trace elements the formate accumulation decreased below the level observed in well-mixed laboratory-scale cultures. Interestingly, addition of the extra trace elements caused accumulation of large amounts of lactate and reduced the biomass yield in the simulator with permanent oxygen limitation, but not in the scale-down two-compartment bioreactor.
Conclusion The accumulation of formate in oxygen-limited cultivations of E. coli can be fully prevented by addition of the trace elements selenium, nickel and molybdenum, which are necessary for the function of the FHL complex. For large-scale cultivations in which glucose gradients are likely, the results from the two-compartment scale-down bioreactor indicate that the addition of the extra trace elements is beneficial. No negative effects on the biomass yield or on any other bioprocess parameters were observed in cultures with the extra trace elements when the cells were repeatedly exposed to transient oxygen limitation. PMID:18687130
Cross-Lingual Neighborhood Effects in Generalized Lexical Decision and Natural Reading
ERIC Educational Resources Information Center
Dirix, Nicolas; Cop, Uschi; Drieghe, Denis; Duyck, Wouter
2017-01-01
The present study assessed intra- and cross-lingual neighborhood effects, using both a generalized lexical decision task and an analysis of a large-scale bilingual eye-tracking corpus (Cop, Dirix, Drieghe, & Duyck, 2016). Using new neighborhood density and frequency measures, the general lexical decision task yielded an inhibitory…
Breaking barriers through collaboration: the example of the Cell Migration Consortium.
Horwitz, Alan Rick; Watson, Nikki; Parsons, J Thomas
2002-10-15
Understanding complex integrated biological processes, such as cell migration, requires interdisciplinary approaches. The Cell Migration Consortium, funded by a Large-Scale Collaborative Project Award from the National Institute of General Medical Science, develops and disseminates new technologies, data, reagents, and shared information to a wide audience. The development and operation of this Consortium may provide useful insights for those who plan similarly large-scale, interdisciplinary approaches.
[Privacy and public benefit in using large scale health databases].
Yamamoto, Ryuichi
2014-01-01
In Japan, large-scale health databases have been constructed within a few years, such as the national health insurance claims and health checkup database (NDB) and the Japanese Sentinel project. However, some legal issues remain in striking an adequate balance between privacy and public benefit when using such databases. The NDB operates under the act on health care for elderly persons, but this act says nothing about using the database for general public benefit. Researchers who use the database are therefore forced to devote considerable attention to anonymization and information security, which may hamper the research itself. The Japanese Sentinel project is a national project for detecting adverse drug reactions using large-scale distributed clinical databases of large hospitals. Although patients give broad consent in advance for such public-good purposes, the use of insufficiently anonymized data is still under discussion. Generally speaking, researchers conducting studies for the public benefit will not infringe patients' privacy, but vague and complex requirements in personal data protection legislation may obstruct their research. Medical science does not progress without the use of clinical information, so adequate legislation that is simple and clear for both researchers and patients is strongly required. In Japan, a specific act for balancing privacy and public benefit is now under discussion. The author recommends that researchers, including those in the field of pharmacology, pay attention to, participate in the discussion of, and make suggestions for such acts and regulations.
Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.
Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael
2013-08-01
We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., at the national scale, and at microscopic scales, i.e., at the city level. It is shown that the thermodynamic limit assumption for the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even if the gravity model has a better overall performance the parameter-free radiation model gives competitive results, especially for large scales.
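The finite-system correction referred to here multiplies the original radiation-model flux by 1/(1 - m_i/N). A sketch of the standard radiation formula with that normalization, using the usual notation (m_i, n_j: origin and destination populations; s_ij: population within the trip radius excluding the endpoints; O_i: total trips leaving i; N: total system population):

```python
def radiation_flux(O_i: float, m_i: float, n_j: float,
                   s_ij: float, N: float) -> float:
    """Expected flow from i to j under the finite-size-normalized radiation model."""
    base = (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
    return O_i * base / (1.0 - m_i / N)

# With no intervening population (s_ij = 0), the flux reduces to
# roughly O_i * n_j / (m_i + n_j), up to the finite-size factor.
flux = radiation_flux(O_i=1000, m_i=100, n_j=900, s_ij=0, N=1_000_000)
```

The normalization matters most for large cities, where m_i/N is not negligible, which is exactly the regime in which the abstract reports the thermodynamic-limit model underestimating flows.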
Scale-free Graphs for General Aviation Flight Schedules
NASA Technical Reports Server (NTRS)
Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.
2003-01-01
In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics, unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (the small-world property, a power-law degree distribution, and a constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. This manuscript provides an overview of the salient characteristics of scale-free networks. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
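One standard mechanism for growing a graph with a power-law degree distribution is preferential attachment (Barabási-Albert): each new node attaches m edges to existing nodes chosen with probability proportional to their degree. A minimal sketch, not necessarily either of the growth mechanisms the manuscript studies:

```python
import random

def barabasi_albert(n: int, m: int, seed: int = 0) -> dict[int, set[int]]:
    """Grow a scale-free graph on n nodes by preferential attachment."""
    rng = random.Random(seed)
    # Start from a small complete seed graph on m+1 nodes.
    adj = {i: set() for i in range(m + 1)}
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            adj[i].add(j); adj[j].add(i)
    # A degree-weighted list: each node appears once per incident edge end.
    stubs = [v for v, nbrs in adj.items() for _ in nbrs]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:          # sample m distinct targets
            targets.add(rng.choice(stubs))
        adj[new] = set()
        for t in targets:
            adj[new].add(t); adj[t].add(new)
            stubs.extend([new, t])       # keep the list degree-weighted
    return adj

g = barabasi_albert(200, 2)
edges = sum(len(nbrs) for nbrs in g.values()) // 2  # 3 seed + 2 per new node
```

Each of the n - (m+1) added nodes contributes exactly m edges, so the edge count is deterministic even though the attachment targets are random.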
Texas Symposium on Relativistic Astrophysics, 11th, Austin, TX, December 12-17, 1982, Proceedings
NASA Technical Reports Server (NTRS)
Evans, D. S. (Editor)
1984-01-01
Various papers on relativistic astrophysics are presented. The general subjects addressed include: particle physics and astrophysics, general relativity, large-scale structure, big bang cosmology, new-generation telescopes, pulsars, supernovae, high-energy astrophysics, and active galaxies.
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
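The random-phase benchmark against which these shifts are measured is the genus curve of a Gaussian field as a function of the threshold $\nu$ in units of the standard deviation, a textbook result quoted here for context rather than taken from the abstract:

```latex
g(\nu) \propto (1 - \nu^2)\, e^{-\nu^2/2},
```

which is symmetric about $\nu = 0$; meatball and bubble topologies displace the observed curve away from this symmetric form in opposite directions, as the abstract describes.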
Linking crop yield anomalies to large-scale atmospheric circulation in Europe.
Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J
2017-06-15
Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
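Explained inter-annual variability of the kind quoted above is typically the R-squared of a least-squares regression of yield anomalies on the circulation indices. A minimal sketch with fabricated data; the index names follow the abstract (NAO, EA, SCA, EAWR), but the coefficients and sample are invented:

```python
import numpy as np

def explained_variance(indices: np.ndarray, yields: np.ndarray) -> float:
    """R^2 of a least-squares fit of yield anomalies on circulation indices."""
    X = np.column_stack([indices, np.ones(len(yields))])  # add an intercept
    coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
    resid = yields - X @ coef
    return 1.0 - resid.var() / yields.var()

rng = np.random.default_rng(1)
nao_ea_sca_eawr = rng.normal(size=(40, 4))   # 40 years of 4 standardized indices
anomalies = nao_ea_sca_eawr @ np.array([0.5, -0.3, 0.2, 0.1])
r2 = explained_variance(nao_ea_sca_eawr, anomalies)  # noise-free case
```

With noise-free synthetic anomalies the fit recovers essentially all the variance; real yield data would leave a residual reflecting weather and management effects not captured by the large-scale circulation.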
III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.
Davis-Kean, Pamela E; Jager, Justin
2017-06-01
For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we will discuss the value of using existing large-scale data sets to test the complex questions of child development and how to develop future large-scale data sets that are both representative and can answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.
NASA/FAA general aviation crash dynamics program - An update
NASA Technical Reports Server (NTRS)
Hayduk, R. J.; Thomson, R. G.; Carden, H. D.
1979-01-01
Work in progress in the NASA/FAA General Aviation Crash Dynamics Program for the development of technology for increased crash-worthiness and occupant survivability of general aviation aircraft is presented. Full-scale crash testing facilities and procedures are outlined, and a chronological summary of full-scale tests conducted and planned is presented. The Plastic and Large Deflection Analysis of Nonlinear Structures and Modified Seat Occupant Model for Light Aircraft computer programs which form part of the effort to predict nonlinear geometric and material behavior of sheet-stringer aircraft structures subjected to large deformations are described, and excellent agreement between simulations and experiments is noted. The development of structural concepts to attenuate the load transmitted to the passenger through the seats and subfloor structure is discussed, and an apparatus built to test emergency locator transmitters in a realistic environment is presented.
Imprint of non-linear effects on HI intensity mapping on large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna, E-mail: umeobinna@gmail.com
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order due to nonlinear bias parameters and redshift space distortion terms modulates the power spectrum on large scales. The large scale modulation may be understood to be due to the effective bias parameter and effective shot noise.
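For orientation, the linear-order limit of such a brightness-temperature power spectrum in redshift space is the familiar Kaiser expression, quoted here as a standard result for context; the paper's contribution lies in the third-order bias and mode-coupling terms beyond it:

```latex
P_{T_b}(k,\mu) \simeq \bar{T}_b^{\,2}\,\bigl(b_1 + f\mu^2\bigr)^2 P_m(k) + P_{\rm shot},
```

where $b_1$ is the linear HI bias, $f$ the linear growth rate, $\mu$ the cosine of the angle to the line of sight, $P_m(k)$ the matter power spectrum, and $P_{\rm shot}$ an effective shot-noise term of the kind the abstract mentions.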
Numerical methods for large-scale, time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1979-01-01
A survey of numerical methods for time dependent partial differential equations is presented. The emphasis is on practical applications to large scale problems. A discussion of new developments in high order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.
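The trade-off behind the implicit methods the survey describes is unconditional stability at the cost of a linear solve per time step. A minimal backward-Euler step for the 1-D heat equation u_t = u_xx with Dirichlet boundaries, as a hedged illustration rather than any scheme from the survey:

```python
import numpy as np

def backward_euler_heat(u: np.ndarray, dt: float, dx: float) -> np.ndarray:
    """One implicit step of u_t = u_xx: solve (I - dt*L) u_new = u_old."""
    n = len(u)
    r = dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2 * r)
    A[np.arange(n - 1), np.arange(1, n)] = -r   # superdiagonal
    A[np.arange(1, n), np.arange(n - 1)] = -r   # subdiagonal
    return np.linalg.solve(A, u)

x = np.linspace(0.0, 1.0, 101)[1:-1]   # interior grid points
u = np.sin(np.pi * x)                  # smooth initial profile
u_next = backward_euler_heat(u, dt=0.1, dx=x[1] - x[0])
# The step stays stable even though dt >> dx^2, a regime where an
# explicit scheme would blow up.
```

In practice the tridiagonal system would be solved in O(n) rather than with a dense solve; the dense form is used here only to keep the sketch short.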
Barrett, Lisa Feldman; Satpute, Ajay
2013-01-01
Understanding how a human brain creates a human mind ultimately depends on mapping psychological categories and concepts to physical measurements of neural response. Although it has long been assumed that emotional, social, and cognitive phenomena are realized in the operations of separate brain regions or brain networks, we demonstrate that it is possible to understand the body of neuroimaging evidence using a framework that relies on domain general, distributed structure-function mappings. We review current research in affective and social neuroscience and argue that the emerging science of large-scale intrinsic brain networks provides a coherent framework for a domain-general functional architecture of the human brain. PMID:23352202
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are irreplaceable national treasures of immense importance. Their up-to-dateness, that is, their consistency with the real world, plays a critical role in their value and applications. Continuously updating map databases at 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometres. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. Many data sources had to be used and integrated to form a high-accuracy, quality-checked product, which in turn required up-to-date techniques for image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, covering map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image matching-based orthophoto generation, and data quality control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modelling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different parts of the country.
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from the Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than the differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors
Knapp, Roland A.; Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Vredenburg, Vance T.; Rosenblum, Erica Bree; Briggs, Cheryl J.
2016-01-01
Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth’s amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species’ adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents based on coarse-scale observations. Likely applications include generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
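The nudging construction described in this record can be sketched on a scalar toy system. The dynamics, the nudging gain `mu`, and the observed value below are illustrative assumptions, not the HYDRUS setup: the point is only that adding `mu * (observation - state)` to the model tendency relaxes a run started from a wrong initial condition toward the observed trajectory.

```python
# Toy continuous-data-assimilation (nudging) sketch: the model tendency is
# augmented with mu * (observation - state). Scalar stand-in dynamics only.
def run(x0, dt=0.01, steps=200, mu=0.0, obs=None):
    x = x0
    for _ in range(steps):
        dxdt = -0.5 * x + 1.0          # dissipative toy dynamics, attractor x = 2
        if obs is not None:
            dxdt += mu * (obs - x)     # CDA nudging toward the observation
        x += dt * dxdt                 # forward-Euler step
    return x

truth = 2.0                            # steady observed value of the toy system
free_run = run(x0=0.0)                 # no assimilation
nudged_run = run(x0=0.0, mu=5.0, obs=truth)
```

After the same number of steps, the nudged trajectory sits much closer to the observed value than the free run, which is the behavior the abstract's nudging term is designed to produce.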
Homogenization techniques for population dynamics in strongly heterogeneous landscapes.
Yurk, Brian P; Cobbold, Christina A
2018-12-01
An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity are influenced by patch residence times that depend on patch preference, as well as by movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from the small-scale features.
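The flavor of the homogenized coefficients mentioned here can be illustrated with the classical one-dimensional special case: when density is continuous at patch interfaces (no habitat preference), the effective diffusivity of a periodic two-patch landscape is the weighted harmonic mean of the patch diffusivities. This textbook result is a sketch, not the paper's generalized coefficients, which additionally account for interface discontinuities.

```python
def harmonic_mean_diffusivity(d, weights):
    """Classical 1D homogenization sketch: the effective diffusivity of a
    periodic landscape is the weighted harmonic mean of patch diffusivities,
    assuming continuous density at patch interfaces (no interface bias).
    d: patch diffusivities; weights: fraction of the period in each patch."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return 1.0 / sum(w / di for w, di in zip(weights, d))

# Two patches of equal width with diffusivities 1 and 4.
d_eff = harmonic_mean_diffusivity([1.0, 4.0], [0.5, 0.5])
```

The harmonic mean lies below the arithmetic mean, reflecting the fact that the slow patch disproportionately limits large-scale spread.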
Measures for a transdimensional multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz-Perlov, Delia; Vilenkin, Alexander, E-mail: dperlov@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu
2010-06-01
The multiverse/landscape paradigm that has emerged from eternal inflation and string theory describes a large-scale multiverse populated by ''pocket universes'' which come in a huge variety of different types, including different dimensionalities. In order to make predictions in the multiverse, we need a probability measure. In (3+1)d landscapes, the scale factor cutoff measure has been previously shown to have a number of attractive properties. Here we consider possible generalizations of this measure to a transdimensional multiverse. We find that a straightforward extension of scale factor cutoff to the transdimensional case gives a measure that strongly disfavors large amounts of slow-roll inflation and predicts low values for the density parameter Ω, in conflict with observations. A suitable generalization, which retains all the good properties of the original measure, is the ''volume factor'' cutoff, which regularizes the infinite spacetime volume using cutoff surfaces of constant volume expansion factor.
ERIC Educational Resources Information Center
Constantino, John N.; Frazier, Thomas W.
2013-01-01
In their analysis of the accumulated data from the clinically ascertained Simons Simplex Collection (SSC), Hus et al. (2013) provide a large-scale clinical replication of previously reported associations (see Constantino, Hudziak & Todd, 2003) between quantitative autistic traits [as measured by the Social Responsiveness Scale (SRS)] and…
A global traveling wave on Venus
NASA Technical Reports Server (NTRS)
Smith, Michael D.; Gierasch, Peter J.; Schinder, Paul J.
1993-01-01
The dominant large-scale pattern in the clouds of Venus has been described as a 'Y' or 'Psi' and tentatively identified by earlier workers as a Kelvin wave. A detailed calculation of linear wave modes in the Venus atmosphere verifies this identification. Cloud feedback by infrared heating fluctuations is a plausible excitation mechanism. Modulation of the large-scale pattern by the wave is a possible explanation for the Y. Momentum transfer by the wave could contribute to sustaining the general circulation.
Outbreaks associated with large open air festivals, including music festivals, 1980 to 2012.
Botelho-Nevers, E; Gautret, P
2013-03-14
In the minds of many, large-scale open air festivals have become associated with spring and summer, attracting many people, and in the case of music festivals, thousands of music fans. These festivals share the usual health risks associated with large mass gatherings, including transmission of communicable diseases and risk of outbreaks. Large-scale open air festivals, however, have specific characteristics, including outdoor settings, on-site housing and food supply, and the generally young age of the participants. Outbreaks at large-scale open air festivals have been caused by Cryptosporidium parvum, Campylobacter spp., Escherichia coli, Salmonella enterica, Shigella sonnei, Staphylococcus aureus, hepatitis A virus, influenza virus, measles virus, mumps virus and norovirus. Faecal-oral and respiratory transmission of pathogens results from non-compliance with hygiene rules, inadequate sanitation and insufficient vaccination coverage. Sexual transmission of infectious diseases may also occur and is likely to be underestimated and underreported. Enhanced surveillance during and after festivals is essential. Preventive measures such as immunisation of participants and advice on-site and via social networks should be considered to reduce outbreaks at these large-scale open air festivals.
David W. MacFarlane
2015-01-01
Accurately assessing forest biomass potential is contingent upon having accurate tree biomass models to translate data from forest inventories. Building generality into these models is especially important when they are to be applied over large spatial domains, such as regional, national and international scales. Here, new, generalized whole-tree mass / volume...
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. These problems usually reduce to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexity with existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
A novel representation of groundwater dynamics in large-scale land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan
2017-04-01
Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSM) generally consider isolated soil columns and free drainage lower boundary condition for simulating hydrology. This is mainly due to the fact that incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet, these simplifications undermine the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach of representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental-scale.
General circulation of the South Atlantic between 5 deg N and 35 deg S
NASA Technical Reports Server (NTRS)
Ollitrault, Michel; Mercier, H.; Blanc, F.; Letraon, L. Y.
1991-01-01
The TOPEX/POSEIDON altimeter will provide the temporal mean sea level. Secondly, therefore, we propose to compute the difference between these two surfaces (mean sea level minus general circulation dynamic topography). The result will be an estimate of the marine geoid, which is time invariant for the 5-year period under consideration. If this geoid is precise enough, it will permit a description of the seasonal variability of the large-scale surface circulation. If there happens to be enough float data, it may be possible to infer the first vertical modes of this variability. Thus the main goal of our investigation is to determine the 3-D general circulation of the South Atlantic and its large-scale seasonal fluctuations. This last objective, however, may be restricted to the western part of the South Atlantic because float deployments have been scheduled only in the Brazil Basin.
Large-scale 3D galaxy correlation function and non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele
We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects: wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f_NL is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
Trietsch, Jasper; van Steenkiste, Ben; Hobma, Sjoerd; Frericks, Arnoud; Grol, Richard; Metsemakers, Job; van der Weijden, Trudy
2014-12-01
A quality improvement strategy consisting of comparative feedback and peer review embedded in available local quality improvement collaboratives proved to be effective in changing the test-ordering behaviour of general practitioners. However, implementing this strategy was problematic. We aimed for large-scale implementation of an adapted strategy covering both test-ordering and prescribing performance. Because we failed to achieve large-scale implementation, the aim of this study was to describe and analyse the challenges of the transfer process. In a qualitative study, 19 regional health officers, pharmacists, laboratory specialists and general practitioners were interviewed within 6 months after the transfer period. The interviews were audiotaped, transcribed and independently coded by two of the authors. The codes were matched to the dimensions of the normalization process theory. The general idea of the strategy was widely supported, but generating the feedback was more complex than expected, and the need for external support after transfer of the strategy remained high because participants did not assume responsibility for the work and the distribution of resources that came with it. Evidence on effectiveness, a national infrastructure for these collaboratives and a generally positive attitude were not sufficient for normalization. Planning for the management of large databases, responsibility for tasks and distribution of resources should start as early as possible when designing complex quality improvement strategies. Merely exploring the barriers and facilitators experienced in a preceding trial is not sufficient. Although multifaceted implementation strategies to change professional behaviour are attractive, their inherent complexity is also a pitfall for large-scale implementation. © 2014 John Wiley & Sons, Ltd.
Saito, Masayuki; Koike, Fumito
2013-01-01
Urbanization may alter mammal assemblages via habitat loss, food subsidies, and other factors related to human activities. The general distribution patterns of wild mammal assemblages along urban–rural–forest landscape gradients have not been studied, although many studies have focused on a single species or taxon, such as rodents. We quantitatively evaluated the effects of the urban–rural–forest gradient and spatial scale on the distributions of large and mid-sized mammals in the world's largest metropolitan area in warm-temperate Asia using nonspecific camera-trapping along two linear transects spanning from the urban zone in the Tokyo metropolitan area to surrounding rural and forest landscapes. Many large and mid-sized species generally decreased from forest landscapes to urban cores, although some species preferred anthropogenic landscapes. Sika deer (Cervus nippon), Reeves' muntjac (Muntiacus reevesi), Japanese macaque (Macaca fuscata), Japanese squirrel (Sciurus lis), Japanese marten (Martes melampus), Japanese badger (Meles anakuma), and wild boar (Sus scrofa) generally dominated the mammal assemblage of the forest landscape. Raccoon (Procyon lotor), raccoon dog (Nyctereutes procyonoides), and Japanese hare (Lepus brachyurus) dominated the mammal assemblage in the intermediate zone (i.e., rural and suburban landscape). Cats (feral and free-roaming housecats; Felis catus) were common in the urban assemblage. The key spatial scales for forest species were more than 4000-m radius, indicating that conservation and management plans for these mammal assemblages should be considered on large spatial scales. However, small green spaces will also be important for mammal conservation in the urban landscape, because an indigenous omnivore (raccoon dog) had a smaller key spatial scale (500-m radius) than those of forest mammals. 
Urbanization was generally the most important factor in the distributions of mammals, and it is necessary to consider the spatial scale of management according to the degree of urbanization. PMID:23741495
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien
2017-03-01
We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
General relativistic corrections to the weak lensing convergence power spectrum
NASA Astrophysics Data System (ADS)
Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.
2017-11-01
We compute the weak lensing convergence power spectrum, C_ℓ^{κκ}, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.
Cross-lingual neighborhood effects in generalized lexical decision and natural reading.
Dirix, Nicolas; Cop, Uschi; Drieghe, Denis; Duyck, Wouter
2017-06-01
The present study assessed intra- and cross-lingual neighborhood effects, using both a generalized lexical decision task and an analysis of a large-scale bilingual eye-tracking corpus (Cop, Dirix, Drieghe, & Duyck, 2016). Using new neighborhood density and frequency measures, the general lexical decision task yielded an inhibitory cross-lingual neighborhood density effect on reading times of second language words, replicating van Heuven, Dijkstra, and Grainger (1998). Reaction times for native language words were not influenced by neighborhood density or frequency but error rates showed cross-lingual neighborhood effects depending on target word frequency. The large-scale eye movement corpus confirmed effects of cross-lingual neighborhood on natural reading, even though participants were reading a novel in a unilingual context. Especially second language reading and to a lesser extent native language reading were influenced by lexical candidates from the nontarget language, although these effects in natural reading were largely facilitatory. These results offer strong and direct support for bilingual word recognition models that assume language-independent lexical access. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs at scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes and, in fact, is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section.
We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
Mega-geomorphology: Mars vis a vis Earth
NASA Technical Reports Server (NTRS)
Sharp, R. P.
1985-01-01
The areas of chaotic terrain, the giant chasma of the Valles Marineris region, and the complex linear and circular depressions of Labyrinthus Noctis on Mars all suggest the possibility of large-scale collapse of parts of the martian crust within equatorial and subequatorial latitudes. It seems generally accepted that these features are fossil, being, perhaps, more than a billion years old. It is possible that parts of Earth's crust experienced similar episodes of large-scale collapse sometime early in the evolution of the planet.
2012-05-01
…pressures on supply that led to the global food crisis of 2007 and 2008, allowing prices to fall from their peak in August 2008, the foundational … involved in the acquisition of farmland. This trend is also unlikely to slow, with food prices continuing to climb, surpassing the highs of 2007 and … and general secrecy in most large-scale land acquisition contracts, exact data regarding the number of deals and amount of land transferred are …
Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model
ERIC Educational Resources Information Center
Von Davier, Matthias; Yamamoto, Kentaro
2004-01-01
The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
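For readers unfamiliar with the model, a minimal sketch of the GPCM category probabilities follows. The parameterization (a single discrimination `a` and step difficulties `b_v`) is a common textbook form, not necessarily the exact scaling used in any given assessment program.

```python
import math

def gpcm_prob(theta, a, b):
    """Category probabilities of the generalized partial-credit model.

    P(X = k | theta) is proportional to exp( sum_{v<=k} a * (theta - b_v) ),
    with the empty sum for k = 0 taken as 0. `a` is the item discrimination
    and `b` the list of step difficulties (illustrative parameterization).
    """
    logits = [0.0]
    for b_v in b:
        logits.append(logits[-1] + a * (theta - b_v))
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [p / s for p in z]

probs = gpcm_prob(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```

Setting `a = 1` for every item recovers the partial-credit (Rasch ordinal) model, and an item with only two categories reduces to the 2PL, matching the special cases named in the abstract.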
Stability of knotted vortices in wave chaos
NASA Astrophysics Data System (ADS)
Taylor, Alexander; Dennis, Mark
Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.
Stability of large-scale systems with stable and unstable subsystems.
NASA Technical Reports Server (NTRS)
Grujic, Lj. T.; Siljak, D. D.
1972-01-01
The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
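The algebraic criterion described can be illustrated as follows. This is a hedged sketch of the standard M-matrix test associated with vector Liapunov functions, with made-up subsystem decay rates and interconnection gains, not the paper's own formulation.

```python
import numpy as np

# Sketch of the algebraic stability test used with vector Liapunov
# functions: each subsystem contributes a decay rate on the diagonal and
# nonnegative interconnection gains off the diagonal of an aggregate
# comparison matrix W. The composite system is asymptotically stable if
# -W is an M-matrix, checked here via the classical condition that all
# leading principal minors of -W are positive. Numbers are illustrative.

def composite_stable(W):
    """True if -W is an M-matrix (all leading principal minors positive)."""
    M = -np.asarray(W, dtype=float)
    n = M.shape[0]
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))

# Two damped subsystems with weak coupling: the aggregate test passes.
W_weak = np.array([[-2.0, 0.5],
                   [0.4, -1.0]])
# The same subsystems with strong coupling: the test fails.
W_strong = np.array([[-2.0, 3.0],
                     [1.0, -1.0]])
```

The test is only sufficient: failure for `W_strong` means the aggregate bound cannot certify stability, not that the composite system is necessarily unstable.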
Strain localisation in the continental lithosphere, a scale-dependent process
NASA Astrophysics Data System (ADS)
Jolivet, Laurent; Burov, Evguenii
2013-04-01
Strain localisation in continents is a general question tackled by specialists of various disciplines in Earth Sciences. Field geologists working at regional scale are able to describe the succession of events leading to the formation of large strain zones that accommodate large displacement within plate boundaries. At the other end of the spectrum, laboratory experiments provide numbers that quantitatively describe the rheology of rock material at the scale of a few mm and at deformation rates up to 8-10 orders of magnitude faster than in nature. Extrapolating from the scale of the experiment to the scale of the continental lithosphere is a considerable leap across 8-10 orders of magnitude in both space and time. It is, however, quite obvious that different processes are at work at each scale considered. At the scale of a grain aggregate, diffusion within individual grains, dislocation, or grain-boundary sliding, depending on temperature and fluid conditions, are of primary importance. But at the scale of a mountain belt, a major detachment, or a strike-slip shear zone that has accommodated tens or hundreds of kilometres of relative displacement, other parameters take over, such as structural softening and the heterogeneity of the crust inherited from past tectonic events that have juxtaposed rock units of very different compositions and induced a strong orientation of rocks. Once the deformation is localised along major shear zones, grain-size reduction, interaction between rocks and fluids, metamorphic reactions, and other small-scale processes tend to localise the strain further. Because the crust is colder and more lithologically complex, this heterogeneity is likely much more prominent in the crust than in the mantle, so the relative importance of "small-scale" and "large-scale" parameters will be very different in the crust and in the mantle. 
Thus, depending upon the relative thicknesses of the crust and mantle in the deforming lithosphere, each mechanism will have more or less important consequences for strain localisation. This complexity sometimes leads modellers to disregard experimental parameters in large-scale thermo-mechanical models and to use instead ad hoc "large-scale" values that better fit the observed geological history. The goal of the ERC RHEOLITH project is to associate with each tectonic process the relevant rheological parameters, depending upon the scale considered, in an attempt to elaborate a generalized "Preliminary Rheology Model Set for Lithosphere" (PReMSL), which will cover the entire temporal and spatial range of deformation.
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
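The scale-dependent non-Gaussian bias correction referred to above has a well-known form in the related literature; a toy sketch is given below. The normalization follows the commonly quoted local-f_NL result, and the unit transfer function and unit growth factor are simplifying assumptions; none of this code is taken from the paper.

```python
# Hedged sketch of the scale-dependent halo-bias correction from local
# primordial non-Gaussianity, which replaces the Gaussian bias in the
# Kaiser formula on large scales. Standard form (toy normalization):
#   delta_b(k) = 3 f_NL (b - 1) delta_c Omega_m (H0/c)^2 / k^2,
# assuming transfer function T(k) = 1 and growth D(z) = 1 on the very
# large scales considered.

DELTA_C = 1.686            # spherical-collapse threshold
H0_OVER_C = 1.0 / 2998.0   # Hubble constant over c, in h/Mpc

def delta_b(k, f_nl, b, omega_m=0.3):
    """Non-Gaussian correction to the halo bias at wavenumber k [h/Mpc]."""
    return 3.0 * f_nl * (b - 1.0) * DELTA_C * omega_m * H0_OVER_C**2 / k**2

def kaiser_ng(k, mu, f_nl, b, f_growth=0.5):
    """Kaiser redshift-space boost with the bias b replaced by b + delta_b(k)."""
    return (b + delta_b(k, f_nl, b) + f_growth * mu**2) ** 2

# The correction grows as 1/k^2, so it matters only on the largest scales:
small_k = delta_b(k=0.005, f_nl=10.0, b=2.0)
large_k = delta_b(k=0.1,   f_nl=10.0, b=2.0)
```

The 1/k² growth is why spectroscopic surveys probing the largest scales are the natural place to look for this signature.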
Henry, Julie D; Crawford, John R
2005-06-01
To test the construct validity of the short-form version of the Depression Anxiety and Stress Scale (DASS-21), and in particular, to assess whether stress as indexed by this measure is synonymous with negative affectivity (NA) or whether it represents a related, but distinct, construct. To provide normative data for the general adult population. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS-21 was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,794). Competing models of the latent structure of the DASS-21 were evaluated using CFA. The model with optimal fit (RCFI = 0.94) had a quadripartite structure, and consisted of a general factor of psychological distress plus orthogonal specific factors of depression, anxiety, and stress. This model was a significantly better fit than a competing model that tested the possibility that the Stress scale simply measures NA. The DASS-21 subscales can validly be used to measure the dimensions of depression, anxiety, and stress. However, each of these subscales also taps a more general dimension of psychological distress or NA. The utility of the measure is enhanced by the provision of normative data based on a large sample.
SOLAR WIND TURBULENCE FROM MHD TO SUB-ION SCALES: HIGH-RESOLUTION HYBRID SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franci, Luca; Verdini, Andrea; Landi, Simone
2015-05-10
We present results from a high-resolution and large-scale hybrid (fluid electrons and particle-in-cell protons) two-dimensional numerical simulation of decaying turbulence. Two distinct spectral regions (separated by a smooth break at proton scales) develop with clear power-law scaling, each one occupying about a decade in wavenumbers. The simulation results simultaneously exhibit several properties of the observed solar wind fluctuations: spectral indices of the magnetic, kinetic, and residual energy spectra in the magnetohydrodynamic (MHD) inertial range along with a flattening of the electric field spectrum, an increase in magnetic compressibility, and a strong coupling of the cascade with the density and the parallel component of the magnetic fluctuations at sub-proton scales. Our findings support the interpretation that in the solar wind, large-scale MHD fluctuations naturally evolve beyond proton scales into a turbulent regime that is governed by the generalized Ohm's law.
Application of LANDSAT data to delimitation of avalanche hazards in Montane Colorado
NASA Technical Reports Server (NTRS)
Knepper, D. H. (Principal Investigator); Ives, J. D.; Summer, R.
1975-01-01
The author has identified the following significant results. Interpretation of small-scale LANDSAT imagery provides a means for determining the general location and distribution of avalanche paths. The accuracy and completeness of small-scale mapping are less than those obtained from the interpretation of large-scale color infrared photos. Interpretation of enlargement prints (18X) of LANDSAT imagery is superior to interpretation of small-scale imagery, because more detailed information can be extracted and annotated.
ERIC Educational Resources Information Center
Stifle, Jack
The PLATO IV computer-based instructional system consists of a large scale centrally located CDC 6400 computer and a large number of remote student terminals. This is a brief and general description of the proposed input/output hardware necessary to interface the student terminals with the computer's central processing unit (CPU) using available…
Andrew P. Kinziger; Rodney J. Nakamoto; Bret C. Harvey
2014-01-01
Given that invasions with severe ecological consequences commonly result from multiple introductions of large numbers of individuals at the intercontinental scale, we explored an example of a highly successful, ecologically significant invader introduced over a short distance, possibly via minimal propagule pressure. The Sacramento pikeminnow (
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
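The cost argument can be illustrated on a toy linear "simulation". This sketch is not NASA's implementation; the operator `A`, forcing `B`, and output functional `g` are random stand-ins chosen only to show that a single adjoint solve reproduces the gradient that finite differences obtain with one solve per design parameter.

```python
import numpy as np

# Sketch of why adjoint gradients are cheap: for an output J = g.u with the
# state u solving A u = B p, the gradient dJ/dp = B^T lam needs ONE extra
# "adjoint" solve A^T lam = g, independent of the number of design
# parameters, whereas finite differences need one forward solve per
# parameter. All matrices here are illustrative random stand-ins.

rng = np.random.default_rng(0)
n, n_params = 50, 200
A = np.eye(n) * 5.0 + 0.1 * rng.standard_normal((n, n))  # well-conditioned "flow" operator
B = rng.standard_normal((n, n_params))                   # design parameters force the state
g = rng.standard_normal(n)                               # output functional J = g . u

def J(p):
    return g @ np.linalg.solve(A, B @ p)   # one forward solve per evaluation

# Adjoint gradient: one linear solve in total.
lam = np.linalg.solve(A.T, g)
grad_adjoint = B.T @ lam

# Finite differences: n_params forward solves.
p0, eps = np.zeros(n_params), 1e-6
grad_fd = np.array([(J(p0 + eps * e) - J(p0)) / eps for e in np.eye(n_params)])
```

Here 200 parameters cost 200 forward solves by finite differences but a single adjoint solve; with a CFD analysis costing millions of compute hours, that ratio is the whole case for the adjoint approach.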
Using MHD Models for Context for Multispacecraft Missions
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Sazykin, S. Y.; Webster, J.; Daou, A.; Welling, D. T.; Giles, B. L.; Pollock, C.
2016-12-01
The use of global MHD models such as BATS-R-US to provide context to data from widely spaced multispacecraft mission platforms is gaining in popularity and in effectiveness. Examples are shown, primarily from the Magnetospheric Multiscale Mission (MMS) program compared to BATS-R-US. We present several examples of large-scale magnetospheric configuration changes such as tail dipolarization events and reconfigurations after a sector boundary crossing which are made much more easily understood by placing the spacecraft in the model fields. In general, the models can reproduce the large-scale changes observed by the various spacecraft but sometimes miss small-scale or rapid time changes.
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulations generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5<u'_i u'_i> (summation over i implied), where u'_i represents the three Cartesian components of a mesoscale circulation (the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and the tilde indicates a corresponding large-scale mean value). A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. 
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
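The scale separation behind the MKE diagnostic can be sketched numerically. This is an illustrative toy (a 1-D "grid cell" carrying a sinusoidal, sea-breeze-like circulation plus small-scale noise), not the paper's model; field amplitudes and block sizes are made up.

```python
import numpy as np

# Sketch of the decomposition behind mean mesoscale kinetic energy: split a
# velocity field within one large-scale grid cell into its cell mean, a
# mesoscale perturbation (block averages minus the cell mean), and a
# turbulent residual, then form E = 0.5 <u'_i u'_i> from the mesoscale part.

def mesoscale_ke(u, n_blocks):
    """Mean mesoscale KE per unit mass of one velocity component."""
    cell_mean = u.mean()                           # large-scale (grid-cell) value
    blocks = u.reshape(n_blocks, -1).mean(axis=1)  # mesoscale block averages
    u_meso = blocks - cell_mean                    # mesoscale perturbation
    return 0.5 * np.mean(u_meso ** 2)

# A sea-breeze-like circulation inside the cell carries mesoscale energy
# even though the cell-mean wind is essentially zero.
x = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
u = 5.0 * np.sin(x) + 0.5 * np.random.default_rng(1).standard_normal(400)
E_meso = mesoscale_ke(u, n_blocks=20)
```

A 5 m/s circulation with zero cell mean contributes roughly 0.5 × 0.5 × 5² ≈ 6 m²/s² of MKE here, which is exactly the kind of subgrid energy a conventional large-scale model would miss.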
Large scale, synchronous variability of marine fish populations driven by commercial exploitation.
Frank, Kenneth T; Petrie, Brian; Leggett, William C; Boyce, Daniel G
2016-07-19
Synchronous variations in the abundance of geographically distinct marine fish populations are known to occur across spatial scales on the order of 1,000 km and greater. The prevailing assumption is that this large-scale coherent variability is a response to coupled atmosphere-ocean dynamics, commonly represented by climate indexes, such as the Atlantic Multidecadal Oscillation and North Atlantic Oscillation. On the other hand, it has been suggested that exploitation might contribute to this coherent variability. This possibility has been generally ignored or dismissed on the grounds that exploitation is unlikely to operate synchronously at such large spatial scales. Our analysis of adult fishing mortality and spawning stock biomass of 22 North Atlantic cod (Gadus morhua) stocks revealed that both the temporal and spatial scales in fishing mortality and spawning stock biomass were equivalent to those of the climate drivers. From these results, we conclude that greater consideration must be given to the potential of exploitation as a driving force behind broad, coherent variability of heavily exploited fish species.
NASA Astrophysics Data System (ADS)
Madriz Aguilar, José Edgar; Bellini, Mauricio
2009-08-01
Considering a five-dimensional (5D) Riemannian spacetime with a particular stationary Ricci-flat metric, we obtain in the framework of the induced-matter theory an effective 4D static and spherically symmetric metric which gives us ordinary gravitational solutions on small (planetary and astrophysical) scales, but repulsive (antigravitational) forces on very large (cosmological) scales with ω=-1. Our approach is a unified way to describe dark energy, dark matter, and ordinary matter. We illustrate the theory with two examples, the solar system and the Great Attractor. From the geometrical point of view, these results follow from the assumption that there exists a confining force that makes it possible for test particles to move on a given 4D hypersurface.
Statistics of galaxy orientations - Morphology and large-scale structure
NASA Technical Reports Server (NTRS)
Lambas, Diego G.; Groth, Edward J.; Peebles, P. J. E.
1988-01-01
Using the Uppsala General Catalog of bright galaxies and the northern and southern maps of the Lick counts of galaxies, statistical evidence of a morphology-orientation effect is found. Major axes of elliptical galaxies are preferentially oriented along the large-scale features of the Lick maps. However, the orientations of the major axes of spiral and lenticular galaxies show no clear signs of significant nonrandom behavior at a level of less than about one-fifth of the effect seen for ellipticals. The angular scale of the detected alignment effect for Uppsala ellipticals extends to at least theta of about 2 deg, which at a redshift of z of about 0.02 corresponds to a linear scale of about 2/h Mpc.
Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing
NASA Astrophysics Data System (ADS)
Colombo, Matteo
2017-03-01
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
A priori testing of subgrid-scale models for large-eddy simulation of the atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Juneja, Anurag; Brasseur, James G.
1996-11-01
Subgrid-scale models are generally developed assuming homogeneous isotropic turbulence with the filter cutoff lying in the inertial range. In the surface layer and capping inversion regions of the atmospheric boundary layer, the turbulence is strongly anisotropic and, in general, influenced by both buoyancy and shear. Furthermore, the integral scale motions are under-resolved in these regions. Herein we perform direct numerical simulations of shear and buoyancy-generated homogeneous anisotropic turbulence to compute and analyze the actual subgrid-resolved-scale (SGS-RS) dynamics as the filter cutoff moves into the energy-containing scales. These are compared with the SGS-RS dynamics predicted by Smagorinsky-based models with a focus on motivating improved closures. We find that, in general, the underlying assumption of such models, that the anisotropic part of the subgrid stress tensor be aligned with the resolved strain rate tensor, is a poor approximation. Similarly, we find poor alignment between the actual and predicted stress divergence, and find low correlations between the actual and modeled subgrid-scale contribution to the pressure and pressure gradient. Details will be given in the talk.
Cyclicity in Upper Mississippian Bangor Limestone, Blount County, Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronner, R.L.
1988-01-01
The Upper Mississippian (Chesterian) Bangor Limestone in Alabama consists of a thick, complex sequence of carbonate platform deposits. A continuous core through the Bangor on Blount Mountain in north-central Alabama provides the opportunity to analyze the unit for cyclicity and to identify controls on vertical facies sequence. Lithologies from the core represent four general environments of deposition: (1) subwave-base, open marine, (2) shoal, (3) lagoon, and (4) peritidal. Analysis of the vertical sequence of lithologies in the core indicates the presence of eight large-scale cycles dominated by subtidal deposits, but defined on the basis of peritidal caps. These large-scale cycles can be subdivided into 16 small-scale cycles that may be entirely subtidal but illustrate upward shallowing followed by rapid deepening. Large-scale cycles range from 33 to 136 ft thick, averaging 68 ft; small-scale cycles range from 5 to 80 ft thick and average 34 ft. Small-scale cycles have an average duration of approximately 125,000 years, which is compatible with Milankovitch periodicity. The large-scale cycles have an average duration of approximately 250,000 years, which may simply reflect variations in amplitude of sea level fluctuation or the influence of tectonic subsidence along the southeastern margin of the North American craton.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Shamrani, Abdul Rahman
2015-01-01
This study examines the psychometric features of a General Aptitude Test-Verbal Part, which is used with assessments of high school graduates in Saudi Arabia. The data supported a bifactor model, with one general factor and three content domains (Analogy, Sentence Completion, and Reading Comprehension) as latent aspects of verbal aptitude.
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues, specifically the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Splitting of the weak hypercharge quantum
NASA Astrophysics Data System (ADS)
Nielsen, H. B.; Brene, N.
1991-08-01
The ratio between the weak hypercharge quantum for particles having no coupling to the gauge bosons corresponding to the semi-simple component of the gauge group and the smallest hypercharge quantum for particles that do have such couplings is exceptionally large for the standard model, considering its rank. To compare groups with respect to this property we propose a quantity χ which depends on the rank of the group and the splitting ratio of the hypercharge(s) to be found in the group. The quantity χ has maximal value for the gauge group of the standard model. This suggests that the hypercharge splitting may play an important rôle either in the origin of the gauge symmetry at a fundamental scale or in some kind of selection mechanism at a scale perhaps nearer to the experimental scale. Such a selection mechanism might be what we have called confusion which removes groups with many (so-called generalized) automorphisms. The quantity χ tends to be large for groups with few generalized automorphisms.
General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radice, David, E-mail: dradice@astro.princeton.edu
The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent, because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late-inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.
Does deep ocean mixing drive upwelling or downwelling of abyssal waters?
NASA Astrophysics Data System (ADS)
Ferrari, R. M.; McDougall, T. J.; Mashayek, A.; Nikurashin, M.; Campin, J. M.
2016-02-01
It is generally understood that small-scale mixing, such as is caused by breaking internal waves, drives upwelling of the densest ocean waters, which sink to the ocean bottom at high latitudes. However, the observational evidence that the turbulent fluxes generated by small-scale mixing in the stratified ocean interior are more vigorous close to the ocean bottom than above implies that small-scale mixing converts light waters into denser ones, thus driving a net sinking of abyssal water. Using a combination of numerical models and observations, it will be shown that abyssal waters return to the surface along weakly stratified boundary layers, where the small-scale mixing of density decays to zero. The net ocean meridional overturning circulation is thus the small residual of a large sinking of waters, driven by small-scale mixing in the stratified interior, and a comparably large upwelling, driven by the reduced small-scale mixing along the ocean boundaries.
Forced Alignment for Understudied Language Varieties: Testing Prosodylab-Aligner with Tongan Data
ERIC Educational Resources Information Center
Johnson, Lisa M.; Di Paolo, Marianna; Bell, Adrian
2018-01-01
Automated alignment of transcriptions to audio files expedites the process of preparing data for acoustic analysis. Unfortunately, the benefits of auto-alignment have generally been available only to researchers studying majority languages, for which large corpora exist and for which acoustic models have been created by large-scale research…
Implications of Small Samples for Generalization: Adjustments and Rules of Thumb
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy
2015-01-01
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig
It is argued by extrapolation of general relativity and quantum mechanics that a classical inertial frame corresponds to a statistically defined observable that rotationally fluctuates due to Planck scale indeterminacy. Physical effects of exotic nonlocal rotational correlations on large scale field states are estimated. Their entanglement with the strong interaction vacuum is estimated to produce a universal, statistical centrifugal acceleration that resembles the observed cosmological constant.
ERIC Educational Resources Information Center
Kampa, Nele; Köller, Olaf
2016-01-01
National and international large-scale assessments (LSA) have a major impact on educational systems, which raises fundamental questions about the validity of the measures regarding their internal structure and their relations to relevant covariates. Given its importance, research on the validity of instruments specifically developed for LSA is…
ERIC Educational Resources Information Center
Bowling, Nathan A.; Hammond, Gregory D.
2008-01-01
Although several different measures have been developed to assess job satisfaction, large-scale examinations of the psychometric properties of most satisfaction scales are generally lacking. In the current study we used meta-analysis to examine the construct validity of the Michigan Organizational Assessment Questionnaire Job Satisfaction Subscale…
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
Atmospheric Diabatic Heating in Different Weather States and the General Circulation
NASA Technical Reports Server (NTRS)
Rossow, William B.; Zhang, Yuanchong; Tselioudis, George
2016-01-01
Analysis of multiple global satellite products identifies distinctive weather states of the atmosphere from the mesoscale pattern of cloud properties and quantifies the associated diabatic heating/cooling by radiative flux divergence, precipitation, and surface sensible heat flux. The results show that the forcing for the atmospheric general circulation is a very dynamic process, varying strongly at weather space-time scales, comprising relatively infrequent, strong heating events by "stormy" weather and more nearly continuous, weak cooling by "fair" weather. Such behavior undercuts the value of analyses of time-averaged energy exchanges in observations or numerical models. It is proposed that an analysis of the joint time-related variations of the global weather states and the general circulation on weather space-time scales might be used to establish useful "feedback-like" relationships between cloud processes and the large-scale circulation.
ATLAS and LHC computing on CRAY
NASA Astrophysics Data System (ADS)
Sciacca, F. G.; Haug, S.; ATLAS Collaboration
2017-10-01
Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling, and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and offer potential gains from economies of scale. Technical solutions, performance, expected return, and future plans are discussed.
Heterogeneity and scale of sustainable development in cities.
Brelsford, Christa; Lobo, José; Hand, Joe; Bettencourt, Luís M A
2017-08-22
Rapid worldwide urbanization is at once the main cause and, potentially, the main solution to global sustainable development challenges. The growth of cities is typically associated with increases in socioeconomic productivity, but it also creates strong inequalities. Despite a growing body of evidence characterizing these heterogeneities in developed urban areas, not much is known systematically about their most extreme forms in developing cities and their consequences for sustainability. Here, we characterize the general patterns of income and access to services in a large number of developing cities, with an emphasis on an extensive, high-resolution analysis of the urban areas of Brazil and South Africa. We use detailed census data to construct sustainable development indices in hundreds of thousands of neighborhoods and show that their statistics are scale-dependent and point to the critical role of large cities in creating higher average incomes and greater access to services within their national context. We then quantify the general statistical trajectory toward universal basic service provision at different scales to show that it is characterized by varying levels of inequality, with initial increases in access being typically accompanied by growing disparities over characteristic spatial scales. These results demonstrate how extensions of these methods to other goals and data can be used over time and space to produce a simple but general quantitative assessment of progress toward internationally agreed sustainable development goals.
NASA Astrophysics Data System (ADS)
Rewieński, M.; Lamecki, A.; Mrozowski, M.
2013-09-01
This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
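The shift-invert idea at the core of ISIL can be illustrated compactly: Lanczos applied to (A - σI)^-1 makes interior eigenvalues near the shift σ extremal, so they converge first. The sketch below uses an exact dense inner solve, whereas the paper's point is precisely to replace it with a cheap, inexact preconditioned iterative solve; the matrix and shift are hypothetical:

```python
import numpy as np

def shift_invert_lanczos(A, sigma, m=30, seed=0):
    """Lanczos on (A - sigma*I)^-1: eigenvalues of A nearest the shift
    become extremal in the transformed problem and converge first.
    The inner solve here is exact (dense inverse); ISIL would use an
    inexact preconditioned iterative solver instead."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A - sigma * np.eye(n))
    rng = np.random.default_rng(seed)
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    for j in range(m):
        Q[:, j] = q
        w = Ainv @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta = np.linalg.eigvalsh(T)
    t = theta[np.argmax(np.abs(theta))]            # dominant Ritz value of the inverse
    return sigma + 1.0 / t                         # map back to an eigenvalue of A

# toy problem with a dense, evenly spaced spectrum: eigenvalues 1, 2, ..., 200
A = np.diag(np.arange(1.0, 201.0))
est = shift_invert_lanczos(A, sigma=55.3)          # should find the eigenvalue 55
```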
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...
2015-06-19
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
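The lognormal fitting step mentioned above admits a compact closed-form sketch: in log coordinates a lognormal mode is a parabola, so its three parameters can be read off a quadratic fit. The diameters and parameter values below are hypothetical, not RACORO data:

```python
import numpy as np

def lognormal_dNdlnD(D, N, Dg, sg):
    """Number size distribution dN/dlnD with total number N, geometric
    median diameter Dg, and geometric standard deviation sg."""
    return (N / (np.sqrt(2.0 * np.pi) * np.log(sg))
            * np.exp(-np.log(D / Dg) ** 2 / (2.0 * np.log(sg) ** 2)))

def fit_lognormal(D, dNdlnD):
    """Recover (N, Dg, sg) from a parabola fit to ln(dN/dlnD) vs ln D."""
    a, b, c = np.polyfit(np.log(D), np.log(dNdlnD), 2)  # y = a x^2 + b x + c
    s = np.sqrt(-1.0 / (2.0 * a))                       # s = ln(sg)
    mu = -b / (2.0 * a)                                 # mu = ln(Dg)
    N = np.exp(c - b ** 2 / (4.0 * a)) * np.sqrt(2.0 * np.pi) * s
    return N, np.exp(mu), np.exp(s)

# hypothetical single-mode example (diameters in micrometres)
D = np.logspace(-2, 0, 40)
N_fit, Dg_fit, sg_fit = fit_lognormal(D, lognormal_dNdlnD(D, 500.0, 0.1, 1.6))
```

Real instrument data would be noisy and often multimodal, in which case a nonlinear multi-mode fit replaces the single parabola.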
Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu
2016-01-01
Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and the sensory-motor system, indicating that large scale networks can be modulated in a frequency-tagging manner. Furthermore, inter- and intranetwork synchronization as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much more strongly than they distinguish between attention conditions. This study provides insights into the advantages and mechanism of lfSSBRs. More importantly, it paves a new way to investigate frequency-specific large scale brain activities. © 2015 Wiley Periodicals, Inc.
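The frequency-tagging logic can be sketched numerically: a response entrained at the task frequency and its first harmonic produces spectral peaks at exactly those bins, detectable even in strong noise. All signal parameters below are hypothetical, not values from the study:

```python
import numpy as np

fs, T, f0 = 2.0, 600.0, 0.05        # hypothetical sampling rate (Hz), run length (s), task frequency (Hz)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(1)

# a 'network' signal entrained at the task frequency and its first harmonic,
# buried in noise of comparable amplitude
x = (np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + rng.normal(0.0, 1.0, t.size))

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
power = np.abs(np.fft.rfft(x)) ** 2
peak_freq = freqs[np.argmax(power)]  # spectral peak lands on the tagged frequency
```

Because T is an integer number of cycles of f0, the fundamental (bin 30) and first harmonic (bin 60) fall exactly on FFT bins, which is what makes the steady-state response easy to isolate.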
NASA Astrophysics Data System (ADS)
Agrawal, Ankit; Ganai, Nirmalendu; Sengupta, Surajit; Menon, Gautam I.
2017-01-01
Active matter models describe a number of biophysical phenomena at the cell and tissue scale. Such models explore the macroscopic consequences of driving specific soft condensed matter systems of biological relevance out of equilibrium through ‘active’ processes. Here, we describe how active matter models can be used to study the large-scale properties of chromosomes contained within the nuclei of human cells in interphase. We show that polymer models for chromosomes that incorporate inhomogeneous activity reproduce many general, yet little understood, features of large-scale nuclear architecture. These include: (i) the spatial separation of gene-rich, low-density euchromatin, predominantly found towards the centre of the nucleus, vis-à-vis gene-poor, denser heterochromatin, typically enriched in proximity to the nuclear periphery, (ii) the differential positioning of individual gene-rich and gene-poor chromosomes, (iii) the formation of chromosome territories, as well as (iv) the weak size-dependence of the positions of individual chromosome centres-of-mass relative to the nuclear centre that is seen in some cell types. Such structuring is induced purely by the combination of activity and confinement and is absent in thermal equilibrium. We systematically explore active matter models for chromosomes, discussing how our model can be generalized to study variations in chromosome positioning across different cell types. The approach and model we outline here represent a preliminary attempt towards a quantitative, first-principles description of the large-scale architecture of the cell nucleus.
Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities (Book)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2013-03-01
To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment required to meet the goals for large-scale renewable energy projects will come from the private sector, this guide has been organized to match Federal processes with typical phases of commercial project development. The main purpose of this guide is to provide a project development framework that allows the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project.
NASA Astrophysics Data System (ADS)
Zhu, Hongfen; Bi, Rutian; Duan, Yonghong; Xu, Zhanjun
2017-06-01
Understanding scale- and location-specific variations of soil nutrients in cultivated land is a crucial consideration for managing agriculture and natural resources effectively. In the present study, wavelet coherency was used to reveal the scale-location specific correlations between soil nutrients, including soil organic matter (SOM), total nitrogen (TN), available phosphorus (AP), and available potassium (AK), as well as topographic factors (elevation, slope, aspect, and wetness index) in the cultivated land of the Fen River Basin in Shanxi Province, China. The results showed that SOM, TN, AP, and AK were significantly inter-correlated, and that the scales at which soil nutrients were correlated differed in different landscapes, and were generally smaller in topographically rougher terrain. All soil nutrients but TN were significantly influenced by the wetness index at relatively large scales (32-72 km) and AK was significantly affected by the aspect at large scales at partial locations, showing localized features. The results of this study imply that the wetness index should be taken into account during farming practices to improve the soil nutrients of cultivated land in the Fen River Basin at large scales.
Ground-water flow in low permeability environments
Neuzil, Christopher E.
1986-01-01
Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well-developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed.
Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning. Because these transient flow analyses have largely been based on flow in experimental scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments generally have neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra long lived transient conditions. 
The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.
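A rough order-of-magnitude calculation illustrates the time-scale limitation discussed above. With hypothetical but plausible values for a tight formation (not figures from this review), the characteristic time for a pressure transient to propagate 100 m far exceeds the span of any field experiment:

```python
# Hydraulic diffusivity D = K / Ss sets the characteristic time
# t ~ Ss * L**2 / K for a pressure transient to propagate a distance L.
K = 1e-13            # hydraulic conductivity, m/s (hypothetical tight argillite)
Ss = 1e-6            # specific storage, 1/m (hypothetical)
L = 100.0            # observation scale, m

t_seconds = Ss * L ** 2 / K
t_years = t_seconds / (365.25 * 24 * 3600.0)   # on the order of millennia
```

This is why large-scale flow behavior in such media must be inferred from models rather than observed directly.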
Scale invariance, conformality, and generalized free fields
Dymarsky, Anatoly; Farnsworth, Kara; Komargodski, Zohar; ...
2016-02-16
This paper addresses the question of whether there are 4D Lorentz invariant unitary quantum field theories with scale invariance but not conformal invariance. We present an important loophole in the arguments of Luty-Polchinski-Rattazzi and Dymarsky-Komargodski-Schwimmer-Theisen, namely that the trace of the energy-momentum tensor T could be a generalized free field. In this paper we rule out this possibility. The key ingredient is the observation that a unitary theory with scale but not conformal invariance necessarily has a non-vanishing anomaly for global scale transformations. We show that this anomaly cannot be reproduced if T is a generalized free field unless the theory also contains a dimension-2 scalar operator. In the special case where such an operator is present it can be used to redefine ("improve") the energy-momentum tensor, and we show that there is at least one energy-momentum tensor that is not a generalized free field. In addition, we emphasize that, in general, large momentum limits of correlation functions cannot be understood from the leading terms of the coordinate space OPE. This invalidates a recent argument by Farnsworth-Luty-Prilepina (FLP). Finally, despite the invalidity of the general argument of FLP, some of the techniques turn out to be useful in the present context.
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also to those in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
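A one-shot version of the projection idea (not the iterative, numerically converging scheme of the paper) can be sketched for a matrix with a pronounced dominant diagonal: keep only the low-energy block as the effective model and diagonalize it. All sizes and couplings below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 60
diag = np.sort(rng.uniform(0.0, 100.0, n))      # pronounced dominant diagonal
V = rng.normal(size=(n, n))
A = np.diag(diag) + 0.1 * (V + V.T) / 2.0       # weak symmetric off-diagonal couplings

# one-shot projection: retain the k lowest-energy degrees of freedom
A_eff = A[:k, :k]                               # effective model of smaller complexity
approx = np.linalg.eigvalsh(A_eff)[0]           # ground state of the effective model
exact = np.linalg.eigvalsh(A)[0]                # ground state of the full problem
```

By Cauchy interlacing the truncated estimate is a variational upper bound on the exact lowest eigenvalue, and for weak couplings it is already close; the paper's scheme goes further by correcting for the projected-out degrees of freedom.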
The Experience of Cognitive Intrusion of Pain: scale development and validation
Attridge, Nina; Crombez, Geert; Van Ryckeghem, Dimitri; Keogh, Edmund; Eccleston, Christopher
2015-01-01
Abstract Patients with chronic pain often report their cognition to be impaired by pain, and this observation has been supported by numerous studies measuring the effects of pain on cognitive task performance. Furthermore, cognitive intrusion by pain has been identified as one of 3 components of pain anxiety, alongside general distress and fear of pain. Although cognitive intrusion is a critical characteristic of pain, no specific measure designed to capture its effects exists. In 3 studies, we describe the initial development and validation of a new measure of pain interruption: the Experience of Cognitive Intrusion of Pain (ECIP) scale. In study 1, the ECIP scale was administered to a general population sample to assess its structure and construct validity. In study 2, the factor structure of the ECIP scale was confirmed in a large general population sample experiencing no pain, acute pain, or chronic pain. In study 3, we examined the predictive value of the ECIP scale in pain-related disability in fibromyalgia patients. The ECIP scale scores followed a normal distribution with good variance in a general population sample. The scale had high internal reliability and a clear 1-component structure. It differentiated between chronic pain and control groups, and it was a significant predictor of pain-related disability over and above pain intensity. Repairing attentional interruption from pain may become a novel target for pain management interventions, both pharmacologic and nonpharmacologic. PMID:26067388
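A standard internal-reliability check of the kind reported (high internal reliability, clear one-component structure) can be sketched with Cronbach's alpha on simulated item scores. The latent-trait model and all numbers below are hypothetical, not ECIP data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
n, k = 500, 10
trait = rng.normal(size=(n, 1))                      # latent 'cognitive intrusion' level
scores = trait + 0.8 * rng.normal(size=(n, k))       # k items: common factor plus item noise
alpha = cronbach_alpha(scores)                       # high alpha, since items share one factor
```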
Review of the outer scale of the atmospheric turbulence
NASA Astrophysics Data System (ADS)
Ziad, Aziz
2016-07-01
Outer scale is a relevant parameter for the experimental performance evaluation of large telescopes. Different techniques have been used for outer scale estimation. In situ measurements with radiosounding balloons have given very small values of outer scale. The latter has also been estimated directly at ground level from wavefront analysis with High Angular Resolution (HAR) techniques using interferometric, Shack-Hartmann, or, more generally, AO system data. Dedicated instruments have also been developed for outer scale monitoring, such as the Generalized Seeing Monitor (GSM) and the Monitor of Outer Scale Profile (MOSP). The measured values of outer scale from HAR techniques, GSM, and MOSP are broadly consistent with each other and are larger than the in situ results. The main explanation of this difference comes from the definition of the outer scale itself. This paper aims to give a non-exhaustive review of the different techniques and instruments for measuring the outer scale. Comparisons of outer scale measurements will be discussed in the light of the different definitions of this parameter, the associated observable quantities, and the underlying atmospheric turbulence model.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, their main limitation is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABMs and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. This parallel algorithm thus makes large-scale ABM simulation feasible with limited computational resources.
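The region-wise decomposition that makes an SRA-style approach parallelizable can be illustrated with a toy epidemic ABM in which transmission is purely within-region: each region can then be advanced independently, and, over shared pre-drawn randomness, yields exactly the same trajectory as a whole-population pass. All model settings below are hypothetical, not the paper's hepatitis C model:

```python
import numpy as np

n_agents, n_regions, beta, steps = 2000, 8, 0.6, 20
regions = np.arange(n_agents) % n_regions         # static region assignment
rng = np.random.default_rng(4)
init = (rng.random(n_agents) < 0.01).astype(int)  # ~1% initially infected
draws = rng.random((steps, n_agents))             # randomness shared by both runs

def run_global(states):
    """Whole-population pass: all regions advanced together, step by step."""
    states = states.copy()
    for t in range(steps):
        frac = np.array([states[regions == r].mean() for r in range(n_regions)])
        states[(states == 0) & (draws[t] < beta * frac[regions])] = 1
    return states

def run_by_region(states):
    """SRA-style pass: each region simulated to completion independently.
    Valid here because transmission is purely within-region, so the
    regions could be farmed out to parallel workers."""
    out = states.copy()
    for r in range(n_regions):
        idx = np.where(regions == r)[0]
        s = states[idx].copy()
        for t in range(steps):
            s[(s == 0) & (draws[t, idx] < beta * s.mean())] = 1
        out[idx] = s
    return out

final_global = run_global(init)
final_region = run_by_region(init)
```

The real SRA must also handle cross-region interactions, which is what the "sliding" of the region boundary addresses.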
Best Practices in the Evaluation of Large-scale STEM-focused Events: A Review of Recent Literature
NASA Astrophysics Data System (ADS)
Shebby, S.; Cobb, W. H.; Buxner, S.; Shipp, S. S.
2015-12-01
Each year, the National Aeronautics and Space Administration (NASA) sponsors a variety of educational events to share information with educators, students, and the general public. Intended outcomes of these events include increased interest in and awareness of the mission and goals of NASA. Events range in size from relatively small family science nights at a local school to large-scale mission and celestial event celebrations involving thousands of members of the general public. To support community members in designing event evaluations, the Science Mission Directorate (SMD) Planetary Science Forum sponsored the creation of a Best Practices Guide. The guide was generated by reviewing published large-scale event evaluation reports; however, the best practices described within are pertinent for all event organizers and evaluators regardless of event size. Each source included in the guide identified numerous challenges to conducting their event evaluation. These included difficulty in identifying extant instruments or items, collecting representative data, and disaggregating data to inform different evaluation questions. Overall, the guide demonstrates that evaluations of the large-scale events are generally done at a very basic level, with the types of data collected limited to observable demographic information and participant reactions collected via online survey. In addition to these findings, this presentation will describe evaluation best practices that will help practitioners move beyond these basic indicators and examine how to make the evaluation process an integral—and valuable—element of event planning, ultimately informing event outcomes and impacts. 
It will provide detailed information on five recommendations presented in the guide: 1) consider evaluation methodology, including data analysis, in advance; 2) design data collection instruments well in advance of the event; 3) collect data at different times and from multiple sources; 4) use technology to make the job easier; and 5) be aware of how challenging it is to measure impact.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction aim to minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
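The l1 sparse-deconvolution model can be sketched end to end. Note that the solver below is FISTA, a simple accelerated proximal-gradient stand-in, not the paper's PDIPM, and the impulse response, impact locations, and noise level are all hypothetical:

```python
import numpy as np

def fista_l1_deconv(H, y, lam, n_iter=800):
    """Minimize 0.5*||H f - y||^2 + lam*||f||_1 by accelerated proximal
    gradient (FISTA) -- a compact stand-in for the paper's PDIPM solver."""
    L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    z, t_k = f.copy(), 1.0
    for _ in range(n_iter):
        w = z - H.T @ (H @ z - y) / L
        f_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
        z = f_new + (t_k - 1.0) / t_new * (f_new - f)
        f, t_k = f_new, t_new
    return f

# synthetic example: two sparse impacts, hypothetical decaying impulse response
n = 200
ts = np.arange(n)
h = np.exp(-ts / 10.0) * np.cos(ts / 3.0)
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
f_true = np.zeros(n)
f_true[30], f_true[120] = 5.0, 3.0
rng = np.random.default_rng(5)
y = H @ f_true + 0.01 * rng.normal(size=n)        # measured response with noise

f_hat = fista_l1_deconv(H, y, lam=0.05)           # sparse reconstruction
```

An l2 (Tikhonov) solution of the same system would smear the force over many samples; the l1 penalty recovers the two isolated impacts.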
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to the wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal-distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
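The prevalence problem with Cohen's kappa mentioned above is easy to demonstrate: two rating scenarios with identical 90% raw agreement can yield a kappa of 0.8 or a negative kappa, depending only on the marginal distributions. The rating data below are constructed for illustration:

```python
import numpy as np

def cohens_kappa(r1, r2, n_cat=2):
    """Chance-corrected agreement between two raters' categorical labels."""
    conf = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    po = np.trace(conf)                           # observed agreement
    pe = conf.sum(axis=1) @ conf.sum(axis=0)      # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# balanced prevalence: 100 of 1000 disagreements -> 90% raw agreement
r1_bal = [0] * 500 + [1] * 500
r2_bal = [0] * 450 + [1] * 50 + [1] * 450 + [0] * 50

# skewed prevalence ('disease' rare): also exactly 100 disagreements
r1_skew = [0] * 950 + [1] * 50
r2_skew = [0] * 900 + [1] * 50 + [0] * 50

kappa_bal = cohens_kappa(r1_bal, r2_bal)      # 0.80
kappa_skew = cohens_kappa(r1_skew, r2_skew)   # negative, despite 90% agreement
```

Model-based measures of the kind the paper proposes are designed to avoid exactly this sensitivity to the margins.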
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1997-01-01
This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plane asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeroacoustics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. 
Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
Sleep Enhances a Spatially Mediated Generalization of Learned Values
ERIC Educational Resources Information Center
Javadi, Amir-Homayoun; Tolat, Anisha; Spiers, Hugo J.
2015-01-01
Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered…
Squire, J.; Bhattacharjee, A.
2016-03-14
A novel large-scale dynamo mechanism, the magnetic shear-current effect, is discussed and explored. The effect relies on the interaction of magnetic fluctuations with a mean shear flow, meaning the saturated state of the small-scale dynamo can drive a large-scale dynamo – in some sense the inverse of dynamo quenching. The dynamo is non-helical, with the mean-field α coefficient zero, and is caused by the interaction between an off-diagonal component of the turbulent resistivity and the stretching of the large-scale field by the shear flow. Following up on previous numerical and analytic work, this paper presents further details of the numerical evidence for the effect, as well as a heuristic description of how magnetic fluctuations can interact with shear flow to produce the required electromotive force. The pressure response of the fluid is fundamental to this mechanism, which helps explain why the magnetic effect is stronger than its kinematic cousin, and the basic idea is related to the well-known lack of turbulent resistivity quenching by magnetic fluctuations. As well as being interesting for its applications to general high-Reynolds-number astrophysical turbulence, where strong small-scale magnetic fluctuations are expected to be prevalent, the magnetic shear-current effect is a likely candidate for large-scale dynamo in the unstratified regions of ionized accretion disks. Evidence for this is discussed, as well as future research directions and the challenges involved with understanding details of the effect in astrophysically relevant regimes.
ERIC Educational Resources Information Center
Lowe, Geoffrey M.
2018-01-01
Competition is reported in the general education literature as having a largely detrimental impact upon student engagement and long-term motivation, yet competition has long been an accepted part of the music education ensemble landscape. Adjudicated ensemble competitions and competition-festivals are commonplace in most Australian states, as…
Environmental Studies: Mathematical, Computational and Statistical Analyses
1993-03-03
mathematical analysis addresses the seasonally and longitudinally averaged circulation which is under the influence of a steady forcing located asymmetrically ... employed, as has been suggested for some situations. A general discussion of how interfacial phenomena influence both the original contamination process ... describing the large-scale advective and dispersive behaviour of contaminants transported by groundwater and the uncertainty associated with field-scale
No-scale ripple inflation revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tianjun; Li, Zhijin; Nanopoulos, Dimitri V., E-mail: tli@itp.ac.cn, E-mail: lizhijin@physics.tamu.edu, E-mail: dimitri@physics.tamu.edu
We revisit the no-scale ripple inflation model, where no-scale supergravity is modified by an additional term for the inflaton field in the Kähler potential. This term not only breaks one SU(N,1) symmetry explicitly, but also plays an important role in inflation. We generalize the superpotential in the no-scale ripple inflation model slightly. There exists a discrete Z_2 symmetry/parity in the scalar potential in general, which can be preserved or violated by the non-canonically normalized inflaton kinetic term. Thus, there are three inflation paths: one parity-invariant path, and the left and right paths for the parity-violating scenario. We show that the inflations along the parity-invariant path and the right path are consistent with the Planck results. However, the gravitino mass for the parity-invariant path is so large that the inflation results will be invalid if we consider the inflaton supersymmetry-breaking soft mass term. Thus, only the inflation along the right path gives correct and consistent results. Notably, the tensor-to-scalar ratio in this case can be large, with a value around 0.05, which may be probed by the future Planck experiment.
ERIC Educational Resources Information Center
Levin, Ben
2013-01-01
This brief discusses the problem of scaling innovations in education in the United States so that they can serve very large numbers of students. It begins with a general discussion of the issues involved, develops a set of five criteria for assessing challenges of scaling, and then uses three programs widely discussed in the U.S. as examples of…
NASA Astrophysics Data System (ADS)
Konno, Yohko; Suzuki, Keiji
This paper describes an approach to developing a general-purpose solution algorithm for large-scale job-shop scheduling problems (JSP) using "Local Clustering Organization (LCO)" as a new solution method. Building on earlier work in which LCO performed effectively on large-scale scheduling, we examine how to solve JSP while maintaining the stability that leads to better solutions. To improve solution performance for JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering into this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently through clustering with a parallel search that extends across multiple machines. Results on problems of various scales verify that the extended LCO minimizes make-span and improves stable performance.
Large-scale functional models of visual cortex for remote sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.
Elvish, Ruth; Burrow, Simon; Cawley, Rosanne; Harney, Kathryn; Pilling, Mark; Gregory, Julie; Keady, John
2018-01-01
Objectives The aims were to evaluate a second phase roll-out of a dementia care training programme for general hospital staff and to further develop two outcome scales: the Confidence in Dementia scale for measuring confidence in working with people with dementia and the Knowledge in Dementia scale for measuring knowledge in dementia. Method Following a 'training the trainers' phase, the study involved the delivery of the 'Getting to Know Me' training programme to a large number of staff (n = 517) across three National Health Service (NHS) Trusts situated in North-West England. The impact of the programme was evaluated using a pre-post design which explored: (i) changes in confidence in dementia, (ii) changes in knowledge in dementia, and (iii) changes in beliefs about behaviours that challenge. Results Statistically significant change was identified between pre-post training on all outcome measures (Confidence in Dementia: eight point increase, p < 0.001; Knowledge in Dementia: two point increase p < 0.001; controllability beliefs scale: four point decrease, p < 0.001). Medium to large effect sizes were demonstrated on all outcome measures. The psychometric properties of the Confidence in Dementia and Knowledge in Dementia scales are reported. Conclusion Staff knowledge in dementia and confidence in working with people with dementia significantly increased following attendance at the training sessions. The findings are consistent with preliminary findings and strengthen current knowledge about the impact of dementia care training in general hospitals. The Confidence in Dementia and Knowledge in Dementia scales continue to demonstrate psychometrically sound properties and demonstrate utility in the field of dementia research.
Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D
2018-01-01
Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa.
Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful to detect changes of marine benthic communities across multiple spatial and temporal scales.
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Multiscale modeling and general theory of non-equilibrium plasma-assisted ignition and combustion
NASA Astrophysics Data System (ADS)
Yang, Suo; Nagaraja, Sharath; Sun, Wenting; Yang, Vigor
2017-11-01
A self-consistent framework for modeling and simulation of plasma-assisted ignition and combustion is established. In this framework, a ‘frozen electric field’ modeling approach is applied to take advantage of the quasi-periodic behavior of the electrical characteristics, avoiding re-calculation of the electric field for each pulse. The correlated dynamic adaptive chemistry (CO-DAC) method is employed to accelerate the calculation of large and stiff chemical mechanisms. The time step is dynamically updated during the simulation through a three-stage multi-time-scale modeling strategy, which exploits the large separation of time scales in nanosecond pulsed plasma discharges. A general theory of plasma-assisted ignition and combustion is then proposed. Nanosecond pulsed plasma discharges for ignition and combustion can be divided into four stages. Stage I is the discharge pulse, with time scales of O(1-10 ns). In this stage, input energy is coupled into electron impact excitation and dissociation reactions to generate charged/excited species and radicals. Stage II is the afterglow during the gap between two adjacent pulses, with time scales of O(100 ns). In this stage, quenching of excited species dissociates O2 and fuel molecules, and provides fast gas heating. Stage III is the remaining gap between pulses, with time scales of O(1-100 µs). The radicals generated during Stages I and II significantly enhance exothermic reactions in this stage. The cumulative effects of multiple pulses are seen in Stage IV, with time scales of O(1-1000 ms); these include preheated gas temperatures and a large pool of radicals and fuel fragments that trigger ignition. For flames, plasma could significantly enhance radical generation and gas heating in the pre-heat zone, thereby enhancing flame establishment.
Gravity Waves in the Atmosphere: Instability, Saturation, and Transport.
1995-11-13
role of gravity wave drag in the extratropical QBO, destabilization of large-scale tropical waves by deep moist convection, and a general theory of equatorial inertial instability on a zonally nonuniform, nonparallel flow.
Terluin, Berend; Smits, Niels; Brouwers, Evelien P M; de Vet, Henrica C W
2016-09-15
The Four-Dimensional Symptom Questionnaire (4DSQ) is a self-report questionnaire measuring distress, depression, anxiety and somatization with separate scales. The 4DSQ has extensively been validated in clinical samples, especially from primary care settings. Information about measurement properties and normative data in the general population was lacking. In a Dutch general population sample we examined the 4DSQ scales' structure, the scales' reliability and measurement invariance with respect to gender, age and education, the scales' score distributions across demographic categories, and normative data. 4DSQ data were collected in a representative Dutch Internet panel. Confirmatory factor analysis was used to examine the scales' structure. Reliability was examined by Cronbach's alpha, and coefficients omega-total and omega-hierarchical. Differential item functioning (DIF) analysis was used to evaluate measurement invariance across gender, age and education. The total response rate was 82.4 % (n = 5273/6399). The depression scale proved to be unidimensional. The other scales were best represented as bifactor models consisting of a large general factor and one or more smaller specific factors. The general factors accounted for more than 95 % of the reliable variance of the scales. Reliability was high (≥0.85) by all estimates. The distress-, depression- and anxiety scales were invariant across gender, age and education. The somatization scale demonstrated some lack of measurement invariance as a result of decreased thresholds for some of the items in young people (16-24 years) and increased thresholds in elderly people (65+ years). The somatization scale was invariant regarding gender and education. The 4DSQ scores varied significantly across demographic categories, but the explained variance was small (<6 %). Normative data were generated for gender and age categories. 
Approximately 17 % of the participants scored above average on the distress scale, whereas 12 % scored above average on the somatization scale. Percentages of people scoring high enough on depression or anxiety to suspect the presence of depressive or anxiety disorder were 4.1 % and 2.5 %, respectively. Evidence supports the reliability and measurement invariance of the 4DSQ in the general Dutch population. The normative data provided in this study can be used to compare a subject's 4DSQ scores with a general population reference group.
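Of the reliability estimates named above, Cronbach's alpha is the simplest to compute; the sketch below uses synthetic item scores driven by a single latent trait, not 4DSQ data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

# Synthetic four-item scale: each item = latent trait + item-specific noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal(500)
items = latent[:, None] + 0.5 * rng.standard_normal((500, 4))
print(round(cronbach_alpha(items), 3))
```

With an inter-item correlation of 0.8, the Spearman-Brown logic puts alpha near 0.94 for four items, which the synthetic data reproduce.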
Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR
NASA Astrophysics Data System (ADS)
Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.
2017-12-01
Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
Non-Gaussian Nature of Fracture and the Survival of Fat-Tail Exponents
NASA Astrophysics Data System (ADS)
Tallakstad, Ken Tore; Toussaint, Renaud; Santucci, Stephane; Måløy, Knut Jørgen
2013-04-01
We study the fluctuations of the global velocity Vl(t), computed at various length scales l, during the intermittent mode-I propagation of a crack front. The statistics converge to a non-Gaussian distribution, with an asymmetric shape and a fat tail. This breakdown of the central limit theorem (CLT) is due to the diverging variance of the underlying local crack front velocity distribution, displaying a power law tail. Indeed, by the application of a generalized CLT, the full shape of our experimental velocity distribution at large scale is shown to follow the stable Levy distribution, which preserves the power law tail exponent under upscaling. This study aims to demonstrate in general for crackling noise systems how one can infer the complete scale dependence of the activity—and extreme event distributions—by measuring only at a global scale.
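The failure of the classical CLT for diverging-variance fluctuations can be illustrated numerically; the power-law variables below are illustrative (tail exponent 1.5, inside the Levy-stable regime), not the experimental crack-front data.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_means(x, block):
    """Average x over consecutive non-overlapping blocks of the given size."""
    return x[: (len(x) // block) * block].reshape(-1, block).mean(axis=1)

def excess_kurtosis(x):
    """Sample excess kurtosis; ~0 for Gaussian data, large for fat tails."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

n = 2_000_000
signs = rng.choice([-1.0, 1.0], size=n)
heavy = signs * (1.0 + rng.pareto(1.5, size=n))   # tail exponent 1.5 < 2: infinite variance
gauss = rng.standard_normal(n)                    # finite-variance control

for block in (10, 100, 1000):
    print(block,
          excess_kurtosis(block_means(heavy, block)),
          excess_kurtosis(block_means(gauss, block)))
```

Block means of the Gaussian control settle toward Gaussian statistics (excess kurtosis near zero), while the heavy-tailed block means retain a fat tail at every aggregation scale, as the generalized CLT predicts.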
General relativistic screening in cosmological simulations
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Paranjape, Aseem
2016-10-01
We revisit the issue of interpreting the results of large-volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
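The quoted gauge transformation acts mode-by-mode in Fourier space, so it can be applied to a gridded density field with a pair of FFTs. The sketch below assumes a periodic cubic grid; the function name and interface are hypothetical.

```python
import numpy as np

def to_newtonian_gauge(delta_sim, box_size, ell):
    """Apply delta_N(k) = delta_sim(k) * (k^2 + 1/ell^2) / k^2 on a periodic grid.

    delta_sim : density contrast on an n^3 grid (comoving synchronous gauge)
    box_size  : comoving box side, in the same length units as ell
    ell       : relativistic screening scale (of order the Hubble radius)
    """
    n = delta_sim.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        kernel = (k2 + ell**-2) / k2
    kernel[0, 0, 0] = 1.0        # leave the k = 0 (mean density) mode unchanged
    return np.real(np.fft.ifftn(np.fft.fftn(delta_sim) * kernel))

# Example: a 32^3 grid carrying a single long-wavelength mode.
n, box, ell = 32, 1.0, 2.0
x = np.arange(n) / n
delta = np.cos(2 * np.pi * x)[:, None, None] + np.zeros((n, n, n))
delta_n = to_newtonian_gauge(delta, box, ell)
```

On sub-horizon scales (k much larger than 1/ℓ) the kernel tends to unity and the Newtonian result is recovered; only modes near and above the screening scale are modified.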
Weyl current, scale-invariant inflation, and Planck scale generation
Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.
2017-02-08
Scalar fields, φ_i, can be coupled nonminimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including M_P = 0; (ii) the φ_i have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, K(φ_i) = constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale-invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant; (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. Finally, these models are governed by a global Weyl scale symmetry and its conserved current, K_μ. At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renormalized quantities.
A Large Scale Dynamical System Immune Network Model with Finite Connectivity
NASA Astrophysics Data System (ADS)
Uezu, T.; Kadono, C.; Hatchett, J.; Coolen, A. C. C.
We study a model of an idiotypic immune network which was introduced by N. K. Jerne. It is known that in immune systems there generally exist several kinds of immune cells which can recognize any particular antigen. Taking this fact into account and assuming that each cell interacts with only a finite number of other cells, we analyze a large scale immune network via both numerical simulations and statistical mechanical methods, and show that the distribution of the concentrations of antibodies becomes non-trivial for a range of values of the strength of the interaction and the connectivity.
Simulation research on the process of large scale ship plane segmentation intelligent workshop
NASA Astrophysics Data System (ADS)
Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei
2017-04-01
The large-scale ship plane-segmentation intelligent workshop is new, and there has been no related research in the field either domestically or abroad. The mode of production should be transformed from the existing Industry 2.0, or partial Industry 3.0, model: from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great many tasks must be determined in terms of management and technology, such as workshop structure evolution, development of intelligent equipment, and changes in the business model. Along with these comes the reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane-section intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation of the plane-segmentation intelligent workshop.
NASA Technical Reports Server (NTRS)
Fu, L.-L.; Chelton, D. B.
1985-01-01
A new method is developed for studying large-scale temporal variability of ocean currents from satellite altimetric sea level measurements at intersections (crossovers) of ascending and descending orbit ground tracks. Using this method, sea level time series can be constructed from crossover sea level differences in small sample areas where altimetric crossovers are clustered. The method is applied to Seasat altimeter data to study the temporal evolution of the Antarctic Circumpolar Current (ACC) over the 3-month Seasat mission (July-October 1978). The results reveal a generally eastward acceleration of the ACC around the Southern Ocean with meridional disturbances which appear to be associated with bottom topographic features. This is the first direct observational evidence for large-scale coherence in the temporal variability of the ACC. It demonstrates the great potential of satellite altimetry for synoptic observation of temporal variability of the world ocean circulation.
A study of rotor and platform design trade-offs for large-scale floating vertical axis wind turbines
NASA Astrophysics Data System (ADS)
Griffith, D. Todd; Paquette, Joshua; Barone, Matthew; Goupee, Andrew J.; Fowler, Matthew J.; Bull, Diana; Owens, Brian
2016-09-01
Vertical axis wind turbines are receiving significant attention for offshore siting. In general, offshore wind offers proximity to large population centers, a vast and more consistent wind resource, and a scale-up opportunity, to name a few beneficial characteristics. On the other hand, offshore wind suffers from a high levelized cost of energy (LCOE), and in particular high balance-of-system (BoS) costs, owing to accessibility challenges and limited project experience. To address these challenges, Sandia National Laboratories is researching large-scale (MW-class) offshore floating vertical axis wind turbines (VAWTs). The motivation for this work is that floating VAWTs are a potentially transformative technology for reducing offshore wind LCOE in deep-water locations. This paper explores performance and cost trade-offs within the design space for floating VAWTs, between configurations for the rotor and the platform.
Divergence of perturbation theory in large scale structures
NASA Astrophysics Data System (ADS)
Pajer, Enrico; van der Woude, Drian
2018-05-01
We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large-scale distribution of galaxies have been calculated with the method given by Wen et al.[1] Samples are taken from the CfA redshift survey in the northern and southern galactic hemispheres,[2] and the results from the two regions are compared with each other. There are significant differences between the distributions in the two regions; however, our analyses do show some common features. All subsamples distinctly show multi-level fractal character. Combining this with results from samples of IRAS galaxies and from redshift surveys in pencil-beam fields,[3,4] we suggest that multi-level fractal structure is most likely a general and important character of the large-scale distribution of galaxies. The possible implications of this character are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tessore, Nicolas; Metcalf, R. Benton; Winther, Hans A.
A number of alternatives to general relativity exhibit gravitational screening in the non-linear regime of structure formation. We describe a set of algorithms that can produce weak lensing maps of large scale structure in such theories and can be used to generate mock surveys for cosmological analysis. By analysing a few basic statistics we indicate how these alternatives can be distinguished from general relativity with future weak lensing surveys.
Grain Size and Parameter Recovery with TIMSS and the General Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Wilkins, Jesse L. M.; Hein, Serge F.
2016-01-01
The purpose of this study was to explore the degree of grain size of the attributes and the sample sizes that can support accurate parameter recovery with the General Diagnostic Model (GDM) for a large-scale international assessment. In this resampling study, bootstrap samples were obtained from the 2003 Grade 8 TIMSS in Mathematics at varying…
Considering the Use of General and Modified Assessment Items in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wyse, Adam E.; Albano, Anthony D.
2015-01-01
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom
ERIC Educational Resources Information Center
Clark, Ted M.; Chamberlain, Julia M.
2014-01-01
An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…
ERIC Educational Resources Information Center
Bhaumik, S.; Watson, J. M.; Thorp, C. F.; Tyrer, F.; McGrother, C. W.
2008-01-01
Background: Previous studies of weight problems in adults with intellectual disability (ID) have generally been small or selective and given conflicting results. The objectives of our large-scale study were to identify inequalities in weight problems between adults with ID and the general adult population, and to investigate factors associated…
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Scaling up digital circuit computation with DNA strand displacement cascades.
Qian, Lulu; Winfree, Erik
2011-06-03
To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
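The scaling exponents β discussed above are, in essence, log-log slopes of spectral power against wavenumber. A minimal sketch of such a power-law fit on synthetic data (not the AIRS variable-sized circular-area Monte Carlo methodology):

```python
import numpy as np

# Minimal sketch: estimate a power-law scaling exponent beta by fitting
# log(P) against log(k) for a spectrum P(k) ~ k**beta over a chosen
# wavenumber range.

def scaling_exponent(k, power):
    slope, _ = np.polyfit(np.log(k), np.log(power), 1)
    return slope

k = np.linspace(1.0 / 500.0, 1.0 / 55.0, 50)   # scales of roughly 55-500 km
power = 3.0 * k ** -2.0                         # synthetic beta = -2 spectrum
print(round(scaling_exponent(k, power), 6))     # -2.0
```

On real retrievals the slope varies with the fitting window, which is why the abstract reports different β above and below the ~500 km break.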
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
Biasing and the search for primordial non-Gaussianity beyond the local type
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleyzes, Jérôme; De Putter, Roland; Doré, Olivier
Primordial non-Gaussianity encodes valuable information about the physics of inflation, including the spectrum of particles and interactions. Significant improvements in our understanding of non-Gaussianity beyond Planck require information from large-scale structure. The most promising approach to utilize this information comes from the scale-dependent bias of halos. For local non-Gaussianity, the improvements available are well studied, but the potential for non-Gaussianity beyond the local type, including equilateral and quasi-single field inflation, is much less well understood. In this paper, we forecast the capabilities of large-scale structure surveys to detect general non-Gaussianity through galaxy/halo power spectra. We study how non-Gaussianity can be distinguished from a general biasing model and where the information is encoded. For quasi-single field inflation, significant improvements over Planck are possible in some regions of parameter space. We also show that the multi-tracer technique can significantly improve the sensitivity for all non-Gaussianity types, providing up to an order of magnitude improvement for equilateral non-Gaussianity over the single-tracer measurement.
NASA Technical Reports Server (NTRS)
Daiello, R. V.
1977-01-01
A general technology assessment and manufacturing cost analysis are presented. A near-term (1982) factory design is described, and the results of an experimental production study for the large-scale production of flat-panel silicon solar-cell arrays are detailed.
Global Carbon Dioxide Transport from AIRS Data, July 2009
2009-11-09
Created with data acquired by JPL's Atmospheric Infrared Sounder instrument during July 2009, this image shows large-scale patterns of carbon dioxide concentrations that are transported around Earth by the general circulation of the atmosphere.
A networked voting rule for democratic representation
NASA Astrophysics Data System (ADS)
Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir
2018-03-01
We introduce a general framework for exploring the problem of selecting a committee of representatives, with the aim of studying a networked voting rule based on a decentralized large-scale platform that can ensure strong accountability of those elected. The results of our simulations suggest that this algorithm-based approach is able to achieve high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law, and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, which can have a marginal negative effect on the committee selection process.
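As a toy illustration of the reported inverse square root law, one can invert a hypothetical representativeness curve R(m) = 1 − c/√m to size a committee. The functional form and the constant c are assumptions for illustration, not values taken from the paper:

```python
import math

# Illustrative sketch only: assuming representativeness approaches 1 as an
# inverse square root of committee size m, R(m) = 1 - c / sqrt(m), the
# committee size needed to reach a target R is m = (c / (1 - R))**2.

def committee_size(target_r, c=1.0):
    """Approximate committee size reaching representativeness target_r."""
    return round((c / (1.0 - target_r)) ** 2)

print(committee_size(0.90))   # 100
print(committee_size(0.99))   # 10000
```

The quadratic blow-up as R approaches 1 is the flip side of the square-root law: small committees already do well, but the last few percent of representativeness are expensive.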
A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet
ERIC Educational Resources Information Center
Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.
2006-01-01
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…
Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks
Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.
2010-01-01
Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over clustering schemes based on the k-means algorithm.
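For context, the k-means baseline the study compares against can be sketched in a few lines: cluster node positions with a few Lloyd iterations, then take as cluster head the node nearest each centroid. This is an illustrative sketch of that baseline, not the paper's Distance-based Crowdedness Clustering algorithm:

```python
import numpy as np

# k-means baseline for cluster head selection: run a few Lloyd iterations on
# sensor positions, then pick as head the node closest to each centroid, so
# heads sit near the center of mass of their clusters.

def kmeans_cluster_heads(positions, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        # Assign each node to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(positions[:, None] - centroids[None], axis=2),
            axis=1)
        # Move each centroid to the mean of its assigned nodes.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = positions[labels == c].mean(axis=0)
    heads = [int(np.argmin(np.linalg.norm(positions - c, axis=1)))
             for c in centroids]
    return labels, heads

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 100.0, size=(60, 2))   # 60 nodes in a 100 m field
labels, heads = kmeans_cluster_heads(nodes, k=4)
print(len(heads))  # 4
```

Heads near cluster centroids minimize intra-cluster transmission distance, which is the intuition both the baseline and the proposed algorithm build on.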
NASA Astrophysics Data System (ADS)
Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly
2010-05-01
Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned, and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of using large-scale (1°) data to define fire weather danger and fire regimes, so that large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be confidently used to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires, and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. Requirements of the FWI are local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-to-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally, and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products.
Relationships have been established between large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.
On the relationship between water vapor over the oceans and sea surface temperature
NASA Technical Reports Server (NTRS)
Stephens, Graeme L.
1990-01-01
Monthly mean precipitable water data obtained from passive microwave radiometry were correlated with the National Meteorological Center (NMC) blended sea surface temperature data. It is shown that the monthly mean water vapor content of the atmosphere above the oceans can generally be prescribed from the sea surface temperature with a standard deviation of 0.36 g/sq cm. The form of the relationship between precipitable water and sea surface temperature in the range T (sub s) greater than 18 C also resembles that predicted from simple arguments based on the Clausius-Clapeyron relationship. The annual cycle of the globally integrated mass of Scanning Multichannel Microwave Radiometer (SMMR) water vapor is shown to differ from analyses of other water vapor data in both phase and amplitude and these differences point to a significant influence of the continents on water vapor. Regional scale analyses of water vapor demonstrate that monthly averaged water vapor data, when contrasted with the bulk sea surface temperature relationship developed in this study, reflect various known characteristics of the time mean large-scale circulation over the oceans. A water vapor parameter is introduced to highlight the effects of large-scale motion on atmospheric water vapor. Based on the magnitude of this parameter, it is shown that the effects of large-scale flow on precipitable water vapor are regionally dependent, but for the most part, the influence of circulation is generally less than about + or - 20 percent of the seasonal mean.
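The "simple arguments based on the Clausius-Clapeyron relationship" can be made concrete: saturation vapor pressure grows with temperature at the rate d(ln e_s)/dT = L/(R_v T²), roughly 6% per kelvin at typical sea surface temperatures, which is why precipitable water tracks SST so tightly. A back-of-envelope check with standard constants (an illustration, not the paper's regression):

```python
# Clausius-Clapeyron fractional growth rate of saturation vapor pressure:
# d(ln e_s)/dT = L / (R_v * T**2), about 6% per kelvin near typical SSTs.

L_V = 2.5e6        # latent heat of vaporization, J/kg
R_V = 461.5        # gas constant for water vapor, J/(kg K)

def cc_rate(t_kelvin):
    """Fractional increase of saturation vapor pressure per kelvin."""
    return L_V / (R_V * t_kelvin ** 2)

for t_c in (18.0, 25.0, 30.0):
    print(f"{t_c:.0f} C: {100 * cc_rate(t_c + 273.15):.1f} %/K")
```

The rate declines slowly with temperature, so over the T_s > 18 C range studied, an approximately exponential precipitable-water-SST relation is expected.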
The use of Merging and Aggregation Operators for MRDB Data Feeding
NASA Astrophysics Data System (ADS)
Kozioł, Krystian; Lupa, Michał
2013-12-01
This paper presents the application of two generalization operators, merging and displacement, in the process of automatically feeding a multiresolution database of topographic objects from large-scale databases (1:500-1:5000). An ordered collection of objects forms a layer that, in the process of generalization, is subjected to merging and displacement in order to maintain recognizability at the reduced scale of the map. The solution to the above problem is the algorithms described in this work; these algorithms use the standard recognition of drawings (Chrobak 2010), independent of the user. A digital cartographic generalization process is a set of consecutive operators, among which merging and aggregation play a key role; their proper operation has a significant impact on the qualitative assessment of data generalization.
A study on large-scale nudging effects in regional climate model simulation
NASA Astrophysics Data System (ADS)
Yhang, Yoo-Bin; Hong, Song-You
2011-05-01
The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data is used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, that is, the scale selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated to demonstrate the impact of SSBC on precipitation in detail. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral without a discernible advantage. Although errors in large-scale circulation for both 2000 and 2004 are reduced by using the SSBC method, the impact on simulated precipitation is found to be negative in 2000 and positive in 2004 summers. One possible reason for a different effect is that precipitation in the summer of 2004 is characterized by a strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in modeled atmosphere.
Zeng, Qingzhi; Wang, Wei Chun; Fang, Yiru; Mellor, David; Mccabe, Marita; Byrne, Linda; Zuo, Sai; Xu, Yifeng
2016-07-30
Relying on the absence, presence, or level of symptomatology may not provide an adequate indication of the effects of treatment for depression, nor sufficient information for the development of treatment plans that meet patients' needs. Using a prospective, multi-centered, observational design, the present study surveyed a large sample of outpatients with depression in China (n=9855). The 17-item Hamilton Rating Scale for Depression (HRSD-17) and the Remission Evaluation and Mood Inventory Tool (REMIT) were administered at baseline and at two and four weeks, to assess patients' self-reported symptoms and general sense of mental health and wellbeing. Of the 9855 outpatients, 91.3% were diagnosed as experiencing moderate to severe depression. Patients reported significant improvement over time on both depressive symptoms and general sense after 4 weeks of treatment. The effect sizes of change in general sense were lower than those in symptoms at both the two-week and four-week follow-ups. Treatment effects on both general sense and depressive symptomatology were associated with demographic and clinical factors. The findings indicate that a focus on general sense of mental health and wellbeing, in addition to depressive symptomatology, will provide clinicians, researchers, and patients themselves with a broader perspective on the status of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás
2018-05-01
In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p⃗_i| and for arbitrary time differences t_i in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small t_i and in the limit t_i → ∞. At small t_i, the leading contribution at large wave-numbers is logarithmically equivalent to −α(ɛL)^{2/3} |∑_i t_i p⃗_i|², where α is a non-universal constant, L is the integral scale, and ɛ is the mean energy injection rate. For the 2-point function, the (tp)² dependence is known to originate from the sweeping effect; the derived formula embodies the generalization of the sweeping effect to n-point correlation functions. At large wave-numbers and large t_i, we show that the t_i² dependence in the leading-order contribution crosses over to a |t_i| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.
NASA Astrophysics Data System (ADS)
Kovanen, Dori J.; Slaymaker, Olav
2008-07-01
Active debris flow fans in the North Cascade Foothills of Washington State constitute a natural hazard of importance to land managers, private property owners and personal security. In the absence of measurements of the sediment fluxes involved in debris flow events, a morphological-evolutionary systems approach, emphasizing stratigraphy, dating, fan morphology and debris flow basin morphometry, was used. Using the stratigraphic framework and 47 radiocarbon dates, frequency of occurrence and relative magnitudes of debris flow events have been estimated for three spatial scales of debris flow systems: the within-fan site scale (84 observations); the fan meso-scale (six observations) and the lumped fan, regional or macro-scale (one fan average and adjacent lake sediments). In order to characterize the morphometric framework, plots of basin area v. fan area, basin area v. fan gradient and the Melton ruggedness number v. fan gradient for the 12 debris flow basins were compared with those documented for semi-arid and paraglacial fans. Basin area to fan area ratios were generally consistent with the estimated level of debris flow activity during the Holocene as reported below. Terrain analysis of three of the most active debris flow basins revealed the variety of modes of slope failure and sediment production in the region. Micro-scale debris flow event systems indicated a range of recurrence intervals for large debris flows from 106-3645 years. The spatial variation of these rates across the fans was generally consistent with previously mapped hazard zones. At the fan meso-scale, the range of recurrence intervals for large debris flows was 273-1566 years and at the regional scale, the estimated recurrence interval of large debris flows was 874 years (with undetermined error bands) during the past 7290 years. 
Dated lake sediments from the adjacent Lake Whatcom gave recurrence intervals for large sediment producing events ranging from 481-557 years over the past 3900 years and clearly discernible sedimentation events in the lacustrine sediments had a recurrence interval of 67-78 years over that same period.
Conservation laws in the quantum Hall Liouvillian theory and its generalizations
NASA Astrophysics Data System (ADS)
Moore, Joel
2003-03-01
It is known that the localization length scaling of noninteracting electrons near the quantum Hall plateau transition can be described in a theory of the bosonic density operators, with no reference to the underlying fermions. The resulting ``Liouvillian'' theory has a U(1|1) global supersymmetry as well as a hierarchy of geometric conservation laws related to the noncommutative geometry of the lowest Landau level (LLL). Mean-field and large-N generalizations of the Liouvillian are shown to describe problems of noninteracting bosons (without any obvious pathologies, contrary to recent claims) that enlarge the U(1|1) supersymmetry to U(1|1) × SO(N) or U(1|1) × SU(N). The N>1 generalizations preserve the first two of the hierarchy of geometric conservation laws, leading to logarithmic corrections at order 1/N to the diffusive large-N limit, but do not preserve the remaining conservation laws. The emergence of nontrivial scaling at the plateau transition, in the Liouvillian approach, is shown to depend sensitively on the unusual geometry of Landau levels.
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
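The constraints listed in this abstract can be made concrete with a small sketch of the naïve rejection-sampling approach the authors compare against. All thresholds, the barcode length, and the blacklisted subsequence here are illustrative assumptions, not the paper's actual parameters:

```python
import itertools
import random

def gc_content(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq: str) -> int:
    """Length of the longest run of identical consecutive bases."""
    return max(len(list(g)) for _, g in itertools.groupby(seq))

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def accept(candidate: str, library: list, min_dist: int = 3) -> bool:
    """Apply the constraint types named in the abstract (illustrative
    thresholds): 40-60% GC, homopolymer runs of at most 3, no blacklisted
    subsequence, and Hamming distance >= min_dist to every accepted barcode."""
    if not 0.4 <= gc_content(candidate) <= 0.6:
        return False
    if max_homopolymer(candidate) > 3:
        return False
    if "GAATTC" in candidate:  # e.g. a blacklisted restriction site
        return False
    return all(hamming(candidate, b) >= min_dist for b in library)

random.seed(0)
library = []
while len(library) < 100:
    cand = "".join(random.choice("ACGT") for _ in range(12))
    if accept(cand, library):
        library.append(cand)
print(len(library))  # 100
```

Note the all-pairs Hamming check makes this approach quadratic in library size, which is exactly why generating a billion barcodes requires a faster framework than the one sketched here.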
Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations
NASA Technical Reports Server (NTRS)
Sorensen, Danny C.
1996-01-01
Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
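The Arnoldi process at the core of the method can be sketched in a few lines. This is the basic unrestarted iteration that the Implicitly Restarted Arnoldi Method repeatedly compresses and restarts; the test matrix is an arbitrary illustration:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Basic (unrestarted) Arnoldi: build an orthonormal basis V of the
    Krylov subspace span{v0, A v0, ..., A^(m-1) v0} and the m x m upper
    Hessenberg matrix H = V* A V, whose eigenvalues (Ritz values)
    approximate extremal eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 1e-3 * rng.standard_normal((n, n))
V, H = arnoldi(A, rng.standard_normal(n), 60)
ritz = np.sort(np.linalg.eigvals(H).real)
print(ritz[-1])  # close to 200, the dominant eigenvalue
```

In practice one would call an ARPACK-based routine such as scipy.sparse.linalg.eigs, which implements Sorensen's implicitly restarted Arnoldi method rather than this naive full iteration.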
Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration
Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.
2014-01-01
Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having 1720 cm^2 surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10^10 TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103
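The headline numbers in this abstract (2000-fold concentration, >97% recovery, >10^10 TU/mL) are related by simple volume and titer bookkeeping. A sketch with hypothetical input values of the same order of magnitude as the report's, not the actual experimental data:

```python
def fold_concentration(v_in_ml: float, v_out_ml: float) -> float:
    """Volumetric fold-concentration achieved by the TFF steps."""
    return v_in_ml / v_out_ml

def output_titer(titer_in: float, v_in_ml: float,
                 v_out_ml: float, recovery: float) -> float:
    """Final titer after concentrating v_in_ml down to v_out_ml while
    retaining the given fraction of transducing units (TU)."""
    total_tu = titer_in * v_in_ml
    return recovery * total_tu / v_out_ml

# Hypothetical run: 2 L of supernatant at 5e6 TU/mL concentrated
# to 0.9 mL with 97% recovery.
print(fold_concentration(2000.0, 0.9))       # ~2222-fold
print(output_titer(5e6, 2000.0, 0.9, 0.97))  # ~1.1e10 TU/mL
```

The point of the bookkeeping is that a modest starting titer, if the recovery stays near unity, is sufficient to reach the >10^10 TU/mL range reported.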
Organic field effect transistor with ultra high amplification
NASA Astrophysics Data System (ADS)
Torricelli, Fabrizio
2016-09-01
High-gain transistors are essential for large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils enabling unipolar amplifiers with ultra-high gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.
Lam, Max; Trampush, Joey W; Yu, Jin; Knowles, Emma; Davies, Gail; Liewald, David C; Starr, John M; Djurovic, Srdjan; Melle, Ingrid; Sundet, Kjetil; Christoforou, Andrea; Reinvang, Ivar; DeRosse, Pamela; Lundervold, Astri J; Steen, Vidar M; Espeseth, Thomas; Räikkönen, Katri; Widen, Elisabeth; Palotie, Aarno; Eriksson, Johan G; Giegling, Ina; Konte, Bettina; Roussos, Panos; Giakoumaki, Stella; Burdick, Katherine E; Payton, Antony; Ollier, William; Chiba-Falek, Ornit; Attix, Deborah K; Need, Anna C; Cirulli, Elizabeth T; Voineskos, Aristotle N; Stefanis, Nikos C; Avramopoulos, Dimitrios; Hatzimanolis, Alex; Arking, Dan E; Smyrnis, Nikolaos; Bilder, Robert M; Freimer, Nelson A; Cannon, Tyrone D; London, Edythe; Poldrack, Russell A; Sabb, Fred W; Congdon, Eliza; Conley, Emily Drabant; Scult, Matthew A; Dickinson, Dwight; Straub, Richard E; Donohoe, Gary; Morris, Derek; Corvin, Aiden; Gill, Michael; Hariri, Ahmad R; Weinberger, Daniel R; Pendleton, Neil; Bitsios, Panos; Rujescu, Dan; Lahti, Jari; Le Hellard, Stephanie; Keller, Matthew C; Andreassen, Ole A; Deary, Ian J; Glahn, David C; Malhotra, Anil K; Lencz, Todd
2017-11-28
Here, we present a large (n = 107,207) genome-wide association study (GWAS) of general cognitive ability ("g"), further enhanced by combining results with a large-scale GWAS of educational attainment. We identified 70 independent genomic loci associated with general cognitive ability. Results showed significant enrichment for genes causing Mendelian disorders with an intellectual disability phenotype. Competitive pathway analysis implicated the biological processes of neurogenesis and synaptic regulation, as well as the gene targets of two pharmacologic agents: cinnarizine, a T-type calcium channel blocker, and LY97241, a potassium channel inhibitor. Transcriptome-wide and epigenome-wide analysis revealed that the implicated loci were enriched for genes expressed across all brain regions (most strongly in the cerebellum). Enrichment was exclusive to genes expressed in neurons but not oligodendrocytes or astrocytes. Finally, we report genetic correlations between cognitive ability and disparate phenotypes including psychiatric disorders, several autoimmune disorders, longevity, and maternal age at first birth. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU). (NVIDIA Corporation. NVIDIA CUDA Programming Guide 2.0 [Online].)
Global Carbon Dioxide Transport from AIRS Data, July 2008
2008-09-24
This image was created with data acquired by JPL's Atmospheric Infrared Sounder during July 2008. The image shows large scale patterns of carbon dioxide concentrations that are transported around the Earth by the general circulation of the atmosphere.
Biodiversity and ecosystem stability across scales in metacommunities
Wang, Shaopeng; Loreau, Michel
2016-01-01
Although diversity-stability relationships have been extensively studied in local ecosystems, the global biodiversity crisis calls for an improved understanding of these relationships in a spatial context. Here we use a dynamical model of competitive metacommunities to study the relationships between species diversity and ecosystem variability across scales. We derive analytic relationships under a limiting case; these results are extended to more general cases with numerical simulations. Our model shows that, while alpha diversity decreases local ecosystem variability, beta diversity generally contributes to increasing spatial asynchrony among local ecosystems. Consequently, both alpha and beta diversity provide stabilizing effects for regional ecosystems, through local and spatial insurance effects, respectively. We further show that at the regional scale, the stabilizing effect of biodiversity increases as spatial environmental correlation increases. Our findings have important implications for understanding the interactive effects of global environmental changes (e.g. environmental homogenization) and biodiversity loss on ecosystem sustainability at large scales. PMID:26918536
The basis for cosmic ray feedback: Written on the wind
Zweibel, Ellen G.
2017-01-01
Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed. PMID:28579734
Evolving from bioinformatics in-the-small to bioinformatics in-the-large.
Parker, D Stott; Gorlick, Michael M; Lee, Christopher J
2003-01-01
We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
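The make-style dependency management highlighted in this abstract reduces, at its core, to a topological sort of a target-to-prerequisite graph. A minimal sketch, with a hypothetical sequencing pipeline standing in for a real set of tools:

```python
def build_order(dependencies):
    """Return an order in which targets can be built so that every
    target follows all of its prerequisites (a depth-first topological
    sort), the core service make provides for in-the-large integration."""
    order, seen = [], set()

    def visit(target, stack=()):
        if target in stack:
            raise ValueError(f"circular dependency at {target!r}")
        if target in seen:
            return
        for dep in dependencies.get(target, []):
            visit(dep, stack + (target,))
        seen.add(target)
        order.append(target)

    for t in dependencies:
        visit(t)
    return order

# Hypothetical pipeline: align reads, then call variants, then annotate.
deps = {
    "annotated.vcf": ["variants.vcf"],
    "variants.vcf": ["aligned.bam", "reference.fa"],
    "aligned.bam": ["reads.fastq", "reference.fa"],
}
print(build_order(deps))
```

Make adds timestamp checks on top of this ordering so that only stale targets are rebuilt, which is precisely the incremental behavior that matters for large integrated systems.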
Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-28
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
Social welfare as small-scale help: evolutionary psychology and the deservingness heuristic.
Petersen, Michael Bang
2012-01-01
Public opinion concerning social welfare is largely driven by perceptions of recipient deservingness. Extant research has argued that this heuristic is learned from a variety of cultural, institutional, and ideological sources. The present article provides evidence supporting a different view: that the deservingness heuristic is rooted in psychological categories that evolved over the course of human evolution to regulate small-scale exchanges of help. To test predictions made on the basis of this view, a method designed to measure social categorization is embedded in nationally representative surveys conducted in different countries. Across the national- and individual-level differences that extant research has used to explain the heuristic, people categorize welfare recipients on the basis of whether they are lazy or unlucky. This mode of categorization furthermore induces people to think about large-scale welfare politics as its presumed ancestral equivalent: small-scale help giving. The general implications for research on heuristics are discussed.
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated by using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
Diffusion of strongly magnetized cosmic ray particles in a turbulent medium
NASA Technical Reports Server (NTRS)
Ptuskin, V. S.
1985-01-01
Cosmic ray (CR) propagation in a turbulent medium is usually considered in the diffusion approximation. Here, the diffusion equation is obtained for strongly magnetized particles in the general form. The influence of a large-scale random magnetic field on CR propagation in the interstellar medium is discussed. Cosmic rays are assumed to propagate in a medium with a regular field H and an ensemble of random MHD waves. The energy density of waves on scales smaller than the free path l of CR particles is small. The collision integral of the general form which describes interaction between relativistic particles and waves in the quasilinear approximation is used.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.
Oklahoma Downbursts and Their Asymmetry.
1986-11-01
velocity across the divergence center of at least 10 m s-1. Further, downbursts are called microbursts when they are 0.4-4 km in diameter, and macrobursts ... outflows investigated in this study are larger-scale downbursts (macrobursts) that were imbedded in large intense convective storms. This does not ... observed in this study were associated with intense convective storms and were generally of much larger horizontal scale (macrobursts). However, due to
Cosmological constant implementing Mach principle in general relativity
NASA Astrophysics Data System (ADS)
Namavarian, Nadereh; Farhoudi, Mehrdad
2016-10-01
We start from the observation that attention to the operational meaning of physical concepts served as an impetus in the development of general relativity (GR). We therefore pay particular attention to the operational definition of the gravitational coupling constant in this theory as a dimensional constant obtained through experiment. However, since all available experiments provide the value of this constant only locally, the coupling constant is operationally meaningful only in a local region. With this in mind, to extend GR to the large scale, we replace it with a conformally invariant model and then reduce this model to a theory for the cosmological scale by breaking the conformal symmetry, singling out a specific conformal frame characterized by the large-scale features of the universe. Finally, we arrive at the same field equations that Einstein historically proposed for the cosmological scale (GR plus the cosmological constant) as the result of his endeavor to make GR consistent with the Mach principle. However, we argue that the field equations obtained in this alternative approach do not suffer from the problem of the field equations proposed by Einstein with respect to Mach's principle (i.e., the existence of the de Sitter solution), and can also be considered compatible with this principle in Sciama's view.
NASA Astrophysics Data System (ADS)
Webb, James R.
2016-09-01
This book is intended to be a course about the creation and evolution of the universe at large, including the basic macroscopic building blocks (galaxies) and the overall large-scale structure. This text covers a broad range of topics for a graduate-level class in a physics department where students' available credit hours for astrophysics classes are limited. The sections cover galactic structure, external galaxies, galaxy clustering, active galaxies, general relativity and cosmology.
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, α_c, at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Deflected mirage mediation: a phenomenological framework for generalized supersymmetry breaking.
Everett, Lisa L; Kim, Ian-Woo; Ouyang, Peter; Zurek, Kathryn M
2008-09-05
We present a general phenomenological framework for dialing between gravity mediation, gauge mediation, and anomaly mediation. The approach is motivated from recent developments in moduli stabilization, which suggest that gravity mediated terms can be effectively loop suppressed and thus comparable to gauge and anomaly mediated terms. The gauginos exhibit a mirage unification behavior at a "deflected" scale, and gluinos are often the lightest colored sparticles. The approach provides a rich setting in which to explore generalized supersymmetry breaking at the CERN Large Hadron Collider.
A survey on routing protocols for large-scale wireless sensor networks.
Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong
2011-01-01
With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. "Large-scale" mainly means a large coverage area or a high node density. Accordingly, the routing protocols must scale well to network scope extension and node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages.
Moreover, a comparison of the protocols is conducted to demonstrate their differences in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, open issues in routing protocol design for large-scale wireless sensor networks are identified and conclusions are presented.
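A concrete instance of the hierarchical approach this survey covers is LEACH-style cluster-head election, in which each node periodically self-elects as a cluster head with a rotating probability so the energy cost of aggregation is shared. A minimal sketch of that election round (parameters are illustrative, and this is one classic protocol, not the survey's full taxonomy):

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """LEACH election threshold for round r, with desired cluster-head
    fraction p; eligible nodes become heads with this probability, and
    the threshold rises over a 1/p-round cycle so every node serves."""
    cycle = round(1 / p)
    return p / (1 - p * (r % cycle))

def elect_cluster_heads(nodes, p, r, rng):
    """One election round: each node draws a random number and becomes
    a cluster head if it falls below the threshold."""
    t = leach_threshold(p, r)
    return [n for n in nodes if rng.random() < t]

rng = random.Random(42)
nodes = list(range(100))
heads = elect_cluster_heads(nodes, p=0.05, r=0, rng=rng)
# Non-head nodes would then join the nearest head, which aggregates
# their readings before forwarding to the sink.
print(len(heads))
```

Rotating the head role in this way is the energy-balancing mechanism that makes hierarchical protocols outperform flat ones in the survey's comparison.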
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
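Schematically, and in notation of our own choosing rather than necessarily the paper's, the generating-function idea described above can be written as:

```latex
% Equal-time correlators are generated by the time-dependent
% probability distribution P[\delta; t] over the density field:
\langle \delta(\mathbf{k}_1)\cdots\delta(\mathbf{k}_n)\rangle_t
  = \int \mathcal{D}\delta \; P[\delta; t]\,
    \delta(\mathbf{k}_1)\cdots\delta(\mathbf{k}_n).
% Expanding P around its Gaussian part yields a diagrammatic series
% whose corrections are controlled by a time-dependent coupling g(t):
P[\delta; t] \propto
  e^{-\frac{1}{2 g^2(t)}\,\delta\,\Gamma\,\delta}
  \left[\, 1 + (\text{non-Gaussian corrections}) \,\right].
```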
Cosmological measurements with general relativistic galaxy correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth
We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting them biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.
Driving terrestrial ecosystem models from space
NASA Technical Reports Server (NTRS)
Waring, R. H.
1993-01-01
Regional air pollution, land-use conversion, and projected climate change all affect ecosystem processes at large scales. Changes in vegetation cover and growth dynamics can impact the functioning of ecosystems, carbon fluxes, and climate. As a result, there is a need to assess and monitor vegetation structure and function comprehensively at regional to global scales. To provide a test of our present understanding of how ecosystems operate at large scales we can compare model predictions of CO2, O2, and methane exchange with the atmosphere against regional measurements of interannual variation in the atmospheric concentration of these gases. Recent advances in remote sensing of the Earth's surface are beginning to provide methods for estimating important ecosystem variables at large scales. Ecologists attempting to generalize across landscapes have made extensive use of models and remote sensing technology. The success of such ventures is dependent on merging insights and expertise from two distinct fields. Ecologists must provide the understanding of how well models emulate important biological variables and their interactions; experts in remote sensing must provide the biophysical interpretation of complex optical reflectance and radar backscatter data.
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Telecommunications technology and rural education in the United States
NASA Technical Reports Server (NTRS)
Perrine, J. R.
1975-01-01
The rural sector of the US is examined from the point of view of whether telecommunications technology can augment the development of rural education. Migratory farm workers and American Indians were the target groups which were examined as examples of groups with special needs in rural areas. The general rural population and the target groups were examined to identify problems and to ascertain specific educational needs. Educational projects utilizing telecommunications technology in target group settings were discussed. Large scale regional ATS-6 satellite-based experimental educational telecommunications projects were described. Costs and organizational factors were also examined for large scale rural telecommunications projects.
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
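To make concrete the kind of state update such a simulator performs for every neuron at every time step, here is a minimal leaky integrate-and-fire (LIF) network step in NumPy. This is a generic sketch, not HRLSim's actual API; all function names and parameter values are invented for illustration.

```python
import numpy as np

def lif_step(v, spikes, weights, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """One Euler step of a leaky integrate-and-fire (LIF) network.

    v       : membrane potentials (mV), shape (n,)
    spikes  : boolean spike vector from the previous step, shape (n,)
    weights : synaptic weight matrix (mV per spike), shape (n, n)
    """
    i_syn = weights @ spikes.astype(float)       # input from last step's spikes
    v = v + dt * (-(v - v_rest) / tau) + i_syn   # leak toward rest, plus input
    fired = v >= v_thresh                        # threshold crossing
    v[fired] = v_reset                           # reset neurons that fired
    return v, fired

# Tiny 3-neuron demo: neuron 0 starts above threshold and drives neuron 1.
n = 3
v = np.full(n, -65.0)
v[0] = -49.0                                     # above threshold
w = np.zeros((n, n))
w[1, 0] = 20.0                                   # strong synapse 0 -> 1
spikes = np.zeros(n, dtype=bool)
v, spikes = lif_step(v, spikes, w)               # neuron 0 fires
v, spikes2 = lif_step(v, spikes, w)              # its spike makes neuron 1 fire
```

A GPGPU-cluster simulator parallelizes exactly this kind of update (plus spike exchange between nodes) over millions of neurons.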
The Expanding Universe and the Large-Scale Geometry of Spacetime.
ERIC Educational Resources Information Center
Shu, Frank
1983-01-01
Presents a condensed version of a textbook account of cosmological theory and principles. Topics discussed include quasars, general and special relativity, relativistic cosmology, and the curvature of spacetime. Some philosophical assumptions necessary to the theory are also discussed. (JM)
ERIC Educational Resources Information Center
Platten, Marvin R.; Williams, Larry R.
1981-01-01
This study largely replicates the findings of a previous study reported by the authors. Further research involving the physical dimension as a possible facet of general self-concept is suggested. (Author/BW)
SUMMARY OF SOLIDIFICATION/STABILIZATION SITE DEMONSTRATIONS AT UNCONTROLLED HAZARDOUS WASTE SITES
Four large-scale solidification/stabilization demonstrations have occurred under EPA's SITE program. In general, physical testing results have been acceptable. Reduction in metal leachability, as determined by the TCLP test, has been observed. Reduction in organic leachability ha...
Properties and spatial distribution of galaxy superclusters
NASA Astrophysics Data System (ADS)
Liivamägi, Lauri Juhan
2017-01-01
Astronomy is a science that can offer plenty of unforgettable imagery, and the large-scale distribution of galaxies is no exception. Among the first features the viewer's eye is likely to be drawn to are large concentrations of galaxies - galaxy superclusters - contrasting with the seemingly empty regions beside them. Superclusters can extend from tens to over a hundred megaparsecs; they contain hundreds to thousands of galaxies and many galaxy groups and clusters. Unlike galaxy clusters, superclusters are clearly unrelaxed systems: they are not gravitationally bound, as crossing times exceed the age of the universe, and they show little to no radial symmetry. Superclusters, as part of the large-scale structure, are sensitive to the initial power spectrum and the subsequent evolution. They are massive enough to leave an imprint on the cosmic microwave background radiation. Superclusters can also provide a unique environment for their constituent galaxies and galaxy clusters. In this study we used two different observational galaxy samples and one simulated sample to create several catalogues of structures that, we think, correspond to what are generally considered galaxy superclusters. Superclusters were delineated as continuous over-dense regions in galaxy luminosity density fields. When calculating density fields, several corrections were applied to remove small-scale redshift distortions and distance-dependent selection effects. The resulting catalogues of objects display robust statistical properties, showing that flux-limited galaxy samples can be used to create nearly volume-limited catalogues of superstructures. Generally, large superclusters can be regarded as massive, often branching filamentary structures that are mainly characterised by their length. Smaller superclusters, on the other hand, can display a variety of shapes. The spatial distribution of superclusters shows large-scale variations, with high-density concentrations often found in semi-regularly spaced groups.
Future studies are needed to quantify the relations between superclusters and finer details of the galaxy distribution. Supercluster catalogues from this thesis have already been used in numerous other studies.
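The delineation step described in this abstract, superclusters as continuous over-dense regions of a luminosity density field, can be sketched in one dimension. This is a toy stand-in for the full 3-D procedure; the function name and threshold are invented.

```python
def overdense_regions(density, threshold):
    """Return (start, end) index pairs of maximal runs of cells whose
    density meets or exceeds the threshold: a 1-D toy version of
    delineating superclusters as connected over-dense regions."""
    regions, start = [], None
    for i, d in enumerate(density):
        if d >= threshold and start is None:
            start = i                       # a new over-dense region opens
        elif d < threshold and start is not None:
            regions.append((start, i - 1))  # region closes at the previous cell
            start = None
    if start is not None:                   # region extends to the last cell
        regions.append((start, len(density) - 1))
    return regions

# Two "superclusters" separated by an under-dense gap:
field = [0.2, 2.1, 3.4, 0.5, 5.0, 0.1]
print(overdense_regions(field, threshold=1.0))  # [(1, 2), (4, 4)]
```

In 3-D the same idea becomes connected-component labeling of grid cells above the luminosity-density threshold.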
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-12-01
We present experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps, which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic particle image velocimetry in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical three-dimensional turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in two-dimensional turbulence. At the largest rotation rate, the flow is nearly two-dimensional, and a pure inverse energy cascade is found for the horizontal energy. To describe the scale-by-scale energy budget, we consider a generalization of the Kármán-Howarth-Monin equation to inhomogeneous turbulent flows, in which the energy input is explicitly described as the advection of turbulent energy from the flaps through the surface of the control volume where the measurements are performed.
Scaling Principles for Understanding and Exploiting Adhesion
NASA Astrophysics Data System (ADS)
Crosby, Alfred
A grand challenge in the science of adhesion is the development of a general design paradigm for adhesive materials that can sustain large forces across an interface yet be detached with minimal force upon command. Essential to this challenge is the generality of achieving this performance under a wide set of external conditions and across an extensive range of forces. Nature has provided some guidance through various examples, e.g. geckos, for how to meet this challenge; however, a single solution is not evident upon initial investigation. To help provide insight into nature's ability to scale reversible adhesion and adapt to different external constraints, we have developed a general scaling theory that describes the force capacity of an adhesive interface in the context of biological locomotion. We have demonstrated that this scaling theory can be used to understand the relative performance of a wide range of organisms, including numerous gecko species and insects, as well as an extensive library of synthetic adhesive materials. We will present the development and testing of this scaling theory, and how this understanding has helped guide the development of new composite materials for high capacity adhesives. We will also demonstrate how this scaling theory has led to the development of new strategies for transfer printing and adhesive applications in manufacturing processes. Overall, the developed scaling principles provide a framework for guiding the design of adhesives.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
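The downward delivery of parent-scale heads onto child boundary nodes can be sketched with linear interpolation in space and time. This is a minimal illustration under simplifying assumptions (a 1-D parent grid, linear interpolation); the names are invented and this is not the authors' code.

```python
import numpy as np

def child_boundary_heads(parent_x, parent_h0, parent_h1, t0, t1, child_x, t):
    """Deliver parent-scale heads onto child-model boundary nodes.

    Linear interpolation in time (between parent solutions at t0 and t1)
    followed by linear interpolation in space (along parent_x) -- a toy
    version of the spatial/temporal head-interpolation step of a one-way
    parent-to-child coupling scheme.
    """
    w = (t - t0) / (t1 - t0)                 # temporal weight
    h_t = (1.0 - w) * parent_h0 + w * parent_h1
    return np.interp(child_x, parent_x, h_t) # onto child boundary nodes

# Coarse parent grid, two time levels, child boundary nodes in between.
px = np.array([0.0, 100.0, 200.0])
h0 = np.array([10.0, 9.0, 8.0])              # heads at t = 0 d
h1 = np.array([10.0, 8.0, 6.0])              # heads at t = 1 d
cx = np.array([50.0, 150.0])
hb = child_boundary_heads(px, h0, h1, 0.0, 1.0, cx, t=0.5)
```

Refining the parent grid or time step, or relocating the child boundary nodes, reduces the interpolation error this step introduces, which is the sensitivity the study investigates.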
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is assumed to be given by the driving fields. However, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses), with only basic interpolation between fields, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
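The indiscriminate (grid-point) nudging discussed here amounts to adding a Newtonian relaxation term to the model tendencies. A minimal scalar sketch, with invented names:

```python
def nudge(u, u_driving, dt, tau):
    """One explicit step of Newtonian relaxation ("nudging"):
    du/dt = -(u - u_driving) / tau.  A small relaxation time tau pins
    the model state to the driving field; a large tau leaves the model
    free to develop its own small-scale dynamics."""
    return u + dt * (u_driving - u) / tau

u = 0.0
for _ in range(100):
    u = nudge(u, u_driving=1.0, dt=0.1, tau=1.0)  # 10 relaxation times
# u has relaxed essentially all the way to the driving value 1.0
```

The trade-off studied in the abstract is the choice of tau: too small and the nudging overwrites the small scales, too large and the large scales drift from the driver.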
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Panoptes: web-based exploration of large scale genome variation data.
Vauterin, Paul; Jeffery, Ben; Miles, Alistair; Amato, Roberto; Hart, Lee; Wright, Ian; Kwiatkowski, Dominic
2017-10-15
The size and complexity of modern large-scale genome variation studies demand novel approaches for exploring and sharing the data. In order to unlock the potential of these data for a broad audience of scientists with various areas of expertise, a unified exploration framework is required that is accessible, coherent and user-friendly. Panoptes is an open-source software framework for collaborative visual exploration of large-scale genome variation data and associated metadata in a web browser. It relies on technology choices that allow it to operate in near real-time on very large datasets. It can be used to browse rich, hybrid content in a coherent way, and offers interactive visual analytics approaches to assist the exploration. We illustrate its application using genome variation data of Anopheles gambiae, Plasmodium falciparum and Plasmodium vivax. Freely available at https://github.com/cggh/panoptes, under the GNU Affero General Public License.
Basic numerical competences in large-scale assessment data: Structure and long-term relevance.
Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian
2018-03-01
Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy, reflected by different basic numerical competences in kindergarten, and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6, even after controlling for influences of general cognitive ability. Thus, our results support a differentiated view of early numeracy, considering basic numerical competences in kindergarten as reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value not only for later mathematical achievement but also for mathematical learning difficulties.
Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey
2015-10-01
Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.
Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns
NASA Astrophysics Data System (ADS)
Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro
2017-05-01
The recent availability of large-scale, time-resolved and high quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, especially focusing on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in the evolution of social networks: the individual's propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local-scale behavior and analogous asymptotics, can lead to the same long-time and large-scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of tie-allocation mechanisms and waiting-time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
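The activity-driven paradigm the authors build on can be sketched in its memoryless baseline form, before the burstiness and tie-allocation mechanisms discussed above are added. This is a toy sketch with invented names, not the authors' model code.

```python
import random

def activity_driven_step(activities, m=2, rng=None):
    """One time step of the baseline activity-driven network model:
    each node i becomes active with probability activities[i] and, if
    active, fires m links to uniformly chosen partners.  The
    instantaneous network is discarded before the next step."""
    rng = rng or random.Random()
    n = len(activities)
    edges = set()
    for i, a in enumerate(activities):
        if rng.random() < a:                 # node i activates
            for _ in range(m):
                j = rng.randrange(n)         # uniform partner choice
                if j != i:                   # no self-loops
                    edges.add((min(i, j), max(i, j)))
    return edges

edges = activity_driven_step([1.0, 1.0, 0.0, 0.0, 0.0], rng=random.Random(1))
```

The generalizations studied in the paper replace the uniform partner choice with a data-driven tie-allocation strategy and the per-step activation with bursty waiting-time distributions.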
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of the large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of the large scale.
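The outlier-identification step can be illustrated with a bare-bones principal component analysis via SVD. This is a generic sketch, not the study's actual analysis pipeline; the data and names are invented.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores on the first principal components, via SVD of the
    mean-centered data matrix.  Rows are runs (e.g. bioreactor
    batches), columns are process/quality variables; runs that
    separate from the cluster in the score plot are candidate
    outliers."""
    Xc = X - X.mean(axis=0)                        # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # project onto PCs

# Six "runs" of three process variables; run 5 is an obvious outlier.
X = np.array([[1.0, 2.0, 0.1],
              [1.1, 2.1, 0.0],
              [0.9, 1.9, 0.1],
              [1.0, 2.0, 0.0],
              [1.1, 1.9, 0.1],
              [5.0, 6.0, 0.0]])
scores = pca_scores(X)
# The outlier run sits far from the cluster on the first component.
```

Discriminant methods such as PLS-DA then ask which variables (here, e.g. pCO2 and pH) drive that separation between scales.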
The atmospheric implications of radiation belt remediation
NASA Astrophysics Data System (ADS)
Rodger, C. J.; Clilverd, M. A.; Ulich, Th.; Verronen, P. T.; Turunen, E.; Thomson, N. R.
2006-08-01
High altitude nuclear explosions (HANEs) and geomagnetic storms can produce large scale injections of relativistic particles into the inner radiation belts. It is recognised that these large increases in >1 MeV trapped electron fluxes can shorten the operational lifetime of low Earth orbiting satellites, threatening a large and valuable satellite population. Therefore, studies are being undertaken to bring about practical human control of the radiation belts, termed "Radiation Belt Remediation" (RBR). Here we consider the upper atmospheric consequences of an RBR system operating over either 1 or 10 days. The RBR-forced neutral chemistry changes, leading to NOx enhancements and Ox depletions, are significant during the timescale of the precipitation but are generally not long-lasting. The magnitudes, time-scales, and altitudes of these changes are no more significant than those observed during large solar proton events. In contrast, RBR operation will lead to unusually intense HF blackouts for about the first half of the operation time, producing large scale disruptions to radio communication and navigation systems. While the neutral atmosphere changes are not particularly important, HF disruptions could be an important area for policy makers to consider, particularly for the remediation of natural injections.
NASA Technical Reports Server (NTRS)
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang;
2015-01-01
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
NASA Astrophysics Data System (ADS)
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat
2015-06-01
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates attractive features of two popular methods: penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors and expression data for 4919 genes, and the ovarian cancer data set from TCGA with 362 tumors and expression data for 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
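As a toy illustration of the spike-and-slab double-exponential prior described above (not the BhGLM implementation itself), the following sketch computes the posterior probability that a coefficient comes from the slab and the resulting coefficient-specific shrinkage scale used in an EM step. The spike scale s0, slab scale s1, and mixing weight theta are illustrative assumptions:

```python
import math

def de_density(beta, scale):
    """Double-exponential (Laplace) density with mean 0 and given scale."""
    return math.exp(-abs(beta) / scale) / (2.0 * scale)

def slab_probability(beta, s0=0.05, s1=1.0, theta=0.5):
    """Posterior probability that a coefficient of size `beta` comes from
    the slab DE(0, s1) rather than the spike DE(0, s0), with prior slab
    weight `theta` (all values here are illustrative)."""
    slab = theta * de_density(beta, s1)
    spike = (1.0 - theta) * de_density(beta, s0)
    return slab / (slab + spike)

def effective_scale(beta, s0=0.05, s1=1.0, theta=0.5):
    """Coefficient-specific shrinkage scale: a probability-weighted
    harmonic combination of the spike and slab scales."""
    p = slab_probability(beta, s0, s1, theta)
    return 1.0 / (p / s1 + (1.0 - p) / s0)

# Small coefficients get a scale near s0 (strong shrinkage);
# large coefficients get a scale near s1 (weak shrinkage).
for b in (0.01, 0.1, 1.0):
    print(b, round(slab_probability(b), 3), round(effective_scale(b), 3))
```

This adaptivity, strong shrinkage for small coefficients and weak shrinkage for large ones, is the qualitative behavior the abstract attributes to the mixture prior.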
Selecting habitat to survive: the impact of road density on survival in a large carnivore.
Basille, Mathieu; Van Moorter, Bram; Herfindal, Ivar; Martin, Jodie; Linnell, John D C; Odden, John; Andersen, Reidar; Gaillard, Jean-Michel
2013-01-01
Habitat selection studies generally assume that animals select habitat and food resources at multiple scales to maximise their fitness. However, animals sometimes prefer habitats of apparently low quality, especially when considering the costs associated with spatially heterogeneous human disturbance. We used spatial variation in human disturbance, and its consequences on lynx survival, a direct fitness component, to test the Hierarchical Habitat Selection hypothesis in a population of Eurasian lynx Lynx lynx in southern Norway. Data from 46 lynx monitored with telemetry indicated that a high proportion of forest strongly reduced the risk of mortality from legal hunting at the home range scale, while increasing road density strongly increased such risk at the finer scale within the home range. We found hierarchical effects of the impact of human disturbance, with a higher road density at a large scale reinforcing its negative impact at a fine scale. Conversely, we demonstrated that lynx shifted their habitat selection to avoid areas with the highest road densities within their home ranges, thus supporting a compensatory mechanism at fine scale enabling lynx to mitigate the impact of large-scale disturbance. Human impact, positively associated with high road accessibility, was thus a stronger driver of lynx space use at a finer scale, with home range characteristics nevertheless constraining habitat selection. Our study demonstrates the truly hierarchical nature of habitat selection, which aims at maximising fitness by selecting against limiting factors at multiple spatial scales, and indicates that scale-specific heterogeneity of the environment is driving individual spatial behaviour, by means of trade-offs across spatial scales.
"Non-cold" dark matter at small scales: a general approach
NASA Astrophysics Data System (ADS)
Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.
2017-11-01
Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present the first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
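A hedged sketch of the kind of generalized transfer-function fitting formula the abstract describes. The three-parameter form below and the thermal-WDM limiting values follow common conventions in this literature; they are assumptions for illustration, not taken verbatim from the paper:

```python
import numpy as np

def ncdm_transfer(k, alpha, beta, gamma):
    """Generalized transfer function T(k) = [1 + (alpha*k)^beta]^gamma,
    relating non-cold-DM to CDM power via P_nCDM(k) = T(k)^2 * P_CDM(k).
    alpha sets the suppression scale; beta and gamma shape the cutoff."""
    return (1.0 + (alpha * k) ** beta) ** gamma

k = np.logspace(-1, 2, 5)  # wavenumbers in h/Mpc (illustrative range)
# Thermal-WDM-like limit: beta = 2*nu, gamma = -5/nu with nu ~ 1.12,
# per the standard thermal WDM fitting convention.
T = ncdm_transfer(k, alpha=0.05, beta=2 * 1.12, gamma=-5.0 / 1.12)
```

T(k) stays near 1 at large scales (small k) and drops steeply below the suppression scale, which is the qualitative behavior any such fitting formula must reproduce.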
A GENERAL SIMULATION MODEL FOR INFORMATION SYSTEMS: A REPORT ON A MODELLING CONCEPT
The report is concerned with the design of large-scale management information systems (MIS). A special design methodology was created, along with a design model to complement it. The purpose of the paper is to present the model.
Computer-aided design of large-scale integrated circuits - A concept
NASA Technical Reports Server (NTRS)
Schansman, T. T.
1971-01-01
Circuit design and mask development sequence are improved by using general purpose computer with interactive graphics capability establishing efficient two way communications link between design engineer and system. Interactive graphics capability places design engineer in direct control of circuit development.
NASA Astrophysics Data System (ADS)
Vogl, Raimund
2001-08-01
In 1997, a large PACS was first introduced at Innsbruck University Hospital in the context of a new traumatology centre. In the subsequent years, this initial PACS setting covering only one department was expanded to most of the hospital campus, with currently some 250 viewing stations attached. Constantly connecting new modalities and viewing stations created the demand for several redesigns from the original PACS configuration to cope with the increasing data load. We give an account of these changes necessary to develop a multi-hospital PACS and the considerations that led us there. Issues of personnel for running a large scale PACS are discussed and we give an outlook on the new information systems currently under development for archiving and communication of general medical imaging data and for simple telemedicine networking between several large university hospitals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Davi C.; Piattella, Oliver F.; Chauvineau, Bertrand, E-mail: davi.rodrigues@cosmo-ufes.org, E-mail: Bertrand.Chauvineau@oca.eu, E-mail: oliver.piattella@pq.cnpq.br
We show that Renormalization Group extensions of the Einstein-Hilbert action for large scale physics are not, in general, a particular case of standard Scalar-Tensor (ST) gravity. We present a new class of ST actions, in which the potential is not necessarily fixed at the action level, and show that this extended ST theory formally contains the Renormalization Group case. We also propose here a Renormalization Group scale setting identification that is explicitly covariant and valid for arbitrary relativistic fluids.
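For context, the standard Scalar-Tensor action, in which the potential is fixed at the action level (the class this paper extends), can be written in a common Brans-Dicke-like convention as:

```latex
S = \frac{1}{16\pi} \int d^4x \, \sqrt{-g}
    \left[ \Phi R \;-\; \frac{\omega(\Phi)}{\Phi}\,
    \nabla_\mu \Phi \, \nabla^\mu \Phi \;-\; V(\Phi) \right]
    \;+\; S_m\!\left[g_{\mu\nu}, \psi\right]
```

The paper's central point is that Renormalization Group extensions do not fit this mold precisely because V(Φ) cannot, in general, be specified in advance at the action level.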
Dennis, Ann M.; Herbeck, Joshua T.; Brown, Andrew Leigh; Kellam, Paul; de Oliveira, Tulio; Pillay, Deenan; Fraser, Christophe; Cohen, Myron S.
2014-01-01
Efficient and effective HIV prevention measures for generalized epidemics in sub-Saharan Africa have not yet been validated at the population-level. Design and impact evaluation of such measures requires fine-scale understanding of local HIV transmission dynamics. The novel tools of HIV phylogenetics and molecular epidemiology may elucidate these transmission dynamics. Such methods have been incorporated into studies of concentrated HIV epidemics to identify proximate and determinant traits associated with ongoing transmission. However, applying similar phylogenetic analyses to generalized epidemics, including the design and evaluation of prevention trials, presents additional challenges. Here we review the scope of these methods and present examples of their use in concentrated epidemics in the context of prevention. Next, we describe the current uses for phylogenetics in generalized epidemics, and discuss their promise for elucidating transmission patterns and informing prevention trials. Finally, we review logistic and technical challenges inherent to large-scale molecular epidemiological studies of generalized epidemics, and suggest potential solutions. PMID:24977473
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
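FastBit-style indexing rests on bitmap indexes. The deliberately minimal toy below shows the core idea, one bitmap per distinct value, with queries answered by bitwise OR; real FastBit adds binning and compression (e.g. WAH encoding), which this sketch omits:

```python
class BitmapIndex:
    """Toy equality-encoded bitmap index over a column of discrete values.
    Each distinct value maps to an integer whose set bits mark the rows
    holding that value."""

    def __init__(self, column):
        self.n = len(column)
        self.bitmaps = {}
        for row, value in enumerate(column):
            self.bitmaps.setdefault(value, 0)
            self.bitmaps[value] |= 1 << row  # set the bit for this row

    def query(self, values):
        """Rows where the column equals any of `values`:
        a bitwise OR of the matching bitmaps, then bit extraction."""
        mask = 0
        for v in values:
            mask |= self.bitmaps.get(v, 0)
        return [row for row in range(self.n) if (mask >> row) & 1]

# Hypothetical particle-species column standing in for scientific data
index = BitmapIndex(["e", "p", "e", "mu", "p"])
print(index.query(["e", "mu"]))  # -> [0, 2, 3]
```

Because the query reduces to bitwise operations, it parallelizes and scales well, which is what makes this family of indexes attractive for interactive exploration of massive datasets.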
What drives the formation of massive stars and clusters?
NASA Astrophysics Data System (ADS)
Ochsendorf, Bram; Meixner, Margaret; Roman-Duval, Julia; Evans, Neal J., II; Rahman, Mubdi; Zinnecker, Hans; Nayak, Omnarayani; Bally, John; Jones, Olivia C.; Indebetouw, Remy
2018-01-01
Galaxy-wide surveys allow us to study star formation in unprecedented ways. In this talk, I will discuss our analysis of the Large Magellanic Cloud (LMC) and the Milky Way, and illustrate how studying both the large and small scale structure of galaxies is critical in addressing the question: what drives the formation of massive stars and clusters? I will show that ‘turbulence-regulated’ star formation models do not reproduce the massive star formation properties of GMCs in the LMC and Milky Way: this suggests that theory currently does not capture the full complexity of star formation on small scales. I will also report on the discovery of a massive star forming complex in the LMC, which in many ways manifests itself as an embedded twin of 30 Doradus: this may shed light on the formation of R136 and 'Super Star Clusters' in general. Finally, I will highlight what we can expect in the next years in the field of star formation with large-scale sky surveys, ALMA, and our JWST-GTO program.
NASA Astrophysics Data System (ADS)
Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena
2015-12-01
In this paper, we outline the theory behind the educational design used to implement a large-scale high school astronomy education project. This design was created in response to the realization of ineffective educational design in the initial early stages of the project. The new design follows an iterative improvement model where the materials and general approach can evolve in response to solicited feedback. The improvement cycle concentrates on avoiding overly positive self-evaluation, addressing relevant external school and community factors, and backward mapping from clearly set goals. Limiting factors, including time, resources, support and the potential for failure in the classroom, are dealt with as much as possible in the large-scale design, allowing teachers the best chance of successful implementation in their real-world classroom. The actual approach adopted following the principles of this design is also outlined, which has seen success in bringing real astronomical data and access to telescopes into the high school classroom.
Fitting a Point Cloud to a 3D Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem is usually associated with the ambiguous task of comparing two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to replace a non-linear problem solution with several linear steps. The squared distance (SD) is a general criterion to control the process of convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the remote measurement of the geometry of large-scale objects in a contactless fashion.
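The PCA step used in such pipelines can be illustrated with a minimal NumPy sketch, a generic principal-axes alignment of a point cloud, not the authors' implementation; the synthetic elongated cloud stands in for a scanned frame:

```python
import numpy as np

def pca_align(points):
    """Center a point cloud and rotate it into its principal axes,
    a common linear first step before fitting to a target surface."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes = eigenvectors of the 3x3 covariance matrix
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # strongest axis first
    axes = eigvecs[:, order]
    return centered @ axes, centroid, axes

rng = np.random.default_rng(0)
# Synthetic elongated cloud (axis scales are illustrative assumptions)
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 2.0, 0.5])
aligned, centroid, axes = pca_align(cloud)
```

After alignment, the cloud's dominant direction lies along the first coordinate axis, which makes the subsequent comparison with a target surface a better-conditioned problem.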
NASA Technical Reports Server (NTRS)
Vandegriend, A. A.; Owe, M.; Chang, A. T. C.
1992-01-01
The Botswana water and surface energy balance research program was developed to study and evaluate the integrated use of multispectral satellite remote sensing for monitoring the hydrological status of the Earth's surface. The research program consisted of two major, mutually related components: a surface energy balance modeling component, built around an extensive field campaign; and a passive microwave research component which consisted of a retrospective study of large scale moisture conditions and Nimbus scanning multichannel microwave radiometer microwave signatures. The integrated approach of both components is explained in general and activities performed within the passive microwave research component are summarized. The microwave theory is discussed taking into account: soil dielectric constant, emissivity, soil roughness effects, vegetation effects, optical depth, single scattering albedo, and wavelength effects. The study site is described. The soil moisture data and its processing are considered. The relation between observed large scale soil moisture and normalized brightness temperatures is discussed. Vegetation characteristics and inverse modeling of soil emissivity are considered.
States of mind: Emotions, body feelings, and thoughts share distributed neural networks
Oosterwijk, Suzanne; Lindquist, Kristen A.; Anderson, Eric; Dautoff, Rebecca; Moriguchi, Yoshiya; Barrett, Lisa Feldman
2012-01-01
Scientists have traditionally assumed that different kinds of mental states (e.g., fear, disgust, love, memory, planning, concentration, etc.) correspond to different psychological faculties that have domain-specific correlates in the brain. Yet, growing evidence points to the constructionist hypothesis that mental states emerge from the combination of domain-general psychological processes that map to large-scale distributed brain networks. In this paper, we report a novel study testing a constructionist model of the mind in which participants generated three kinds of mental states (emotions, body feelings, or thoughts) while we measured activity within large-scale distributed brain networks using fMRI. We examined the similarity and differences in the pattern of network activity across these three classes of mental states. Consistent with a constructionist hypothesis, a combination of large-scale distributed networks contributed to emotions, thoughts, and body feelings, although these mental states differed in the relative contribution of those networks. Implications for a constructionist functional architecture of diverse mental states are discussed. PMID:22677148
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx||_2 subject to ||Ax - b||_2 = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
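The RSVD building block the abstract relies on can be sketched in a few lines of NumPy. This is a generic textbook randomized SVD with assumed parameter names (rank k, oversampling q), not the paper's MTRSVD code:

```python
import numpy as np

def rsvd(A, k, q_oversample=10):
    """Basic randomized SVD: build a rank-(k+q) range basis with a
    Gaussian test matrix, take an exact SVD of the small projected
    matrix, then truncate to rank k (the TRSVD approximation)."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    omega = rng.normal(size=(n, k + q_oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ omega)                  # orthonormal range basis
    B = Q.T @ A                                     # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]               # truncate to rank k

# Low-rank test matrix: RSVD should recover it almost exactly
X = np.random.default_rng(1).normal(size=(200, 5))
A = X @ X.T                                          # rank 5, 200 x 200
U, s, Vt = rsvd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

On a genuinely rank-5 matrix the relative error is at machine-precision level; for ill-posed problems the interest is in how the truncation error behaves as k grows, which is what the paper's bounds quantify.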
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large scale non-medical image databases may be sufficient for general medical image recognition tasks.
Generalized scaling of seasonal thermal stratification in lakes
NASA Astrophysics Data System (ADS)
Shatwell, T.; Kirillin, G.
2016-12-01
The mixing regime is fundamental to the biogeochemistry and ecology of lakes because it determines the vertical transport of matter such as gases, nutrients, and organic material. Whereas shallow lakes are usually polymictic and regularly mix to the bottom, deep lakes tend to stratify seasonally, separating surface water from deep sediments and deep water from the atmosphere. Although empirical relationships exist to predict the mixing regime, a physically based, quantitative criterion is lacking. Here we review our recent research on thermal stratification in lakes at the transition between polymictic and stratified regimes. Using the mechanistic balance between potential and kinetic energy in terms of the Richardson number, we derive a generalized physical scaling for seasonal stratification in a closed lake basin. The scaling parameter is the critical mean basin depth that delineates polymictic and seasonally stratified lakes based on lake water transparency (Secchi depth), lake length, and an annual mean estimate for the Monin-Obukhov length. We validated the scaling on available data of 374 global lakes using logistic regression and found it to perform better than other criteria including a conventional open basin scaling or a simple depth threshold. The scaling has potential applications in estimating large scale greenhouse gas fluxes from lakes because the required inputs, like water transparency and basin morphology, can be acquired using the latest remote sensing technologies. The generalized scaling is universal for freshwater lakes and allows the seasonal mixing regime to be estimated without numerically solving the heat transport equations.
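The potential-to-kinetic energy balance invoked above is conventionally expressed through the gradient Richardson number; the generic textbook form (not the paper's specific critical-depth scaling) is:

```latex
Ri = \frac{N^2}{\left( \partial u / \partial z \right)^2},
\qquad
N^2 = -\frac{g}{\rho_0}\,\frac{\partial \rho}{\partial z}
```

where N is the buoyancy frequency set by the density stratification and the denominator measures the shear available to mix it away; large Ri favors persistent stratification, small Ri favors mixing.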
A hierarchy of distress and invariant item ordering in the General Health Questionnaire-12.
Doyle, F; Watson, R; Morgan, K; McBride, O
2012-06-01
Invariant item ordering (IIO) is defined as the extent to which items have the same ordering (in terms of item difficulty/severity - i.e. demonstrating whether items are difficult [rare] or less difficult [common]) for each respondent who completes a scale. IIO is therefore crucial for establishing a scale hierarchy that is replicable across samples, but no research has demonstrated IIO in scales of psychological distress. We aimed to determine if a hierarchy of distress with IIO exists in a large general population sample who completed a scale measuring distress. Data from 4107 participants who completed the 12-item General Health Questionnaire (GHQ-12) from the Northern Ireland Health and Social Wellbeing Survey 2005-6 were analysed. Mokken scaling was used to determine the dimensionality and hierarchy of the GHQ-12, and items were investigated for IIO. All items of the GHQ-12 formed a single, strong unidimensional scale (H=0.58). IIO was found for six of the 12 items (H-trans=0.55), and these symptoms reflected the following hierarchy: anhedonia, concentration, participation, coping, decision-making and worthlessness. The cross-sectional analysis needs replication. The GHQ-12 showed a hierarchy of distress, but IIO is only demonstrated for six of the items, and the scale could therefore be shortened. Adopting brief, hierarchical scales with IIO may be beneficial in both clinical and research contexts. Copyright © 2011 Elsevier B.V. All rights reserved.
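Mokken scaling, used in the study above, rests on Loevinger's scalability coefficient H. A minimal sketch for binary items (GHQ-12 responses are commonly dichotomized before such analysis) under the standard definition, observed Guttman errors over errors expected under item independence:

```python
def loevinger_h(data):
    """Loevinger's scalability coefficient H for binary items:
    H = 1 - (observed Guttman errors) / (expected under independence),
    summed over all item pairs with distinct difficulties."""
    n = len(data)
    n_items = len(data[0])
    # Item popularity (proportion endorsing); lower p = "harder" item
    p = [sum(row[j] for row in data) / n for j in range(n_items)]
    obs, exp = 0.0, 0.0
    for i in range(n_items):
        for j in range(n_items):
            if p[i] < p[j]:
                # Guttman error: endorsing the harder item i
                # while failing the easier item j
                obs += sum(1 for row in data if row[i] == 1 and row[j] == 0)
                exp += n * p[i] * (1.0 - p[j])
    return 1.0 - obs / exp

# A perfect Guttman (cumulative) response pattern yields H = 1
perfect = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(loevinger_h(perfect))  # -> 1.0
```

Values like the study's H = 0.58 indicate a strong but imperfect hierarchy: most, not all, response patterns respect the item ordering.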
Invariance in the recurrence of large returns and the validation of models of price dynamics
NASA Astrophysics Data System (ADS)
Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey
2013-08-01
Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
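A minimal sketch of computing waiting times between large-return excursions. The threshold rule here (95th percentile of absolute returns) is an illustrative assumption, not the paper's exact nonparametric definition, and the heavy-tailed Student-t returns are synthetic:

```python
import numpy as np

def excursion_waiting_times(returns, q=0.95):
    """Waiting times (in observations) between 'excursions', defined
    here as returns whose magnitude is at or above the q-th quantile
    of the absolute-return series."""
    threshold = np.quantile(np.abs(returns), q)
    times = np.flatnonzero(np.abs(returns) >= threshold)
    return np.diff(times)  # gaps between consecutive excursions

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=10_000) * 0.01  # heavy-tailed toy returns
waits = excursion_waiting_times(r)
```

The paper's invariance claim concerns the distribution of `waits` across years, stocks, and return intervals; a candidate price model can be tested by checking whether its simulated waiting-time distribution matches the empirical one.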
Global Scale Solar Disturbances
NASA Astrophysics Data System (ADS)
Title, A. M.; Schrijver, C. J.; DeRosa, M. L.
2013-12-01
The combination of the STEREO and SDO missions has allowed for the first time imagery of the entire Sun. This, coupled with the high cadence, broad thermal coverage, and the large dynamic range of the Atmospheric Imaging Assembly on SDO, has allowed discovery of impulsive solar disturbances that can significantly affect a hemisphere or more of the solar volume. Such events are often, but not always, associated with M and X class flares. GOES C and even B class flares are also associated with these large scale disturbances. Key to the recognition of the large scale disturbances was the creation of log difference movies. By taking the log of images before differencing, events in the corona become much more evident. Because such events cover such a large portion of the solar volume, their passage can affect the dynamics of the entire corona as it adjusts to and recovers from their passage. In some cases this may lead to another flare or filament ejection, but in general direct causal evidence of 'sympathetic' behavior is lacking. However, evidence is accumulating that these large scale events create an environment that encourages other solar instabilities to occur. Understanding the source of these events and how the energy that drives them is built up, stored, and suddenly released is critical to understanding the origins of space weather. Example events and comments on their relevance will be presented.
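The log-difference technique credited with the discovery is simple to sketch; the toy 2x2 intensity arrays below are illustrative, not AIA data:

```python
import numpy as np

def log_difference(image_before, image_after, floor=1.0):
    """Log-difference of two images: log(after) - log(before).
    Taking logs first compresses the corona's huge dynamic range, so a
    fractional change in a faint region is no longer swamped by the
    same absolute change in a bright active region. `floor` guards
    against log(0) in dark pixels."""
    a = np.log(np.maximum(image_after, floor))
    b = np.log(np.maximum(image_before, floor))
    return a - b

before = np.array([[100.0, 10000.0], [2.0, 500.0]])
after = np.array([[110.0, 10100.0], [4.0, 550.0]])
diff = log_difference(before, after)
# The faint pixel that doubled (2 -> 4) shows up as log(2) ~ 0.69,
# while the +100 change on the bright pixel is ~0.01: the log makes
# fractional, not absolute, changes visible.
```

A plain difference image would rank the bright pixel's +100 change far above the faint pixel's +2, hiding exactly the kind of faint large-scale disturbance the talk describes.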
The role of Natural Flood Management in managing floods in large scale basins during extreme events
NASA Astrophysics Data System (ADS)
Quinn, Paul; Owen, Gareth; ODonnell, Greg; Nicholson, Alex; Hetherington, David
2016-04-01
There is a strong evidence base showing the negative impacts of land-use intensification and soil degradation in NW European river basins on hydrological response and on flood impact downstream. However, the ability to target zones of high runoff production, and the extent to which we can manage flood risk using nature-based flood management solutions, are less well known. A move towards planting more trees and less intensively farmed landscapes is part of natural flood management (NFM) solutions, and these methods suggest that flood risk can be managed in alternative and more holistic ways. So which local NFM methods should be used, where in a large-scale basin should they be deployed, and how does flow propagate to any point downstream? More generally, how much intervention is needed, and will it compromise food production systems? If we are observing record levels of rainfall and flow, for example during Storm Desmond in December 2015 in the North West of England, what other flood management options are really needed to complement our traditional defences in large basins in the future? In this paper we will show examples of NFM interventions in the UK that have had an impact at local sites. We will demonstrate the impact of interventions at the local, sub-catchment (meso) and finally the large scale. The tools used include observations, process-based models and more generalised flood impact models. Issues of synchronisation and the design level of protection will be debated. By reworking observed rainfall and discharge (runoff) for observed extreme events in the River Eden and River Tyne during Storm Desmond, we will show how much flood protection is needed in large-scale basins. The research will thus pose a number of key questions as to how floods may have to be managed in large-scale basins in the future.
We will seek to support a method of catchment systems engineering that holds water back across the whole landscape as a major opportunity to management water in large scale basins in the future. The broader benefits of engineering landscapes to hold water for pollution control, sediment loss and drought minimisation will also be shown.
Ultrasonic Recovery and Modification of Food Ingredients
NASA Astrophysics Data System (ADS)
Vilkhu, Kamaljit; Manasseh, Richard; Mawson, Raymond; Ashokkumar, Muthupandian
There are two general classes of effects that sound, and ultrasound in particular, can have on a fluid. First, very significant modifications to the nature of food and food ingredients can be due to the phenomena of bubble acoustics and cavitation. The applied sound oscillates bubbles in the fluid, creating intense forces at microscopic scales thus driving chemical changes. Second, the sound itself can cause the fluid to flow vigorously, both on a large scale and on a microscopic scale; furthermore, the sound can cause particles in the fluid to move relative to the fluid. These streaming phenomena can redistribute materials within food and food ingredients at both microscopic and macroscopic scales.
Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.
2015-12-01
An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
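The bias from neglecting small-scale variability can be illustrated with a toy saturation-adjustment calculation (a hypothetical piecewise-linear condensate function, not PyCLES's actual thermodynamics): because condensate is a nonlinear function of total water, the grid-mean condensate is not the condensate of the grid mean.

```python
import numpy as np

def condensate(q_t, q_sat=5.0):
    """Toy all-or-nothing condensation: everything above saturation condenses."""
    return np.maximum(q_t - q_sat, 0.0)

rng = np.random.default_rng(3)
q_t = rng.normal(loc=5.0, scale=1.0, size=100_000)  # subgrid total-water values

mean_of_cond = condensate(q_t).mean()  # resolves subgrid variability
cond_of_mean = condensate(q_t.mean())  # ignores it (single grid-mean value)
# mean_of_cond is ~0.4 here while cond_of_mean is ~0: ignoring the
# fluctuations biases the diagnosed condensate low, which is the kind of
# bias a subgrid-scale condensation closure is meant to remove.
```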
Sensitivity simulations of superparameterised convection in a general circulation model
NASA Astrophysics Data System (ADS)
Rybka, Harald; Tost, Holger
2015-04-01
Cloud-resolving models (CRMs), with horizontal grid spacings from a few hundred meters up to a few kilometers, have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically representing cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of small-scale physical processes interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach connecting these two types of models, which act on different scales, has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration within each grid cell of the GCM to explicitly represent small-scale processes, avoiding the convection and large-scale cloud parameterisations that are a major source of uncertainty regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) coupled to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the improved, superparameterised EMAC (SP-EMAC), modelling one year with prescribed sea surface temperatures and sea-ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and types is examined by changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle.
Furthermore, the subgrid variability (individual CRM cell output) is analysed in order to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.
The spatial and temporal domains of modern ecology.
Estes, Lyndon; Elsen, Paul R; Treuer, Timothy; Ahmed, Labeeb; Caylor, Kelly; Chang, Jason; Choi, Jonathan J; Ellis, Erle C
2018-05-01
To understand ecological phenomena, it is necessary to observe their behaviour across multiple spatial and temporal scales. Since this need was first highlighted in the 1980s, technology has opened previously inaccessible scales to observation. To help to determine whether there have been corresponding changes in the scales observed by modern ecologists, we analysed the resolution, extent, interval and duration of observations (excluding experiments) in 348 studies published between 2004 and 2014. We found that observational scales were generally narrow, because ecologists still primarily use conventional field techniques. In the spatial domain, most observations had resolutions ≤1 m² and extents ≤10,000 ha. In the temporal domain, most observations were either unreplicated or infrequently repeated (>1 month interval) and ≤1 year in duration. Compared with studies conducted before 2004, observational durations and resolutions appear largely unchanged, but intervals have become finer and extents larger. We also found a large gulf between the scales at which phenomena are actually observed and the scales those observations ostensibly represent, raising concerns about observational comprehensiveness. Furthermore, most studies did not clearly report scale, suggesting that it remains a minor concern. Ecologists can better understand the scales represented by observations by incorporating autocorrelation measures, while journals can promote attentiveness to scale by implementing scale-reporting standards.
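The closing suggestion, using autocorrelation to gauge the scale a set of observations actually represents, can be sketched for a regularly spaced transect (a deliberately simple 1/e-crossing estimator; real analyses would more likely use variograms or Moran's I):

```python
import numpy as np

def autocorr_length(values):
    """First lag at which the empirical autocorrelation of a regularly
    spaced series drops below 1/e -- a crude measure of the distance
    over which an observation carries information about its surroundings."""
    v = np.asarray(values, float)
    v = v - v.mean()
    ac = np.correlate(v, v, mode='full')[len(v) - 1:]
    ac /= ac[0]
    below = np.flatnonzero(ac < 1.0 / np.e)
    return int(below[0]) if below.size else len(v)

rng = np.random.default_rng(4)
white = rng.normal(size=100_000)          # uncorrelated samples
eps = rng.normal(size=100_000)
ar1 = np.zeros(100_000)                   # AR(1): correlation 0.9 per step
for t in range(1, ar1.size):
    ar1[t] = 0.9 * ar1[t - 1] + eps[t]

L_white = autocorr_length(white)          # ~1: each sample stands alone
L_ar1 = autocorr_length(ar1)              # ~10 steps of memory
```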
Exploring Entrainment Patterns of Human Emotion in Social Media
He, Saike; Zheng, Xiaolong; Zeng, Daniel; Luo, Chuan; Zhang, Zhu
2016-01-01
Emotion entrainment, which is generally defined as the synchronous convergence of human emotions, performs many important social functions. However, what the specific mechanisms of emotion entrainment are beyond in-person interactions, and how human emotions evolve under different entrainment patterns in large-scale social communities, are still unknown. In this paper, we aim to examine the massive emotion entrainment patterns and understand the underlying mechanisms in the context of social media. As modeling emotion dynamics on a large scale is often challenging, we elaborate a pragmatic framework to characterize and quantify the entrainment phenomenon. By applying this framework on the datasets from two large-scale social media platforms, we find that the emotions of online users entrain through social networks. We further uncover that online users often form their relations via dual entrainment, while maintaining them through single entrainment. Remarkably, the emotions of online users are more convergent in nonreciprocal entrainment. Building on these findings, we develop an entrainment augmented model for emotion prediction. Experimental results suggest that entrainment patterns inform emotion proximity in dyads, and encoding their associations promotes emotion prediction. This work can further help us to understand the underlying dynamic process of large-scale online interactions and make more reasonable decisions regarding emergency situations, epidemic diseases, and political campaigns in cyberspace. PMID:26953692
Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro
2011-04-14
Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials in cardiovascular diseases that evaluated true endpoints and involved 300 or more participants using PubMed, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November 2004, 25 February 2007, and 25 July 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs), 9.2% examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed.
To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1994-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth-order central differences through fast Fourier transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large scale features, such as the total circulation around the roll-up region, are adequately resolved.
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1992-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth order central differences through Fast Fourier Transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large-scale features, such as the total circulation around the roll-up region, are adequately resolved.
Long time existence from interior gluing
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.
2017-07-01
We prove completeness-to-the-future of null hypersurfaces emanating outwards from large spheres, in vacuum space-times evolving from general asymptotically flat data with well-defined energy-momentum. The proof uses scaling and a gluing construction to reduce the problem to Bieri’s stability theorem.
Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...
A synthesis and comparative evaluation of drainage water management
USDA-ARS?s Scientific Manuscript database
Viable large-scale crop production in the United States requires artificial drainage in humid and poorly drained agricultural regions. Excess water removal is generally achieved by installing tile drains that export water to open ditches that eventually flow into streams. Drainage water management...
NASA Astrophysics Data System (ADS)
Wang, Yuhong; Wang, Mingli; Shen, Lin; Zhu, Yanying; Sun, Xin; Shi, Guochao; Xu, Xiaona; Li, Ruifeng; Ma, Wanli
2018-01-01
Not available. Project supported by the Youth Fund Project of University Science and Technology Plan of Hebei Provincial Department of Education, China (Grant No. QN2015004) and the Doctoral Fund of Yanshan University, China (Grant No. B924).
Psychometric evaluation of the thought-action fusion scale in a large clinical sample.
Meyer, Joseph F; Brown, Timothy A
2013-12-01
This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.
A networked voting rule for democratic representation
Brigatti, Edgardo; Moreno, Yamir
2018-01-01
We introduce a general framework for exploring the problem of selecting a committee of representatives, with the aim of studying a networked voting rule based on a decentralized large-scale platform which can assure strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law, and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, which can have a marginal negative effect on the committee selection process. PMID:29657817
NASA Technical Reports Server (NTRS)
Pandey, P. C.
1982-01-01
Eight subsets using two to five frequencies of the SEASAT scanning multichannel microwave radiometer are examined to determine their potential for the retrieval of atmospheric water vapor content. Analysis indicates that the information in the 18 and 21 GHz channels is optimal for water vapor retrieval. A comparison with radiosonde observations gave an rms accuracy of approximately 0.40 g/cm². The rms accuracy of precipitable water using different subsets was within 10 percent. Global maps of precipitable water over oceans using two- and five-channel retrievals (average of the two- and five-channel retrievals) are given. Study of these maps reveals global moisture distributions associated with oceanic currents and the large-scale general circulation of the atmosphere. A stable feature of the large-scale circulation is noticed. The precipitable water is maximum over the Bay of Bengal and in the North Pacific over the Kuroshio current, and shows a general latitudinal pattern.
A family of dynamic models for large-eddy simulation
NASA Technical Reports Server (NTRS)
Carati, D.; Jansen, K.; Lund, T.
1995-01-01
Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.
NASA Astrophysics Data System (ADS)
Donoghue, John F.
2017-08-01
In the description of general covariance, the vierbein and the Lorentz connection can be treated as independent fundamental fields. With the usual gauge Lagrangian, the Lorentz connection is characterized by an asymptotically free running coupling. When running from high energy, the coupling gets large at a scale which can be called the Planck mass. If the Lorentz connection is confined at that scale, the low energy theory can have the Einstein Lagrangian induced at low energy through dimensional transmutation. However, in general there will be new divergences in such a theory and the Lagrangian basis should be expanded. I construct a conformally invariant model with a larger basis size which potentially may have the same property.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis-function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
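The regularization viewpoint is easy to make concrete. A minimal Tikhonov example on a generic toy inverse problem (not any specific tidal assimilation scheme): of the infinitely many models fitting underdetermined data, prefer the smallest.

```python
import numpy as np

def tikhonov_solve(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha ||m||^2.  The penalty is the
    simplest a priori assumption: among the infinitely many models that
    fit the data within error, prefer the one of smallest norm."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

# Underdetermined toy problem: 3 observations, 6 unknowns, so the data
# alone admit infinitely many exact solutions.
rng = np.random.default_rng(1)
G = rng.normal(size=(3, 6))
m_true = rng.normal(size=6)
d = G @ m_true
m_est = tikhonov_solve(G, d, alpha=1e-8)
# m_est fits the data essentially exactly but is the minimum-norm choice,
# generally different from m_true: the data cannot distinguish them.
```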
A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks
Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong
2011-01-01
With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation of a significant number of nodes, have become a hot topic. "Large-scale" mainly means a large area or a high node density. Accordingly, routing protocols must scale well as network extent and node density increase. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages.
Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn. PMID:22163808
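A representative hierarchical scheme of the kind surveyed is LEACH-style rotating cluster-head election, sketched below as an illustration (the survey covers many protocols, and details vary between them):

```python
import random

def leach_elect(node_ids, p, round_no, was_head):
    """One round of LEACH-style cluster-head election.  Each node that has
    not yet served as head in the current epoch (1/p rounds long)
    self-elects with threshold probability T; rotating the energy-hungry
    head role is what balances energy consumption across the network."""
    r_mod = round_no % int(1 / p)
    T = p / (1 - p * r_mod)
    return [n for n in node_ids if not was_head[n] and random.random() < T]
```

By the last round of an epoch the threshold reaches 1, so every node that has not yet served is forced to take a turn as cluster head.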
Dynamical generalized Hurst exponent as a tool to monitor unstable periods in financial time series
NASA Astrophysics Data System (ADS)
Morales, Raffaello; Di Matteo, T.; Gramatica, Ruggero; Aste, Tomaso
2012-06-01
We investigate the use of the Hurst exponent, dynamically computed over a weighted moving time-window, to evaluate the level of stability or instability of financial firms. Financial firms bailed out as a consequence of the 2007-2008 credit crisis show a clear increase with time of the generalized Hurst exponent in the period preceding the unfolding of the crisis. Conversely, firms belonging to other market sectors, which suffered the least throughout the crisis, show the opposite behavior. We find that the multifractality of the bailed-out firms increases around the crisis, suggesting that the multifractal properties of the time series are changing. These findings suggest the possibility of using the scaling behavior as a tool to track the level of stability of a firm. In this paper, we introduce a method to compute the generalized Hurst exponent which assigns larger weights to more recent events than to older ones, so that large fluctuations in the remote past are less likely to influence the estimate for the recent past. We also investigate the scaling associated with the tails of the log-return distributions and compare it with the scaling associated with the Hurst exponent, observing that the processes underlying the price dynamics of these firms are genuinely multiscaling.
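The estimator family described above can be sketched as a weighted q-moment scaling fit (a simplification for illustration; the paper's exact weighting scheme is its own contribution):

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20), weights=None):
    """Estimate H(q) from the scaling E[|x(t+tau)-x(t)|^q] ~ tau^(q*H(q)).
    Optional `weights` (oldest first) emphasise recent observations, as in
    the paper's weighted moving-window variant; None means uniform weights."""
    x = np.asarray(x, float)
    taus = list(taus)
    w = np.ones(len(x)) if weights is None else np.asarray(weights, float)
    moments = [np.average(np.abs(x[tau:] - x[:-tau]) ** q, weights=w[tau:])
               for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(moments), 1)
    return slope / q

# Sanity check on a process with known exponent: Brownian motion has H = 0.5.
rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(size=50_000))
H = generalized_hurst(bm, q=2)
# Exponential weights (memory ~5000 steps) favour the recent window:
H_w = generalized_hurst(bm, q=2, weights=np.exp(np.arange(50_000) / 5000.0))
```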
Xu, Jiansong; Potenza, Marc N.; Calhoun, Vince D.; Zhang, Rubin; Yip, Sarah W.; Wall, John T.; Pearlson, Godfrey D.; Worhunsky, Patrick D.; Garrison, Kathleen A.; Moran, Joseph M.
2016-01-01
Functional magnetic resonance imaging (fMRI) studies regularly use univariate general-linear-model-based analyses (GLM). Their findings are often inconsistent across different studies, perhaps because of several fundamental brain properties including functional heterogeneity, balanced excitation and inhibition (E/I), and sparseness of neuronal activities. These properties stipulate heterogeneous neuronal activities in the same voxels and likely limit the sensitivity and specificity of GLM. This paper selectively reviews findings of histological and electrophysiological studies and fMRI spatial independent component analysis (sICA) and reports new findings by applying sICA to two existing datasets. The extant and new findings consistently demonstrate several novel features of brain functional organization not revealed by GLM. They include overlap of large-scale functional networks (FNs) and their concurrent opposite modulations, and no significant modulations in activity of most FNs across the whole brain during any task conditions. These novel features of brain functional organization are highly consistent with the brain’s properties of functional heterogeneity, balanced E/I, and sparseness of neuronal activity, and may help reconcile inconsistent GLM findings. PMID:27592153
Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence
NASA Astrophysics Data System (ADS)
Leubner, M. P.; Vörös, Z.; Baumjohann, W.
Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is particularly apparent in the interplanetary medium from in situ observations. In this situation the classical Boltzmann-Gibbs extensive thermostatistics, applicable when microscopic interactions and memory are short-ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional reproduces accurately the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multiscale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values under slow solar wind conditions, where a Gaussian is approached in the limit of large scales. By contrast, the scaling properties in the high-speed solar wind are predominantly governed by the mean energy or variance of the distribution, which appears as a second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale also arises for the best-fitting theoretical PDFs.
Consequently, nonlocality in fluctuations, related to both, turbulence and its large scale driving, should be related to long-range interactions in the context of nonextensive entropy generalization, providing fundamentally the physical background of the observed scale dependence of fluctuations in intermittent space plasmas.
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large-scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are then used to drive a set of four simulations (LBs, for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time-average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) as the domain gets smaller. The extraction of the small-scale features using a spectral filter reveals important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow at higher levels in the atmosphere.
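The "degrade with a low-pass filter" step can be pictured with a minimal spectral filter (an illustrative sketch only; the sharp cutoff, filter shape, and `keep_frac` parameter are assumptions, not the study's actual emulation of coarse-resolution LBC):

```python
import numpy as np

def lowpass_2d(field, keep_frac=0.25):
    """Spectral low-pass filter of a 2D field.

    Retains only wavenumbers with |k| <= keep_frac * k_max in each
    direction, i.e. the lowest fraction of the resolved spectrum,
    emulating how a high-resolution ("big brother") field might be
    degraded to coarse-resolution driving data.
    """
    F = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)  # frequencies in cycles per grid point, |k| <= 0.5
    kx = np.fft.fftfreq(nx)
    mask = (np.abs(ky)[:, None] <= 0.5 * keep_frac) & \
           (np.abs(kx)[None, :] <= 0.5 * keep_frac)
    return np.real(np.fft.ifft2(F * mask))
```

With `keep_frac=0.25`, a long wave of 32-grid-point wavelength passes through unchanged while a short wave of 4-grid-point wavelength is removed entirely.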
Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-03-01
To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. For the purposes of this Guide, large-scale Federal renewable energy projects are defined as renewable energy facilities larger than 10 megawatts (MW) that are sited on Federal property and lands and typically financed and owned by third parties. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and in attracting the necessary private capital to complete them. This Guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment, and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment required to meet the goals for large-scale renewable energy projects will come from the private sector, this Guide has been organized to match Federal processes with typical phases of commercial project development. FEMP collaborated with the National Renewable Energy Laboratory (NREL) and professional project developers on this Guide to ensure that Federal projects have key elements recognizable to private-sector developers and investors. The main purpose of this Guide is to provide a project development framework that allows the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project, and begins the translation between the Federal and private-sector operating environments.
NASA Astrophysics Data System (ADS)
Giese, M.; Reimann, T.; Bailly-Comte, V.; Maréchal, J.-C.; Sauter, M.; Geyer, T.
2018-03-01
Due to the duality in terms of (1) the groundwater flow field and (2) the discharge conditions, flow patterns of karst aquifer systems are complex. Estimated aquifer parameters may differ by several orders of magnitude from the local (borehole) to the regional (catchment) scale because of the large contrast in hydraulic parameters between matrix and conduit, and because of their heterogeneity and anisotropy. One approach to dealing with this scale-effect problem in the estimation of hydraulic parameters of karst aquifers is the application of large-scale experiments, such as long-term high-abstraction conduit pumping tests, which stimulate measurable groundwater drawdown in both the karst conduit system and the fractured matrix. The numerical discrete conduit-continuum modeling approach MODFLOW-2005 Conduit Flow Process Mode 1 (CFPM1) is employed to simulate laminar and nonlaminar conduit flow, induced by large-scale experiments, in combination with Darcian matrix flow. Effects of large-scale experiments were simulated for idealized settings. Subsequently, diagnostic plots and analyses of different fluxes are applied to interpret differences in the simulated conduit drawdown and general flow patterns. The main focus is on the extent to which different conduit flow regimes affect the drawdown in conduit and matrix, depending on the hydraulic properties of the conduit system, i.e., conduit diameter and relative roughness. In this context, CFPM1 is applied to investigate the importance of considering turbulent conditions for the simulation of karst conduit flow. This work quantifies the relative error that results from assuming laminar conduit flow in the interpretation of a synthetic large-scale pumping test in karst.
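To see why the laminar assumption matters, one can compare standard Darcy-Weisbach friction factors for the two regimes (a generic textbook illustration using the Swamee-Jain approximation, not the CFPM1 formulation; the Reynolds number and relative roughness below are arbitrary example values):

```python
import math

def friction_factor_laminar(re):
    # Hagen-Poiseuille result for laminar pipe flow: f = 64 / Re.
    return 64.0 / re

def friction_factor_turbulent(re, rel_roughness):
    # Swamee-Jain explicit approximation to the Colebrook equation
    # for fully turbulent pipe flow.
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / re ** 0.9) ** 2

# For a fast conduit flow (Re = 1e5, relative roughness 1e-4), the
# laminar formula underestimates the friction factor, and hence the
# frictional head loss, by more than an order of magnitude.
re, eps = 1.0e5, 1.0e-4
f_lam = friction_factor_laminar(re)
f_turb = friction_factor_turbulent(re, eps)
```

Since head loss scales linearly with the friction factor in the Darcy-Weisbach relation, assuming laminar flow at this Reynolds number badly misrepresents the conduit's hydraulic response.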
Li, Li; Guo, Yichuan; Sun, Yuping; Yang, Long; Qin, Liang; Guan, Shouliang; Wang, Jinfen; Qiu, Xiaohui; Li, Hongbian; Shang, Yuanyuan; Fang, Ying
2018-03-01
The capability to directly build atomically thin transition metal dichalcogenide (TMD) devices by chemical synthesis offers important opportunities to achieve large-scale electronics and optoelectronics with seamless interfaces. Here, a general approach for the chemical synthesis of a variety of TMD (e.g., MoS2, WS2, and MoSe2) device arrays over large areas is reported. During chemical vapor deposition, semiconducting TMD channels and metallic TMD/carbon nanotube (CNT) hybrid electrodes are simultaneously formed on a CNT-patterned substrate, and then coalesce into seamless devices. Chemically synthesized TMD devices exhibit attractive electrical and mechanical properties. It is demonstrated that chemically synthesized MoS2-MoS2/CNT devices have Ohmic contacts between MoS2/CNT hybrid electrodes and MoS2 channels. In addition, MoS2-MoS2/CNT devices show greatly enhanced mechanical stability and photoresponsivity compared with conventional gold-contacted devices, which makes them suitable for flexible optoelectronics. Accordingly, a highly flexible pixel array based on chemically synthesized MoS2-MoS2/CNT photodetectors is applied to image sensing. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not practical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
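As a rough illustration of the iteration described in the abstract (a hedged sketch, not the authors' implementation: the ℓ1 soft-threshold stands in for the closed-form proximal operator, and `sigma`, `max_iter`, and the step-size safeguards are illustrative choices):

```python
import numpy as np

def soft_threshold(v, thresh):
    # Closed-form proximal operator of thresh * ||x||_1; the paper's point
    # is that many non-convex penalties admit similarly cheap closed forms.
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def gist(f, grad_f, lam, x0, sigma=1e-4, max_iter=500, tol=1e-10):
    """GIST-style solver for min_x f(x) + lam * ||x||_1 (illustrative)."""
    F = lambda z: f(z) + lam * np.abs(z).sum()
    x = np.asarray(x0, dtype=float).copy()
    t = 1.0  # inverse step size
    for _ in range(max_iter):
        g = grad_f(x)
        # Line search: shrink the step (increase t) until a
        # sufficient-decrease condition on the objective holds.
        while True:
            x_new = soft_threshold(x - g / t, lam / t)
            dx = x_new - x
            if F(x_new) <= F(x) - 0.5 * sigma * t * np.dot(dx, dx):
                break
            t *= 2.0
        dg = grad_f(x_new) - g
        x = x_new
        if np.sqrt(np.dot(dx, dx)) < tol:
            break
        # Barzilai-Borwein rule initializes the next inverse step size.
        t = np.dot(dx, dg) / np.dot(dx, dx) if np.dot(dx, dx) > 0 else 1.0
        t = max(t, 1e-10)
    return x
```

On a simple quadratic loss the solver recovers the well-known soft-thresholded solution of the lasso in a couple of iterations, which is a convenient sanity check of the proximal step.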
Statistical mechanics of soft-boson phase transitions
NASA Technical Reports Server (NTRS)
Gupta, Arun K.; Hill, Christopher T.; Holman, Richard; Kolb, Edward W.
1991-01-01
The existence of structure on large (100 Mpc) scales, and limits to anisotropies in the cosmic microwave background radiation (CMBR), have imperiled models of structure formation based solely upon the standard cold dark matter scenario. Novel scenarios, which may be compatible with large-scale structure and small CMBR anisotropies, invoke nonlinear fluctuations in the density appearing after recombination, accomplished via late-time phase transitions involving ultralow-mass scalar bosons. Herein, the statistical mechanics of such phase transitions are studied in several models involving naturally ultralow-mass pseudo-Nambu-Goldstone bosons (pNGB's). These models can exhibit several interesting effects at high temperature and are believed to represent the most general possibilities for pNGB's.
Large-eddy simulation of a turbulent mixing layer
NASA Technical Reports Server (NTRS)
Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.
1978-01-01
The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient in a well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated on a large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
NASA Technical Reports Server (NTRS)
Reynolds, W. C.
1983-01-01
The capabilities and limitations of large eddy simulation (LES) and full turbulence simulation (FTS) are outlined. It is pointed out that LES, although limited at the present time by the need for periodic boundary conditions, produces large-scale flow behavior in general agreement with experiments. What is more, FTS computations produce small-scale behavior that is consistent with available experiments. The importance of the development work being done on the National Aerodynamic Simulator is emphasized. Studies at present are limited to situations in which periodic boundary conditions can be applied on boundaries of the computational domain where the flow is turbulent.
Extracellular matrix motion and early morphogenesis
Loganathan, Rajprasad; Rongish, Brenda J.; Smith, Christopher M.; Filla, Michael B.; Czirok, Andras; Bénazéraf, Bertrand
2016-01-01
For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale ‘total’ cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. PMID:27302396
Four-center bubbled BPS solutions with a Gibbons-Hawking base
NASA Astrophysics Data System (ADS)
Heidmann, Pierre
2017-10-01
We construct four-center bubbled BPS solutions with a Gibbons-Hawking base space. We give a systematic procedure for building scaling solutions: starting from three-supertube configurations and using generalized spectral flows and gauge transformations to extend to solutions with four Gibbons-Hawking centers. This allows us to construct very large families of smooth horizonless solutions that have the same charges and angular momentum as supersymmetric black holes with a macroscopically large horizon area. Our construction reveals that all scaling solutions with four Gibbons-Hawking centers have an angular momentum at around 99% of the cosmic censorship bound. We give both an analytical and a numerical explanation for this unexpected feature.
Graph-based linear scaling electronic structure theory.
Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Graph-based linear scaling electronic structure theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Cosmological consistency tests of gravity theory and cosmic acceleration
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2017-01-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by upcoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights into the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large-scale structure, and the physical consistency of these probes with one another.
Radiometer requirements for Earth-observation systems using large space antennas
NASA Technical Reports Server (NTRS)
Keafer, L. S., Jr.; Harrington, R. F.
1983-01-01
Requirements are defined for Earth observation microwave radiometry for the decade of the 1990s using large space antenna (LSA) systems with apertures in the range from 50 to 200 m. General Earth observation needs, specific measurement requirements, orbit mission guidelines and constraints, and general radiometer requirements are defined. General Earth observation needs are derived from NASA's basic space science program. Specific measurands include soil moisture, sea surface temperature, salinity, water roughness, ice boundaries, and water pollutants. Measurements are required with spatial resolution from 10 to 1 km and with temporal resolution from 3 days to 1 day. The primary orbit altitude and inclination ranges are 450 to 2200 km and 60 to 98 deg, respectively. Contiguous large-scale coverage of several land and ocean areas over the globe dictates large (several hundred kilometers) swaths. Radiometer measurements are made in the frequency range from 1 to 37 GHz, preferably with dual-polarization radiometers with a minimum of 90 percent beam efficiency. Reflector-surface root-mean-square deviation tolerances are in the range from 1/30 to 1/100 of a wavelength.
Correlation between UV and IR cutoffs in quantum field theory and large extra dimensions
NASA Astrophysics Data System (ADS)
Cortés, J. L.
1999-04-01
A recently conjectured relationship between UV and IR cutoffs in an effective field theory without quantum gravity is generalized in the presence of large extra dimensions. Estimates for the corrections to the usual calculation of observables within quantum field theory are used to put very stringent limits, in some cases, on the characteristic scale of the additional compactified dimensions. Implications for the cosmological constant problem are also discussed.
Biodiversity and ecosystem stability across scales in metacommunities.
Wang, Shaopeng; Loreau, Michel
2016-05-01
Although diversity-stability relationships have been extensively studied in local ecosystems, the global biodiversity crisis calls for an improved understanding of these relationships in a spatial context. Here, we use a dynamical model of competitive metacommunities to study the relationships between species diversity and ecosystem variability across scales. We derive analytic relationships under a limiting case; these results are extended to more general cases with numerical simulations. Our model shows that, while alpha diversity decreases local ecosystem variability, beta diversity generally contributes to increasing spatial asynchrony among local ecosystems. Consequently, both alpha and beta diversity provide stabilising effects for regional ecosystems, through local and spatial insurance effects respectively. We further show that at the regional scale, the stabilising effect of biodiversity increases as spatial environmental correlation increases. Our findings have important implications for understanding the interactive effects of global environmental changes (e.g. environmental homogenisation) and biodiversity loss on ecosystem sustainability at large scales. © 2016 John Wiley & Sons Ltd/CNRS.
Towards large-scale plasma-assisted synthesis of nanowires
NASA Astrophysics Data System (ADS)
Cvelbar, U.
2011-05-01
Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.
Genetics of Resistant Hypertension: the Missing Heritability and Opportunities.
Teixeira, Samantha K; Pereira, Alexandre C; Krieger, Jose E
2018-05-19
Blood pressure regulation in humans has long been known to be a genetically determined trait, but the identification of causal genetic modulators of this trait has been largely unfruitful. Despite the recent advances of genome-wide genetic studies, loci associated with hypertension or blood pressure still explain a very low percentage of the overall variation of blood pressure in the general population. This has precluded the translation of discoveries in the genetics of human hypertension into clinical use. Here, we propose the combined use of resistant hypertension as a trait for mapping genetic determinants in humans and the integration of new large-scale technologies to approach, in model systems, the multidimensional nature of the problem. New large-scale efforts in the genetic and genomic arenas are paving the way for an increased and granular understanding of the genetic determinants of hypertension. New technologies for whole-genome sequencing and large-scale forward genetic screens can help prioritize genes and gene pathways for downstream characterization and large-scale population studies, and guided pharmacological design can be used to drive discoveries to translational application through better risk stratification and new therapeutic approaches. Although significant challenges remain in the mapping and identification of genetic determinants of hypertension, new large-scale technological approaches have been proposed to overcome some of the shortcomings that have limited progress in the area over the last three decades. The incorporation of these technologies into hypertension research may significantly help in the understanding of inter-individual blood pressure variation and the deployment of new phenotyping and treatment approaches for the condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dymarsky, Anatoly; Farnsworth, Kara; Komargodski, Zohar
This paper addresses the question of whether there are 4D Lorentz invariant unitary quantum fi eld theories with scale invariance but not conformal invariance. We present an important loophole in the arguments of Luty-Polchinski-Rattazzi and Dymarsky-Komargodski-Schwimmer-Theisen that is the trace of the energy-momentum tensor T could be a generalized free field. In this paper we rule out this possibility. The key ingredient is the observation that a unitary theory with scale but not conformal invariance necessarily has a non-vanishing anomaly for global scale transformations. We show that this anomaly cannot be reproduced if T is a generalized free field unlessmore » the theory also contains a dimension-2 scalar operator. In the special case where such an operator is present it can be used to redefine ("improve") the energy-momentum tensor, and we show that there is at least one energy-momentum tensor that is not a generalized free field. In addition, we emphasize that, in general, large momentum limits of correlation functions cannot be understood from the leading terms of the coordinate space OPE. This invalidates a recent argument by Farnsworth-Luty-Prilepina (FLP). Finally, despite the invalidity of the general argument of FLP, some of the techniques turn out to be useful in the present context.« less
Ecological Effects of Weather Modification: A Problem Analysis.
ERIC Educational Resources Information Center
Cooper, Charles F.; Jolly, William C.
This publication reviews the potential hazards to the environment of weather modification techniques as they eventually become capable of producing large scale weather pattern modifications. Such weather modifications could result in ecological changes which would generally require several years to be fully evident, including the alteration of…
Anxiety, Depression, Hostility and General Psychopathology: An Arabian Study.
ERIC Educational Resources Information Center
Ibrahim, Abdel-Sattar; Ibrahim, Radwa M.
In Arabian cultures, the psychosocial characteristics of psychopathological trends, including depression, anxiety, and hostility remain largely unknown. Scales measuring depression, anxiety, and hostility were administered to a voluntary sample of 989 Saudi Arabian men and 1,024 Saudi women coming from different social, economical, and educational…
Ecological Regional Analysis Applied to Campus Sustainability Performance
ERIC Educational Resources Information Center
Weber, Shana; Newman, Julie; Hill, Adam
2017-01-01
Purpose: Sustainability performance in higher education is often evaluated at a generalized large scale. It remains unknown to what extent campus efforts address regional sustainability needs. This study begins to address this gap by evaluating trends in performance through the lens of regional environmental characteristics.…
ERIC Educational Resources Information Center
Baird, Jo-Anne; Andrich, David; Hopfenbeck, Therese N.; Stobart, Gordon
2017-01-01
In response to the commentaries to their original article, the authors thank the commentators for their remarks and note that there is some general consensus across the commentaries around some major themes: (1) the lack of articulation between assessment and learning theories, particularly in relation to large-scale testing used for…
Systems and Cascades in Cognitive Development and Academic Achievement
ERIC Educational Resources Information Center
Bornstein, Marc H.; Hahn, Chun-Shin; Wolke, Dieter
2013-01-01
A large-scale ("N" = 552) controlled multivariate prospective 14-year longitudinal study of a developmental cascade embedded in a developmental system showed that information-processing efficiency in infancy (4 months), general mental development in toddlerhood (18 months), behavior difficulties in early childhood (36 months),…
NASA Astrophysics Data System (ADS)
Zhang, M.; Liu, S.
2017-12-01
Despite extensive studies on hydrological responses to forest cover change in small watersheds, the hydrological responses to forest change and associated mechanisms across multiple spatial scales have not been fully understood. This review thus examined about 312 watersheds worldwide to provide a generalized framework to evaluate hydrological responses to forest cover change and to identify the contribution of spatial scale, climate, forest type and hydrological regime in determining the intensity of forest change related hydrological responses in small (<1000 km²) and large watersheds (≥1000 km²). Key findings include: 1) the increase in annual runoff associated with forest cover loss is statistically significant at multiple spatial scales whereas the effect of forest cover gain is statistically inconsistent; 2) the sensitivity of annual runoff to forest cover change tends to attenuate as watershed size increases only in large watersheds; 3) annual runoff is more sensitive to forest cover change in water-limited watersheds than in energy-limited watersheds across all spatial scales; and 4) small mixed forest-dominated watersheds or large snow-dominated watersheds are more hydrologically resilient to forest cover change. These findings improve the understanding of hydrological response to forest cover change at different spatial scales and provide a scientific underpinning to future watershed management in the context of climate change and increasing anthropogenic disturbances.
Scale-invariance underlying the logistic equation and its social applications
NASA Astrophysics Data System (ADS)
Hernando, A.; Plastino, A.
2013-01-01
On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
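For reference, the logistic equation in its standard form (conventional notation, not necessarily the authors'):

```latex
\frac{\mathrm{d}N}{\mathrm{d}t} = r N \left(1 - \frac{N}{K}\right)
```

where N is the population (or component) size, r the intrinsic growth rate, and K the carrying capacity; the multi-component generalization discussed in the abstract extends this form to coupled components.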
Montresor, Antonio; Cong, Dai Tran; Sinuon, Mouth; Tsuyuoka, Reiko; Chanthavisouk, Chitsavang; Strandgaard, Hanne; Velayudhan, Raman; Capuano, Corinne M.; Le Anh, Tuan; Tee Dató, Ah S.
2008-01-01
In 2001, Urbani and Palmer published a review of the epidemiological situation of helminthiases in the countries of the Western Pacific Region of the World Health Organization indicating the control needs in the region. Six years after this inspiring article, large-scale preventive chemotherapy for the control of helminthiasis has scaled up dramatically in the region. This paper analyzes the most recent published and unpublished country information on large-scale preventive chemotherapy and summarizes the progress made since 2000. Almost 39 million treatments were provided in 2006 in the region for the control of helminthiasis: nearly 14 million for the control of lymphatic filariasis, more than 22 million for the control of soil-transmitted helminthiasis, and over 2 million for the control of schistosomiasis. In general, control of these helminthiases is progressing well in the Mekong countries and Pacific Islands. In China, despite harboring the majority of the helminth infections of the region, the control activities have not reached the level of coverage of countries with much more limited financial resources. The control of food-borne trematodes is still limited, but pilot activities have been initiated in China, Lao People's Democratic Republic, and Vietnam. PMID:18846234
Trust in the Medical Profession: Conceptual and Measurement Issues
Hall, Mark A; Camacho, Fabian; Dugan, Elizabeth; Balkrishnan, Rajesh
2002-01-01
Objective To develop and test a multi-item measure for general trust in physicians, in contrast with trust in a specific physician. Data Sources Random national telephone survey of 502 adult subjects with a regular physician and source of payment. Study Design Based on a multidimensional conceptual model, a large pool of candidate items was generated, tested, and revised using focus groups, expert reviewers, and pilot testing. The scale was analyzed for its factor structure, internal consistency, construct validity, and other psychometric properties. Principal Findings The resulting 11-item scale measuring trust in physicians generally is consistent with most aspects of the conceptual model except that it does not include the dimension of confidentiality. This scale has a single-factor structure, good internal consistency (alpha=.89), and good response variability (range=11–54; mean=33.5; SD=6.9). This scale is related to satisfaction with care, trust in one's physician, following doctors' recommendations, having no prior disputes with physicians, not having sought second opinions, and not having changed doctors. No association was found with race/ethnicity. While general trust and interpersonal trust are qualitatively similar, they are only moderately correlated with each other and general trust is substantially lower. Conclusions Emerging research on patients' trust has focused on interpersonal trust in a specific, known physician. Trust in physicians in general is also important and differs significantly from interpersonal physician trust. General physician trust potentially has a strong influence on important behaviors and attitudes, and on the formation of interpersonal physician trust. PMID:12479504
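The internal-consistency figure reported above (alpha = .89) is Cronbach's alpha. As a generic illustration of how the statistic is computed (the survey data themselves are not reproduced here; the function below is the standard textbook formula, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

For perfectly correlated items the statistic equals 1; values near .89, as in the scale above, indicate that the items largely co-vary while retaining some item-specific variance.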
Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo
2018-04-01
We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.
RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system
Jensen, Tue V.; Pinson, Pierre
2017-01-01
Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation. PMID:29182600
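The scaling idea mentioned in the abstract above (renewable generation traces scaled to an envisioned penetration level) can be sketched in a few lines; the function name, array shapes, and capacity cap below are our own illustrative assumptions, not the dataset's actual schema:

```python
import numpy as np

# Illustrative sketch only: scale an hourly wind-generation trace (MW) by a
# penetration factor, capping at installed capacity. This does not reproduce
# the RE-Europe dataset's real schema.
def scale_renewables(trace_mw, factor, capacity_mw):
    return np.minimum(trace_mw * factor, capacity_mw)

# doubling a toy trace, capped at 80 MW installed capacity
scaled = scale_renewables(np.array([10.0, 50.0, 70.0]), 2.0, 80.0)
```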
NASA Technical Reports Server (NTRS)
Canuto, V. M.
1978-01-01
A review of big-bang cosmology is presented, emphasizing the big-bang model, hypotheses on the origin of galaxies, observational tests of the big-bang model that may be possible with the Large Space Telescope, and the scale-covariant theory of gravitation. Detailed attention is given to the equations of general relativity, the redshift-distance relation for extragalactic objects, expansion of the universe, the initial singularity, the discovery of the 3-K blackbody radiation, and measurements of the amount of deuterium in the universe. The curvature of the expanding universe is examined along with the magnitude-redshift relation for quasars and galaxies. Several models for the origin of galaxies are evaluated, and it is suggested that a model of galaxy formation via the formation of black holes is consistent with the model of an expanding universe. Scale covariance is discussed, a scale-covariant theory is developed which contains invariance under scale transformation, and it is shown that Dirac's (1937) large-numbers hypothesis finds a natural role in this theory by relating the atomic and Einstein units.
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions-genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA © The Author 2014. Published by Oxford University Press.
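As a rough illustration of the diffusion-kernel idea underlying these scale-aware measures (a sketch of the general technique, not the authors' STM code), one can compute the kernel K(t) = exp(−tL) of a toy graph and vary the scale parameter t from local to global structure:

```python
import numpy as np

# Illustrative sketch (not the authors' STM implementation): a diffusion
# kernel K(t) = exp(-t * L) on a small graph, where L is the combinatorial
# graph Laplacian. Small t probes direct neighbors (small scale); large t
# probes module-level structure (large scale).
def diffusion_kernel(adj, t):
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # combinatorial graph Laplacian
    w, v = np.linalg.eigh(lap)            # eigendecomposition of symmetric L
    return v @ np.diag(np.exp(-t * w)) @ v.T

# path graph 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
k_small = diffusion_kernel(adj, 0.1)      # near-identity: local scale
k_large = diffusion_kernel(adj, 10.0)     # rows near-uniform: global scale
```

Row sums stay exactly 1 at every scale (the constant vector is a zero-eigenvalue eigenvector of L), which is what makes the kernel a well-behaved diffusion across scales.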
Energy transfer, pressure tensor, and heating of kinetic plasma
NASA Astrophysics Data System (ADS)
Yang, Yan; Matthaeus, William H.; Parashar, Tulasi N.; Haggerty, Colby C.; Roytershteyn, Vadim; Daughton, William; Wan, Minping; Shi, Yipeng; Chen, Shiyi
2017-07-01
Kinetic plasma turbulence cascade spans multiple scales ranging from macroscopic fluid flow to sub-electron scales. Mechanisms that dissipate large-scale energy, terminate the inertial-range cascade, and convert kinetic energy into heat are hotly debated. Here, we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows the highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, −(P · ∇) · u, opens a channel of energy conversion between fluid flow and random motions, which contains a collision-free generalization of the viscous dissipation in a collisional fluid. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
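The pressure-work term can be evaluated numerically once the pressure tensor P and bulk velocity u are known on a grid. The following is our own minimal finite-difference illustration on a 2D periodic grid, not the authors' Vlasov filtering code:

```python
import numpy as np

# Hedged sketch: evaluate the pressure work -(P . grad) . u = -P_ij d_j u_i
# with periodic central differences. P has shape (2, 2, N, N); u has shape
# (2, N, N); dx is the uniform grid spacing.
def pressure_work(P, u, dx):
    work = np.zeros(u.shape[1:])
    for i in range(2):
        for j in range(2):
            # central difference of u_i along direction j (periodic)
            dui_dxj = (np.roll(u[i], -1, axis=j) - np.roll(u[i], 1, axis=j)) / (2 * dx)
            work -= P[i, j] * dui_dxj
    return work
```

For an isotropic pressure P_ij = p δ_ij this reduces to the familiar compressive work −p ∇·u, which gives a quick sanity check: with p = 1 and u = (sin x, 0), the result should be −cos x up to discretization error.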
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Kristopher Garrett, C.; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
NASA Astrophysics Data System (ADS)
Piazzi, L.; Bonaviri, C.; Castelli, A.; Ceccherelli, G.; Costa, G.; Curini-Galletti, M.; Langeneck, J.; Manconi, R.; Montefalcone, M.; Pipitone, C.; Rosso, A.; Pinna, S.
2018-07-01
In the Mediterranean Sea, Cystoseira species are the most important canopy-forming algae in shallow rocky bottoms, hosting highly biodiverse sessile and mobile communities. A large-scale study has been carried out to investigate the structure of the Cystoseira-dominated assemblages at different spatial scales and to test the hypotheses that alpha and beta diversity of the assemblages, the abundance and the structure of epiphytic macroalgae, epilithic macroalgae, sessile macroinvertebrates and mobile macroinvertebrates associated with Cystoseira beds changed among scales. A hierarchical sampling design in a total of five sites across the Mediterranean Sea (Croatia, Montenegro, Sardinia, Tuscany and Balearic Islands) was used. A total of 597 taxa associated with Cystoseira beds were identified, with a mean number per sample ranging between 141.1 ± 6.6 (Tuscany) and 173.9 ± 8.5 (Sardinia). A high variability at small (among samples) and large (among sites) scales was generally observed, but the studied assemblages showed different patterns of spatial variability. The relative importance of the different scales of spatial variability should be considered to optimize sampling designs and propose monitoring plans of this habitat.
Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.
Large-angle cosmic microwave background anisotropies in an open universe
NASA Technical Reports Server (NTRS)
Kamionkowski, Marc; Spergel, David N.
1994-01-01
If the universe is open, scales larger than the curvature scale may be probed by observation of large-angle fluctuations in the cosmic microwave background (CMB). We consider primordial adiabatic perturbations and discuss power spectra that are power laws in volume, wavelength, and eigenvalue of the Laplace operator. Such spectra may have arisen if, for example, the universe underwent a period of `frustrated' inflation. The resulting large-angle anisotropies of the CMB are computed. The amplitude generally increases as Omega is decreased but decreases as h is increased. Interestingly enough, for all three Ansaetze, anisotropies on angular scales larger than the curvature scale are suppressed relative to the anisotropies on scales smaller than the curvature scale, but cosmic variance makes discrimination between various models difficult. Models with 0.2 approximately less than Omega h approximately less than 0.3 appear compatible with CMB fluctuations detected by Cosmic Background Explorer Satellite (COBE) and the Tenerife experiment and with the amplitude and spectrum of fluctuations of galaxy counts in the APM, CfA, and 1.2 Jy IRAS surveys. COBE normalization for these models yields sigma(sub 8) approximately = 0.5 - 0.7. Models with smaller values of Omega h when normalized to COBE require bias factors in excess of 2 to be compatible with the observed galaxy counts on the 8/h Mpc scale. Requiring that the age of the universe exceed 10 Gyr implies that Omega is approximately greater than 0.25. Apart from the last-scattering term in the Sachs-Wolfe formula, large-angle anisotropies come primarily from the decay of potential fluctuations at z approximately less than 1/Omega. Thus, if the universe is open, COBE has been detecting temperature fluctuations produced at moderate redshift rather than at z approximately 1300.
Large-scale numerical simulations of polydisperse particle flow in a silo
NASA Astrophysics Data System (ADS)
Rubio-Largo, S. M.; Maza, D.; Hidalgo, R. C.
2017-10-01
Very recently, we have examined experimentally and numerically the micro-mechanical details of monodisperse particle flows through an orifice placed at the bottom of a silo (Rubio-Largo et al. in Phys Rev Lett 114:238002, 2015). Our findings disentangled the paradoxical ideas associated with the free-fall arch concept, which has historically served to justify the dependence of the flow rate on the outlet size. In this work, we generalize those findings by examining large-scale polydisperse particle flows in silos. In the range of studied apertures, both velocity and density profiles at the aperture are self-similar, and the obtained scaling functions confirm that the relevant scale of the problem is the size of the aperture. Moreover, we find that the contact stress monotonically decreases when the particles approach the exit and vanishes at the outlet. The behavior of this magnitude is practically independent of the size of the orifice. However, the total and partial kinetic stress profiles suggest that the outlet size controls the propagation of the velocity fluctuations inside the silo. Examining this magnitude, we conclusively argue that there is indeed a well-defined transition region where the particle flow changes its nature. The general trend of the partial kinetic pressure profiles and the location of the transition region are the same for all particle types. We find that the partial kinetic stress is larger for bigger particles. However, the small particles carry a higher fraction of kinetic stress relative to their concentration, which suggests that the small particles have larger velocity fluctuations than the large ones and correlate more weakly with the global flow. Our outcomes explain why the free-fall arch picture has served to describe the polydisperse flow rate in the discharge of silos.
Was Dick Tracy Right? Do Magnetic Fields Rule the Cosmos?
NASA Astrophysics Data System (ADS)
Bartlett, David F.
2007-12-01
Astronomers generally subordinate magnetic forces to gravitational ones at all but the smallest scales. The 'Dual Proposal', however, introduces a new scale, λo=400 pc [1]. Here the photon has a real mass and the graviton an imaginary one, both of mc2 = hc/λo = 10^-25 eV. The resulting sinusoidal gravitational potential (φ = - (GM/r) Cos[kor], ko=2 π/λo) does not compromise solar system dynamics, explains the large tidal forces observed in the Milky Way, and predicts that the Galaxy has a central, physical stationary bar. The sinusoidal potential is powerless to bind large amorphous objects such as clusters of galaxies (or the Universe itself). Here one needs the massive photon (φ = (Q/r) Exp[- kor]). Chibisov (1976) has shown that at large scales (s>>λo), a massive photon will generally provide an attractive force rather than the usual repulsive one of the massless photon. At recent meetings of the AAS I have shown how the new cosmic magnetic fields can bind the Coma cluster or strip the gas (and plasma) from the stars in the Bullet Collision (Clowe et al 2006). In this poster, I demonstrate how magnetic fields can replace gravitational ones in cosmology. Two elements are critical. The Dark Ages are needed to explain the evolution of the scale factor a(t) from the time of nucleosynthesis to the present. Gravitational energy densities (ΔW/ΔV= (1/2) ρφ ) and magnetic energy densities (ΔW/ΔV= (1/2) J.A ) are now absolute and thus meaningful. Ref [1]: "Analogies between electricity and gravity", Metrologia 41 (2004) S115-S124.
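The quoted mass scale follows from the Compton relation and can be checked with a few lines of back-of-envelope arithmetic (our own computation, not part of the abstract): a Compton wavelength of λo = 400 pc corresponds to mc² = hc/λo ≈ 10^-25 eV.

```python
# Back-of-envelope check: m c^2 = h c / lambda_0 for lambda_0 = 400 pc.
hc_eV_m = 1.23984e-6        # h*c in eV*m
pc_in_m = 3.0857e16         # one parsec in metres
lambda0_m = 400 * pc_in_m   # ~1.23e19 m
mc2_eV = hc_eV_m / lambda0_m  # ~1e-25 eV, consistent with the abstract
```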
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. 
PMID:29095927
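The fuzzy-set encoding of prior knowledge described in the abstract above can be illustrated with a simple membership function (our own minimal example; the pipeline's actual operators are considerably more elaborate):

```python
# Illustrative trapezoidal membership function: full membership on [b, c],
# linear ramps on [a, b] and [c, d], zero outside [a, d]. Such a function
# can encode prior knowledge like "plausible cell radii lie between b and c",
# attaching a degree of certainty to each detected object.
def trapezoid(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising ramp
    return (d - x) / (d - c)       # falling ramp
```

A detection scoring 1.0 fully matches the prior; intermediate values propagate the ambiguity downstream instead of forcing a hard accept/reject decision.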
Role of absorbing aerosols on hot extremes in India in a GCM
NASA Astrophysics Data System (ADS)
Mondal, A.; Sah, N.; Venkataraman, C.; Patil, N.
2017-12-01
Temperature extremes and heat waves in North-Central India during the summer months of March through June are known for causing significant impact in terms of human health, productivity and mortality. While greenhouse gas-induced global warming is generally believed to intensify the magnitude and frequency of such extremes, aerosols are usually associated with an overall cooling, by virtue of their dominant radiation scattering nature, in most world regions. Recently, large-scale atmospheric conditions leading to heat wave and extreme temperature conditions have been analysed for the North-Central Indian region. However, the role of absorbing aerosols, including black carbon and dust, is still not well understood, in mediating hot extremes in the region. In this study, we use 30-year simulations from a chemistry-coupled atmosphere-only General Circulation Model (GCM), ECHAM6-HAM2, forced with evolving aerosol emissions in an interactive aerosol module, along with observed sea surface temperatures, to examine large-scale and mesoscale conditions during hot extremes in India. The model is first validated with observed gridded temperature and reanalysis data, and is found to represent observed variations in temperature in the North-Central region and concurrent large-scale atmospheric conditions during high temperature extremes realistically. During these extreme events, changes in near surface properties include a reduction in single scattering albedo and enhancement in short-wave solar heating rate, compared to climatological conditions. This is accompanied by positive anomalies of black carbon and dust aerosol optical depths. We conclude that the large-scale atmospheric conditions such as the presence of anticyclones and clear skies, conducive to heat waves and high temperature extremes, are exacerbated by absorbing aerosols in North-Central India. Future air quality regulations are expected to reduce sulfate particles and their masking of GHG warming. 
It is concurrently important to mitigate emissions of warming black carbon particles, to manage future climate change-induced hot extremes.
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
The use of impact force as a scale parameter for the impact response of composite laminates
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Poe, C. C., Jr.
1992-01-01
The building block approach is currently used to design composite structures. With this approach, the data from coupon tests are scaled up to determine the design of a structure. Current standard impact tests and methods of relating test data to other structures are not generally understood and are often used improperly. A methodology is outlined for using impact force as a scale parameter for delamination damage for impacts of simple plates. Dynamic analyses were used to define ranges of plate parameters and impact parameters where quasi-static analyses are valid. These ranges include most low velocity impacts where the mass of the impacter is large and the size of the specimen is small. For large mass impacts of moderately thick (0.35 to 0.70 cm) laminates, the maximum extent of delamination damage increased with increasing impact force and decreasing specimen thickness. For large mass impact tests at a given kinetic energy, impact force and hence delamination size depends on specimen size, specimen thickness, boundary conditions, and indenter size and shape. If damage is reported in terms of impact force instead of kinetic energy, large mass test results can be applied directly to other plates of the same thickness.
Maestro: An Orchestration Framework for Large-Scale WSN Simulations
Riliskis, Laurynas; Osipov, Evgeny
2014-01-01
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
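The benchmarking step described above amounts to ranking candidate VM types by simulation throughput per unit cost. A toy sketch of that selection (instance names and all figures are invented for illustration, not Maestro's actual data or API):

```python
# Hypothetical benchmark results: (instance type, simulated events/s, USD/hour).
benchmarks = [
    ("small",  120.0, 0.05),
    ("medium", 260.0, 0.10),
    ("large",  400.0, 0.40),
]

def best_value(results):
    # Pick the instance with the highest throughput per dollar.
    return max(results, key=lambda r: r[1] / r[2])

name, throughput, price = best_value(benchmarks)
```

In this toy data the mid-size instance wins: the largest one is faster in absolute terms but delivers far fewer simulated events per dollar.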
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.
Anomalies in the GRBs' distribution
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor
2015-08-01
Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of the different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups deviated from full randomness at different significance levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on the angular measurements of the BATSE experiment, without any spatial distance indicator. Currently we have more than 361 GRBs with precisely measured positions, optical afterglows and redshifts, mainly due to the observations of the Swift mission. This sample is now large enough that its homogeneous and isotropic distribution on large scales can be checked. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.
Controls on hillslope stability in a mountain river catchment
NASA Astrophysics Data System (ADS)
Golly, Antonius; Turowski, Jens; Hovius, Niels; Badoux, Alexandre
2015-04-01
Sediment transport in fluvial systems accounts for a large fraction of natural hazard damage costs in mountainous regions and is an important factor for risk mitigation, engineering and ecology. Although sediment transport in high-gradient channels has gathered research interest over the last decades, sediment dynamics in steep streams are generally not well understood. For instance, the sourcing of the sediment, and when and how it is actually mobilized, is largely undescribed. In the Erlenbach, a mountain torrent in the Swiss Prealps, we study the mechanistic relations between in-channel hydrology, channel morphology, external climatic controls and the surrounding sediment sources to identify relevant process domains for sediment input and their characteristic scales. Here, we analyze the motion of a slow-moving landslide complex that was permanently monitored by time-lapse cameras over a period of 70 days at a 30-minute interval. In addition, data sets for stream discharge, air temperature and precipitation rates are available. Apparent changes in the channel morphology, e.g. the destruction of channel-spanning bed forms, were manually determined from the time-lapse images and were treated as event marks in the time series. We identify five relevant types of sediment displacement processes emerging during the hillslope motion: concentrated mud flows, deep-seated hillslope failure, catastrophic cavity failure, hillslope bank erosion and individual grain loss. Generally, sediment displacement occurs on a large range of temporal and spatial scales, and sediment dynamics in steep streams do not depend only on large floods with long recurrence intervals. We find that each type of displacement acts in a specific temporal and spatial domain with its characteristic scales. Different external climatic forcings (e.g. high-intensity vs. long-lasting precipitation events) promote different displacement processes.
Stream morphology and the presence of boulders have a large effect on sediment input through deep-seated failures and cavity failures, while they have only minor impact on the other process types. In addition to large floods, which are generally recognized to produce huge amounts of sediment, we identify two relevant climatic regimes that play an important role in the sediment dynamics: a) long-lasting but low-intensity rainfall that explicitly triggers specific sediment displacement processes on the hillslopes, and b) smaller discharge events with recurrence intervals of approximately one year that mobilize sediments from the hillslopes' toes along the channel.
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Characteristics of wilderness users in outdoor recreation assessments
Alan E. Watson; H. Ken Cordell; Lawrence A. Hartmann
1989-01-01
Wilderness use is often subsumed under outdoor recreation participation in large-scale assessments. Participation monitoring has indicated, however, that wilderness use has been increasing faster than outdoor recreation use in general. In a sample of Forest Service wilderness and nonwilderness users during the summer of 1985, detailed expenditure, activity, and travel...
Identifying Country-Specific Cultures of Physics Education: A Differential Item Functioning Approach
ERIC Educational Resources Information Center
Mesic, Vanes
2012-01-01
In international large-scale assessments of educational outcomes, student achievement is often represented by unidimensional constructs. This approach allows for drawing general conclusions about country rankings with respect to the given achievement measure, but it typically does not provide specific diagnostic information which is necessary for…
Finite element meshing of ANSYS (trademark) solid models
NASA Technical Reports Server (NTRS)
Kelley, F. S.
1987-01-01
A large-scale, general-purpose finite element computer program, ANSYS, developed and marketed by Swanson Analysis Systems, Inc., is discussed. ANSYS was perhaps the first commercially available program to offer truly interactive finite element model generation. ANSYS also provides solid modeling; this application is briefly discussed and illustrated.
General Purpose Sampling in the Domain of Higher Education.
ERIC Educational Resources Information Center
Creager, John A.
The experience of the American Council on Education's Cooperative Institutional Research Program indicates that large-scale national surveys in the domain of higher education can be performed with scientific integrity within the constraints of costs, logistics, and technical resources. The purposes of this report are to provide complete and…
Development of Learning to Learn Skills in Primary School
ERIC Educational Resources Information Center
Vainikainen, Mari-Pauliina; Wüstenberg, Sascha; Kupiainen, Sirkku; Hotulainen, Risto; Hautamäki, Jarkko
2015-01-01
In Finland, schools' effectiveness in fostering the development of transversal skills is evaluated through large-scale learning to learn (LTL) assessments. This article presents how LTL skills--general cognitive competences and learning-related motivational beliefs--develop during primary school and how they predict pupils' CPS skills at the end…
"Fast Track" and "Traditional Path" Coaches: Affordances, Agency and Social Capital
ERIC Educational Resources Information Center
Rynne, Steven
2014-01-01
A recent development in large-scale coach accreditation (certification) structures has been the "fast tracking" of former elite athletes. Former elite athletes are often exempted from entry-level qualifications and are generally granted access to fast track courses that are shortened versions of the accreditation courses undertaken by…
The U.S. Army in Southeast Asia: Near-Term and Long-Term Roles
2013-01-01
clashes sparking naval skirmishes. That said, the prospect of the SCS disputes triggering a major conflagration with large-scale casualties appears...However, there is a general consensus within ASEAN that these problems constitute concrete security concerns and that defense establishments will
USDA-ARS?s Scientific Manuscript database
Higher-level relationships within the Lepidoptera, and particularly within the species-rich subclade Ditrysia, are generally not well understood, although recent studies have yielded progress. 483 taxa spanning 115 of 124 families were sampled for 19 protein-coding nuclear genes. Their aligned nucle...
EDUCATION AND THE VICTORIAN MIND OF ENGLAND.
ERIC Educational Resources Information Center
ELLSWORTH, EDWARD W.
The relation of the attitudes of leading public men in Britain concerning large-scale educational opportunity to the general philosophy of life in the Victorian period was studied. The educational ideologies of Benjamin Disraeli, William E. Gladstone, Lord John Russell, and William Lovett were ascertained. Adult education in 19th-century Britain…
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for a thermal ice protection system. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber-number-based scaling methods resulted in smaller runback ice mass than the Reynolds-number-based scaling method. The ice accretions from the Weber-number-based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber-number-based scaling methods, and the difference became greater when the speed was increased. This indicated that there may be some Reynolds number effects that aren't fully accounted for, which warrants further study.
Adaptation of a general circulation model to ocean dynamics
NASA Technical Reports Server (NTRS)
Turner, R. E.; Rees, T. H.; Woodbury, G. E.
1976-01-01
A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited-region situations. Strengths and weaknesses of the model are also presented.
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w(DE) ≃ -1 is a transient, and that the Universe will collapse in the future.
Vasconcelos-Raposo, José; Fernandes, Helder Miguel; Teixeira, Carla M
2013-01-01
The purpose of the present study was to assess the factor structure and reliability of the Depression, Anxiety and Stress Scales (DASS-21) in a large Portuguese community sample. Participants were 1020 adults (585 women and 435 men), with a mean age of 36.74 (SD = 11.90) years. All scales revealed good reliability, with Cronbach's alpha values between .80 (anxiety) and .84 (depression). The internal consistency of the total score was .92. Confirmatory factor analysis revealed that the best-fitting model (*CFI = .940, *RMSEA = .038) consisted of a latent component of general psychological distress (or negative affectivity) plus orthogonal depression, anxiety and stress factors. The Portuguese version of the DASS-21 showed good psychometric properties (factorial validity and reliability) and thus can be used as a reliable and valid instrument for measuring depression, anxiety and stress symptoms.
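The reliability coefficients reported above are Cronbach's alpha values. As a reference for how such a figure is computed (a generic sketch, not the study's analysis code), alpha for k items is k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, k_items) item-score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)
```

For perfectly correlated items the statistic reaches its maximum of 1; values around .80 to .92, as reported for the DASS-21 subscales and total score, indicate good internal consistency.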
A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean
NASA Technical Reports Server (NTRS)
Ponte, Rui M.
1998-01-01
Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.
Cosmological Ohm's law and dynamics of non-minimal electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenstein, Lukas; Jain, Rajeev Kumar; Urban, Federico R., E-mail: lukas.hollenstein@cea.fr, E-mail: jain@cp3.dias.sdu.dk, E-mail: furban@ulb.ac.be
2013-01-01
The origin of large-scale magnetic fields in cosmic structures and the intergalactic medium is still poorly understood. We explore the effects of non-minimal couplings of electromagnetism on the cosmological evolution of currents and magnetic fields. In this context, we revisit the mildly non-linear plasma dynamics around recombination that are known to generate weak magnetic fields. We use the covariant approach to obtain a fully general and non-linear evolution equation for the plasma currents and derive a generalised Ohm law valid on large scales as well as in the presence of non-minimal couplings to cosmological (pseudo-)scalar fields. Due to the sizeable conductivity of the plasma and the stringent observational bounds on such couplings, we conclude that modifications of the standard (adiabatic) evolution of magnetic fields are severely limited in these scenarios. Even at scales well beyond a Mpc, any departure from flux freezing behaviour is inhibited.
Small Scale Response and Modeling of Periodically Forced Turbulence
NASA Technical Reports Server (NTRS)
Bos, Wouter; Clark, Timothy T.; Rubinstein, Robert
2007-01-01
The response of the small scales of isotropic turbulence to periodic large scale forcing is studied using two-point closures. The frequency response of the turbulent kinetic energy and dissipation rate, and the phase shifts between production, energy and dissipation are determined as functions of Reynolds number. It is observed that the amplitude and phase of the dissipation exhibit nontrivial frequency and Reynolds number dependence that reveals a filtering effect of the energy cascade. Perturbation analysis is applied to understand this behavior which is shown to depend on distant interactions between widely separated scales of motion. Finally, the extent to which finite dimensional models (standard two-equation models and various generalizations) can reproduce the observed behavior is discussed.
Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method
NASA Astrophysics Data System (ADS)
Yang, Zailin; Wang, Yao; Hei, Baoping
2013-12-01
The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape function. To investigate the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for the discretization tests of material parameters with two fictitious distributions, and a numerical example is solved to observe the differences in the results between these two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
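For a 2-node linear bar element, the consistent mass matrix mentioned above is M_ij = ∫_0^L ρ(x) N_i(x) N_j(x) dx with shape functions N_1 = 1 - x/L and N_2 = x/L; for an inhomogeneous density it can be evaluated by Gauss quadrature. A generic sketch (unit cross-section; an illustration of the concept, not the paper's formulation):

```python
import numpy as np

def consistent_mass(rho, L, n_gauss=4):
    """Consistent mass matrix of a 2-node linear bar element of length L
    with spatially varying density rho(x) and unit cross-sectional area."""
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)  # rule on [-1, 1]
    x = 0.5 * L * (xg + 1.0)    # map quadrature points to [0, L]
    w = 0.5 * L * wg
    N = np.vstack([1.0 - x / L, x / L])  # shape functions at the points
    return np.einsum("q,iq,jq->ij", w * rho(x), N, N)

# Homogeneous check: rho = 1 recovers the textbook (L/6) * [[2, 1], [1, 2]].
M = consistent_mass(lambda x: np.ones_like(x), L=3.0)
```

With a non-constant rho(x) the same quadrature loop yields the inhomogeneous consistent mass matrix directly, with no need to subdivide the element into homogeneous pieces.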
The small-scale dynamo: breaking universality at high Mach numbers
NASA Astrophysics Data System (ADS)
Schleicher, Dominik R. G.; Schober, Jennifer; Federrath, Christoph; Bovino, Stefano; Schmidt, Wolfram
2013-02-01
The small-scale dynamo plays a substantial role in magnetizing the Universe under a large range of conditions, including subsonic turbulence at low Mach numbers, highly supersonic turbulence at high Mach numbers and a large range of magnetic Prandtl numbers Pm, i.e. the ratio of kinetic viscosity to magnetic resistivity. Low Mach numbers may, in particular, lead to the well-known, incompressible Kolmogorov turbulence, while for high Mach numbers, we are in the highly compressible regime, thus close to Burgers turbulence. In this paper, we explore whether in this large range of conditions, universal behavior can be expected. Our starting point is previous investigations in the kinematic regime. Here, analytic studies based on the Kazantsev model have shown that the behavior of the dynamo depends significantly on Pm and the type of turbulence, and numerical simulations indicate a strong dependence of the growth rate on the Mach number of the flow. Once the magnetic field saturates on the current amplification scale, backreactions occur and the growth is shifted to the next-larger scale. We employ a Fokker-Planck model to calculate the magnetic field amplification during the nonlinear regime, and find a resulting power-law growth that depends on the type of turbulence invoked. For Kolmogorov turbulence, we confirm previous results suggesting a linear growth of magnetic energy. For more general turbulent spectra, where the turbulent velocity scales with the characteristic length scale as u_ℓ ∝ ℓ^ϑ, we find that the magnetic energy grows as (t/T_ed)^(2ϑ/(1-ϑ)), with t being the time coordinate and T_ed the eddy-turnover time on the forcing scale of turbulence. For Burgers turbulence, ϑ = 1/2, quadratic rather than linear growth may thus be expected, as the spectral energy increases from smaller to larger scales more rapidly. The quadratic growth is due to the initially smaller growth rates obtained for Burgers turbulence. Similarly, we show that the characteristic length scale of the magnetic field grows as t^(1/(1-ϑ)) in the general case, implying t^(3/2) for Kolmogorov and t^2 for Burgers turbulence. Overall, we find that high Mach numbers, as typically associated with steep spectra of turbulence, may break the previously postulated universality, and introduce a dependence on the environment also in the nonlinear regime.
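The quoted growth laws can be sanity-checked by substituting the velocity-scaling exponent ϑ: Kolmogorov turbulence has ϑ = 1/3 and Burgers turbulence ϑ = 1/2. A simple numerical check of the exponents:

```python
def energy_exponent(theta):
    # Magnetic energy grows as (t / T_ed) ** (2 * theta / (1 - theta)).
    return 2 * theta / (1 - theta)

def length_exponent(theta):
    # The characteristic field scale grows as t ** (1 / (1 - theta)).
    return 1 / (1 - theta)

kolmogorov, burgers = 1 / 3, 1 / 2
# Kolmogorov: linear energy growth, length scale ~ t^(3/2).
# Burgers: quadratic energy growth, length scale ~ t^2.
```

This reproduces the abstract's statements: linear energy growth and a t^(3/2) length scale for Kolmogorov, versus quadratic growth and a t^2 length scale for Burgers turbulence.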
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
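The iteratively reweighted least-squares idea described above, replacing the non-smooth total variation term with a quadratic one whose weights are refreshed from the current iterate, can be sketched on a tiny 1-D problem. This is a generic dense-algebra IRLS sketch, not the paper's randomized generalized SVD implementation, and all parameter values are illustrative:

```python
import numpy as np

def irls_tv(A, b, lam=0.5, n_iter=30, eps=1e-8):
    """Approximately solve min ||A x - b||^2 + lam * TV(x) via IRLS:
    each |.| term in TV is replaced by a weighted quadratic with
    weights 1 / sqrt((D x)^2 + eps), recomputed from the iterate."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)   # first-difference operator
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)
        H = A.T @ A + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ b)
    return x

# Demo: recover a piecewise-constant profile from noisy direct observations.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(10), np.ones(10)])
b = x_true + 0.05 * rng.standard_normal(20)
x_hat = irls_tv(np.eye(20), b)
```

The sharp discontinuity survives the regularization, which is the property the abstract contrasts with a smoothing, minimum-structure inversion.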
NASA Astrophysics Data System (ADS)
Tang, Shuaiqi; Zhang, Minghua
2015-08-01
Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be directly measured from field measurements or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis (1DCVA)) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in available products such as Rapid Update Cycle, ERA-Interim, National Centers for Environmental Prediction Climate Forecast System Reanalysis, Modern-Era Retrospective Analysis for Research and Applications, Japanese 55-year Reanalysis, and North American Regional Reanalysis. We show that although the analyses/reanalyses generally capture the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at the regional scale of a few hundred kilometers are large, and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds, and are thus better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
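One simple way to turn many QA scores into a consensus ranking, averaging each model's rank across methods, can be sketched as follows (an illustrative scheme with made-up scores; the paper's actual combination and refinement protocol may differ):

```python
import numpy as np

def consensus_rank(scores):
    """scores: (n_models, n_methods) quality scores, higher is better.
    Returns model indices ordered from best to worst consensus rank."""
    scores = np.asarray(scores, dtype=float)
    # Double argsort turns each column of scores into ranks (0 = best).
    ranks = np.argsort(np.argsort(-scores, axis=0), axis=0)
    return np.argsort(ranks.mean(axis=1))

# Three candidate models scored by three hypothetical QA methods.
order = consensus_rank([[0.9, 0.8, 0.7],
                        [0.5, 0.9, 0.9],
                        [0.4, 0.2, 0.1]])
```

Here model 1 wins two of the three methods and is ranked first even though model 0 tops the first method, illustrating why a consensus is more robust than any single QA score.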
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, Tony; Keyser, David; Tegen, Suzanne
This analysis examines the employment and potential economic impacts of large-scale deployment of offshore wind technology off the coast of Oregon. This analysis examines impacts within the seven Oregon coastal counties: Clatsop, Tillamook, Lincoln, Lane, Douglas, Coos, and Curry. The impacts highlighted here can be used in county, state, and regional planning discussions and can be scaled to get a general sense of the economic development opportunities associated with other deployment scenarios.
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNNs) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extent areas, we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
2014-03-01
wind turbines from General Electric. China recognizes the issues with IPR but it is something that will take time to fix. It will be a significant...Large aircraft Large-scale oil and gas exploration Manned space, including lunar exploration Next-generation broadband wireless ...circuits, and building an innovation system for China’s integrated circuit (IC) manufacturing industry. 3. New generation broadband wireless mobile
Birkhofer, Klaus; Schöning, Ingo; Alt, Fabian; Herold, Nadine; Klarner, Bernhard; Maraun, Mark; Marhan, Sven; Oelmann, Yvonne; Wubet, Tesfaye; Yurkov, Andrey; Begerow, Dominik; Berner, Doreen; Buscot, François; Daniel, Rolf; Diekötter, Tim; Ehnes, Roswitha B.; Erdmann, Georgia; Fischer, Christiane; Foesel, Bärbel; Groh, Janine; Gutknecht, Jessica; Kandeler, Ellen; Lang, Christa; Lohaus, Gertrud; Meyer, Annabel; Nacke, Heiko; Näther, Astrid; Overmann, Jörg; Polle, Andrea; Pollierer, Melanie M.; Scheu, Stefan; Schloter, Michael; Schulze, Ernst-Detlef; Schulze, Waltraud; Weinert, Jan; Weisser, Wolfgang W.; Wolters, Volkmar; Schrumpf, Marion
2012-01-01
Very few principles have been unraveled that explain the relationship between soil properties and soil biota across large spatial scales and different land-use types. Here, we seek these general relationships using data from 52 differently managed grassland and forest soils in three study regions spanning a latitudinal gradient in Germany. We hypothesize that, after extraction of variation that is explained by location and land-use type, soil properties still explain significant proportions of variation in the abundance and diversity of soil biota. When the relationships between predictors and soil organisms were analyzed individually for each predictor group, soil properties explained the highest amount of variation in soil biota abundance and diversity, followed by land-use type and sampling location. After extraction of variation that originated from location or land-use, abiotic soil properties explained significant amounts of variation in fungal, meso- and macrofauna, but not in yeast or bacterial biomass or diversity. Nitrate or nitrogen concentration and fungal biomass were positively related, but nitrate concentration was negatively related to the abundances of Collembola and mites and to myriapod species richness across a range of forest and grassland soils. The species richness of earthworms was positively correlated with the clay content of soils, independent of sample location and land-use type. Our study indicates that, after accounting for heterogeneity resulting from large-scale differences among sampling locations and land-use types, soil properties still explain significant proportions of variation in fungal and soil fauna abundance or diversity. However, soil biota were also related to processes that act at larger spatial scales, and bacteria or soil yeasts only showed weak relationships to soil properties. We therefore argue that more general relationships between soil properties and soil biota can only be derived from future studies that consider larger spatial scales and different land-use types. PMID:22937029
Gravitational field of static p-branes in linearized ghost-free gravity
NASA Astrophysics Data System (ADS)
Boos, Jens; Frolov, Valeri P.; Zelnikov, Andrei
2018-04-01
We study the gravitational field of static p-branes in D-dimensional Minkowski space in the framework of linearized ghost-free (GF) gravity. The concrete models of GF gravity we consider are parametrized by the nonlocal form factors exp(-□/μ²) and exp(□²/μ⁴), where μ⁻¹ is the scale of nonlocality. We show that the singular behavior of the gravitational field of p-branes in general relativity is cured by short-range modifications introduced by the nonlocalities, and we derive exact expressions for the regularized gravitational fields, whose geometry can be written as a warped metric. For distances large compared to the scale of nonlocality, μr → ∞, our solutions approach those found in linearized general relativity.
Deter, J; Berthier, K; Chaval, Y; Cosson, J F; Morand, S; Charbonnel, N
2006-04-01
Infection by the cestode Taenia taeniaeformis was investigated within numerous cyclic populations of the fossorial water vole Arvicola terrestris sampled during 4 years in Franche-Comté (France). The relative influence of different rodent demographic parameters on the presence of this cestode was assessed by considering (1) the demographic phase of the cycle; (2) density at the local geographical scale (<0.1 km²); (3) mean density at a larger scale (>10 km²). The local scale corresponded to the rodent population (intermediate host), while the large scale corresponded to the definitive host population (wild and feral cats). General linear models based on analyses of 1804 voles revealed the importance of local density, but also of year, rodent age, season, and interactions between year and season and between age and season. Prevalence was significantly higher at low vole densities than during local outbreaks. By contrast, the large-scale density and the demographic phase had less influence on infection by the cestode. The potential impacts of the cestode on the fitness of the host were assessed, and infection had no effect on host body mass, litter size or the sexual activity of voles.
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
Turning Ocean Mixing Upside Down
NASA Astrophysics Data System (ADS)
Ferrari, Raffaele; Mashayek, Ali; Campin, Jean-Michael; McDougall, Trevor; Nikurashin, Maxim
2015-11-01
It is generally understood that small-scale mixing, such as that caused by breaking internal waves, drives upwelling of the densest ocean waters, which sink to the ocean bottom at high latitudes. However, the observational evidence that small-scale mixing is more vigorous close to the ocean bottom than above implies that small-scale mixing converts light waters into denser ones, thus driving a net sinking of abyssal water. It is shown that abyssal waters return to the surface along weakly stratified boundary layers, where the small-scale mixing of density decays to zero. The net ocean meridional overturning circulation is thus the small residual of a large sinking of waters, driven by small-scale mixing in the stratified interior, and an equally large upwelling, driven by the reduced small-scale mixing along the ocean boundaries. Whether abyssal waters upwell or sink in the net therefore cannot be inferred simply from the vertical profile of mixing intensity; it depends also on the ocean hypsometry, i.e. the shape of the bottom topography. The implications of this result for our understanding of the abyssal ocean circulation will be presented with a combination of numerical models and observations.
NASA Astrophysics Data System (ADS)
Separovic, Leo; Husain, Syed Zahid; Yu, Wei
2015-09-01
Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the models' sensitive dependence on arbitrarily small modifications. If IV is large, it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs, as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial-condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In the fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that the IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, in all configurations, the hourly near-surface temperature, humidity, and winds are only partly reproducible. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and a probabilistic approach to downscaling is therefore recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale-dependent bias on large scales. We derive analytic expressions for the large-scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale-dependent bias from primordial nongaussianity, for a general nonlinear transformation. These biases can be expressed entirely in terms of the one-point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view a large-scale bias different from unity, and the primordial nongaussianity bias, as consequences of converting higher-order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large-scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of −0.1, compared to the observed value of −0.13 ± 0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values, we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.
NASA Astrophysics Data System (ADS)
Fernández, V.; Dietrich, D. E.; Haney, R. L.; Tintoré, J.
In situ and satellite data obtained during the last ten years have shown that the circulation in the Mediterranean Sea is extremely complex in space, with significant features ranging from mesoscale to sub-basin and basin scale, and highly variable in time, with mesoscale to seasonal and interannual signals. In addition, the steep bottom topography and the atmospheric conditions that vary from one sub-basin to another make the circulation a composite of numerous energetic and narrow coastal currents, density fronts and mesoscale structures that interact at sub-basin scale with the large-scale circulation. To simulate these features numerically and understand them better, an ocean model with high grid resolution, low numerical dispersion and low physical dissipation is required. We present the results from a 1/8° horizontal resolution numerical simulation of the Mediterranean Sea using the DieCAST ocean model, which meets the above requirements since it is stable with low general dissipation and uses accurate fourth-order approximations with low numerical dispersion. The simulations are carried out with climatological surface forcing using monthly mean winds and relaxation towards climatological values of temperature and salinity. The model reproduces the main features of the large basin-scale circulation, as well as the seasonal variability of sub-basin-scale currents that are well documented by observations in straits and channels. In addition, DieCAST brings out natural fronts and eddies that usually do not appear in numerical simulations of the Mediterranean and that lead to a natural interannual variability. The role of this intrinsic variability in the general circulation will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renk, Janina; Zumalacárregui, Miguel; Montanari, Francesco, E-mail: renk@thphys.uni-heidelberg.de, E-mail: miguel.zumalacarregui@nordita.org, E-mail: francesco.montanari@helsinki.fi
2016-07-01
We address the impact of consistent modifications of gravity on the largest observable scales, focusing on relativistic effects in galaxy number counts and the cross-correlation between the matter large scale structure (LSS) distribution and the cosmic microwave background (CMB). Our analysis applies to a very broad class of general scalar-tensor theories encoded in the Horndeski Lagrangian and is fully consistent on linear scales, retaining the full dynamics of the scalar field and not assuming quasi-static evolution. As particular examples we consider self-accelerating Covariant Galileons, Brans-Dicke theory and parameterizations based on the effective field theory of dark energy, using the hi_class code to address the impact of these models on relativistic corrections to LSS observables. We find that especially effects which involve integrals along the line of sight (lensing convergence, time delay and the integrated Sachs-Wolfe effect, ISW) can be considerably modified, and even lead to O(1000%) deviations from General Relativity in the case of the ISW effect for Galileon models, for which standard probes such as the growth function only vary by O(10%). These effects become dominant when correlating galaxy number counts at different redshifts and can lead to ∼50% deviations in the total signal that might be observable by future LSS surveys. Because of their integrated nature, these deep-redshift cross-correlations are sensitive to modifications of gravity even when probing eras much before dark energy domination. We further isolate the ISW effect using the cross-correlation between LSS and CMB temperature anisotropies and use current data to further constrain Horndeski models. Forthcoming large-volume galaxy surveys using multiple tracers will search for all these effects, opening a new window to probe gravity and cosmic acceleration at the largest scales available in our universe.
Nightside Detection of a Large-Scale Thermospheric Wave Generated by a Solar Eclipse
NASA Astrophysics Data System (ADS)
Harding, B. J.; Drob, D. P.; Buriti, R. A.; Makela, J. J.
2018-04-01
The generation of a large-scale wave in the upper atmosphere caused by a solar eclipse was first predicted in the 1970s, but the experimental evidence remains sparse and comprises mostly indirect observations. This study presents observations of the wind component of a large-scale thermospheric wave generated by the 21 August 2017 total solar eclipse. In contrast with previous studies, the observations are made on the nightside, after the eclipse ended. A ground-based interferometer located in northeastern Brazil is used to monitor the Doppler shift of the 630.0-nm airglow emission, providing direct measurements of the wind and temperature in the thermosphere, where eclipse effects are expected to be the largest. A disturbance is seen in the zonal and meridional wind which is at or above the 90% significance level based on the measured 30-day variability. These observations are compared with a first-principles numerical model calculation from the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model, which predicted the propagation of a large-scale wave well into the nightside. The modeled disturbance matches well the difference between the wind measurements and the 30-day median, though the measured perturbation (∼60 m/s) is larger than the prediction (38 m/s) for the meridional wind. No clear evidence for the wave is seen in the temperature data, however.
The build up of the correlation between halo spin and the large-scale structure
NASA Astrophysics Data System (ADS)
Wang, Peng; Kang, Xi
2018-01-01
Both simulations and observations have confirmed that the spin of haloes/galaxies is correlated with the large-scale structure (LSS), with a mass dependence such that the spin of low-mass haloes/galaxies tends to be parallel to the LSS, while that of massive haloes/galaxies tends to be perpendicular to it. It is still unclear how this mass dependence is built up over time. We use N-body simulations to trace the evolution of the halo spin-LSS correlation and find that at early times the spin of all halo progenitors is parallel to the LSS. As time goes on, mass collapse around massive haloes becomes more isotropic; in particular, recent mass accretion along the slowest collapsing direction is significant and turns the halo spin perpendicular to the LSS. Adopting the fractional anisotropy (FA) parameter to describe the degree of anisotropy of the large-scale environment, we find that the spin-LSS correlation is a strong function of environment, such that higher FA (a more anisotropic environment) leads to an aligned signal, while lower anisotropy leads to a misaligned signal. In general, our results show that the spin-LSS correlation is a combined consequence of mass flow and halo growth within the cosmic web. Our predicted environmental dependence between spin and large-scale structure can be further tested using galaxy surveys.
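The fractional anisotropy parameter mentioned above can be made concrete. A common definition (borrowed from the diffusion-tensor literature) builds FA from the three eigenvalues of a local deformation or tidal tensor; the sketch below assumes that convention, which may differ in detail from the paper's exact choice.

```python
def fractional_anisotropy(l1, l2, l3):
    # FA = sqrt(3/2) * |lambda - mean(lambda)| / |lambda|, ranging from
    # 0 (fully isotropic environment) to 1 (fully anisotropic).
    mean = (l1 + l2 + l3) / 3.0
    num = ((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2) ** 0.5
    den = (l1 ** 2 + l2 ** 2 + l3 ** 2) ** 0.5
    return (1.5 ** 0.5) * num / den

print(round(fractional_anisotropy(1.0, 1.0, 1.0), 3))  # 0.0, isotropic
print(round(fractional_anisotropy(1.0, 0.0, 0.0), 3))  # 1.0, filament-like
```

In this convention, environments with one dominant eigenvalue (sheets, filaments) score high FA, matching the paper's "more anisotropic environment leads to an aligned signal".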
Large-scale vegetation responses to terrestrial moisture storage changes
NASA Astrophysics Data System (ADS)
Andrew, Robert L.; Guan, Huade; Batelaan, Okke
2017-09-01
The normalised difference vegetation index (NDVI) is a useful tool for studying vegetation activity and ecosystem performance at large spatial scales. In this study we use the Gravity Recovery and Climate Experiment (GRACE) total water storage (TWS) estimates to examine temporal variability of the NDVI across Australia. We aim to demonstrate a new method that reveals the moisture dependence of vegetation cover at different temporal resolutions. Time series of monthly GRACE TWS anomalies are decomposed into different temporal frequencies using a discrete wavelet transform and analysed against time series of NDVI anomalies in a stepwise regression. The results show that combinations of different frequencies of decomposed GRACE TWS data explain NDVI temporal variations better than raw GRACE TWS alone. Generally, the NDVI appears to be more sensitive to interannual changes in water storage than to shorter-term changes, though grassland-dominated areas are sensitive to higher-frequency water storage changes. Different types of vegetation, defined by areas of land use type, show distinct differences in how they respond to changes in water storage, which is generally consistent with our physical understanding. This method provides useful insight into how the NDVI is affected by changes in water storage at different temporal scales across land use types.
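The decompose-then-regress idea can be illustrated with a toy version: a one-level Haar wavelet split of a monthly storage series into slow and fast bands, then a correlation of each band with an NDVI anomaly series. The synthetic series, the single-level split, and the use of plain correlation in place of the paper's full multi-level stepwise regression are all assumptions for illustration.

```python
def haar_split(x):
    # One-level Haar decomposition, upsampled back to the input length:
    # pairwise means carry the slow band, pairwise differences the fast band.
    slow, fast = [], []
    for i in range(0, len(x) - 1, 2):
        a = (x[i] + x[i + 1]) / 2.0
        d = (x[i] - x[i + 1]) / 2.0
        slow += [a, a]
        fast += [d, -d]
    return slow, fast

def corr(u, v):
    # Pearson correlation coefficient.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Synthetic monthly series: storage = slow trend + fast monthly wiggle,
# while the NDVI anomaly follows only the slow trend.
months = 24
tws = [i // 2 + (0.5 if i % 2 else -0.5) for i in range(months)]
ndvi = [float(i // 2) for i in range(months)]

slow, fast = haar_split(tws)
print(round(corr(ndvi, slow), 2), round(corr(ndvi, fast), 2))  # 1.0 0.0
```

Here the slow band explains all of the NDVI variation and the fast band none, the same qualitative diagnosis the study makes for non-grassland vegetation.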
Recognising Axionic Dark Matter by Compton and de-Broglie Scale Modulation of Pulsar Timing
NASA Astrophysics Data System (ADS)
De Martino, Ivan; Broadhurst, Tom; Tye, S.-H. Henry; Chiueh, Tzihong; Schive, Hsi-Yu; Lazkoz, Ruth
2017-11-01
Light axionic dark matter, motivated by string theory, is increasingly favored for the "no-WIMP era". Galaxy formation is suppressed below a Jeans scale of ≃10^8 M_⊙ by setting the axion mass to m_B ∼ 10^{-22} eV, and the large dark cores of dwarf galaxies are explained as solitons on the de Broglie scale. This is persuasive, but detection of the inherent scalar field oscillation at the Compton frequency, ω_B = (2.5 months)^{-1}(m_B/10^{-22} eV), would be definitive. By evolving the coupled Schrödinger-Poisson equation for a Bose-Einstein condensate, we predict the dark matter is fully modulated by de Broglie interference, with a dense soliton core of size ≃150 pc at the Galactic center. The oscillating field pressure induces general relativistic time dilation in proportion to the local dark matter density, and pulsars within this dense core have detectably large timing residuals of ≃400 ns/(m_B/10^{-22} eV). This is encouraging, as many new pulsars should be discovered near the Galactic center with planned radio surveys. More generally, over the whole Galaxy, differences in dark matter density between pairs of pulsars imprint a pairwise Galactocentric signature that can be distinguished from an isotropic gravitational wave background.
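The two scalings quoted in the abstract (an oscillation timescale of 2.5 months and a timing residual of ≃400 ns, both referenced to m_B = 10^{-22} eV) are simple enough to encode directly. Treating ω_B⁻¹ as the oscillation timescale, rather than the full period 2π/ω_B, is an assumption of this sketch.

```python
REF_MASS_EV = 1e-22  # reference axion mass from the abstract

def timing_residual_ns(m_b_ev):
    # Pulsar timing residual scales inversely with axion mass:
    # ~400 ns at the reference mass of 1e-22 eV.
    return 400.0 / (m_b_ev / REF_MASS_EV)

def oscillation_timescale_months(m_b_ev):
    # omega_B = (2.5 months)^-1 * (m_B / 1e-22 eV), so the timescale
    # omega_B^-1 shrinks as the mass grows.
    return 2.5 / (m_b_ev / REF_MASS_EV)

print(timing_residual_ns(1e-22))            # 400.0
print(oscillation_timescale_months(2e-22))  # 1.25
```

Both quantities fall with increasing mass, which is why heavier axion candidates would be harder to detect this way: the residual shrinks and the oscillation becomes faster than typical timing cadences.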
Evaluating large-scale health programmes at a district level in resource-limited countries.
Svoronos, Theodore; Mate, Kedar S
2011-11-01
Recent experience in evaluating large-scale global health programmes has highlighted the need to consider contextual differences between sites implementing the same intervention. Traditional randomized controlled trials are ill-suited for this purpose, as they are designed to identify whether an intervention works, not how, when and why it works. In this paper we review several evaluation designs that attempt to account for contextual factors that contribute to intervention effectiveness. Using these designs as a base, we propose a set of principles that may help to capture information on context. Finally, we propose a tool traditionally used in implementation work, called a driver diagram, that would allow evaluators to systematically monitor changing dynamics in project implementation and identify contextual variation across sites. We describe an implementation-related example from South Africa to underline the strengths of the tool. If used across multiple sites and multiple projects, the resulting driver diagrams could be pooled to form a generalized theory of how, when and why a widely used intervention works. Mechanisms similar to the driver diagram are urgently needed to complement existing evaluations of large-scale implementation efforts.
Ward identities and consistency relations for the large scale structure with multiple species
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloso, Marco; Pietroni, Massimo, E-mail: peloso@physics.umn.edu, E-mail: pietroni@pd.infn.it
2014-04-01
We present fully nonlinear consistency relations for the squeezed bispectrum of Large Scale Structure. These relations hold when the matter component of the Universe is composed of one or more species, and generalize those obtained in [1,2] in the single-species case. The multi-species relations apply to the standard dark matter + baryons scenario, as well as to the case in which some of the fields are auxiliary quantities describing a particular population, such as dark matter halos or a specific galaxy class. If a large-scale velocity bias exists between the different populations, new terms appear in the consistency relations with respect to the single-species case. As an illustration, we discuss two physical cases in which such a velocity bias can exist: (1) a new long-range scalar force in the dark matter sector (resulting in a violation of the equivalence principle in the dark matter-baryon system), and (2) the distribution of dark matter halos relative to that of the underlying dark matter field.
Large-scale immigration and political response: popular reaction in California.
Clark, W A
1998-03-01
Over the past 3 years, the level of political debate has grown over the nature and extent of the recent large-scale immigration to the US in general, and to California in particular. California's Proposition 187 to deny welfare benefits to illegal immigrants brought national attention to the immigration debate, and no doubt influenced recent decisions to significantly change the US's welfare program. The author studied the vote on Proposition 187 in the November 1994 California election to better understand the nature of reaction to large-scale immigration and recent arguments about anti-immigrant sentiment and nativism. The only counties which voted against the proposition were Sonoma, Marin, San Mateo, Santa Cruz, Yolo, Alameda, and Santa Clara, as well as the population of San Francisco. The vote generated political responses from across the border as well as within California. Statements from Mexican and other Central American governments reflected their concern over the possibility of returning populations, for whom there are neither jobs nor public services in their countries of origin. Findings are presented from a spatial analysis of the vote by census tracts in Los Angeles County.
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
States of mind: emotions, body feelings, and thoughts share distributed neural networks.
Oosterwijk, Suzanne; Lindquist, Kristen A; Anderson, Eric; Dautoff, Rebecca; Moriguchi, Yoshiya; Barrett, Lisa Feldman
2012-09-01
Scientists have traditionally assumed that different kinds of mental states (e.g., fear, disgust, love, memory, planning, concentration, etc.) correspond to different psychological faculties that have domain-specific correlates in the brain. Yet, growing evidence points to the constructionist hypothesis that mental states emerge from the combination of domain-general psychological processes that map to large-scale distributed brain networks. In this paper, we report a novel study testing a constructionist model of the mind in which participants generated three kinds of mental states (emotions, body feelings, or thoughts) while we measured activity within large-scale distributed brain networks using fMRI. We examined the similarity and differences in the pattern of network activity across these three classes of mental states. Consistent with a constructionist hypothesis, a combination of large-scale distributed networks contributed to emotions, thoughts, and body feelings, although these mental states differed in the relative contribution of those networks. Implications for a constructionist functional architecture of diverse mental states are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
Large-scale thermal energy storage using sodium hydroxide /NaOH/
NASA Technical Reports Server (NTRS)
Turner, R. H.; Truscello, V. C.
1977-01-01
A technique employing NaOH phase-change material for large-scale thermal energy storage to 900°F (482°C) is described; the concept consists of a 12-foot-diameter by 60-foot-long cylindrical steel shell with closely spaced internal tubes, similar to a shell-and-tube heat exchanger. The NaOH heat storage medium fills the space between the tubes and the outer shell. To charge the system, superheated steam flowing through the tubes melts and raises the temperature of the NaOH; for discharge, pressurized water flows through the same tube bundle. A technique for system design and cost estimation is shown. General technical and economic properties of the storage unit integrated into a solar power plant are discussed.
Applications of the ram accelerator to hypervelocity aerothermodynamic testing
NASA Technical Reports Server (NTRS)
Bruckner, A. P.; Knowlen, C.; Hertzberg, A.
1992-01-01
A ram accelerator used as a hypervelocity launcher for large-scale aeroballistic range applications in hypersonics and aerodynamics research is presented. It is an in-bore ramjet device in which a projectile shaped like the centerbody of a supersonic ramjet is propelled down a stationary tube filled with a tailored combustible gas mixture. Ram accelerator operation has been demonstrated in 39 mm and 90 mm bores, supporting the proposition that this launcher concept can be scaled up to very large bore diameters of the order of 30-60 cm. It is concluded that high-quality data obtained from the tube wall and projectile during the acceleration process itself are very useful for understanding the aerothermodynamics of hypersonic flow in general, and for providing important CFD validation benchmarks.
Large scale cryogenic fluid systems testing
NASA Technical Reports Server (NTRS)
1992-01-01
NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.
Extracellular matrix motion and early morphogenesis.
Loganathan, Rajprasad; Rongish, Brenda J; Smith, Christopher M; Filla, Michael B; Czirok, Andras; Bénazéraf, Bertrand; Little, Charles D
2016-06-15
For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale 'total' cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. © 2016. Published by The Company of Biologists Ltd.
Integral criteria for large-scale multiple fingerprint solutions
NASA Astrophysics Data System (ADS)
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of real multiple fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
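The FAR/FRR trade-off that the optimal integral score targets can be illustrated with a minimal sketch on synthetic match scores. The Gaussian score distributions and the target FAR below are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical match scores: genuine matches score higher than impostor matches.
genuine = rng.normal(loc=0.8, scale=0.1, size=10_000)
impostor = rng.normal(loc=0.4, scale=0.1, size=10_000)

def far(threshold, impostor_scores):
    """False Acceptance Rate: impostors scoring at or above the threshold."""
    return float(np.mean(impostor_scores >= threshold))

def frr(threshold, genuine_scores):
    """False Rejection Rate: genuine users scoring below the threshold."""
    return float(np.mean(genuine_scores < threshold))

def threshold_for_far(target_far, impostor_scores):
    """Threshold whose empirical FAR is approximately the predefined target."""
    return float(np.quantile(impostor_scores, 1.0 - target_far))

# Fix FAR at a predefined level, then read off the FRR it implies.
t = threshold_for_far(0.001, impostor)
print(f"threshold={t:.3f}  FAR={far(t, impostor):.4f}  FRR={frr(t, genuine):.4f}")
```

The paper's criterion plays the same role for a fused (integral) score over several biometric tests: minimize FRR subject to a predefined FAR.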
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
Large-scale additive manufacturing with bioinspired cellulosic materials.
Sanandiya, Naresh D; Vijay, Yadunund; Dimopoulou, Marina; Dritsas, Stylianos; Fernandez, Javier G
2018-06-05
Cellulose is the most abundant and broadly distributed organic compound and industrial by-product on Earth. However, despite decades of extensive research, the bottom-up use of cellulose to fabricate 3D objects is still plagued with problems that restrict its practical applications: derivatives with vast polluting effects, use in combination with plastics, lack of scalability and high production cost. Here we demonstrate the general use of cellulose to manufacture large 3D objects. Our approach diverges from the common association of cellulose with green plants and it is inspired by the wall of the fungus-like oomycetes, which is reproduced introducing small amounts of chitin between cellulose fibers. The resulting fungal-like adhesive material(s) (FLAM) are strong, lightweight and inexpensive, and can be molded or processed using woodworking techniques. We believe this first large-scale additive manufacture with ubiquitous biological polymers will be the catalyst for the transition to environmentally benign and circular manufacturing models.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using a six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.
Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.
Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results ran counter both to the traditional view based on the hump-shaped theory of bathymetric patterns and to the commonly accepted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
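The complementarity/redundancy analysis of diversity indices can be sketched on synthetic haul data. The three indices and the Poisson abundance model below are illustrative assumptions, not the paper's 11-index set or its survey data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three common diversity indices computed from a vector of species counts.
def richness(counts):
    """Number of species present in the haul."""
    return np.count_nonzero(counts)

def shannon(counts):
    """Shannon entropy of relative abundances."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    """Gini-Simpson index: probability two individuals differ in species."""
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Synthetic "hauls": rows are samples, columns are per-species Poisson counts.
hauls = rng.poisson(lam=rng.uniform(0.1, 5.0, size=30), size=(100, 30))
table = np.array([[richness(h), shannon(h), simpson(h)] for h in hauls])

# Pairwise correlations quantify redundancy/complementarity between indices.
print(np.corrcoef(table.T).round(2))
```

Weakly correlated index pairs are "complementary" in the paper's sense: they can disagree on spatial pattern because they respond to different components of diversity.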
NASA Astrophysics Data System (ADS)
Burov, E.; Guillou-Frottier, L.
2005-05-01
Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by the elastic-brittle-ductile deformation, by a free upper surface and by the layered structure of the lithosphere. We make a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model in large-scale plume-lithosphere interaction. This model includes (i) a natural free surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of continental lithosphere. The numerical experiments demonstrate a number of important differences from predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scale (300-400, 200-300 and 50-100 km). The mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporally spaced events observed in the East African rift system, as well as over the French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without invoking complex asthenospheric upwelling.
Basin scale permeability and thermal evolution of a magmatic hydrothermal system
NASA Astrophysics Data System (ADS)
Taron, J.; Hickman, S. H.; Ingebritsen, S.; Williams, C.
2013-12-01
Large-scale hydrothermal systems are potentially valuable energy resources and are of general scientific interest due to extreme conditions of stress, temperature, and reactive chemistry that can act to modify crustal rheology and composition. With many proposed sites for Enhanced Geothermal Systems (EGS) located on the margins of large-scale hydrothermal systems, understanding the temporal evolution of these systems contributes to site selection, characterization and design of EGS. This understanding is also needed to address the long-term sustainability of EGS once they are created. Many important insights into heat and mass transfer within natural hydrothermal systems can be obtained through hydrothermal modeling assuming that stress and permeability structure do not evolve over time. However, this is not fully representative of natural systems, where the effects of thermo-elastic stress changes, chemical fluid-rock interactions, and rock failure on fluid flow and thermal evolution can be significant. The quantitative importance of an evolving permeability field within the overall behavior of a large-scale hydrothermal system is somewhat untested, and providing such a parametric understanding is one of the goals of this study. We explore the thermal evolution of a sedimentary basin hydrothermal system following the emplacement of a magma body. The Salton Sea geothermal field and its associated magmatic system in southern California is utilized as a general backdrop to define the initial state. Working within the general framework of the open-source scientific computing initiative OpenGeoSys (www.opengeosys.org), we introduce full treatment of thermodynamic properties at the extreme conditions following magma emplacement. This treatment utilizes a combination of standard Galerkin and control-volume finite elements to balance fluid mass, mechanical deformation, and thermal energy with consideration of local thermal non-equilibrium (LTNE) between fluids and solids. 
Permeability is allowed to evolve under several constitutive models tailored to both porous media and fractures, considering the influence of both mechanical stress and diagenesis. In this first analysis, a relatively simple mechanical model is used; complexity will be added incrementally to represent specific characteristics of the Salton Sea hydrothermal field.
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v.40, pp. 129-135, 1986. 8. Box, G. E. P. and... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the
Conservation laws in the quantum Hall Liouvillian theory and its generalizations
NASA Astrophysics Data System (ADS)
Moore, Joel E.
2003-06-01
It is known that the localization length scaling of noninteracting electrons near the quantum Hall plateau transition can be described in a theory of the bosonic density operators, with no reference to the underlying fermions. The resulting "Liouvillian" theory has a U(1|1) global supersymmetry as well as a hierarchy of geometric conservation laws related to the noncommutative geometry of the lowest Landau level (LLL). Approximations to the Liouvillian theory contain quite different physics from standard approximations to the underlying fermionic theory. Mean-field and large-N generalizations of the Liouvillian are shown to describe problems of noninteracting bosons that enlarge the U(1|1) supersymmetry to U(1|1)×SO(N) or U(1|1)×SU(N). These noninteracting bosonic problems are studied numerically for 2 ⩽ N ⩽ 8 by Monte Carlo simulation and compared to the original N=1 Liouvillian theory. The N>1 generalizations preserve the first two of the hierarchy of geometric conservation laws, leading to logarithmic corrections at order 1/N to the diffusive large-N limit, but do not preserve the remaining conservation laws. The emergence of nontrivial scaling at the plateau transition, in the Liouvillian approach, is shown to depend sensitively on the unusual geometry of Landau levels.
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1977-01-01
The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.
Moving contact lines on vibrating surfaces
NASA Astrophysics Data System (ADS)
Solomenko, Zlatko; Spelt, Peter; Scott, Julian
2017-11-01
Large-scale simulations of flows with moving contact lines for realistic conditions generally require a subgrid scale model (analyses based on matched asymptotics) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. Introduction of complexities would require further investigation of the contact-line region, however. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting/dewetting. The challenge here is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives will also be presented on the final objective of the work, which is to develop a subgrid scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under the allocation A0012B06893 made by GENCI.
Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility
Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk
2013-01-01
Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904
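The effective-distance viewpoint used to track these disruptions can be sketched as a shortest-path computation over link lengths d = 1 - ln P, where P is the fraction of traffic leaving an airport along a given route (following Brockmann and collaborators' definition). The three-airport flux matrix below is a toy assumption:

```python
import heapq
import math

# Toy flux matrix: passengers per day between three hypothetical airports.
flux = {
    "A": {"B": 900, "C": 100},
    "B": {"A": 900, "C": 500},
    "C": {"A": 100, "B": 500},
}

def effective_lengths(flux):
    """Per-link effective length d = 1 - ln(P), P = share of outgoing traffic."""
    lengths = {}
    for m, out in flux.items():
        total = sum(out.values())
        lengths[m] = {n: 1.0 - math.log(f / total) for n, f in out.items()}
    return lengths

def effective_distance(flux, source):
    """Shortest effective distance from source to every airport (Dijkstra)."""
    lengths = effective_lengths(flux)
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, m = heapq.heappop(heap)
        if d > dist.get(m, math.inf):
            continue  # stale heap entry
        for n, w in lengths[m].items():
            if d + w < dist.get(n, math.inf):
                dist[n] = d + w
                heapq.heappush(heap, (d + w, n))
    return dist

print(effective_distance(flux, "A"))
```

High-flux routes are effectively "short", so a disruption at a hub sits at a small effective distance from many airports at once, which is why hub-targeted attacks produce the more uniform susceptibility the abstract describes.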
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died from thymic leukemia. This failure is a result of the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
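Under the location-scale model the quantile functions satisfy Q_Y(p) = θ + η·Q_X(p), so minimizing an average squared distance between the two empirical quantile processes over a grid of p reduces to a least-squares fit of Y-quantiles on X-quantiles. The sketch below illustrates this reduction on uncensored synthetic data; it is not the authors' exact estimator, which also handles right-censoring:

```python
import numpy as np

rng = np.random.default_rng(1)

def min_distance_location_scale(x, y, grid=None):
    """Estimate (location, scale) by matching empirical quantile processes."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 91)  # trim tails for stability
    qx = np.quantile(x, grid)
    qy = np.quantile(y, grid)
    eta, theta = np.polyfit(qx, qy, 1)  # slope = scale, intercept = location
    return theta, eta

# Synthetic check: Y has the same shape as X, shifted by 2 and scaled by 1.5.
x = rng.normal(size=5_000)
y = 2.0 + 1.5 * rng.normal(size=5_000)
theta, eta = min_distance_location_scale(x, y)
print(f"location={theta:.2f}  scale={eta:.2f}")
```

A two-parameter comparison of this kind can flag exactly the simultaneous shift-plus-scale differences that the generalized Wilcoxon test misses.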
NASA Astrophysics Data System (ADS)
Loppini, Alessandro
2018-03-01
Complex network theory represents a comprehensive mathematical framework to investigate biological systems, ranging from sub-cellular and cellular scales up to large-scale networks describing species interactions and ecological systems. In their exhaustive and comprehensive work [1], Gosak et al. discuss several scenarios in which the network approach was able to uncover general properties and underlying mechanisms of cells organization and regulation, tissue functions and cell/tissue failure in pathology, by the study of chemical reaction networks, structural networks and functional connectivities.
Kinetic energy budgets during the life cycle of intense convective activity
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Scoggins, J. R.
1978-01-01
Synoptic-scale data at three- and six-hour intervals are employed to study the relationship between changing kinetic energy variables and the life cycles of two severe squall lines. The kinetic energy budgets indicate a high degree of kinetic energy generation, especially pronounced near the jet-stream level. Energy losses in the storm environment are due to the transfer of kinetic energy from grid to subgrid scales of motion; large-scale upward vertical motion carries aloft the kinetic energy generated by storm activity at lower levels. In general, the time of maximum storm intensity is also the time of maximum energy conversion and transport.
Computation of large-scale statistics in decaying isotropic turbulence
NASA Technical Reports Server (NTRS)
Chasnov, Jeffrey R.
1993-01-01
We have performed large-eddy simulations of decaying isotropic turbulence to test the prediction of self-similar decay of the energy spectrum and to compute the decay exponents of the kinetic energy. In general, good agreement between the simulation results and the assumption of self-similarity was obtained. However, the statistics of the simulations were insufficient to compute the value of gamma which corrects the decay exponent when the spectrum follows a k(exp 4) wave number behavior near k = 0. To obtain good statistics, it was found necessary to average over a large ensemble of turbulent flows.
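A decay exponent of the kind computed here can be recovered from an energy time series by a log-log fit: if E(t) ~ t^(-n), then n is minus the slope of ln E against ln t. The sketch below uses an exact synthetic power law rather than simulation data:

```python
import numpy as np

def decay_exponent(t, E):
    """Fit E(t) ~ t**(-n) by linear regression in log-log coordinates."""
    slope, _ = np.polyfit(np.log(t), np.log(E), 1)
    return -slope

t = np.linspace(1.0, 10.0, 50)
E = 2.0 * t ** -1.38          # synthetic decay with a known exponent of 1.38
print(decay_exponent(t, E))
```

On real ensemble-averaged data the fit is noisy, which is the statistical limitation the abstract reports for the correction term gamma.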
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which then can be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions; thus automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features and does so on a more rigorous basis.
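The forward transformation described above can be sketched directly: each point maps to a (polar angle, polar radius) pair about a reference centre, and sorting by angle unfolds the cluster as the spectrum line r = f(α). The centre and sample points below are illustrative assumptions:

```python
import math

def polarization_transform(points, centre=(0.0, 0.0)):
    """Map 2D points to (polar angle in degrees, radius) pairs, sorted by angle."""
    cx, cy = centre
    spectrum = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy)
        alpha = math.degrees(math.atan2(dy, dx)) % 360.0  # angle in [0, 360)
        spectrum.append((alpha, r))
    spectrum.sort()  # unfold as a spectrum line along the angle axis
    return spectrum

points = [(1, 0), (0, 2), (-3, 0), (0, -4)]
for alpha, r in polarization_transform(points):
    print(f"alpha={alpha:6.1f}  r={r:.1f}")
```

Because angle and radius fully determine each point relative to the centre, the mapping is lossless: a generalized distribution can be transformed the same way and compared node-by-node against the original spectrum line.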
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. 
Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
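The core of spectral nudging described above, relaxing only Fourier modes below a cutoff wavenumber toward the driving field while leaving smaller scales free, can be sketched in one dimension. The cutoff, relaxation strength, and test fields below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def spectral_nudge(field, driving, cutoff, alpha):
    """Relax Fourier modes with wavenumber <= cutoff toward the driving field."""
    fh = np.fft.rfft(field)
    dh = np.fft.rfft(driving)
    k = np.arange(fh.size)
    mask = k <= cutoff                     # large scales only
    fh[mask] += alpha * (dh[mask] - fh[mask])
    return np.fft.irfft(fh, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.sin(x)                               # coarse driving field
field = np.sin(x) + 0.5 + 0.2 * np.sin(20 * x)    # high-res field with a large-scale drift

nudged = spectral_nudge(field, driving, cutoff=4, alpha=0.5)
# The k=0 drift is pulled halfway back to the driving field; the k=20
# small-scale detail passes through untouched.
print(round(float(np.mean(field)), 3), "->", round(float(np.mean(nudged)), 3))
```

Tuning the cutoff length scale, the relaxation strength alpha, and their vertical/temporal profiles is exactly the design space the abstract explores.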
Application of Three Cognitive Diagnosis Models to ESL Reading and Listening Assessments
ERIC Educational Resources Information Center
Lee, Yong-Won; Sawaki, Yasuyo
2009-01-01
The present study investigated the functioning of three psychometric models for cognitive diagnosis--the general diagnostic model, the fusion model, and latent class analysis--when applied to large-scale English as a second language listening and reading comprehension assessments. Data used in this study were scored item responses and incidence…
Integrating History of Mathematics into the Classroom: Was Aristotle Wrong?
ERIC Educational Resources Information Center
Panasuk, Regina M.; Horton, Leslie Bolinger
2013-01-01
This article describes a part of a large scale study which helped to gain understanding of the high school mathematics teachers' perceptions related to the integration of history of mathematics into instruction. There is obvious lack of correspondence between general perception about possible benefits of students learning the history of…
A Longitudinal Study of Self Concept From Grade 5 to Grade 9.
ERIC Educational Resources Information Center
Kohr, Richard L.
This study examined five subscales of the Pennsylvania Educational Quality Assessment self-concept scale, composed largely of items from the Coopersmith Self Esteem Inventory, in terms of socioeconomic status (SES) and sex differences in internal consistency, stability, across time changes in means, and relationship with achievement. In general,…
ERIC Educational Resources Information Center
International Federation of Library Associations and Institutions, The Hague (Netherlands).
Four papers on information technology were presented at the 1986 International Federation of Library Associations (IFLA) conference. In the paper "Optical Disc Technology Used for Large-Scale Data Base," Naoto Nakayama (Japan) considers the rapid development of optical technology and the role of applications such as optical discs,…
Evidence-Based Practice for Teachers of Children with Autism: A Dynamic Approach
ERIC Educational Resources Information Center
Lubas, Margaret; Mitchell, Jennifer; De Leo, Gianluca
2016-01-01
Evidence-based practice related to autism research is a controversial topic. Governmental entities and national agencies are defining evidence-based practice as a specific set of interventions that educators should implement; however, large-scale efforts to generalize autism research, which are often single-subject case designs, may be a setback…
A Chronicle of School Music Education in Hungary, 1700-2012
ERIC Educational Resources Information Center
Kiss, Boglarka
2013-01-01
This inquiry is a chronological overview of the history of school music education in Hungary. The study explores the topic from a large-scale humanistic perspective, in which historical context, general education laws, individual institutions and music educators, as well as music curriculum, textbooks, and teaching methods serve as evidence. The…
Using the ACRL Framework to Develop a Student-Centered Model for Program-Level Assessment
ERIC Educational Resources Information Center
Gammons, Rachel Wilder; Inge, Lindsay Taylor
2017-01-01
Information literacy instruction presents a difficult balance between quantity and quality, particularly for large-scale general education courses. This paper discusses the overhaul of the freshman composition instruction program at the University of Maryland Libraries, focusing on the transition from survey assessments to a student-centered and…
Updated generalized biomass equations for North American tree species
David C. Chojnacky; Linda S. Heath; Jennifer C. Jenkins
2014-01-01
Historically, tree biomass at large scales has been estimated by applying dimensional analysis techniques and field measurements such as diameter at breast height (dbh) in allometric regression equations. Equations often have been developed using differing methods and applied only to certain species or isolated areas. We previously had compiled and combined (in meta-...
Interdisciplinary Collaboration in Launching a Large-Scale Research Study in Schools
ERIC Educational Resources Information Center
DeLoach, Kendra P.; Dvorsky, Melissa; George, Mellissa R. W.; Miller, Elaine; Weist, Mark D.; Kern, Lee
2012-01-01
Interdisciplinary collaboration (IC) is a critically important theme generally, and of particular significance in school mental health (SMH), given the range of people from different disciplines who work in schools and the various systems in place. Reflecting the move to a true shared school-family-community system agenda, the collaborative…
A general predictive model for estimating monthly ecosystem evapotranspiration
Ge Sun; Karrin Alstad; Jiquan Chen; Shiping Chen; Chelcy R. Ford; al. et.
2011-01-01
Accurately quantifying evapotranspiration (ET) is essential for modelling regional-scale ecosystem water balances. This study assembled an ET data set estimated from eddy flux and sapflow measurements for 13 ecosystems across a large climatic and management gradient from the United States, China, and Australia. Our objectives were to determine the relationships among...
Heritability in Cognitive Performance: Evidence Using Computer-Based Testing
ERIC Educational Resources Information Center
Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas
2012-01-01
There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…
ERIC Educational Resources Information Center
Mashood, K. K.; Singh, Vijay A.
2013-01-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…
Vocational Education and Training in India: Challenges, Status and Labour Market Outcomes
ERIC Educational Resources Information Center
Agrawal, Tushar
2012-01-01
This paper provides an overview of the vocational education and training (VET) system in India, and discusses various challenges and difficulties in the Indian VET system. The paper also examines labour market outcomes of vocational graduates and compares these with those of general secondary graduates using a large-scale nationally representative…
Transition from lognormal to χ2-superstatistics for financial time series
NASA Astrophysics Data System (ADS)
Xu, Dan; Beck, Christian
2016-07-01
Share price returns on different time scales can be well modelled by a superstatistical dynamics. Here we investigate which type of superstatistics most suitably describes share price dynamics on various time scales. It is shown that while χ2-superstatistics works well on a time scale of days, on a much smaller time scale of minutes the price changes are better described by lognormal superstatistics. The system dynamics thus exhibits a transition from lognormal to χ2-superstatistics as a function of time scale. We discuss a more general model interpolating between both statistics, which fits the observed data very well. We also present results on correlation functions of the extracted superstatistical volatility parameter, which exhibit exponential decay for returns on large time scales, whereas for returns on small time scales there are long-range correlations and power-law decay.
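The mixing-distribution picture summarized in this abstract can be sketched numerically: draw an inverse-variance parameter β from either a χ2 (gamma) or a lognormal mixing distribution, then draw a Gaussian return conditional on β. This is a toy illustration only, not the authors' fitted model; the function names and parameter values are our own. χ2 mixing with 4 degrees of freedom yields a Student-t (q-Gaussian) marginal with heavy tails:

```python
import numpy as np

rng = np.random.default_rng(0)

def superstat_returns(n, dist="chi2", dof=4):
    """Toy superstatistical returns: Gaussian conditional on a random
    inverse variance beta drawn from a chi^2 (gamma) or lognormal
    mixing distribution. Illustrative sketch only."""
    if dist == "chi2":
        # Gamma(dof/2, scale=2/dof) precision => Student-t_dof marginal
        beta = rng.gamma(shape=dof / 2, scale=2 / dof, size=n)
    else:
        # lognormal mixing, as reported for minute-scale returns
        beta = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    return rng.normal(0.0, 1.0 / np.sqrt(beta))

def excess_kurtosis(x):
    """Sample excess kurtosis; > 0 indicates heavier-than-Gaussian tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0
```

Both mixing choices produce strongly positive excess kurtosis, whereas a plain Gaussian gives roughly zero, which is the qualitative signature the abstract's superstatistical description relies on.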
Will-Nordtvedt PPN formalism applied to renormalization group extensions of general relativity
NASA Astrophysics Data System (ADS)
Toniato, Júnior D.; Rodrigues, Davi C.; de Almeida, Álefe O. F.; Bertini, Nicolas
2017-09-01
We apply the full Will-Nordtvedt version of the parametrized post-Newtonian (PPN) formalism to a class of general relativity extensions that are based on nontrivial renormalization group (RG) effects at large scales. We focus on a class of models in which the gravitational coupling constant G is correlated with the Newtonian potential. A previous PPN analysis considered a specific realization of the RG effects, and only within the Eddington-Robertson-Schiff version of the PPN formalism, which is a less complete and robust PPN formulation. Here we find stronger, more precise bounds, and with less assumptions. We also consider the external potential effect (EPE), which is an effect that is intrinsic to this framework and depends on the system environment (it has some qualitative similarities to the screening mechanisms of modified gravity theories). We find a single particular RG realization that is not affected by the EPE. Some physical systems have been pointed out as candidates for measuring the possible RG effects in gravity at large scales; for any of them the Solar System bounds need to be considered.
Numerical Simulation of the Large-Scale North American Monsoon Water Sources
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Sud, Yogesh C.; Schubert, Siegfried D.; Walker, Gregory K.
2002-01-01
A general circulation model (GCM) that includes water vapor tracer (WVT) diagnostics is used to delineate the dominant sources of water vapor for precipitation during the North American monsoon. A 15-year model simulation carried out with one-degree horizontal resolution and time-varying sea surface temperature is able to produce reasonable large-scale features of the monsoon precipitation. Within the core of the Mexican monsoon, continental sources provide much of the water for precipitation. Away from the Mexican monsoon (eastern Mexico and Texas), continental sources generally decrease with monsoon onset. Tropical Atlantic Ocean sources of water gain influence in the southern Great Plains states where the total precipitation decreases during the monsoon onset. Pacific Ocean sources do contribute to the monsoon, but tend to be weaker after onset. In evaluating the development of the monsoons, soil water and surface evaporation prior to onset do not correlate with the eventual monsoon intensity. However, the most intense monsoons do use more local sources of water than the least intense monsoons, but only after the onset. This suggests that precipitation recycling is an important factor in monsoon intensity.
NASA Astrophysics Data System (ADS)
Wallace, Colin; Prather, Edward; Duncan, Douglas
2011-10-01
We recently completed a large-scale, systematic study of general education introductory astronomy students' conceptual and reasoning difficulties related to cosmology. As part of this study, we analyzed a total of 4359 surveys (pre- and post-instruction) containing students' responses to questions about the Big Bang, the evolution and expansion of the universe, using Hubble plots to reason about the age and expansion rate of the universe, and using galaxy rotation curves to infer the presence of dark matter. We also designed, piloted, and validated a new suite of five cosmology Lecture-Tutorials. We found that students who use the new Lecture-Tutorials can achieve larger learning gains than their peers who did not. This material is based in part upon work supported by the National Science Foundation under Grant Nos. 0833364 and 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
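The combinatorial structure this abstract refers to can be sketched in miniature: every solution of a non-negativity-constrained least squares problem is an unconstrained least-squares solve restricted to some "passive" subset of variables, so a brute-force solver can enumerate subsets. This exponential enumeration is only an illustration of the structure, not the authors' fast algorithm, which reorganizes and shares these solves across many observation vectors:

```python
import itertools
import numpy as np

def nnls_bruteforce(A, b):
    """Solve min ||Ax - b|| subject to x >= 0 by enumerating passive sets.

    Toy illustration of the combinatorial structure exploited by fast
    NNLS solvers: each candidate is an unconstrained least-squares
    solve on a subset of columns. Exponential in columns; sketch only.
    """
    n = A.shape[1]
    best_x, best_r = np.zeros(n), np.linalg.norm(b)  # all-zero candidate
    for k in range(1, n + 1):
        for passive in itertools.combinations(range(n), k):
            xs, *_ = np.linalg.lstsq(A[:, passive], b, rcond=None)
            if np.all(xs >= 0):                      # feasible candidate
                x = np.zeros(n)
                x[list(passive)] = xs
                r = np.linalg.norm(A @ x - b)
                if r < best_r:
                    best_x, best_r = x, r
    return best_x, best_r
```

A fast combinatorial solver gets the same answer while grouping observation vectors that share a passive set, so the expensive factorizations are computed once per group rather than once per vector.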
Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS)
Cogan, JL
2016-09-01
As expected, accuracy generally tended to decline as the large-scale data aged, but appeared to improve slightly as the age of the large…
Compiler-directed cache management in multiprocessors
NASA Technical Reports Server (NTRS)
Cheong, Hoichi; Veidenbaum, Alexander V.
1990-01-01
The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
Dying like rabbits: general determinants of spatio-temporal variability in survival.
Tablado, Zulima; Revilla, Eloy; Palomares, Francisco
2012-01-01
1. Identifying general patterns of how and why survival rates vary across space and time is necessary to truly understand population dynamics of a species. However, this is not an easy task given the complexity and interactions of the processes involved, and the interpopulation differences in the main survival determinants. 2. Here, using European rabbits (Oryctolagus cuniculus) as a model and information from local studies, we investigated whether we could make inferences about trends and drivers of survival of a species that are generalizable to large spatio-temporal scales. To do this, we first focused on overall survival and then examined cause-specific mortalities, mainly predation and diseases, which may lead to those patterns. 3. Our results show that within the large-scale variability in rabbit survival, there exist general patterns that are explained by the integration of factors previously known to be important at the local level (i.e. age, climate, diseases, predation or density dependence). We found that both inter- and intrastudy survival rates increased in magnitude and decreased in variability as rabbits grow old, although this tendency was less pronounced in populations with epidemic diseases. Some causes leading to these higher mortalities in young rabbits could be the stronger effect of rainfall at those ages, as well as other death sources such as malnutrition or infanticide. 4. Predation is also greater for newborns and juveniles, especially in populations without diseases. Apart from the effect of diseases, predation patterns also depended on factors such as density, season, and the type and density of predators. Finally, we observed that infectious diseases also showed general relationships with climate, breeding (i.e. new susceptible rabbits) and age, although the association type varied between myxomatosis and rabbit haemorrhagic disease. 5. In conclusion, large-scale patterns of spatio-temporal variability in rabbit survival emerge from the combination of different factors that interrelate both directly and through density dependence. This highlights the importance of performing more comprehensive studies to reveal combined effects and complex relationships that help us to better understand the mechanisms underlying population dynamics. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
Influence of land use on water quality in a tropical landscape: a multi-scale analysis
Yackulic, Charles B.; Lim, Yili; Arce-Nazario, Javier A.
2015-01-01
There is a pressing need to understand the consequences of human activities, such as land transformations, on watershed ecosystem services. This is a challenging task because different indicators of water quality and yield are expected to vary in their responsiveness to large- versus local-scale heterogeneity in land use and land cover (LUC). Here we rely on water quality data collected between 1977 and 2000 from dozens of gauge stations in Puerto Rico together with precipitation data and land cover maps to (1) quantify impacts of spatial heterogeneity in LUC on several water quality indicators; (2) determine the spatial scale at which this heterogeneity influences water quality; and (3) examine how antecedent precipitation modulates these impacts. Our models explained 30–58% of observed variance in water quality metrics. Temporal variation in antecedent precipitation and changes in LUC between measurement periods, rather than spatial variation in LUC, accounted for the majority of variation in water quality. Urbanization and pasture development generally degraded water quality while agriculture and secondary forest re-growth had mixed impacts. The spatial scale over which LUC influenced water quality differed across indicators. Turbidity and dissolved oxygen (DO) responded to LUC in large-scale watersheds, in-stream nitrogen concentrations to LUC in riparian buffers of large watersheds, and fecal matter content and in-stream phosphorus concentration to LUC at the sub-watershed scale. Stream discharge modulated impacts of LUC on water quality for most of the metrics. Our findings highlight the importance of considering multiple spatial scales for understanding the impacts of human activities on watershed ecosystem services. PMID:26146455
Observations and Modeling of the Transient General Circulation of the North Pacific Basin
NASA Technical Reports Server (NTRS)
McWilliams, James C.
2000-01-01
Because of recent progress in satellite altimetry and numerical modeling and the accumulation and archiving of long records of hydrographic and meteorological variables, it is becoming feasible to describe and understand the transient general circulation of the ocean (i.e., variations with spatial scales larger than a few hundred kilometers and time scales of seasonal and longer, beyond the mesoscale). We have carried out various studies investigating the transient general circulation of the Pacific Ocean from a coordinated analysis of satellite altimeter data, historical hydrographic gauge data, scatterometer wind observations, reanalyzed operational wind fields, and a variety of ocean circulation models. Broadly stated, our goal was to achieve a phenomenological catalogue of different possible types of large-scale, low-frequency variability, as a context for understanding the observational record. The approach is to identify the simplest possible model from which particular observed phenomena can be isolated and understood dynamically and then to determine how well these dynamical processes are represented in more complex Oceanic General Circulation Models (OGCMs). Research results have been obtained on Rossby wave propagation and transformation, oceanic intrinsic low-frequency variability, effects of surface gravity waves, Pacific data analyses, OGCM formulation and developments, and OGCM simulations of forced variability.
On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Vu, Bruce; Kandula, Max
2003-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
Alignments of Dark Matter Halos with Large-scale Tidal Fields: Mass and Redshift Dependence
NASA Astrophysics Data System (ADS)
Chen, Sijie; Wang, Huiyuan; Mo, H. J.; Shi, Jingjing
2016-07-01
Large-scale tidal fields estimated directly from the distribution of dark matter halos are used to investigate how halo shapes and spin vectors are aligned with the cosmic web. The major, intermediate, and minor axes of halos are aligned with the corresponding tidal axes, and halo spin axes tend to be parallel with the intermediate axes and perpendicular to the major axes of the tidal field. The strengths of these alignments generally increase with halo mass and redshift, but the dependence is only on the peak height, ν ≡ δ_c/σ(M_h, z). The scaling relations of the alignment strengths with the value of ν indicate that the alignment strengths remain roughly constant when the structures within which the halos reside are still in a quasi-linear regime, but decrease as nonlinear evolution becomes more important. We also calculate the alignments in projection so that our results can be compared directly with observations. Finally, we investigate the alignments of tidal tensors on large scales, and use the results to understand alignments of halo pairs separated at various distances. Our results suggest that the coherent structure of the tidal field is the underlying reason for the alignments of halos and galaxies seen in numerical simulations and in observations.
Inner-outer predictive wall model for wall-bounded turbulence in hypersonic flow
NASA Astrophysics Data System (ADS)
Martin, M. Pino; Helm, Clara M.
2017-11-01
The inner-outer predictive wall model of Mathis et al. is modified for hypersonic turbulent boundary layers. The model is based on a modulation of the energized motions in the inner layer by large-scale momentum fluctuations in the logarithmic layer. Using direct numerical simulation (DNS) data of turbulent boundary layers with free-stream Mach number 3 to 10, it is shown that the variation of the fluid properties in the compressible flows leads to large Reynolds number (Re) effects in the outer layer and facilitates the modulation observed in high-Re incompressible flows. The modulation effect by the large scales increases with increasing free-stream Mach number. The model is extended to include spanwise and wall-normal velocity fluctuations and is generalized through Morkovin scaling. Temperature fluctuations are modeled using an appropriate Reynolds analogy. Density fluctuations are calculated using an equation of state and a scaling with Mach number. DNS data are used to obtain the universal signal and parameters. The model is tested by using the universal signal to reproduce the flow conditions of Mach 3 and Mach 7 turbulent boundary layer DNS data and comparing turbulence statistics between the modeled flow and the DNS data. This work is supported by the Air Force Office of Scientific Research under Grant FA9550-17-1-0104.
Polarization Radiation with Turbulent Magnetic Fields from X-Ray Binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jian-Fu; Xiang, Fu-Yuan; Lu, Ju-Fu, E-mail: jfzhang@xtu.edu.cn, E-mail: fyxiang@xtu.edu.cn, E-mail: lujf@xmu.edu.cn
2017-02-10
We study the properties of polarized radiation in turbulent magnetic fields from X-ray binary jets. These turbulent magnetic fields are composed of large- and small-scale configurations, which result in the polarized jitter radiation when the characteristic length of turbulence is less than the non-relativistic Larmor radius. On the contrary, the polarized synchrotron emission occurs, corresponding to a large-scale turbulent environment. We calculate the spectral energy distributions and the degree of polarization for a general microquasar. Numerical results show that turbulent magnetic field configurations can indeed provide a high degree of polarization, which does not mean that a uniform, large-scale magnetic field structure exists. The model is applied to investigate the properties of polarized radiation of the black-hole X-ray binary Cygnus X-1. Under the constraint of multiband observations of this source, our studies demonstrate that the model can explain the high polarization degree at the MeV tail and predict the highly polarized properties at the high-energy γ-ray region, and that the dominant small-scale turbulent magnetic field plays an important role for explaining the highly polarized observation at hard X-ray/soft γ-ray bands. This model can be tested by polarization observations of upcoming polarimeters at high-energy γ-ray bands.
Earthquakes in the Laboratory: Continuum-Granular Interactions
NASA Astrophysics Data System (ADS)
Ecke, Robert; Geller, Drew; Ward, Carl; Backhaus, Scott
2013-03-01
Earthquakes in nature feature large tectonic plate motion at large scales of 10-100 km and local properties of the earth on the scale of the rupture width, of the order of meters. Fault gouge often fills the gap between the large slipping plates and may play an important role in the nature and dynamics of earthquake events. We have constructed a laboratory scale experiment that represents a similitude scale model of this general earthquake description. Two photo-elastic plates (50 cm x 25 cm x 1 cm) confine approximately 3000 bi-disperse nylon rods (diameters 0.12 and 0.16 cm, height 1 cm) in a gap of approximately 1 cm. The plates are held rigidly along their outer edges with one held fixed while the other edge is driven at constant speed over a range of about 5 cm. The local stresses exerted on the plates are measured using their photo-elastic response, the local relative motions of the plates, i.e., the local strains, are determined by the relative motion of small ball bearings attached to the top surface, and the configurations of the nylon rods are investigated using particle tracking tools. We find that this system has properties similar to real earthquakes and are exploring these ``lab-quake'' events with the quantitative tools we have developed.
Cyclonic circulation of Saturn's atmosphere due to tilted convection
NASA Astrophysics Data System (ADS)
Afanasyev, Y. D.; Zhang, Y.
2018-03-01
Saturn displays cyclonic vortices at its poles and the general atmospheric circulation at other latitudes is dominated by embedded zonal jets that display cyclonic circulation. The abundance of small-scale convective storms suggests that convection plays a role in producing and maintaining Saturn's atmospheric circulation. However, the dynamical influence of small-scale convection on Saturn's general circulation is not well understood. Here we present laboratory analogue experiments and propose that Saturn's cyclonic circulation can be explained by tilted convection in which buoyancy forces do not align with the planet's rotation axis. In our experiments—conducted with a cylindrical water tank that is heated at the bottom, cooled at the top and spun on a rotating table—warm rising plumes and cold sinking water generate small anticyclonic and cyclonic vortices that are qualitatively similar to Saturn's convective storms. Numerical simulations complement the experiments and show that this small-scale convection leads to large-scale cyclonic flow at the surface and anticyclonic circulation at the base of the fluid layer, with a polar vortex forming from the merging of smaller cyclonic storms that are driven polewards.
Aspects of AdS/CFT: Conformal Deformations and the Goldstone Equivalence Theorem
NASA Astrophysics Data System (ADS)
Cantrell, Sean Andrew
The AdS/CFT correspondence provides a map from the states of theories situated in AdS_{d+1} to those in dual conformal theories in a d-dimensional space. The correspondence can be used to establish certain universal properties of some theories in one space by examining the behavior of general objects in the other. In this thesis, we develop various formal aspects of AdS/CFT. Conformal deformations manifest in the AdS/CFT correspondence as boundary conditions on the AdS field. Heretofore, double-trace deformations have been the primary focus in this context. To better understand multitrace deformations, we revisit the relationship between the generating AdS partition function for a free bulk theory and the boundary CFT partition function subject to arbitrary conformal deformations. The procedure leads us to a formalism that constructs bulk fields from boundary operators. We independently replicate the holographic RG flow narrative to go on to interpret the brane used to regulate the AdS theory as a renormalization scale. The scale-dependence of the dilatation spectrum of a boundary theory in the presence of general deformations can thus be understood on the AdS side using this formalism. The Goldstone equivalence theorem allows one to relate scattering amplitudes of massive gauge fields to those of scalar fields in the limit of large scattering energies. We generalize this theorem under the framework of the AdS/CFT correspondence. First, we obtain an expression of the equivalence theorem in terms of correlation functions of creation and annihilation operators by using an AdS wave function approach to the AdS/CFT dictionary. It is shown that the divergence of the non-conserved conformal current dual to the bulk gauge field is approximately primary when computing correlators for theories in which the masses of all the exchanged particles are sufficiently large. The results are then generalized to higher spin fields.
We then go on to generalize the theorem using conformal blocks in two and four-dimensional CFTs. We show that when the scaling dimensions of the exchanged operators are large compared to both their spins and the dimension of the current, the conformal blocks satisfy an equivalence theorem.
The factor structure of the illness attitude scales in a German population.
Weck, Florian; Bleichhardt, Gaby; Hiller, Wolfgang
2009-01-01
The illness attitude scales (IAS) were developed to identify different dimensions of hypochondriacal attitudes, fears, beliefs, and abnormal illness behavior (Kellner 1986). Although there are several studies which focus on the scale structure of the IAS, the factor structure has not yet been made quite clear. Therefore, the aim of this study was to investigate the factor structure of the IAS in a large representative sample. Participants (N = 1,575) comparable with the general German population regarding sex, age, and education level completed the IAS. For the data analyses, a principal components analysis with subsequent oblique rotation was used. The minimum average partial method suggested a three-factor solution. The three factors were named (1) health anxiety, (2) health behavior, and (3) health habits. Internal consistencies (Cronbach's alpha) for the three scales were (1) alpha = 0.88, (2) alpha = 0.75, and (3) alpha = 0.56. The results support previous findings, namely that the IAS factor structure appears to be less complex than originally suggested by the author. For a sample of the general German population, a three-factor solution fit best. Further items should be added to improve the internal consistency, especially for the third scale (health habits).
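The Cronbach's alpha values reported in this abstract follow the standard formula α = k/(k-1) · (1 - Σ item variances / variance of total score). A minimal sketch of that computation (the function name and data layout are our own, not from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly redundant items give alpha = 1, independent items give alpha near 0; the modest alpha = 0.56 for the three-item "health habits" scale is why the authors suggest adding items.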
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm, which extends the conventional moving average to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-average method, the novel WMD-based LSA better identified weak geochemical anomalies associated with mineralization in the covered area.
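The conventional moving-average local singularity analysis that this paper generalizes can be sketched directly: mean concentration in windows of half-width r around a cell scales as c(r) ~ r^(α-E) in E = 2 dimensions, so the local singularity index α is the slope of log c(r) versus log r plus 2. A sketch of that baseline only (window sizes and function name are our own), not the authors' wavelet-based variant:

```python
import numpy as np

def local_singularity(field, sizes=(1, 2, 4, 8)):
    """Estimate the local singularity index alpha at each cell of a
    positive 2-D field via moving-average (box) means.

    c(r) ~ r**(alpha - 2) in 2-D, so alpha = slope(log c vs log r) + 2.
    alpha < 2 flags local enrichment (positive singularity).
    """
    ny, nx = field.shape
    alpha = np.zeros((ny, nx))
    logr = np.log(np.asarray(sizes, dtype=float))
    for i in range(ny):
        for j in range(nx):
            logc = []
            for r in sizes:
                # clipped square window of half-width r around (i, j)
                win = field[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                logc.append(np.log(win.mean()))
            alpha[i, j] = np.polyfit(logr, logc, 1)[0] + 2.0
    return alpha
```

The paper's contribution replaces the plain box mean here with a wavelet approximation operator, i.e. a kernel-weighted mean, which is what sharpens weak anomalies in covered areas.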
Ingber, Lester; Nunez, Paul L
2011-02-01
The dynamic behavior of scalp potentials (EEG) is apparently due to some combination of global and local processes with important top-down and bottom-up interactions across spatial scales. In treating global mechanisms, we stress the importance of myelinated axon propagation delays and periodic boundary conditions in the cortical-white matter system, which is topologically close to a spherical shell. By contrast, the proposed local mechanisms are multiscale interactions between cortical columns via short-ranged non-myelinated fibers. A mechanical model consisting of a stretched string with attached nonlinear springs demonstrates the general idea. The string produces standing waves analogous to large-scale coherent EEG observed in some brain states. The attached springs are analogous to the smaller (mesoscopic) scale columnar dynamics. Generally, we expect string displacement and EEG at all scales to result from both global and local phenomena. A statistical mechanics of neocortical interactions (SMNI) calculates oscillatory behavior consistent with typical EEG, within columns, between neighboring columns via short-ranged non-myelinated fibers, across cortical regions via myelinated fibers, and also derives a string equation consistent with the global EEG model. Copyright © 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first-order in space and time and extended using a second-order MUSCL extension in space and a Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second-order in time and space. The other main result stands in the consistency with respect to the asymptotics reached at small and large time scales in low Froude regimes, which governs large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties tend to provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump considering a non-trivial topography, and a last experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
Numerical simulation using vorticity-vector potential formulation
NASA Technical Reports Server (NTRS)
Tokunaga, Hiroshi
1993-01-01
An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, the finite difference and finite element methods are widely applied to flows of practical importance, since these methods are easily adapted to flows with complex geometric configurations. Several problems nevertheless arise in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multi-grid Poisson solver is combined with the higher-order accurate finite difference method. The choice of formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this approach is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system.
In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a 4th-order accurate difference method. We describe the computational method and apply it to flows in a square cavity at large Reynolds number in order to investigate its effectiveness.
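The kind of higher-order stencil involved can be illustrated with a standard 4th-order central first-derivative approximation checked on a smooth periodic function. This is a generic sketch, not the paper's solver; the test function and grid sizes are arbitrary.

```python
import numpy as np

# Illustrative check of a standard 4th-order central difference for the
# first derivative on a periodic grid, applied to sin(x): halving the
# grid spacing should shrink the maximum error by roughly a factor of 16.
def d1_fourth_order(u, dx):
    """4th-order central first derivative, periodic boundary via np.roll."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

errs = []
for n in (32, 64):
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    # exact derivative of sin is cos; record the max pointwise error
    err = np.max(np.abs(d1_fourth_order(np.sin(x), dx) - np.cos(x)))
    errs.append(float(err))

ratio = errs[0] / errs[1]   # ~16 confirms 4th-order convergence
```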
Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.
2011-01-01
Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically those of island construction, within a largely open-water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open-water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS), and counts of benthic mayfly larvae. Constrained models forced trends in means, or in both means and sampling variances, to become increasingly similar over time to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12-year (mayflies) or 14-year sampling periods, and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies, while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.
General Entanglement Scaling Laws from Time Evolution
NASA Astrophysics Data System (ADS)
Eisert, Jens; Osborne, Tobias J.
2006-10-01
We establish a general scaling law for the entanglement of a large class of ground states and dynamically evolving states of quantum spin chains: we show that the geometric entropy of a distinguished block saturates, and hence follows an entanglement-boundary law. These results apply to any ground state of a gapped model resulting from dynamics generated by a local Hamiltonian, as well as, dually, to states that are generated via a sudden quench of an interaction as recently studied in the case of dynamics of quantum phase transitions. We achieve these results by exploiting ideas from quantum information theory and tools provided by Lieb-Robinson bounds. We also show that there exist noncritical fermionic systems and equivalent spin chains with rapidly decaying interactions violating this entanglement-boundary law. Implications for the classical simulatability are outlined.
Simulation of the planetary boundary layer with the UCLA general circulation model
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Arakawa, A.; Randall, D. A.
1981-01-01
A planetary boundary layer (PBL) model is presented which employs a mixed-layer entrainment formulation to describe the mass exchange between the mixed layer and the upper, laminar atmosphere. A modified coordinate system couples the mixed-layer model with the large-scale and sub-grid-scale processes of a general circulation model. The vertical coordinate is a sigma coordinate in which the lower boundary, the top of the PBL, and a prescribed pressure level near the tropopause are coordinate surfaces. The entrainment mass flux is parameterized by assuming the dissipation rate of turbulent kinetic energy to be proportional to the positive part of the generation by convection or mechanical production. Results of a July simulation for the entire globe are presented.
NASA Astrophysics Data System (ADS)
Alexeyev, S. O.; Latosh, B. N.; Echeistov, V. A.
2017-12-01
Predictions of the f(R)-gravity model with a disappearing cosmological constant (Starobinsky's model) on scales characteristic of galaxies and their clusters are considered. It is established that, at the current accuracy of measurements, there is no observable difference in the mass dependence of the turnaround radius between Starobinsky's model and General Relativity. This is true both for small masses (from 10^9 M_Sun) corresponding to an individual galaxy and for masses corresponding to large galaxy clusters (up to 10^15 M_Sun). The turnaround radius increases with the parameter n for all masses. Although some models give a considerably smaller turnaround radius than does General Relativity, none of the models goes beyond the bounds specified by the observational data.
NASA Technical Reports Server (NTRS)
Globus, Al; Biegel, Bryan A.; Traugott, Steve
2004-01-01
AsterAnts is a concept calling for a fleet of solar sail powered spacecraft to retrieve large numbers of small (1/2-1 meter diameter) Near Earth Objects (NEOs) for orbital processing. AsterAnts could use the International Space Station (ISS) for NEO processing, solar sail construction, and to test NEO capture hardware. Solar sails constructed on orbit are expected to have substantially better performance than their ground-built counterparts [Wright 1992]. Furthermore, solar sails may be used to hold geosynchronous communication satellites out-of-plane [Forward 1981], increasing the total number of slots by at least a factor of three and potentially generating $2 billion worth of orbital real estate over North America alone. NEOs are believed to contain large quantities of water, carbon, other life-support materials, and metals. Thus, with proper processing, NEO materials could in principle be used to resupply the ISS, produce rocket propellant, manufacture tools, and build additional ISS working space. Unlike proposals requiring massive facilities, such as lunar bases, before returning any extraterrestrial material, AsterAnts requires nothing larger than a typical inter-planetary mission. Furthermore, AsterAnts could be scaled up to deliver large amounts of material by building many copies of the same spacecraft, thereby achieving manufacturing economies of scale. Because AsterAnts would capture NEOs whole, NEO composition details, which are generally poorly characterized, are relatively unimportant and no complex extraction equipment is necessary. In combination with a materials processing facility at the ISS, AsterAnts might inaugurate an era of large-scale orbital construction using extraterrestrial materials.
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, Mars’ middle and high latitudes support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation, acting as agents in the transport of heat, momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water vapor, and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress, and exhibits a reasonable dust cycle (globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas).
The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Eddy diffusivity of quasi-neutrally-buoyant inertial particles
NASA Astrophysics Data System (ADS)
Martins Afonso, Marco; Muratore-Ginanneschi, Paolo; Gama, Sílvio M. A.; Mazzino, Andrea
2018-04-01
We investigate the large-scale transport properties of quasi-neutrally-buoyant inertial particles carried by incompressible zero-mean periodic or steady ergodic flows. We show how to compute large-scale indicators such as the inertial-particle terminal velocity and eddy diffusivity from first principles in a perturbative expansion around the limit of added-mass factor close to unity. Physically, this limit corresponds to the case where the mass density of the particles is constant and close in value to the mass density of the fluid, which is also constant. Our approach differs from the usual over-damped expansion inasmuch as we do not assume a separation of time scales between thermalization and small-scale convection effects. For a general flow in the class of incompressible zero-mean periodic velocity fields, we derive closed-form cell equations for the auxiliary quantities determining the terminal velocity and effective diffusivity. In the special case of parallel flows these equations admit explicit analytic solution. We use parallel flows to show that our approach sheds light onto the behavior of terminal velocity and effective diffusivity for Stokes numbers of the order of unity.
Optimizing the scale of markets for water quality trading
NASA Astrophysics Data System (ADS)
Doyle, Martin W.; Patterson, Lauren A.; Chen, Yanyou; Schnier, Kurt E.; Yates, Andrew J.
2014-09-01
Applying market approaches to environmental regulations requires establishing a spatial scale for trading. Spatially large markets usually increase opportunities for abatement cost savings but increase the potential for pollution damages (hot spots), vice versa for spatially small markets. We develop a coupled hydrologic-economic modeling approach for application to point source emissions trading by a large number of sources and apply this approach to the wastewater treatment plants (WWTPs) within the watershed of the second largest estuary in the U.S. We consider two different administrative structures that govern the trade of emission permits: one-for-one trading (the number of permits required for each unit of emission is the same for every WWTP) and trading ratios (the number of permits required for each unit of emissions varies across WWTP). Results show that water quality regulators should allow trading to occur at the river basin scale as an appropriate first-step policy, as is being done in a limited number of cases via compliance associations. Larger spatial scales may be needed under conditions of increased abatement costs. The optimal scale of the market is generally the same regardless of whether one-for-one trading or trading ratios are employed.
Culmination of the inverse cascade - mean flow and fluctuations
NASA Astrophysics Data System (ADS)
Frishman, Anna; Herbert, Corentin
2017-11-01
An inverse cascade (energy transfer to progressively larger scales) is a salient feature of two-dimensional turbulence. If the cascade reaches the system scale, it terminates in the self-organization of the turbulence into a large-scale coherent structure on top of small-scale fluctuations. A recent theoretical framework in which this coherent mean flow can be obtained will be discussed. Assuming that the quasi-linear approximation applies, that the forcing acts at small scales, and that the shear is strong, the theory gives an inverse relation between the average momentum flux and the mean shear rate. It will be argued that this relation is quite general, being independent of the dissipation mechanism and largely insensitive to the type of forcing. Furthermore, in the special case of a homogeneous forcing, the relation between the momentum flux and the mean shear rate is completely determined by dimensional analysis and symmetry arguments. The average energy of the fluctuations will also be touched upon, focusing on a vortex mean flow. In contrast to the momentum flux, we find that the energy of the fluctuations is determined by zero modes of the mean-flow advection operator, obtained via an analytic derivation.
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
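The log-domain sum-and-maximization (max-sum) inference that the network approximates can be sketched on a toy two-variable chain. The potentials below are arbitrary illustrative numbers, and no spiking dynamics are modeled; the point is only the operations the circuit is said to implement.

```python
import numpy as np

# Minimal sketch of max-sum inference in the log domain on a two-variable
# chain: messages use sums and maximizations of log-potentials, and the
# resulting MAP assignment is verified against brute-force enumeration.
log_phi1 = np.log(np.array([0.2, 0.8]))       # unary potential on x1
log_phi2 = np.log(np.array([0.6, 0.4]))       # unary potential on x2
log_psi = np.log(np.array([[0.9, 0.1],        # pairwise potential psi(x1, x2)
                           [0.2, 0.8]]))

# Max-sum message from x1 to x2: for each x2, maximize over x1 in log domain
msg = np.max(log_phi1[:, None] + log_psi, axis=0)
x2_map = int(np.argmax(log_phi2 + msg))
# Backtrack to recover the maximizing x1 given the chosen x2
x1_map = int(np.argmax(log_phi1 + log_psi[:, x2_map]))

# Brute-force check over all four joint states
joint = log_phi1[:, None] + log_phi2[None, :] + log_psi
brute = np.unravel_index(np.argmax(joint), joint.shape)
print((x1_map, x2_map) == tuple(int(i) for i in brute))  # True
```

In the letter's circuit, the maximization step is realized by competing neuronal populations rather than `np.max`, but the algebra being approximated is the same.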
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
Rayapuram, Channabasavangowda; Idänheimo, Niina; Hunter, Kerri; Kimura, Sachie; Merilo, Ebe; Vaattovaara, Aleksia; Oracz, Krystyna; Kaufholdt, David; Pallon, Andres; Anggoro, Damar Tri; Glów, Dawid; Lowe, Jennifer; Zhou, Ji; Mohammadi, Omid; Puukko, Tuomas; Albert, Andreas; Lang, Hans; Ernst, Dieter; Kollist, Hannes; Brosché, Mikael; Durner, Jörg; Borst, Jan Willem; Collinge, David B.; Karpiński, Stanisław; Lyngkjær, Michael F.; Robatzek, Silke; Wrzaczek, Michael; Kangasjärvi, Jaakko
2015-01-01
Cysteine-rich receptor-like kinases (CRKs) are transmembrane proteins characterized by the presence of two domains of unknown function 26 (DUF26) in their ectodomain. The CRKs form one of the largest groups of receptor-like protein kinases in plants, but their biological functions have so far remained largely uncharacterized. We conducted a large-scale phenotyping approach of a nearly complete crk T-DNA insertion line collection showing that CRKs control important aspects of plant development and stress adaptation in response to biotic and abiotic stimuli in a non-redundant fashion. In particular, the analysis of reactive oxygen species (ROS)-related stress responses, such as regulation of the stomatal aperture, suggests that CRKs participate in ROS/redox signalling and sensing. CRKs play general and fine-tuning roles in the regulation of stomatal closure induced by microbial and abiotic cues. Despite their great number and high similarity, large-scale phenotyping identified specific functions in diverse processes for many CRKs and indicated that CRK2 and CRK5 play predominant roles in growth regulation and stress adaptation, respectively. As a whole, the CRKs contribute to specificity in ROS signalling. Individual CRKs control distinct responses in an antagonistic fashion suggesting future potential for using CRKs in genetic approaches to improve plant performance and stress tolerance. PMID:26197346
NASA Astrophysics Data System (ADS)
Bronstert, Axel; Heistermann, Maik; Francke, Till
2017-04-01
Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale of application depends on the overall question under study; it is therefore not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth examining the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, raising the question of how far, and for what purposes, such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies, process scale, measurement scale and modelling scale differ from each other. In some cases the differences between these scales can span several orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models at different scales.
Micro scale (e.g. extent of a plot, field or hillslope):
(+) enables process research based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport);
(+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible;
(+) equations based on first principles, partly of pde type, are available for several processes (though not for all), because measurement and modelling scale are compatible;
(-) the spatial model domain is hardly representative of larger spatial entities, including regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements.
Meso scale (e.g. extent of a small to large catchment or region):
(+) the spatial extent of the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions;
(+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available;
(+) some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty;
(+) equations, partly based on simple water budgeting, partly variants of pde-type equations, are available for most hydrological processes, enabling the construction of meso-scale distributed models that reflect the spatial heterogeneity of regions and landscapes;
(-) process scale, measurement scale and modelling scale differ from each other for a number of processes, e.g. runoff generation;
(-) process formulations (usually derived from micro-scale studies) cannot be transferred directly to the modelling domain, and upscaling procedures for this purpose are not readily and generally available.
Macro scale (e.g. extent of a continent up to global):
(+) the spatial extent of the model may cover the whole Earth, enabling an attractive global display of model results;
(+) model results may be technically interchangeable, or at least comparable, with results from other global models such as global climate models;
(-) process scale, measurement scale and modelling scale differ heavily from each other for all hydrological and associated processes;
(-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken;
(-) both state-condition and boundary-flux data are hardly available for the whole model domain; water management data and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability;
(-) since process formulation, and hence modelling reliability, at this scale is very limited, such models can hardly show explanatory skill or prognostic power;
(-) since both the entire model domain and its spatial sub-units cover large areas, model results represent values averaged over at least the sub-unit's extent; in many cases the applied time scale implies long-term averaging in time, too.
We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. Many results of current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate: many hydrological processes are non-linear, including threshold-type behaviour, and such features cannot be captured by large-scale entities. The model results can therefore be of little or no use for water resources decisions, and may even be misleading in public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" within which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major sustainability problems on Earth, e.g. water scarcity, manifest not on a global but on a regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. The challenge is therefore to derive models and observation programmes for regional scales. Where a global display is desired, future efforts should be directed towards building a global picture from a mosaic of sound regional assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed: for which purposes can models at the global scale be used?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Liang; Jain, Nitin; Cheng, Xiaolin
2016-10-14
Protein function often depends on global, collective internal motions. However, the simultaneous quantitative experimental determination of the forms, amplitudes, and time scales of these motions has remained elusive. We demonstrate that a complete description of these large-scale dynamic modes can be obtained using coherent neutron-scattering experiments on perdeuterated samples. With this approach, a microscopic relationship between the structure, dynamics, and function in a protein, cytochrome P450cam, is established. The approach developed here should be of general applicability to protein systems.
Measuring young children's language abilities.
Zink, I; Schaerlaekens, A
2000-01-01
This article deals with the new challenges in language diagnosis and the growing need for good diagnostic instruments for young children. For Dutch in particular, the original English Reynell Developmental Language Scales were adapted not only to the Dutch idiom; general improvements and changes to the original scales also resulted in a new instrument named the RTOS. The new instrument was standardized on a large population and psychometrically evaluated. By communicating our experience with such a language/cultural/psychometric adaptation, we hope to encourage other language-minority groups to undertake similar adaptations.
Predicting spatio-temporal failure in large scale observational and micro scale experimental systems
NASA Astrophysics Data System (ADS)
de las Heras, Alejandro; Hu, Yong
2006-10-01
Forecasting has become an essential part of modern thought, but the practical limitations are still manifold. We addressed future rates of change by comparing models that take time into account with models that focus more on space. Cox regression confirmed that linear change can be safely assumed in the short term. Spatially explicit Poisson regression provided a ceiling value for the number of deforestation spots. With several observed and estimated rates available, it was decided to forecast using the more robust assumptions. A Markov-chain cellular automaton thus projected 5-year deforestation in the Amazonian Arc of Deforestation, showing that even a stable rate of change would largely deplete the forest area. More generally, the resolution and implementation of the existing models could explain many of the modelling difficulties still affecting forecasting.
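A Markov-chain cellular automaton of the general kind described can be sketched as follows. The grid size, transition probabilities, neighborhood rule, and random seed are hypothetical illustrations, not the study's calibrated model.

```python
import numpy as np

# Toy Markov-chain cellular automaton for deforestation projection:
# forest cells (1) convert to cleared (0) with a probability that rises
# with the number of already-cleared neighbors, so clearing spreads
# outward from existing spots. All rates and sizes are illustrative.
rng = np.random.default_rng(0)
grid = np.ones((50, 50), dtype=int)
grid[25, 25] = 0                                  # initial deforestation spot

def step(grid, base_p=0.01, neigh_p=0.1):
    cleared = 1 - grid
    # count cleared neighbors (von Neumann neighborhood, zero-padded edges)
    padded = np.pad(cleared, 1)
    n = (padded[:-2, 1:-1] + padded[2:, 1:-1]
         + padded[1:-1, :-2] + padded[1:-1, 2:])
    p = np.clip(base_p + neigh_p * n, 0.0, 1.0)   # per-cell conversion prob.
    converts = (rng.random(grid.shape) < p) & (grid == 1)
    return np.where(converts, 0, grid)

forest = [int(grid.sum())]
for _ in range(5):                                # five annual transitions
    grid = step(grid)
    forest.append(int(grid.sum()))
# forest area is non-increasing: the transition rule is one-way
```

Because conversion is one-way (no regrowth state here), the forest total can only decline, which mirrors the study's qualitative finding that even a stable rate of change depletes the forest area.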
NASA Astrophysics Data System (ADS)
Hanna, Steven R.; Young, George S.
2017-01-01
What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);
Anber, Usama; Gentine, Pierre; Wang, Shuguang; Sobel, Adam H.
2015-01-01
The diurnal and seasonal water cycles in the Amazon remain poorly simulated in general circulation models, exhibiting peak evapotranspiration in the wrong season and rain too early in the day. We show that those biases are not present in cloud-resolving simulations with parameterized large-scale circulation. The difference is attributed to the representation of the morning fog layer, and to more accurate characterization of convection and its coupling with large-scale circulation. The morning fog layer, present in the wet season but absent in the dry season, dramatically increases cloud albedo, which reduces evapotranspiration through its modulation of the surface energy budget. These results highlight the importance of the coupling between the energy and hydrological cycles and the key role of cloud albedo feedback for climates over tropical continents. PMID:26324902
Clustering fossils in solid inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir
In solid inflation the single field non-Gaussianity consistency condition is violated. As a result, the long tensor perturbation induces observable clustering fossils in the form of quadrupole anisotropy in the large scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectra for the general parameter space of solid inflation. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced by the long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large scale structures can be detected by future galaxy surveys.
Anomalies of the Asian Monsoon Induced by Aerosol Forcings
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Kim, M. K.
2004-01-01
Impacts of aerosols on the Asian summer monsoon are studied using the NASA finite volume General Circulation Model (fvGCM), with radiative forcing derived from three-dimensional distributions of five aerosol species, i.e., sulfate, black carbon, organic carbon, soil dust, and sea salt, from the Goddard Chemistry Aerosol Radiation and Transport Model (GOCART). Results show that absorbing aerosols, i.e., black carbon and dust, induce a large-scale upper-level heating anomaly over the Tibetan Plateau in April and May, ushering in an early onset of the Indian summer monsoon. Absorbing aerosols also enhance lower-level heating and anomalous ascent over northern India, intensifying the Indian monsoon. Overall, the aerosol-induced large-scale surface temperature cooling leads to a reduction of monsoon rainfall over the East Asian continent and adjacent oceanic regions.
Climate Dynamics and Hysteresis at Low and High Obliquity
NASA Astrophysics Data System (ADS)
Colose, C.; Del Genio, A. D.; Way, M.
2017-12-01
We explore the large-scale climate dynamics at low and high obliquity for an Earth-like planet using the ROCKE-3D (Resolving Orbital and Climate Keys of Earth and Extraterrestrial Environments with Dynamics) 3-D General Circulation model being developed at NASA GISS as part of the Nexus for Exoplanet System Science (NExSS) initiative. We highlight the role of ocean heat storage and transport in determining the seasonal cycle at high obliquity, and describe the large-scale circulation and resulting regional climate patterns using both aquaplanet and Earth topographical boundary conditions. Finally, we contrast the hysteresis structure to varying CO2 concentration for a low and high obliquity planet near the outer edge of the habitable zone. We discuss the prospects for habitability for a high obliquity planet susceptible to global glaciation.
Large-scale variations in observed Antarctic Sea ice extent and associated atmospheric circulation
NASA Technical Reports Server (NTRS)
Cavalieri, D. J.; Parkinson, C. L.
1981-01-01
The 1974 Antarctic large-scale sea ice extent is studied using data from Nimbus 2 and 5 together with temperature and sea level pressure fields from the Australian Meteorological Data Set. Electrically Scanning Microwave Radiometer data were three-day averaged and compared with 1000 mbar atmospheric pressure and sea level pressure data, also in three-day averages. For each three-day period, a Fourier analysis yielded the mean latitude of the ice extent and the phases and percent variances of the first six harmonics. Centers of low pressure were found to be generally east of regions which displayed rapid ice growth, and winds acted to extend the ice equatorward. An atmospheric response to the changing ice cover was also noted.
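A Fourier decomposition of the ice-edge latitude around a latitude circle, of the kind described, can be sketched as follows. This is a generic reconstruction of the analysis (amplitudes, phases, and percent variance per harmonic), not the original code:

```python
import numpy as np

def ice_edge_harmonics(edge_lat, n_harmonics=6):
    """Decompose ice-edge latitude sampled evenly in longitude into its
    first few Fourier harmonics.

    Returns the mean latitude, plus the amplitude, phase, and percent of
    total variance explained for harmonics 1..n_harmonics.
    """
    edge_lat = np.asarray(edge_lat, dtype=float)
    n = edge_lat.size
    coeffs = np.fft.rfft(edge_lat) / n          # normalized spectrum
    mean_lat = coeffs[0].real                   # zonal-mean ice edge
    amps = 2.0 * np.abs(coeffs[1:n_harmonics + 1])
    phases = np.angle(coeffs[1:n_harmonics + 1])
    # A harmonic of amplitude A contributes A^2/2 to the variance.
    pct_var = 100.0 * (amps ** 2 / 2.0) / np.var(edge_lat)
    return mean_lat, amps, phases, pct_var
```

For a purely wavenumber-2 ice edge, all the variance lands in the second harmonic, which is a convenient sanity check.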
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
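The traditional Green-Kubo route that the large-deviation method is benchmarked against integrates an equilibrium current autocorrelation function. A minimal sketch for the self-diffusion coefficient from a 1-D velocity trajectory (a generic textbook estimator, not the authors' trajectory-sampling code):

```python
import numpy as np

def green_kubo_diffusion(velocities, dt):
    """Self-diffusion coefficient from the Green-Kubo relation
    D = integral_0^inf <v(0) v(t)> dt, with the velocity autocorrelation
    averaged over time origins and integrated by the trapezoid rule.
    """
    v = np.asarray(velocities, dtype=float)
    n = v.size
    # <v(0) v(t)> for lags up to half the trajectory length
    acf = np.array([np.mean(v[:n - lag] * v[lag:]) for lag in range(n // 2)])
    # trapezoid-rule integral of the autocorrelation
    return float(np.sum((acf[1:] + acf[:-1]) * 0.5) * dt)
```

The statistical difficulty the paper addresses is visible here: the tail of `acf` is noisy, and the integral converges slowly, which is what importance-sampled large deviation estimates aim to improve on.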
Robustness of serial clustering of extra-tropical cyclones to the choice of tracking method
NASA Astrophysics Data System (ADS)
Pinto, Joaquim G.; Ulbrich, Sven; Karremann, Melanie K.; Stephenson, David B.; Economou, Theodoros; Shaffrey, Len C.
2016-04-01
Cyclone families are a frequent synoptic weather feature in the Euro-Atlantic area in winter. Given appropriate large-scale conditions, the occurrence of such series (clusters) of storms may lead to large socio-economic impacts and cumulative losses. Recent studies analyzing Reanalysis data using single cyclone tracking methods have shown that serial clustering of cyclones occurs on both flanks and downstream regions of the North Atlantic storm track. This study explores the sensitivity of serial clustering to the choice of tracking method. With this aim, the IMILAST cyclone track database based on ERA-Interim data is analysed. Clustering is estimated by the dispersion (ratio of variance to mean) of winter (DJF) cyclone passages near each grid point over the Euro-Atlantic area. Results indicate that while the general pattern of clustering is identified for all methods, there are considerable differences in detail. This can primarily be attributed to the differences in the variance of cyclone counts between the methods, which range up to one order of magnitude. Nevertheless, clustering over the Eastern North Atlantic and Western Europe can be identified for all methods and can thus be generally considered as a robust feature. The statistical links between large-scale patterns like the NAO and clustering are obtained for all methods, though with different magnitudes. We conclude that the occurrence of cyclone clustering over the Eastern North Atlantic and Western Europe is largely independent from the choice of tracking method and hence from the definition of a cyclone.
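The dispersion statistic described above (ratio of variance to mean of cyclone counts) is straightforward to compute per grid point. A minimal sketch, using the sample variance; the study's exact estimator conventions are not reproduced:

```python
import numpy as np

def dispersion(counts):
    """Index of dispersion of seasonal cyclone counts at a grid point:
    variance / mean. For a Poisson (unclustered) process this is 1;
    values above 1 indicate serial clustering, below 1 regularity.
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()
```

This makes the paper's sensitivity result concrete: because the statistic is variance-driven, tracking methods whose count variances differ by an order of magnitude can disagree substantially in dispersion even when the mean counts agree.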
NASA Astrophysics Data System (ADS)
Lothet, Emilie H.; Shaw, Kendrick M.; Horn, Charles C.; Lu, Hui; Wang, Yves T.; Jansen, E. Duco; Chiel, Hillel J.; Jenkins, Michael W.
2016-03-01
Sensory information is conveyed to the central nervous system via small diameter unmyelinated fibers. In general, smaller diameter axons have slower conduction velocities. Selective control of such fibers could create new clinical treatments for chronic pain, nausea in response to chemotherapeutic agents, or hypertension. Electrical stimulation can control axonal activity, but induced axonal current is proportional to cross-sectional area, so that large diameter fibers are affected first. Physiologically, however, synaptic inputs generally affect small diameter fibers before large diameter fibers (the size principle). A more physiological modality that first affected small diameter fibers could have fewer side effects (e.g., not recruiting motor axons). A novel mathematical analysis of the cable equation demonstrates that the minimum length along the axon for inducing block scales with the square root of axon diameter, so that lower radiant exposures of infrared light will selectively affect small diameter, slower conducting fibers before those of large diameter. This prediction was tested in identified neurons from the marine mollusk Aplysia californica. The radiant exposure needed to block a neuron with a slower conduction velocity (B43) was consistently lower than that needed to block a neuron with a faster conduction velocity (B3). Furthermore, in the vagus nerve of the musk shrew, lower radiant exposure blocked slow conducting fibers before blocking faster conducting fibers. Infrared light can selectively control smaller diameter fibers, suggesting many novel clinical treatments.
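The square-root scaling comes from the cable equation's space constant, lambda = sqrt(R_m * d / (4 * R_i)), which grows as the square root of fiber diameter d. A minimal numerical sketch with illustrative textbook parameter values (R_m, R_i are assumptions here, not values fitted in the study):

```python
import math

def space_constant_um(diameter_um, Rm=20000.0, Ri=100.0):
    """Cable-equation space constant lambda = sqrt(Rm * d / (4 * Ri)).

    Rm: specific membrane resistance (ohm*cm^2); Ri: axoplasmic
    resistivity (ohm*cm). Both are illustrative textbook-order values.
    Diameter in micrometres; result in micrometres.
    """
    d_cm = diameter_um * 1e-4                    # um -> cm
    lam_cm = math.sqrt((Rm * d_cm) / (4.0 * Ri))
    return lam_cm * 1e4                          # cm -> um

# A fourfold diameter difference gives only a twofold difference in the
# characteristic length, i.e. sqrt(4) = 2:
ratio = space_constant_um(4.0) / space_constant_um(1.0)
```

Because the minimum illuminated length for block tracks this length scale, smaller diameter fibers can be blocked with shorter irradiated segments and hence lower radiant exposure, consistent with the experimental ordering reported (B43 before B3).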
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco
Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k~0.13 hMpc –1 or k~0.18 hMpc –1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
Power suppression at large scales in string inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cicoli, Michele; Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu
2013-12-01
We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10⁻⁵) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Performance Assessment of a Large Scale Pulsejet- Driven Ejector System
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.
2006-01-01
Unsteady thrust augmentation was measured on a large scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lbf pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the dimensions of the ejectors which maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation studies have been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as the small scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems.
Spin diffusion from an inhomogeneous quench in an integrable system.
Ljubotina, Marko; Žnidarič, Marko; Prosen, Tomaž
2017-07-13
Generalized hydrodynamics predicts universal ballistic transport in integrable lattice systems when prepared in generic inhomogeneous initial states. However, the ballistic contribution to transport can vanish in systems with additional discrete symmetries. Here we perform large scale numerical simulations of spin dynamics in the anisotropic Heisenberg XXZ spin 1/2 chain starting from an inhomogeneous mixed initial state which is symmetric with respect to a combination of spin reversal and spatial reflection. In the isotropic and easy-axis regimes we find non-ballistic spin transport which we analyse in detail in terms of scaling exponents of the transported magnetization and scaling profiles of the spin density. While in the easy-axis regime we find accurate evidence of normal diffusion, the spin transport in the isotropic case is clearly super-diffusive, with the scaling exponent very close to 2/3, but with universal scaling dynamics which obeys the diffusion equation in nonlinearly scaled time.
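The scaling-exponent analysis described above (transported magnetization growing as t^alpha, with alpha = 1/2 for diffusion and close to 2/3 for the isotropic, superdiffusive case) amounts to a fit in log-log space. A generic analysis sketch on synthetic data, not the authors' simulation code:

```python
import numpy as np

def scaling_exponent(times, transported):
    """Estimate alpha in transported(t) ~ t**alpha by a least-squares
    straight-line fit of log(transported) against log(t)."""
    logt = np.log(np.asarray(times, dtype=float))
    logm = np.log(np.asarray(transported, dtype=float))
    alpha, _intercept = np.polyfit(logt, logm, 1)
    return alpha

# Synthetic superdiffusive data with the Kardar-Parisi-Zhang-like
# exponent alpha = 2/3 reported for the isotropic point:
t = np.arange(1.0, 101.0)
m = 0.5 * t ** (2.0 / 3.0)
alpha = scaling_exponent(t, m)
```

On real simulation data the fit window matters: early times carry transients, so the exponent is normally extracted from the late-time portion of the curve.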
Testing the equivalence principle on cosmological scales
NASA Astrophysics Data System (ADS)
Bonvin, Camille; Fleury, Pierre
2018-05-01
The equivalence principle, which is one of the main pillars of general relativity, is very well tested in the Solar System; however, its validity is more uncertain on cosmological scales, or when dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After proposing a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis therefore shows that relativistic effects bring new and complementary constraints to alternative theories of gravity.
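The Fisher-matrix step behind such a forecast reduces to inverting the Fisher matrix and reading marginalised 1-sigma errors off its inverse's diagonal. A minimal sketch with a made-up two-parameter matrix (the numbers are illustrative, not the paper's DESI/SKA forecasts):

```python
import numpy as np

def marginalised_errors(F):
    """1-sigma marginalised parameter errors from a Fisher matrix F:
    sigma_i = sqrt((F^-1)_ii). Marginalising over the other parameters
    always gives errors at least as large as the conditional ones."""
    return np.sqrt(np.diag(np.linalg.inv(np.asarray(F, dtype=float))))

# Toy Fisher matrix for (Euler-violation amplitude, nuisance parameter)
# with mild correlation between the two:
F = np.array([[120.0, 30.0],
              [30.0, 400.0]])
sigma = marginalised_errors(F)
```

The off-diagonal term is what degrades the constraint: with it set to zero, the error on the first parameter would shrink to the conditional value sqrt(1/120).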
NASA Technical Reports Server (NTRS)
Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
Landsurface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three dimensional GCM. Results of the final simulation with the GISS GCM and the new landsurface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and landsurface) scale, to the zonal, continental, and finally the finer river basin scales.
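The soil moisture diffusion component can be illustrated with an explicit finite-difference step for the 1-D diffusion equation on a soil column. This is a generic sketch under simplifying assumptions (constant diffusivity, no-flux boundaries); the GISS scheme's actual discretisation and moisture-dependent diffusivity are not reproduced here:

```python
import numpy as np

def diffuse_soil_moisture(theta, D, dz, dt, steps):
    """Explicit finite-difference integration of
    d(theta)/dt = D * d2(theta)/dz2 on a soil column, with no-flux
    boundaries at the top and bottom so total water is conserved."""
    theta = np.asarray(theta, dtype=float).copy()
    r = D * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dz^2 <= 0.5"
    for _ in range(steps):
        padded = np.pad(theta, 1, mode="edge")   # ghost cells => no-flux
        theta = theta + r * (padded[2:] - 2.0 * theta + padded[:-2])
    return theta

# Smooth an initially uneven moisture profile over ten stable steps:
theta = diffuse_soil_moisture([0.1, 0.4, 0.2, 0.3], D=0.5, dz=1.0, dt=0.5, steps=10)
```

The no-flux boundary choice is the conservation property a land-surface scheme needs: diffusion redistributes water between layers without creating or destroying it.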
Gender Perspectives on Spatial Tasks in a National Assessment: A Secondary Data Analysis
ERIC Educational Resources Information Center
Logan, Tracy; Lowrie, Tom
2017-01-01
Most large-scale summative assessments present results in terms of cumulative scores. Although such descriptions can provide insights into general trends over time, they do not provide detail of how students solved the tasks. Less restrictive access to raw data from these summative assessments has occurred in recent years, resulting in…
Remote analysis of biological invasion and the impact of enemy release
James R. Kellner; Gregory P. Asner; Kealoha M. Kinney; Scott R. Loarie; David E. Knapp; Ty Kennedy-Bowdoin; Erin J. Questad; Susan Cordell; Jarrod M. Thaxton
2011-01-01
Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and...