Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E.
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics. PMID:26381745
Large-scale modeling of rain fields from a rain cell deterministic model
NASA Astrophysics Data System (ADS)
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
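The Gaussian-to-binary transformation described above can be sketched numerically. In this illustrative fragment, plain white noise stands in for the correlated, anisotropic Gaussian field, and the occupation rate is a fixed input rather than a draw from the ARAMIS-derived distribution (both are simplifying assumptions):

```python
import numpy as np

def binary_rain_mask(gauss_field, occupation_rate):
    """Turn a Gaussian field into a binary rain/no-rain mask whose
    rainy fraction matches the target large-scale occupation rate."""
    # Threshold at the (1 - rate) quantile so that approximately
    # `occupation_rate` of the pixels are flagged as raining.
    threshold = np.quantile(gauss_field, 1.0 - occupation_rate)
    return gauss_field > threshold

# White noise stands in for the correlated Gaussian field (assumption).
rng = np.random.default_rng(0)
field = rng.standard_normal((200, 200))
mask = binary_rain_mask(field, 0.3)
```

Thresholding at the (1 - rate) quantile pins the rainy fraction of the domain to the prescribed occupation rate regardless of the field's covariance, which is why the correlation structure and the occupation-rate statistics can be modeled independently.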
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
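The dissipative eddy-viscosity baseline that the proposed model extends is, in many LES codes, the classic Smagorinsky term; a pointwise sketch of that baseline follows (the paper's nondissipative nonlinear transport term is not reproduced here, and the model constant is a typical illustrative value, not one from the paper):

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * delta)**2 * |S| at one
    grid point, from the velocity-gradient tensor grad_u[i, j] = du_i/dx_j."""
    S = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * S_mag

# Pure shear du/dy = 1: |S| = 1, so nu_t reduces to (C_s * delta)**2.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
nu_t = smagorinsky_viscosity(grad_u, delta=1.0, c_s=0.2)
```

Because nu_t is nonnegative by construction, this term can only drain resolved kinetic energy, which is exactly the limitation the abstract's nonlinear model term is designed to address.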
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as E-tilde = 0.5 ⟨u'(sub i) u'(sub i)⟩, where u'(sub i) represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes.
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Anisotropies of the cosmic microwave background in nonstandard cold dark matter models
NASA Technical Reports Server (NTRS)
Vittorio, Nicola; Silk, Joseph
1992-01-01
Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
ERIC Educational Resources Information Center
Burstein, Leigh
Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…
Large-Scale Aerosol Modeling and Analysis
2009-09-30
Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for "Large-scale Atmospheric Models", "Small-scale Atmospheric Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models, and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach, and of very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distribution of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace well the large-scale velocity field and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters 689 +/- 178 km/s on approximately 150/h Mpc scale (Lauer & Postman 1994) is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).
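A bulk motion such as the Lauer & Postman value quoted above is simply the magnitude of the mean peculiar-velocity vector of the cluster sample; a minimal sketch of that statistic and the companion dispersion (the helper names and toy velocities are illustrative, not from the paper):

```python
import numpy as np

def bulk_motion(velocities):
    """Magnitude of the mean peculiar-velocity vector of a sample
    (velocities: (N, 3) array, km/s)."""
    return np.linalg.norm(velocities.mean(axis=0))

def velocity_dispersion_1d(velocities):
    """1D velocity dispersion about the bulk flow, averaged over the
    three Cartesian components."""
    residual = velocities - velocities.mean(axis=0)
    return np.sqrt((residual ** 2).sum(axis=1).mean() / 3.0)

# Toy sample of two clusters moving along x.
v = np.array([[300.0, 0.0, 0.0],
              [100.0, 0.0, 0.0]])
bulk = bulk_motion(v)  # 200 km/s
```

Comparing these two statistics between the cluster sample and the full matter field is the essence of the tracer test the abstract describes.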
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated the simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. Comparison with an experiment using the spectral nudging method showed that the strengthened wind speed was mainly modulated by large-scale flow on scales greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This indicates that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and the forcing large-scale fields should be considered, and spectral nudging is a desirable approach for such downscaling.
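In one spatial dimension, the essence of spectral nudging is to relax only the low-wavenumber part of the model field toward the driving large-scale field while leaving the smaller scales free to evolve; a schematic sketch (the cutoff and relaxation coefficient are illustrative choices, not the WRF implementation):

```python
import numpy as np

def spectral_nudge(model_field, driving_field, cutoff, alpha):
    """Nudge wavenumbers k <= cutoff of a periodic 1-D field toward the
    driving field by a fraction alpha; higher wavenumbers are untouched."""
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    low = np.arange(fm.size) <= cutoff
    fm[low] += alpha * (fd[low] - fm[low])
    return np.fft.irfft(fm, n=model_field.size)

# Only the domain mean (k = 0) is pulled halfway toward the driving field.
out = spectral_nudge(np.zeros(8), np.ones(8), cutoff=0, alpha=0.5)
```

Because the update is applied in spectral space, the regional model keeps its self-generated fine-scale structure while its large scales stay consistent with the boundary forcing, which is the adjustment the abstract credits for removing spurious TCs.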
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.;
2013-01-01
Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for some common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
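A view-dependent LOD scheme like the one described ultimately reduces to picking a resolution level per node from the camera distance; a toy selection policy (the distance thresholds and level count are hypothetical, not taken from the paper):

```python
import math

def select_lod(distance, base_distance=10.0, num_levels=5):
    """Level-of-detail index from camera distance: level 0 (finest) up
    close, one level coarser per doubling of distance, clamped at the
    coarsest available level."""
    if distance <= base_distance:
        return 0
    level = int(math.log2(distance / base_distance)) + 1
    return min(level, num_levels - 1)

levels = [select_lod(d) for d in (5.0, 15.0, 45.0, 1000.0)]
```

Only the geometry for the selected levels needs to reside in RAM, which is the hook for the out-of-core memory exchange mechanism the abstract mentions.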
USDA-ARS?s Scientific Manuscript database
Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global-scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global-scale climate change.
NASA Astrophysics Data System (ADS)
Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy
2017-09-01
An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: fully coupled subgrid-scale turbulence, combustion, soot, and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for a non-premixed diffusion flame. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with both the turbulent Prandtl and Schmidt numbers set to 0.3 and a Smagorinsky constant of 0.2 applied. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge
NASA Astrophysics Data System (ADS)
Park, Heon-Joon; Lee, Changyeol
2017-04-01
Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, the gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies; only a few considered a larger gravitational acceleration by using a centrifuge (an apparatus generating large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow a large scaling-down factor and accelerated deformation driven by density differences, such as in salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows a large scale-model surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparison among these simulations and the role of the small-scale fields in the large-scale fields through the SGS terms in the LES.
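The horizontal filtering step amounts to damping each spherical-harmonic degree l by a transfer function; a schematic sketch using one common Gaussian form, exp(-l(l+1) sigma^2 / 2) (the paper's exact weights follow Jekeli 1981 and may differ from this assumed form):

```python
import numpy as np

def gaussian_spectral_filter(coeffs_by_degree, sigma):
    """Damp spherical-harmonic coefficients degree by degree with a
    Gaussian transfer function exp(-l (l + 1) sigma**2 / 2); the filter
    width sigma is in radians on the unit sphere."""
    return {l: c * np.exp(-l * (l + 1) * sigma ** 2 / 2.0)
            for l, c in coeffs_by_degree.items()}

# The degree-0 (mean) component passes unchanged; high degrees are crushed.
filtered = gaussian_spectral_filter({0: 1.0, 10: 1.0, 100: 1.0}, sigma=0.05)
```

Working in spectral space makes the filter a cheap per-degree multiplication, which is why convolution filters of this kind fit naturally into spherical-harmonic dynamo codes like Calypso.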
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
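The WTG closure mentioned above diagnoses the large-scale vertical velocity needed to relax the temperature profile toward the reference state; a single-level sketch of the standard balance w * dtheta/dz = (theta - theta_ref) / tau (the relaxation timescale and anomaly values are illustrative, not from the intercomparison):

```python
def wtg_vertical_velocity(theta, theta_ref, dtheta_dz, tau=10800.0):
    """Large-scale vertical velocity (m/s) from the weak temperature
    gradient balance: adiabatic cooling by ascent offsets the potential
    temperature anomaly over relaxation timescale tau (s)."""
    return (theta - theta_ref) / (tau * dtheta_dz)

# A 1 K warm anomaly with dtheta/dz = 4 K/km yields gentle large-scale ascent.
w = wtg_vertical_velocity(301.0, 300.0, 0.004)
```

A warm column thus drives ascent (and, through moisture convergence, more precipitation), which is the two-way convection-circulation feedback the abstract's simulations are probing.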
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
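A headline result of this style of homogenization is that the effective large-scale diffusivity of ecological diffusion is the harmonic mean of the small-scale motility, with equilibrium density following the residence index 1/μ. A minimal numpy sketch (the motility field below is invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical small-scale motility field: fast movement in open habitat,
# slow movement in forested patches (values are illustrative only).
mu = np.where(np.sin(np.linspace(0, 20 * np.pi, 1000)) > 0, 5.0, 0.5)

# For ecological diffusion u_t = (mu * u)_xx, the homogenized (effective)
# large-scale motility is the harmonic mean of the small-scale motility.
mu_eff = 1.0 / np.mean(1.0 / mu)

# At small scales, equilibrium density is proportional to the residence
# index 1/mu: animals accumulate where movement is slow.
residence = (1.0 / mu) / np.mean(1.0 / mu)
```

Note that the harmonic mean is always below the arithmetic mean, so naively averaging motility over a landscape overestimates large-scale spread.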
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering minimal information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.
Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael
2013-08-01
We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., at the national scale, and at microscopic scales, i.e., at the city level. It is shown that the thermodynamic limit assumption for the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even though the gravity model has a better overall performance, the parameter-free radiation model gives competitive results, especially at large scales.
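For reference, the radiation model's expected flux has a closed form (due to Simini et al.). The sketch below applies the finite-system normalization factor 1/(1 - m_i/M) described in the abstract; the function and variable names are ours, not the paper's:

```python
def radiation_flux(m_i, n_j, s_ij, T_i, M=None):
    """Expected commuter flux from origin i to destination j.

    m_i, n_j: populations of origin and destination; s_ij: total population
    in the circle of radius r_ij around i (excluding i and j); T_i: total
    out-commuters from i.  If M (total system population) is given, the
    finite-size normalization 1/(1 - m_i/M) is applied, which matters for
    large cities in a finite system such as England and Wales.
    """
    flux = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))
    if M is not None:
        flux /= 1.0 - m_i / M  # correction for finite systems
    return flux
```

The correction always increases the predicted flux, counteracting the underestimation for large cities noted in the abstract.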
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
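The abstract does not give the estimator itself. As an illustration of the general idea — regularizing a high-dimensional sample covariance toward a structured target to trade variance for bias — here is a minimal shrinkage sketch; the identity target and the fixed weight are our assumptions, not the paper's hierarchical prior:

```python
import numpy as np

def shrunk_covariance(X, lam=0.2):
    """Shrink the sample covariance toward a scaled identity target.

    With p variables and only n << p samples (typical for OMICS data),
    the sample covariance is singular and high-variance; blending with a
    low-variance structured target yields a well-conditioned estimate.
    lam is a hypothetical shrinkage weight, not a value from the paper.
    """
    S = np.cov(X, rowvar=False)                       # p x p sample covariance
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1.0 - lam) * S + lam * target
```

Even when n < p leaves the sample covariance rank-deficient, the shrunk estimate is symmetric positive definite and therefore invertible.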
A new energy transfer model for turbulent free shear flow
NASA Technical Reports Server (NTRS)
Liou, William W.-W.
1992-01-01
A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer. Reasonably good agreement is achieved. Finally, it is shown, using the Liapunov function method, that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large-scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large-scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large-scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach improved significantly the recovery of the small-scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large-scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions
NASA Astrophysics Data System (ADS)
O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffied, Justin
2017-04-01
As hydraulic modelling moves to increasingly large spatial domains it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have been including reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoir operations in hydrological models are not undertaken during run-time operation. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large-scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin (~400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included.
The results show that we are able to obtain variation in the Lake Volta water levels and that including the dam operations and Lake Volta has significant impacts on the water levels across the domain.
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also faces this problem: the time-consuming nature of such simulations limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when only the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.
Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru
2015-05-01
Recently isospin symmetry breaking for mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). Behind these investigations, we have encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this subtle problem by the Sakurai-Sugiura (SS) method, which has been recently proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
The global reference atmospheric model, mod 2 (with two scale perturbation model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Hargraves, W. R.
1976-01-01
The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model is described, using perturbation magnitudes adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation for use at low latitudes, which does not "blow up" there as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between the fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible, and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.
However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Effects of large-scale wind driven turbulence on sound propagation
NASA Technical Reports Server (NTRS)
Noble, John M.; Bass, Henry E.; Raspet, Richard
1990-01-01
Acoustic measurements made in the atmosphere have shown significant fluctuations in amplitude and phase resulting from the interaction with time-varying meteorological conditions. The observed variations appear to have short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for the long-term variation is the use of a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scale of the large-scale turbulence is 200 meters or greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Because of the size of the vortex pair compared to the scale of the present experiment, its effect on the acoustic field can be modeled as a sound speed of the atmosphere that varies with time. The model provides results with the same trends and variations in phase observed experimentally.
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant across time-scales (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step.
This approach consisted of (1) decomposing both signals (the SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing all scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which was characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodology, further work may concern a fully comprehensive sensitivity analysis of the modeling with respect to the parameters of the multiresolution approach (families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
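The three steps above can be sketched with a toy multiresolution ESD, assuming a simple moving-average decomposition in place of the paper's discrete wavelet transform, and one univariate linear regression per scale (the actual study uses an SLP field as a multivariate predictor; all of this is illustrative):

```python
def smooth(x, w):
    """Centered moving average of width w (edges use shorter windows)."""
    h = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - h):i + h + 1]
        out.append(sum(seg) / len(seg))
    return out

def decompose(x, windows=(3, 9, 27)):
    """Split x into one detail component per scale plus a final smooth
    trend; the components sum back to x exactly (analysis/synthesis)."""
    comps, current = [], list(x)
    for w in windows:
        s = smooth(current, w)
        comps.append([a - b for a, b in zip(current, s)])
        current = s
    comps.append(current)
    return comps

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) or 1e-12
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def multires_esd(predictor, predictand):
    """Steps 1-3 of the abstract: decompose both series, fit one
    downscaling model per time-scale, and sum the scale-wise predictions."""
    recon = [0.0] * len(predictor)
    for cx, cy in zip(decompose(predictor), decompose(predictand)):
        slope, intercept = fit_linear(cx, cy)
        recon = [r + intercept + slope * a for r, a in zip(recon, cx)]
    return recon
```

Because the decomposition is linear, the per-scale regressions can capture relationships whose strength differs across time-scales, which is the mechanism the abstract credits for the improvement over basic ESD.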
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
USDA-ARS?s Scientific Manuscript database
In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...
Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.
Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S
2016-09-26
In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models or rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applied an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
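A minimal sketch of consensus ranking across several QA methods, assuming rank-averaging as the combination rule (the paper combines 14 methods and also refines models by averaging; the method names and scores below are hypothetical, and every method is assumed to score every model):

```python
def consensus_rank(scores_by_method):
    """scores_by_method: {qa_method_name: {model_name: score}}, where a
    higher score means a better model. Each QA method's scores are first
    converted to ranks, so methods with different score ranges contribute
    equally; models are then ordered by mean rank (best first)."""
    models = sorted({m for s in scores_by_method.values() for m in s})
    mean_rank = {}
    for m in models:
        ranks = []
        for s in scores_by_method.values():
            ordered = sorted(s, key=s.get, reverse=True)
            ranks.append(ordered.index(m) + 1)  # assumes m is scored by every method
        mean_rank[m] = sum(ranks) / len(ranks)
    return sorted(models, key=lambda m: (mean_rank[m], m))
```

Rank-averaging is one simple way to realize the "consensus" idea: a model that no single method loves but every method respects can still win overall.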
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power over ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and at smaller redshifts due to different GR effects. But on smaller scales and at larger redshifts, the difference is small and is mainly due to the difference in background expansion.
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1993-01-01
Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse for scales M between 10^9-10^10 solar masses and (1-5) x 10^14 solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied owing to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
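A sketch of the exploration/preferential-return mechanism named above, in the style of standard EPR mobility models. The parameter values rho and gamma are illustrative, not the paper's; the Gaussian assumption on the exploration-tendency parameter would be applied across a population of walkers (one rho drawn per walker), which is not shown here:

```python
import random

def simulate_walker(steps, rho=0.6, gamma=0.21, rng=None):
    """Exploration / preferential-return walker (EPR-style sketch).
    With probability rho * S**(-gamma), where S is the number of distinct
    sites visited so far, the walker explores a brand-new site; otherwise
    it returns to a known site chosen in proportion to past visit counts."""
    rng = rng or random.Random()
    visits = {0: 1}                # site id -> visit count
    next_site, trajectory = 1, [0]
    for _ in range(steps):
        S = len(visits)
        if rng.random() < rho * S ** (-gamma):
            site, next_site = next_site, next_site + 1   # explore new site
        else:                                            # preferential return
            r = rng.random() * sum(visits.values())
            acc = 0.0
            for s, c in visits.items():
                acc += c
                if r < acc:
                    site = s
                    break
        visits[site] = visits.get(site, 0) + 1
        trajectory.append(site)
    return trajectory, visits
```

Because the exploration probability decays as S grows, the number of distinct visited sites grows sublinearly in time, reproducing the saturating exploration behavior such models are built to capture.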
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by applying a simple 0th-order correction, updating each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
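A minimal sketch of the 0th-order correction described above: shift the fine-scale estimates so their aggregate agrees with the coarse measurement. The gain argument is a hypothetical stand-in for a Kalman-style weighting between model and measurement uncertainty:

```python
def disaggregate(fine_estimates, coarse_obs, gain=1.0):
    """Zeroth-order correction of a fine-scale field by one coarse
    measurement: every fine cell is shifted by gain * (residual between
    the coarse observation and the fine-scale mean). gain=1.0 forces
    exact agreement with the measurement; a Kalman-style gain in (0, 1)
    would instead weight model vs. measurement uncertainty."""
    residual = coarse_obs - sum(fine_estimates) / len(fine_estimates)
    return [v + gain * residual for v in fine_estimates]

# Hypothetical soil-moisture values: three fine cells inside one coarse
# microwave pixel whose retrieval is 0.30.
corrected = disaggregate([0.10, 0.20, 0.30], 0.30)
```

The correction preserves the small-scale spatial pattern (cell-to-cell differences are untouched) while reconciling its mean with the large-scale observation.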
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
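The key property claimed above, that the cost of a gradient computation is effectively independent of the number of parameters, can be illustrated with a discrete adjoint for a one-parameter toy ODE. This is an illustration of the adjoint idea only, not the paper's implementation:

```python
def adjoint_gradient(theta, x0, h, n, data):
    """Gradient of J = (x_n - data)^2 for the explicit-Euler scheme
    x_{k+1} = x_k * (1 - theta*h) of dx/dt = -theta*x, computed with one
    forward sweep plus one backward (adjoint) sweep. The backward sweep's
    cost does not grow with the number of parameters, which is the
    property adjoint sensitivity analysis exploits for genome-scale
    models (this toy has a single parameter purely for illustration)."""
    xs = [x0]
    for _ in range(n):                      # forward solve
        xs.append(xs[-1] * (1.0 - theta * h))
    lam = 2.0 * (xs[-1] - data)             # adjoint seed: dJ/dx_n
    grad = 0.0
    for k in range(n - 1, -1, -1):          # backward (adjoint) sweep
        grad += lam * (-h * xs[k])          # partial of x_{k+1} w.r.t. theta
        lam *= (1.0 - theta * h)            # partial of x_{k+1} w.r.t. x_k
    return grad
```

A forward (finite-difference) gradient would instead need one extra solve per parameter, which is exactly what becomes prohibitive at genome scale.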
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries.
This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
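The final step of such a chain, estimating flood quantiles from a long continuous simulation, can be sketched as follows. The Gumbel-distributed synthetic annual maxima stand in for the weather-generator plus rainfall-runoff output; the distribution parameters and units are hypothetical:

```python
import math
import random

def flood_quantile(annual_maxima, return_period_years):
    """Empirical quantile from a long simulated series: the discharge
    exceeded on average once every T years (plotting-position estimate)."""
    x = sorted(annual_maxima)
    n = len(x)
    p_non_exceed = 1.0 - 1.0 / return_period_years
    rank = min(n - 1, max(0, int(round(p_non_exceed * (n + 1))) - 1))
    return x[rank]

# Stand-in for the weather-generator -> rainfall-runoff chain: 10,000
# years of synthetic annual maximum discharges drawn from a Gumbel
# distribution (location 100, scale 30; hypothetical units of m3/s).
rng = random.Random(1)
maxima = [100.0 - 30.0 * math.log(-math.log(rng.random())) for _ in range(10000)]
q100 = flood_quantile(maxima, 100)   # ~100-year flood estimate
```

With 10,000 simulated years, a 100-year quantile sits well inside the sample, which is the core advantage of continuous simulation over fitting a distribution to a few decades of observations.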
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large-scale part of turbulent flows by using a scale separation based on a filtering operation. The solution of the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model's ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS) and filtered DNS in the case of classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, a turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models" to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys
ERIC Educational Resources Information Center
Wetzel, Eunike; Xu, Xueli; von Davier, Matthias
2015-01-01
In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh
2011-01-01
Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...
NASA Astrophysics Data System (ADS)
Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.
2018-04-01
Realistic simulation of large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models.
Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of the G3 models, in which ENSO-ISM teleconnections are underestimated. On the other hand, many G2 models are able to represent most of the large-scale circulation over the Indo-Pacific region associated with El Niño and hence provide more realistic ENSO-ISM teleconnections. Therefore, this study advocates the importance of simulating large-scale circulation patterns during El Niño years in coupled models in order to capture El Niño-monsoon teleconnections well.
Connecting the large- and the small-scale magnetic fields of solar-like stars
NASA Astrophysics Data System (ADS)
Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.
2018-05-01
A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field, and the influence of stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters of flux emergence rate, differential rotation and meridional flow affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
Optimizing BAO measurements with non-linear transformations of the Lyman-α forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu
2015-04-01
We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^{−τ} and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small-scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using a Taylor expansion up to third order. We do, however, achieve a 33% improvement in signal to noise for the Gaussianized field in the transverse direction. We also explore an analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best in describing the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.
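The transformations studied here are monotonic maps of the flux field. As a toy illustration of one such map (not the paper's Taylor-expanded family), the sketch below rank-order Gaussianizes a simulated flux skewer: pixel ordering is preserved while the one-point distribution is forced to be Gaussian. All names and values are illustrative.

```python
import numpy as np

def gaussianize(field, rng):
    """Rank-order Gaussianization: replace each pixel by the Gaussian
    draw of the same rank. The map is monotonic, so pixel ordering is
    preserved while the one-point PDF becomes (empirically) Gaussian."""
    flat = field.ravel()
    order = np.argsort(flat)
    target = np.sort(rng.standard_normal(flat.size))
    out = np.empty_like(target)
    out[order] = target
    return out.reshape(field.shape)

rng = np.random.default_rng(0)
tau = np.exp(0.5 * rng.standard_normal(4096))  # toy optical depths, tau > 0
flux = np.exp(-tau)                            # F = e^(-tau), in (0, 1]
g = gaussianize(flux, rng)
```

Because the map is monotonic, large-scale correlations survive while the strongly skewed flux distribution is symmetrized, which is the spirit of searching transformation space for a better signal-to-noise trade-off.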
A simple model of intraseasonal oscillations
NASA Astrophysics Data System (ADS)
Fuchs, Željka; Raymond, David J.
2017-06-01
The intraseasonal oscillations, and in particular the MJO, have been and remain a "holy grail" of atmospheric science research. Why does the MJO propagate eastward? What makes it unstable? What sets the scaling for the MJO, i.e., why does it prefer long wavelengths or planetary wave numbers 1-3? What is the westward-moving component of the intraseasonal oscillation? Though linear WISHE has long been discounted as a plausible model for intraseasonal oscillations and the MJO, the version we have developed explains many of the observed features of those phenomena, in particular the preference for large zonal scale. In this model version, the moisture budget and the increase of precipitation with tropospheric humidity lead to a "moisture mode." The destabilization of the large-scale moisture mode occurs via WISHE only, and there is no need to postulate large-scale radiatively induced instability or negative effective gross moist stability. Our WISHE-moisture theory leads to a large-scale unstable eastward-propagating mode in the n = -1 case and a large-scale unstable westward-propagating mode in the n = 1 case. We suggest that the n = -1 case might be connected to the MJO and the westward-moving disturbance to the observed equatorial Rossby mode.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in Matlab to simulate agent-based models, run on clusters that provide the high-performance computing needed to execute the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed in a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of the MKE flux. The LES code is developed in a high-Reynolds-number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines are modeled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in large wind farms and to identify the length scales that contribute to the power. This information can be useful for the design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
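Premultiplied spectra of the kind analyzed here weight the energy spectrum by wavenumber so that peaks on a logarithmic wavenumber axis mark the scales carrying most of the variance. A minimal one-dimensional sketch of this generic diagnostic (not the paper's LES machinery):

```python
import numpy as np

def premultiplied_spectrum(signal, dx):
    """One-sided energy spectrum E(k) and its premultiplied form k*E(k)
    for a real 1-D signal on a uniform grid. By Parseval, sum(E) equals
    the signal variance, so peaks of k*E(k) vs log(k) mark the scales
    carrying most of the energy."""
    n = signal.size
    fhat = np.fft.rfft(signal - signal.mean())
    E = np.abs(fhat) ** 2 / n**2
    E[1:-1] *= 2.0                      # fold in negative frequencies (n even)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)
    return k, E, k * E

# Two sinusoids: a large-scale mode (wavelength 20) and a weaker
# small-scale mode (wavelength 2); domain length 100.
x = np.linspace(0.0, 100.0, 2048, endpoint=False)
sig = np.sin(2 * np.pi * x / 20.0) + 0.3 * np.sin(2 * np.pi * x / 2.0)
k, E, kE = premultiplied_spectrum(sig, dx=x[1] - x[0])
```

Even though the small-scale mode is present, the premultiplied spectrum peaks at the large-scale wavenumber, which is how such diagnostics expose the dominant energetic length scale.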
Large-scale structure in superfluid Chaplygin gas cosmology
NASA Astrophysics Data System (ADS)
Yang, Rongjia
2014-03-01
We investigate the growth of large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find that the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological constant model (ΛCDM) case in the large-a limit. We also find that there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.
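For reference, the two clustering statistics quoted here have standard definitions (generic results, not derived in the SCG paper): σ8 is the rms linear density fluctuation in top-hat spheres of radius 8 h⁻¹ Mpc, and S3 is the normalized skewness of the density contrast δ:

```latex
\sigma_8^2 = \int_0^{\infty} \frac{k^2\,\mathrm{d}k}{2\pi^2}\,P(k)\,W^2(kR)
\;\Big|_{R = 8\,h^{-1}\mathrm{Mpc}},
\qquad
W(x) = \frac{3\left(\sin x - x\cos x\right)}{x^{3}},
\qquad
S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}} .
```

In an Einstein-de Sitter background with Gaussian initial conditions, tree-level perturbation theory gives S3 = 34/7 for the unsmoothed field, so departures of S3 from this value probe modified growth of the kind discussed above.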
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
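The core of an excess-velocity closure is simple: bank retreat is taken proportional to the near-bank velocity in excess of a critical, no-erosion threshold, scaled by an empirical erodibility. The sketch below shows only that generic idea; the function name, parameter values, and units are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np

def bank_erosion_rate(u_nearbank, u_crit, erodibility):
    """Excess-velocity closure: retreat rate is proportional to the
    near-bank velocity above a critical threshold; below the threshold
    no erosion occurs. All names and values here are illustrative."""
    excess = np.maximum(u_nearbank - u_crit, 0.0)
    return erodibility * excess

u = np.array([0.3, 0.8, 1.5])   # hypothetical near-bank velocities, m/s
rate = bank_erosion_rate(u, u_crit=0.5, erodibility=0.02)  # m/yr per (m/s)
```

In the paper's framework the erodibility itself is parameterized empirically (tree cover and bank height as root-density proxies), which is what lets the same closure be applied over a whole region.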
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
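The central object in a graph-Laplacian-based coarse graining is easy to exhibit: for a linear homopolymer the connectivity graph is a simple chain, and the low-frequency eigenvectors of its Laplacian are the smooth, large-scale modes that a multiresolution hierarchy retains. Diffusion wavelets proper involve considerably more machinery; this sketch only shows the Laplacian and its spectrum.

```python
import numpy as np

def chain_laplacian(n):
    """Graph Laplacian of an n-bead linear chain (path graph):
    L = D - A, tridiagonal, with 2 on the interior diagonal and 1 at
    the two chain ends."""
    L = 2.0 * np.eye(n)
    L[0, 0] = L[-1, -1] = 1.0
    off = -np.ones(n - 1)
    L += np.diag(off, 1) + np.diag(off, -1)
    return L

n = 16
L = chain_laplacian(n)
vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order

# The smallest eigenvalue is 0 with a constant eigenvector (the global
# translation mode); the next eigenvectors are slowly varying "coarse"
# modes of the chain, the analogue of coarse-grained degrees of freedom.
coarse = vecs[:, 1:4]
```

Truncating to the first few columns of `vecs` is the crudest possible version of the compression idea; the paper's diffusion-wavelet construction instead builds localized, multiscale bases from powers of a diffusion operator on the same graph.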
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produce estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put another way, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address this challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
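The flavor of a conservative (flux-form) scheme, in which the discrete budget of the transported quantity closes exactly, can be shown in one dimension. This is a generic first-order upwind sketch for a passive tracer on a periodic grid, not the paper's moist-entropy formulation:

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One flux-form upwind step for dq/dt + d(u q)/dx = 0 with constant
    u > 0 on a periodic grid. Because the update is a difference of
    fluxes, the domain sum of q is conserved to round-off: there is no
    accumulative error in the discrete budget."""
    F = u * q                              # upwind flux leaving each cell
    return q - (dt / dx) * (F - np.roll(F, 1))

nx, dx, u = 64, 1.0, 1.0
dt = 0.5 * dx / u                          # CFL-stable time step
q = np.exp(-0.5 * ((np.arange(nx) - 20.0) / 4.0) ** 2)  # initial blob
total0 = q.sum()
for _ in range(200):
    q = upwind_step(q, u, dx, dt)
```

The scheme is diffusive, so the blob spreads, but the integral it conserves is exactly the kind of discrete constraint the abstract describes (applied there to moist entropy rather than a tracer, and complicated by sound waves).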
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and an obvious difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
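The first step described here, a sub-block 2-D STFT, amounts to a windowed 2-D FFT of each non-overlapping tile. A minimal sketch (block size, window choice, and the synthetic "sea"/"ship" data are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def block_stft_spectra(image, block=32):
    """Split the image into non-overlapping sub-blocks and take a 2-D
    windowed FFT of each: a two-dimensional short-time Fourier step.
    Block size and the Hanning window are illustrative choices."""
    h, w = image.shape
    win = np.outer(np.hanning(block), np.hanning(block))
    spectra = {}
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = image[i:i + block, j:j + block] * win
            spectra[(i, j)] = np.abs(np.fft.fftshift(np.fft.fft2(tile)))
    return spectra

rng = np.random.default_rng(1)
sea = rng.normal(0.2, 0.05, (128, 128))   # homogeneous "sea" texture
sea[40:48, 40:48] += 1.0                  # bright "ship"-like patch
spec = block_stft_spectra(sea)
```

A sub-block containing the bright target carries visibly more spectral energy than a pure-background block, which is the contrast a statistical model of the valid frequency points can exploit for detection.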
NASA Astrophysics Data System (ADS)
Wagener, T.
2017-12-01
Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can arise because the future is not like the past (e.g. due to climate change), because the historical data are unreliable (e.g. imperfectly recorded from proxies or missing), or because data are scarce (either because measurements are not available at the right scale or because there is no observation network at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment-scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
An Illustrative Guide to the Minerva Framework
NASA Astrophysics Data System (ADS)
Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration
2017-10-01
Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
NASA Astrophysics Data System (ADS)
Draper, Martin; Usera, Gabriel
2015-04-01
The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the potential law, has been proved also in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, 2 cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel.
"Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of neutrally stratified atmospheric boundary layers over heterogeneous terrain". Water Resources Research, 2006, 42, W01409 (18 p). [4] J. Kleissl, M. Parlange, C. Meneveau. "Field experimental study of dynamic Smagorinsky models in the atmospheric surface layer". Journal of the Atmospheric Sciences, 2004, 61, 2296-2307. [5] E. Bou-Zeid, N. Vercauteren, M.B. Parlange, C. Meneveau. "Scale dependence of subgrid-scale model coefficients: An a priori study". Physics of Fluids, 2008, 20, 115106. [6] G. Kirkil, J. Mirocha, E. Bou-Zeid, F.K. Chow, B. Kosovic. "Implementation and evaluation of dynamic subfilter-scale stress models for large-eddy simulation using WRF". Monthly Weather Review, 2012, 140, 266-284. [7] S. Radhakrishnan, U. Piomelli. "Large-eddy simulation of oscillating boundary layers: model comparison and validation". Journal of Geophysical Research, 2008, 113, C02022. [8] G. Usera, A. Vernet, J.A. Ferré. "A parallel block-structured finite volume method for flows in complex geometry with sliding interfaces". Flow, Turbulence and Combustion, 2008, 81, 471-495. [9] Y-T. Wu, F. Porté-Agel. "Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations". Boundary-Layer Meteorology, 2011, 138, 345-366.
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
Derivation of large-scale cellular regulatory networks from biological time series data.
de Bivort, Benjamin L
2010-01-01
Pharmacological agents and other perturbants of cellular homeostasis appear to nearly universally affect the activity of many genes, proteins, and signaling pathways. While this is due in part to nonspecificity of action of the drug or cellular stress, the large-scale self-regulatory behavior of the cell may also be responsible, as this typically means that when a cell switches states, dozens or hundreds of genes will respond in concert. If many genes act collectively in the cell during state transitions, rather than every gene acting independently, models of the cell can be created that are comprehensive of the action of all genes, using existing data, provided that the functional units in the model are collections of genes. Techniques to develop these large-scale cellular-level models are provided in detail, along with methods of analyzing them, and a brief summary of major conclusions about large-scale cellular networks to date.
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructures) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams.
Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
Gravitational waves during inflation from a 5D large-scale repulsive gravity model
NASA Astrophysics Data System (ADS)
Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio
2012-10-01
We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of a large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn. The second simulation processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
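The key idea behind the parallelizable scheme, as described above, is that agent interactions are local to a subregion, so subregions can be processed independently. A minimal toy sketch, assuming a simple SIS-style infection step and made-up regions (this is an illustration of region-wise processing, not the authors' actual SRA):

```python
import random

# Toy infection step processed region-by-region. Interactions are local to
# each subregion, so each call to step_region is independent and could be
# handed to a separate worker process. Not the authors' actual SRA.

def step_region(agents, beta, rng):
    """One SIS-style infection step among the agents of one subregion."""
    infected = [a for a in agents if a["state"] == "I"]
    for a in agents:
        if a["state"] == "S" and infected and \
           rng.random() < beta * len(infected) / len(agents):
            a["state"] = "I"
    return agents

def step_population(regions, beta, seed=0):
    # Each region gets its own RNG and touches only its own agents, so the
    # region updates are independent and trivially parallelizable.
    return [step_region(list(region), beta, random.Random(seed + k))
            for k, region in enumerate(regions)]

regions = [
    [{"state": "I"}, {"state": "S"}, {"state": "S"}],
    [{"state": "S"}, {"state": "S"}],  # no infected agent: stays all-S
]
result = step_population(regions, beta=0.9)
```

Replacing the list comprehension in `step_population` with, say, a process pool map is what makes the region decomposition pay off at scale.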
HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Sokoloff, D. D.; Zhang, H.
It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for a magnetic Reynolds number of 10^6. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers up to 10^6.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4·10^4, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
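The static Smagorinsky closure referred to above sets the subgrid eddy viscosity from the resolved strain rate, ν_t = (c_s Δ)² |S| with |S| = √(2 S_ij S_ij). A minimal single-cell evaluation (illustrative only; an actual LES applies this field-wide at every time step):

```python
import math

# Smagorinsky eddy viscosity for one grid cell:
# nu_t = (c_s * Delta)^2 * |S|, where |S| = sqrt(2 * S_ij * S_ij).

def smagorinsky_nu_t(S, delta, cs=0.1):
    """S: 3x3 resolved strain-rate tensor [1/s], delta: filter width [m]."""
    s_mag = math.sqrt(2.0 * sum(S[i][j] ** 2 for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 10 1/s gives S_12 = S_21 = 5, so |S| = sqrt(2*50) = 10.
S = [[0.0, 5.0, 0.0],
     [5.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(S, delta=0.01)  # (0.1 * 0.01)^2 * 10 = 1e-5 m^2/s
```

An "over-damped" run in the sense of the abstract simply raises `cs` well above 0.1, increasing ν_t quadratically.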
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
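The bilinear interpolation proposed above for head values can be sketched directly. The coordinates and head values below are made-up illustrative numbers, not data from the paper:

```python
# Bilinear interpolation of head from four neighboring parent-model cell
# centers (corners of a rectangle) to a point on the child-model perimeter.

def bilinear_head(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """h00 = h(x0,y0), h10 = h(x1,y0), h01 = h(x0,y1), h11 = h(x1,y1)."""
    tx = (x - x0) / (x1 - x0)   # fractional position between x0 and x1
    ty = (y - y0) / (y1 - y0)   # fractional position between y0 and y1
    return (h00 * (1 - tx) * (1 - ty) + h10 * tx * (1 - ty)
            + h01 * (1 - tx) * ty + h11 * tx * ty)

# At the midpoint of the four cell centers this reduces to their average.
h = bilinear_head(50.0, 50.0, 0.0, 100.0, 0.0, 100.0,
                  10.0, 12.0, 14.0, 16.0)  # -> 13.0
```

The paper's linear interpolation of flow components is the one-dimensional special case of the same formula along a row or column.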
Measured acoustic characteristics of ducted supersonic jets at different model scales
NASA Technical Reports Server (NTRS)
Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.
1993-01-01
A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range, which may be used to demonstrate the validity of the scaling methods employed in the investigation of this phenomenon.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models, including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmosphere Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to deficiencies of the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
NASA Astrophysics Data System (ADS)
Klingbeil, Knut; Lemarié, Florian; Debreu, Laurent; Burchard, Hans
2018-05-01
The state of the art of the numerics of hydrostatic structured-grid coastal ocean models is reviewed here. First, some fundamental differences in the hydrodynamics of the coastal ocean, such as the large surface elevation variation compared to the mean water depth, are contrasted against large-scale ocean dynamics. Then the hydrodynamic equations as they are used in coastal ocean models as well as in large-scale ocean models are presented, including parameterisations for turbulent transports. As steps towards discretisation, coordinate transformations and spatial discretisations based on a finite-volume approach are discussed, with focus on the specific requirements of coastal ocean models. As in large-scale ocean models, splitting of internal and external modes is essential for coastal ocean models as well, but specific care is needed when drying and flooding of intertidal flats is included. As one obvious characteristic of coastal ocean models, open boundaries occur and need to be treated in a way that correct model forcing from outside is transmitted to the model domain without reflecting waves from the inside. New developments in two-way nesting are also presented. Single processes such as internal inertia-gravity waves, advection and turbulence closure models are discussed with focus on the coastal scales. An overview of existing hydrostatic structured-grid coastal ocean models is given, including their extensions towards non-hydrostatic models. Finally, an outlook on future perspectives is given.
NASA Astrophysics Data System (ADS)
Bronstert, Axel; Heistermann, Maik; Francke, Till
2017-04-01
Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to be applied depends on the overall question under study; it is therefore not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worthwhile to examine the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, which raises the question of how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. We summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales as follows.
Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research based on controlled experiments (e.g. infiltration, root water uptake, chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principles, partly of PDE type, are available for several processes (though not for all), because measurement and modelling scales are compatible; (-) the spatial model domain is hardly representative of larger spatial entities, including regions for which water resources management decisions are to be taken, and straightforward upsizing is also limited by data availability and computational requirements.
Meso scale (e.g. extent of a small to large catchment or region): (+) the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions; (+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of PDE-type equations, are available for most hydrological processes, which enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions and landscapes; (-) process scale, measurement scale and modelling scale differ from each other for a number of processes, such as runoff generation; (-) process formulations (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain, and upscaling procedures for this purpose are not readily and generally available.
Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth, which enables an attractive global display of model results; (+) model results may be technically interchangeable, or at least comparable, with results from other global models such as global climate models; (-) process scale, measurement scale and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken; (-) both state-condition and boundary-flux data are hardly available for the whole model domain, and water management data and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability; (-) since process formulation, and hence modelling reliability, at this scale is very limited, such models can hardly show explanatory skill or prognostic power; (-) since both the entire model domain and its spatial sub-units cover large areas, model results represent values averaged over at least the sub-unit's extent, and in many cases the applied time scale implies long-term averaging in time as well.
We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. Many results of current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate: many hydrological processes are non-linear, including threshold-type behaviour, and such features cannot be captured by such large-scale entities. The model results can therefore be of little or no use for water resources decisions, and may even be misleading in public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries", in which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major problems regarding sustainability on Earth, e.g. water scarcity, manifest not on a global but on a regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. The challenge is therefore to derive models and observation programmes for regional scales. Where a global display is desired, future efforts should be directed towards building a global picture from a mosaic of regionally sound assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed: for which purposes can models at this (global) scale be used?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects of the loss of molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room.
Adding furniture and occupants can increase this spatial variation by another factor of 3.
Yang, X I A; Meneveau, C
2017-04-13
In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that typically cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it to the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model, which accounts for the drag of unresolved roughness, is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale roughness elements on small-scale ones. Taking into account the sheltering effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide predictions that agree well with roughness parameters determined from LES. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Black, R. X.
2017-12-01
We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents based on coarse-scale observations. Likely applications of this approach include generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
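The nudging term described above, du/dt = f(u) + μ(I_h(u_obs) − I_h(u)), can be illustrated with a deliberately tiny toy system. The dynamics, grid and numbers below are made up for illustration (the study applies the idea to HYDRUS soil-moisture fields, with interpolants rather than a plain mean):

```python
# Toy continuous data assimilation: a fine-grid state is relaxed toward a
# coarse observation via a nudging term proportional to the coarse misfit.

def cda_run(u0, u_obs_coarse, mu=5.0, dt=0.01, steps=1000):
    """u: fine-grid state; the coarse operator I_h is taken as the mean."""
    u = list(u0)
    for _ in range(steps):
        coarse_misfit = u_obs_coarse - sum(u) / len(u)  # I_h(obs) - I_h(u)
        # Toy local dynamics (-0.1 * u) plus the nudging term mu * misfit.
        u = [ui + dt * (-0.1 * ui + mu * coarse_misfit) for ui in u]
    return u

u = cda_run(u0=[0.10, 0.30, 0.20], u_obs_coarse=0.25)
mean_u = sum(u) / len(u)  # nudged close to the coarse observation 0.25
```

The fine-scale structure (differences between the three values) evolves under the model dynamics, while the domain mean is pinned near the coarse observation, which is the essence of the downscaling approach.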
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary S.
2017-09-01
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics for different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
2017-01-01
Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167576
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of schools.
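The aggregation-of-variables idea can be sketched in miniature: fast migration between the MPA and the fishing area equilibrates quickly, so the slow dynamics of the total stock can be written using the migration-equilibrium proportions. The logistic growth, harvest term and all parameter values below are illustrative assumptions, not the paper's five-equation model:

```python
# Aggregated slow dynamics of total fish density N. Fast migration between
# the MPA (site 1, unfished) and the fishing area (site 2) fixes the
# fraction v2 of fish exposed to harvesting.

def aggregated_rhs(N, r=1.0, K=1.0, q=0.5, E=0.4, m12=10.0, m21=30.0):
    """dN/dt: logistic growth minus harvest on the exposed fraction v2.
    m12, m21: fast migration rates MPA->fishing area and back."""
    v2 = m12 / (m12 + m21)            # fast-equilibrium fraction outside MPA
    return r * N * (1 - N / K) - q * E * v2 * N

def equilibrium(rhs, N=0.5, dt=0.01, steps=5000):
    # Forward-Euler integration until the slow dynamics settle.
    for _ in range(steps):
        N += dt * rhs(N)
    return N

N_star = equilibrium(aggregated_rhs)  # positive: a sustainable fishery here
```

With these numbers the equilibrium is N* = K(1 − qEv2/r) = 0.95, illustrating how protecting a large fraction of the stock (small v2) keeps the equilibrium close to carrying capacity.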
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
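The dipole anisotropy discussed above is, to first order, a Doppler effect of our peculiar motion: ΔT/T = (v/c) cosθ. A back-of-envelope evaluation using the present-day measured values (v ≈ 370 km/s, T_CMB ≈ 2.725 K, which postdate this 1981 abstract and are used here only to illustrate the magnitude):

```python
# First-order Doppler dipole of the CMB: Delta T / T = (v / c) * cos(theta).

C = 2.998e5     # speed of light, km/s
V_SUN = 370.0   # solar peculiar velocity with respect to the CMB, km/s
T_CMB = 2.725   # mean CMB temperature, K

dipole_amp = V_SUN / C                 # dimensionless Delta T / T, ~1.2e-3
dipole_mK = dipole_amp * T_CMB * 1e3   # peak dipole amplitude, ~3.4 mK
```

Any dipole generated by large-scale matter inhomogeneities adds to this kinematic contribution, which is why the measured dipole constrains the density-fluctuation spectrum on scales above ~100 Mpc.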
The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscal...
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. 
In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
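The TMR extraction step described above can be sketched in a few lines: pull a window out of the regional head solution and treat its outer ring as specified-head boundaries derived from the regional results. The arrays and function names below are illustrative only, not part of MODFLOW or the Lower West Coast model.

```python
import numpy as np

def extract_tmr_submodel(regional_head, row_slice, col_slice):
    """Extract a local submodel from a regional head solution (TMR sketch).

    The outer ring of the extracted window is flagged as specified-head
    boundary cells whose values come from the regional model results.
    """
    local = regional_head[row_slice, col_slice].copy()
    boundary = np.zeros_like(local, dtype=bool)
    boundary[0, :] = boundary[-1, :] = True
    boundary[:, 0] = boundary[:, -1] = True
    return local, boundary

# Hypothetical regional head field (m) on a coarse 10 x 10 grid
regional = np.linspace(10.0, 5.0, 100).reshape(10, 10)
local_head, bc_mask = extract_tmr_submodel(regional, slice(2, 7), slice(3, 8))
```

A real TMR workflow would also interpolate the regional heads in time and refine the grid inside the window; this sketch shows only the subsetting and boundary bookkeeping.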
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
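As a toy illustration of the surrogate idea, and not the authors' network or their viscoelastic code, the sketch below fits a small random-feature network to a hypothetical one-mode relaxation response and then evaluates the cheap fit in place of the "expensive" function.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_viscoelastic(t):
    # Stand-in for a costly viscoelastic calculation
    # (hypothetical single-mode damped relaxation).
    return np.exp(-t / 2.0) * np.cos(t)

# Training data sampled from the "expensive" model
t_train = rng.uniform(0.0, 5.0, 500)
y_train = expensive_viscoelastic(t_train)

# Random-feature network: fixed random hidden layer + linear readout
W = rng.normal(size=(1, 64))
b = rng.normal(size=64)
H = np.tanh(t_train[:, None] * W + b)
coef, *_ = np.linalg.lstsq(H, y_train, rcond=None)

def surrogate(t):
    """Cheap approximation used in place of the expensive model."""
    return np.tanh(np.asarray(t)[:, None] * W + b) @ coef

t_test = np.linspace(0.5, 4.5, 50)
err = float(np.max(np.abs(surrogate(t_test) - expensive_viscoelastic(t_test))))
```

In the paper's setting the input is a high-dimensional fault geometry and time, and the network is a deep ANN; the principle of trading an expensive solve for a fitted evaluation is the same.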
Impact of entrainment on cloud droplet spectra: theory, observations, and modeling
NASA Astrophysics Data System (ADS)
Grabowski, W.
2016-12-01
Understanding the impact of entrainment and mixing on the microphysical properties of warm boundary layer clouds is an important aspect of the representation of such clouds in large-scale models of weather and climate. Entrainment leads to a reduction of the liquid water content in agreement with fundamental thermodynamics, but its impact on the droplet spectrum is difficult to quantify in observations and modeling. In in-situ (e.g., aircraft) observations, it is impossible to follow air parcels and observe the processes that change the droplet spectrum in different regions of a cloud. For similar reasons, traditional modeling methodologies (e.g., Eulerian large eddy simulation) are not useful either. Moreover, both observations and modeling can resolve only a relatively narrow range of spatial scales. Theory, typically focusing on differences between the idealized concepts of homogeneous and inhomogeneous mixing, is also of limited use for the multiscale turbulent mixing between a cloud and its environment. This presentation will illustrate the above points and argue that Lagrangian large-eddy simulation with an appropriate subgrid-scale scheme may provide key insights and eventually lead to novel parameterizations for large-scale models.
NASA Astrophysics Data System (ADS)
Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.
2013-04-01
The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. 
The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.
NASA Technical Reports Server (NTRS)
Schlundt, D. W.
1976-01-01
The installed performance degradation of a swivel nozzle thrust deflector system observed at increased vectoring angles during a large-scale test program was investigated and improved. Small-scale models were used to generate performance data for analyzing selected swivel nozzle configurations. A single-swivel nozzle design model with five different nozzle configurations and a twin-swivel nozzle design model, scaled to 0.15 of the size of the large-scale test hardware, were statically tested at low exhaust pressure ratios of 1.4, 1.3, 1.2, and 1.1 and vectored at four nozzle positions, from 0 deg (cruise) through the 90 deg vertical position used for the VTOL mode.
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of the large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation run in bioreactors using one-sided pH control, together with its satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks, indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors did. We detail how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify outliers and to determine the discriminatory variables responsible for performance differences between scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000-L and shake flask runs, and the differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of the large scale. © 2015 American Institute of Chemical Engineers.
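A minimal sketch of the kind of PCA-based comparison described above, using synthetic data in which a hypothetical pCO2 shift separates the 3-L runs from the 15,000-L and shake-flask runs (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process table: rows = runs, columns = attributes
# (e.g., titer, pCO2, pH drift).  Large-scale and shake-flask runs come
# from one regime; 3-L runs from a shifted regime (higher pCO2).
large = rng.normal([1.0, 5.0, 0.1], 0.1, size=(6, 3))
flask = rng.normal([1.0, 5.0, 0.1], 0.1, size=(6, 3))
three_l = rng.normal([1.0, 9.0, 0.1], 0.1, size=(6, 3))

X = np.vstack([large, flask, three_l])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns

# PCA via SVD: principal-component scores are U * S
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U * S

# Distance of each group's PC1 centroid from the large-scale centroid
pc1 = scores[:, 0]
gap_flask = abs(pc1[:6].mean() - pc1[6:12].mean())
gap_3l = abs(pc1[:6].mean() - pc1[12:].mean())
```

With the pCO2 shift dominating the variance, the first component separates the 3-L runs from the other two groups, mirroring the article's finding that shake flasks sat closer to the 15,000-L runs.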
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because observed data sets are large and model parameters numerous, conventional inverse methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. These computational techniques significantly improve the efficiency of our new inverse modeling algorithm. We apply the method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
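The recycling idea can be sketched as follows: build one Krylov basis for the Gauss-Newton system, then solve only a small projected problem for each damping parameter. This is an illustrative reconstruction, not the MADS/Julia implementation.

```python
import numpy as np

def lm_steps_with_recycled_krylov(J, r, dampings, k=10):
    """One LM iteration: build a Krylov basis once, reuse it for every
    damping parameter (sketch of the recycling idea)."""
    g = J.T @ r                      # gradient
    A = J.T @ J                      # Gauss-Newton Hessian (dense here)
    # Orthonormal basis for span{g, A g, A^2 g, ...} via Gram-Schmidt
    Q = np.zeros((J.shape[1], k))
    Q[:, 0] = g / np.linalg.norm(g)
    for i in range(1, k):
        v = A @ Q[:, i - 1]
        v -= Q[:, :i] @ (Q[:, :i].T @ v)
        Q[:, i] = v / np.linalg.norm(v)
    # Small (k x k) projected problem, solved cheaply per damping value
    Ak, gk = Q.T @ A @ Q, Q.T @ g
    steps = []
    for lam in dampings:
        y = np.linalg.solve(Ak + lam * np.eye(k), gk)
        steps.append(-Q @ y)         # lift back to the full space
    return steps

rng = np.random.default_rng(2)
J = rng.normal(size=(200, 50))       # toy Jacobian: 200 obs, 50 parameters
r = rng.normal(size=200)             # toy residual vector
steps = lm_steps_with_recycled_krylov(J, r, dampings=[1e-2, 1e-1, 1.0])
```

Because the basis is reused, each additional damping parameter costs only a k x k solve instead of a full dense solve, which is the source of the reported speed-ups.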
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products along with ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN), and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH, and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
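A minimal sketch of gauge-based verification at two temporal scales, with synthetic daily precipitation standing in for the gauge record and the gridded product:

```python
import numpy as np

def verify(product, gauge):
    """RMSE and Pearson correlation of a gridded product vs. a gauge series."""
    rmse = float(np.sqrt(np.mean((product - gauge) ** 2)))
    corr = float(np.corrcoef(product, gauge)[0, 1])
    return rmse, corr

rng = np.random.default_rng(3)
days = 365
gauge = rng.gamma(0.4, 5.0, size=days)            # synthetic daily precip (mm)
product_daily = gauge + rng.normal(0, 2.0, days)  # noisy gridded estimate

# Aggregate to monthly totals (first 360 days, 12 x 30 for simplicity)
monthly = lambda x: x[:360].reshape(12, 30).sum(axis=1)
rmse_d, corr_d = verify(product_daily, gauge)
rmse_m, corr_m = verify(monthly(product_daily), monthly(gauge))
```

Real verification of products like CaPA or ANUSPLIN adds spatial matching of grid cells to gauges and categorical scores (e.g., for precipitation occurrence), but the multi-scale comparison structure is the same.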
The impact of land-surface wetness heterogeneity on mesoscale heat fluxes
NASA Technical Reports Server (NTRS)
Chen, Fei; Avissar, Roni
1994-01-01
Vertical heat fluxes associated with mesoscale circulations generated by land-surface wetness discontinuities are often stronger than turbulent fluxes, especially in the upper part of the atmospheric planetary boundary layer. As a result, they contribute significantly to the subgrid-scale fluxes in large-scale atmospheric models. Yet they are not considered in these models. To provide some insights into the possible parameterization of these fluxes in large-scale models, a state-of-the-art mesoscale numerical model was used to investigate the relationships between mesoscale heat fluxes and atmospheric and land-surface characteristics that play a key role in the generation of mesoscale circulations. The distribution of land-surface wetness, the wavenumber and the wavelength of the land-surface discontinuities, and the large-scale wind speed have a significant impact on the mesoscale heat fluxes. Empirical functions were derived to characterize the relationships between mesoscale heat fluxes and the spatial distribution of land-surface wetness. The strongest mesoscale heat fluxes were obtained for a wavelength of forcing corresponding approximately to the local Rossby deformation radius. The mesoscale heat fluxes are weakened by large-scale background winds but remain significant even with moderate winds.
Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
The spatial scale, runtime speed, and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
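The reverse-computation idea is simple to illustrate: each event applies an invertible update and saves only the minimal increment needed to undo it on rollback. The epidemic "cell" below is a schematic stand-in, not the µsik model.

```python
# Sketch of rollback support via reverse computation: forward events are
# invertible, so only a tiny amount of state (the increment) is saved.

class Cell:
    def __init__(self, infected, susceptible):
        self.infected = infected
        self.susceptible = susceptible

def infect_event(cell, k):
    """Forward event: k susceptibles become infected (invertible update)."""
    cell.susceptible -= k
    cell.infected += k
    return k                      # incremental saved state: just the increment

def reverse_infect_event(cell, saved_k):
    """Reverse computation: undo the forward event exactly."""
    cell.infected -= saved_k
    cell.susceptible += saved_k

c = Cell(infected=5, susceptible=100)
saved = infect_event(c, 3)
reverse_infect_event(c, saved)    # rollback restores the original state
```

In an optimistic simulator, such reverse handlers let a logical process roll back past speculative events without checkpointing the full state, which is what keeps the memory cost of rollback small at scale.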
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Müller, Christoph
2015-01-01
Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014).
Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
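A schematic of the ensemble-averaged dynamic procedure, with synthetic stand-ins for the resolved and modeled terms (in a real LES the L and M fields come from the filtered equations; the linear relation and numbers here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical resolved-stress (L) and model-term (M) samples for an
# ensemble of 16 LES realizations of the same flow, at 128 grid points.
n_ens, n_pts = 16, 128
M = rng.normal(1.0, 0.2, size=(n_ens, n_pts))
true_C = 0.17
L = true_C * M + rng.normal(0.0, 0.02, size=(n_ens, n_pts))

# Ensemble-averaged dynamic coefficient: least-squares fit of L ~ C M,
# averaged over realizations only, with no spatial averaging, so the
# coefficient remains local in space.
C_local = (L * M).mean(axis=0) / (M * M).mean(axis=0)
```

Averaging over realizations instead of over homogeneous directions is what lets the procedure produce a smooth local coefficient even in fully inhomogeneous flows.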
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clauss, D.B.
The analyses used to predict the behavior of a 1:8-scale model of a steel LWR containment building to static overpressurization are described and results are presented. Finite strain, large displacement, and nonlinear material properties were accounted for using finite element methods. Three-dimensional models were needed to analyze the penetrations, which included operable equipment hatches, personnel lock representations, and a constrained pipe. It was concluded that the scale model would fail due to leakage caused by large deformations of the equipment hatch sleeves. 13 refs., 34 figs., 1 tab.
2009-06-01
...simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
Belt conveyors are among the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. Studying dynamic properties is essential for improving efficiency and productivity and for guaranteeing safe, reliable, and stable running. The dynamic research on and applications of large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling, and simulation of the main components and the whole system, and on nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.
Field-aligned currents and large-scale magnetospheric electric fields
NASA Technical Reports Server (NTRS)
Dangelo, N.
1979-01-01
The existence of field-aligned currents (FAC) at northern and southern high latitudes was confirmed by a number of observations, most clearly by experiments on the TRIAD and ISIS 2 satellites. The high-latitude FAC system is used to relate what is presently known about the large-scale pattern of high-latitude ionospheric electric fields and their relation to solar wind parameters. Recently a simplified model was presented for polar cap electric fields. The model is of considerable help in visualizing the large-scale features of FAC systems. A summary of the FAC observations is given. The simplified model is used to visualize how the FAC systems are driven by their generators.
Fermi rules out the IC/CMB model for the Large-Scale Jet X-ray emission of 3C 273
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, E. T.
2014-01-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off cosmic microwave background (IC/CMB) photons and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large-scale jet in terms of jet speed and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the IC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the gamma-ray emission predicted by the two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected under the IC/CMB interpretation of the X-rays, obtained by extrapolating the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to delta < 5.
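The logic of the diagnostic can be captured in a few lines: extrapolate the X-ray power law into the Fermi band and check it against the measured upper limit. All numbers below are illustrative placeholders, not the published values for 3C 273.

```python
def extrapolate_flux(f0, e0, alpha, e):
    """Power-law flux extrapolation: f(E) = f0 * (E/e0)**(-alpha)."""
    return f0 * (e / e0) ** (-alpha)

# Illustrative (not published) numbers: spectral index and X-ray anchor
alpha = 0.7
f_xray = 1.0e-6              # arbitrary flux units at the X-ray anchor energy
e_xray, e_gamma = 1.0, 1.0e6  # anchor and Fermi-band energies (same units)

predicted = extrapolate_flux(f_xray, e_xray, alpha, e_gamma)
upper_limit = 1.0e-11        # hypothetical Fermi upper limit, same flux units

# If the IC/CMB extrapolation exceeds the observed upper limit,
# the IC/CMB interpretation is inconsistent with the data.
ic_cmb_ruled_out = predicted > upper_limit
```

The real test folds the extrapolated spectrum through the Fermi-LAT response over the appropriate band, but the comparison reduces to this predicted-flux-versus-limit check.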
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after the univariate tests. Model-based methods are also impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a more robust way to screen SNPs at the scale of thousands. For larger-scale data, however, such as Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required for whole-genome association analysis in the presence of genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling.
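A minimal sketch of boosting-based SNP screening, using AdaBoost with single-SNP decision stumps on synthetic genotypes (the data, stump form, and round count are illustrative; the paper's method may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical genotype matrix: 200 individuals x 50 SNPs (0/1 minor-allele
# carrier status).  SNP 7 is causal; the rest are noise.
n, p = 200, 50
X = rng.integers(0, 2, size=(n, p))
y = np.where(X[:, 7] == 1, 1, -1)
y = np.where(rng.random(n) < 0.1, -y, y)      # 10% label noise

def boost_screen(X, y, rounds=20):
    """AdaBoost with one-SNP stumps; returns a per-SNP importance score.

    Each stump predicts case (+1) for carriers and control (-1) otherwise
    (one polarity only, for brevity).
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    importance = np.zeros(p)
    for _ in range(rounds):
        errs = np.array([w[(2 * X[:, j] - 1) != y].sum() for j in range(p)])
        j = int(np.argmin(errs))              # best stump this round
        err = max(errs[j], 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        importance[j] += alpha
        pred = 2 * X[:, j] - 1
        w *= np.exp(-alpha * y * pred)        # reweight misclassified samples
        w /= w.sum()
    return importance

imp = boost_screen(X, y)
```

Ranking SNPs by accumulated stump weight gives a fast screen: SNPs never selected by any round can be dropped before more expensive multi-locus modeling.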
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
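A toy example of the disaggregation step that real downscaling techniques perform far more carefully: refine a coarse grid to a finer one and bias-correct it against local observations (all values invented):

```python
import numpy as np

def disaggregate(coarse, factor):
    """Piecewise-constant refinement of a coarse field to a finer grid
    (a minimal stand-in for statistical downscaling)."""
    # Repeat each coarse cell onto a factor x factor block of fine cells
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def bias_correct(fine, obs_mean):
    """Shift the downscaled field so its mean matches local observations."""
    return fine + (obs_mean - fine.mean())

coarse_temp = np.array([[14.0, 15.0], [16.0, 17.0]])   # deg C, coarse GCM cells
fine_temp = bias_correct(disaggregate(coarse_temp, factor=4), obs_mean=15.9)
```

As the abstract notes, no amount of such post-processing can fix an inaccurate regional forecast; disaggregation only changes the resolution, not the skill, of the underlying climate model output.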
Propulsion simulator for magnetically-suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.
1991-01-01
The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquified carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically-tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of this data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other.
Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
NASA Astrophysics Data System (ADS)
Schumann, G.
2016-12-01
Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at present only feasible with either operational aircraft flights or satellite imagery. Having model simulations of floodplain inundation available to complement the remote sensing data is of course highly desirable, both for event re-analysis and for forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large-scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large-scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this depends largely on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the favorable case where model uncertainty is high, landscape topology is complex (i.e., an urbanized coastal area), and satellite flood maps are available (from SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. More often, however, model uncertainty is low and spatially contiguous flooding can be mapped easily enough from satellites, as in large rural inland river floodplains, so satellites add little value. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of simulated river flow length for which satellite flood maps existed.
The next step of this project is to employ a technique termed the "targeted observation" approach: an assimilation-based procedure that quantifies the impact observations have on model predictions, both at the local scale and along the entire river system, when assimilated with the model at specific "overpass" locations.
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
Large-scale shell-model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed-symmetry states in chains of nuclei in the vicinity of N=82. The resulting spectra and transitions are in agreement with experiment and consistent with the scheme provided by the interacting boson model.
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out, one with the new scheme and one with the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging of the large-scale forcings toward the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water contents) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention-time spacing based on a power-law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation, and scale. Power-law models exhibit a threshold region: below a critical saturation value, no peak maxima are lost to peak fusion as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power-law model loses peaks nearly independently of the scale and dimension of the model. The physicochemical meaning of the power-law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. A small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times, a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant form of the SOT yields very simple relationships between the fraction of peak maxima and the saturation, peak width, and number of theoretical plates. These equations provide much insight into separations that follow power-law statistics.
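The random (Poisson) limit referred to above can be checked with a short Monte Carlo sketch (illustrative, not the paper's derivation): for m components placed uniformly on a separation axis of length X, with components closer than x0 fusing into one maximum, the classic SOT result is that the expected number of visible maxima is approximately m·exp(-alpha), where alpha = m·x0/X is the saturation.

```python
import numpy as np

def count_maxima(m, X, x0, rng):
    # place m component retention times at random, fuse any pair closer
    # than x0, and count the resulting clusters (visible peak maxima)
    t = np.sort(rng.uniform(0.0, X, size=m))
    gaps = np.diff(t)
    return 1 + np.count_nonzero(gaps > x0)

rng = np.random.default_rng(1)
m, X, x0 = 200, 1.0, 0.002              # saturation alpha = m * x0 / X = 0.4
alpha = m * x0 / X
est = np.mean([count_maxima(m, X, x0, rng) for _ in range(2000)])
print(est, m * np.exp(-alpha))          # simulated vs. theoretical peak count
```

The simulated count converges on m·exp(-alpha) as m grows, which is the baseline the power-law SOT is compared against at moderate saturation.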
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed at reducing the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space and then, by analyzing the effective single-particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, identifies reduced model spaces containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given, and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd tin isotopes from A = 101 to 107 are also reported.
Large scale structure formation of the normal branch in the DGP brane world model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Yong-Seon
2008-06-15
In this paper, we study the large-scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze, and Porrati brane-world model) by applying the scaling method developed by Sawicki, Song, and Hu for solving the coupled perturbed equations of motion on-brane and off-brane. There is a detectable departure of the perturbed gravitational potential from the cold dark matter model with vacuum energy, even at the minimal deviation of the effective equation of state w_eff below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch of the DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
NASA Astrophysics Data System (ADS)
Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.
2009-10-01
One of the main concerns in regional climate modeling is the extent to which limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the period 1961-2000. Two sets of experiments have been completed, with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather-regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence, and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread has two sources: (1) we are dealing with different models, and (2) each RCM produces its own internal variability. As far as the day-to-day weather-regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25- and 50-km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation.
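A minimal sketch of the regime-classification step behind such frequency-of-occurrence statistics, assuming the standard k-means-on-anomaly-maps approach and fully synthetic data (the ensemble's actual regime algorithm is not specified in the abstract):

```python
import numpy as np

def kmeans(X, k, init_idx, n_iter=20):
    # bare-bones k-means: assign each daily map to the nearest centroid,
    # then recompute centroids, for a fixed number of iterations
    centers = X[init_idx].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# synthetic "daily circulation anomaly maps": three regimes plus noise
rng = np.random.default_rng(2)
patterns = rng.standard_normal((3, 100))                 # 3 regime patterns, 100 gridpoints
days = np.repeat(patterns, 200, axis=0) + 0.3 * rng.standard_normal((600, 100))

labels, centers = kmeans(days, k=3, init_idx=[0, 200, 400])
freq = np.bincount(labels, minlength=3) / len(labels)    # frequency of occurrence
print(freq)
```

Composite patterns, persistence, and day-to-day regime history (as discussed above) are all derived from the `labels` sequence once the classification exists.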
CFD Script for Rapid TPS Damage Assessment
NASA Technical Reports Server (NTRS)
McCloud, Peter
2013-01-01
This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, error-prone, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution and creates an unstructured patch grid that models the TPS damage. The flow-field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. This speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. The process was successfully utilized during STS-134.
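A hedged sketch of the interpolation step, using inverse-distance weighting on synthetic data (the script's actual interpolation scheme and data layout are not described in the abstract):

```python
import numpy as np

def idw_interpolate(points, values, targets, p=2.0, eps=1e-12):
    # points: (N, 2) large-scale grid nodes; values: (N,) flow quantity;
    # targets: (M, 2) patch-grid boundary nodes to receive boundary conditions
    d2 = ((targets[:, None, :] - points[None]) ** 2).sum(-1)
    w = 1.0 / (d2 ** (p / 2.0) + eps)
    return (w * values[None]).sum(1) / w.sum(1)

# coarse "large-scale" solution: a smooth pressure field on a 20x20 grid
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
nodes = np.column_stack([x.ravel(), y.ravel()])
pressure = np.sin(np.pi * nodes[:, 0]) * np.cos(np.pi * nodes[:, 1])

# patch-grid boundary nodes ringing a small damage site at (0.5, 0.5)
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
boundary = np.column_stack([0.5 + 0.05 * np.cos(theta),
                            0.5 + 0.05 * np.sin(theta)])
bc = idw_interpolate(nodes, pressure, boundary)   # interpolated boundary values
print(bc.round(3))
```

In practice a production script would interpolate all flow variables and likely use the CFD solver's native interpolation; the point here is only the pattern of sampling a coarse solution at patch boundary nodes.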
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola
1992-01-01
A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega_0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.
Large-scale wind tunnel tests of a sting-supported V/STOL fighter model at high angles of attack
NASA Technical Reports Server (NTRS)
Stoll, F.; Minter, E. A.
1981-01-01
A new sting model support has been developed for the NASA/Ames 40- by 80-Foot Wind Tunnel. This addition to the facility permits testing of relatively large models to large angles of attack or angles of yaw depending on model orientation. An initial test on the sting is described. This test used a 0.4-scale powered V/STOL model designed for testing at angles of attack to 90 deg and greater. A method for correcting wake blockage was developed and applied to the force and moment data. Samples of this data and results of surface-pressure measurements are presented.
NASA Astrophysics Data System (ADS)
Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.
2015-12-01
Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. Because these patterns play such a prominent role in the occurrence and character of extremes, how temperature extremes change in the future will be highly influenced by whether and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations, and at using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America, using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid-scale processes (such as those related to topography) are important.
Where model skill in reproducing these patterns is high, it can be inferred that extremes are being simulated for plausible physical reasons, boosting confidence in future projections of temperature extremes. Conversely, where model skill is identified to be lower, caution should be exercised in interpreting future projections.
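Composite analysis, one of the two pattern-identification methods named above, can be sketched on synthetic data (the fields, the ridge pattern, and the 95th-percentile threshold are illustrative assumptions): average the circulation anomaly over only those days on which the local temperature is extreme.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, ny, nx = 1000, 10, 12

# a fixed "ridge" pattern in 500 hPa height centered on grid column 8
ridge = np.exp(-((np.arange(nx) - 8) ** 2) / 8.0)[None, :] * np.ones((ny, 1))

z500 = rng.standard_normal((n_days, ny, nx))          # background height anomalies
amp = rng.standard_normal(n_days)                     # daily ridge amplitude
z500 += amp[:, None, None] * ridge
temp = 2.0 * amp + 0.5 * rng.standard_normal(n_days)  # local T tied to the ridge

hot = temp > np.quantile(temp, 0.95)                  # extreme-temperature days
composite = z500[hot].mean(axis=0)                    # mean anomaly on those days
print(composite[:, 8].mean())                         # ridge center emerges strongly
```

Because the extreme days share the ridge, the composite recovers the pattern that random daily noise would otherwise hide; comparing observed and modeled composites is the evaluation step described above.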
Linking crop yield anomalies to large-scale atmospheric circulation in Europe.
Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J
2017-06-15
Understanding the effects of climate variability and extremes on crop growth and development is a necessary step in assessing the resilience of agricultural systems to changing climate conditions. This study investigates the links between large-scale atmospheric circulation and crop yields in Europe, providing the basis for developing seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: the North Atlantic Oscillation, Eastern Atlantic, Scandinavian, and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. For grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models depends strongly on the impact of large-scale atmospheric variability on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability, especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than that of surface weather variables (e.g., precipitation) in Europe, seasonal crop yield prediction could benefit from integrating such statistical models with dynamical seasonal forecasts of the large-scale atmospheric circulation.
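A minimal sketch of such a statistical model, assuming ordinary least squares on synthetic seasonal circulation indices (stand-ins for the NAO, EA, SCA, and EA-WR patterns; the paper's actual regression setup may differ): regress detrended yield anomalies on the indices and report the explained variance.

```python
import numpy as np

rng = np.random.default_rng(4)
n_years = 30
indices = rng.standard_normal((n_years, 4))            # NAO, EA, SCA, EA-WR (synthetic)
beta_true = np.array([0.6, -0.3, 0.2, 0.1])            # assumed sensitivities
yield_anom = indices @ beta_true + 0.5 * rng.standard_normal(n_years)

X = np.column_stack([np.ones(n_years), indices])       # intercept + 4 predictors
beta, *_ = np.linalg.lstsq(X, yield_anom, rcond=None)
resid = yield_anom - X @ beta
r2 = 1.0 - resid.var() / yield_anom.var()              # explained variance
print(f"explained variance: {r2:.0%}")
```

The country-level percentages quoted above (43% for wheat, 38% for maize on average) are exactly this kind of explained-variance statistic, computed per country.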
Polymer Physics of the Large-Scale Structure of Chromatin.
Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2016-01-01
We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.
NASA Astrophysics Data System (ADS)
Wang, S.; Sobel, A. H.; Nie, J.
2015-12-01
Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture well the time variations in precipitation associated with the two MJO events. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother, with a peak at midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of the large-scale vertical motion during the MJO active phases, while experiments in which the effects of cloud on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments with interactive radiation produce poorer agreement with observations than those with imposed time-varying radiative heating.
Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to accurate simulation and prediction of it in models.
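The conventional WTG closure named above can be sketched as follows (the profile shapes, relaxation timescale, and notation are illustrative assumptions, not the paper's configuration): the large-scale vertical velocity is diagnosed by relaxing the simulated potential temperature toward a target profile, w_wtg(z) = (theta(z) - theta_target(z)) / (tau * dtheta_target/dz).

```python
import numpy as np

z = np.linspace(500.0, 15000.0, 30)                  # height (m), free troposphere
theta_target = 300.0 + 4.0e-3 * z                    # stable target profile (K)
# simulated column carries a warm anomaly peaking in mid-troposphere
theta_model = theta_target + 1.5 * np.sin(np.pi * (z - 500.0) / 14500.0)

tau = 3.0 * 3600.0                                   # relaxation timescale (s), assumed
dtheta_dz = np.gradient(theta_target, z)             # target static stability
w_wtg = (theta_model - theta_target) / (tau * dtheta_dz)
print(w_wtg.max())                                   # upward motion where column is warm
```

A warm column thus diagnoses ascent, which in the coupled system drives adiabatic cooling back toward the target; the SWTG and DGW variants replace this pointwise relaxation with vertical-mode and gravity-wave formulations, respectively.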
Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model
NASA Technical Reports Server (NTRS)
Baker, P. L.; Burton, W. B.
1975-01-01
High-resolution 21-cm hydrogen-line observations at low galactic latitude are analyzed to determine the large-scale distribution of galactic hydrogen. Distribution parameters are found by model fitting, optical-depth effects are computed using a two-component gas model suggested by the observations, and calculations are made for a one-component, uniform-spin-temperature gas model to show the systematic departures between this model and data obtained by incorrect treatment of the optical-depth effects. Synthetic 21-cm line profiles are computed from the two-component model, and the large-scale trends of the observed emission profiles are reproduced, together with the magnitude of the small-scale emission irregularities. Values are determined for the thickness of the galactic hydrogen disk between half-density points, the total observed neutral hydrogen mass of the galaxy, and the central number density of the intercloud hydrogen atoms. It is shown that typical hydrogen clouds must be between 1 and 13 pc in diameter and that optical thinness holds on large scales despite the presence of optically thick gas.
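The optical-depth behavior of the two components can be illustrated with the standard 21-cm brightness-temperature relation T_B = T_s (1 - e^(-tau)) and made-up numbers (not the paper's fitted values): a cold cloud component saturates at its spin temperature once optically thick, while a warm, optically thin intercloud component stays proportional to its column.

```python
import numpy as np

def brightness(T_spin, tau):
    # standard emission-line radiative transfer against an empty background
    return T_spin * (1.0 - np.exp(-tau))

tau = np.linspace(0.01, 5.0, 100)
cold_cloud = brightness(60.0, tau)            # cold component: saturates at T_s
warm_gas = brightness(8000.0, 0.001 * tau)    # warm intercloud gas: ~ T_s * tau

print(cold_cloud[-1], warm_gas[-1])
```

This is why a one-component, uniform-spin-temperature fit misestimates the hydrogen column where cold clouds dominate, which is the systematic departure the abstract describes.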
Extreme weather: Subtropical floods and tropical cyclones
NASA Astrophysics Data System (ADS)
Shaevitz, Daniel A.
Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan floods and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood.
Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood; it is found that anthropogenic climate change was responsible for a small amount of the rainfall during the event, but that the intensity of such an event may be greatly increased if it occurs in a future climate. In the second part of this thesis, I examine the ability of high-resolution global atmospheric models to simulate TCs. Specifically, I present an intercomparison of several models' ability to simulate the global characteristics of TCs in the current climate. This is a necessary first step before using these models to project future changes in TCs. Overall, the models were able to reproduce the geographic distribution of TCs reasonably well, with some of the models performing remarkably well. The intensity of TCs varied widely between the models, with some of this difference being due to model resolution.
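The attribution of vertical motion to separate forcings rests on the linearity of the omega equation: inverting the elliptic operator separately for the synoptic and diabatic terms splits the profile cleanly. A toy 1-D analogue with hypothetical coefficients and forcings (not the thesis's actual operator) makes the point.

```python
import numpy as np

n = 50
z = np.linspace(0.0, 1.0, n)
dz = z[1] - z[0]

# discrete linear operator L = d^2/dz^2 - k^2, with w = 0 at top and bottom
k2 = 10.0
L = (np.diag(np.full(n - 2, -2.0)) + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / dz ** 2 - k2 * np.eye(n - 2)

f_syn = np.sin(np.pi * z[1:-1])              # synoptic forcing (made up)
f_diab = np.sin(2.0 * np.pi * z[1:-1])       # diabatic-heating forcing (made up)

w_syn = np.linalg.solve(L, f_syn)            # vertical motion from synoptic term
w_diab = np.linalg.solve(L, f_diab)          # vertical motion from diabatic term
w_total = np.linalg.solve(L, f_syn + f_diab)
print(np.allclose(w_total, w_syn + w_diab))  # linearity => clean attribution
```

Because the operator is linear, the two partial inversions sum exactly to the full inversion, which is what justifies reporting "components due to synoptic forcing and diabatic heating."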
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems, and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development, and urbanization, pollution by polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, monitoring of PAHs across multiple compartments and of the corresponding multi-interface migration processes remains limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-scale Air Quality (CMAQ) model in order to represent fugacity and transient contamination processes. This coupled dynamic contaminant model can evaluate detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water, and vegetation) across different spatial (county to country) and temporal (days to years) scales. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution and considers the response characteristics of typical environmental media to a complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and the Pearl River Delta. This model can provide a good scientific basis for a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development.
As a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., the Community Water Model, CWatM) to quantify a more accurate representation of, and the feedbacks between, the hydrological cycle and water quality over even larger geographical domains. Keywords: PAHs; Community Multi-scale Air Quality model; multimedia fate model; land use
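The fugacity concept underlying the MFM can be illustrated with a generic Mackay-style Level I calculation (the Z values and compartment volumes below are illustrative, not the study's parameters): at equilibrium a single fugacity f applies to every compartment, so f = M_total / sum(Z_i * V_i) and the mass in compartment i is f * Z_i * V_i.

```python
import numpy as np

compartments = ["air", "water", "soil", "sediment"]
Z = np.array([4.0e-4, 1.0e-2, 5.0e0, 1.0e1])    # fugacity capacities, mol m^-3 Pa^-1
V = np.array([1.0e10, 2.0e7, 9.0e6, 1.0e5])     # compartment volumes, m^3

M_total = 1.0e5                                 # mol of PAH released (illustrative)
f = M_total / np.sum(Z * V)                     # common equilibrium fugacity (Pa)
mass = f * Z * V                                # mass partitioned to each compartment
for name, m_i in zip(compartments, mass):
    print(f"{name:9s} {100.0 * m_i / M_total:5.1f}%")
```

With Z values of this shape, most of the mass partitions to soil and sediment, consistent with the sink behavior reported above; the coupled CMAQ-MFM model adds the transport and transient terms that a static Level I calculation omits.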
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations, as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL, and CMCC), one orchestrator site (PSNC), and one site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic, and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally, it integrates, complements, and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments.
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new subgrid-scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated into the subgrid-scale eddy viscosity, which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy-viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy-viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved-scale velocity dilatations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independently of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for large-eddy simulation is also presented.
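The incompressible limit referred to here is the classical Smagorinsky eddy viscosity. A minimal sketch of that baseline (not the compressible extension proposed in the paper) might look as follows; the constant C_s and the sample strain-rate tensor are purely illustrative:

```python
# Minimal sketch of the (incompressible) Smagorinsky eddy viscosity,
#   nu_t = (C_s * Delta)^2 * |S|,   |S| = sqrt(2 S_ij S_ij),
# where S_ij is the resolved strain-rate tensor. The constant C_s and
# the sample tensor below are illustrative values only.
import math

def smagorinsky_viscosity(strain, delta, c_s=0.17):
    """Eddy viscosity (m^2/s) from a 3x3 resolved strain-rate tensor (1/s)."""
    s_mag = math.sqrt(2.0 * sum(strain[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * s_mag

# Example: pure shear with du/dy = 10 1/s on a filter width of 0.01 m
S = [[0.0, 5.0, 0.0],
     [5.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
nu_t = smagorinsky_viscosity(S, delta=0.01)
```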
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
NASA Astrophysics Data System (ADS)
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that supplies its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help from the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme; the other is the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be, and is, compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification: the forecast spatial wind-speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, by focusing on a high wind-speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, the standard RCM verification method.
The results we have at this point are somewhat limited, given that the integrations have been done only for 10-day forecasts. Even so, one should note that they are among the very few done using forecast, as opposed to reanalysis or analysis, global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to take place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while the Eta scheme enjoys the advantage of being significantly less demanding than relaxation, given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large-scale nudging, seems to be just about the same as that of the driver model; or, in the terminology of Castro et al., the Eta RCM does not lose the "value of the large scale" that exists in the larger global analyses used for the initial condition and for verification.
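The threat and bias scores used for the wind-speed verification come from a 2x2 contingency table of threshold exceedances. A minimal sketch of the standard equitable threat score and bias score (the bias *adjustment* applied in the study is not reproduced, and the counts are illustrative):

```python
# Equitable threat score (ETS) and bias score from a 2x2 contingency
# table of threshold exceedances (e.g., wind speed > chosen threshold).
# Counts: hits a, false alarms b, misses c, correct negatives d.
# Note: this is the standard ETS; the bias adjustment used in the study
# is not reproduced here.
def ets_and_bias(a, b, c, d):
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n      # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)
    bias = (a + b) / (a + c)              # forecast frequency / observed frequency
    return ets, bias

# Illustrative counts over a verification grid
ets, bias = ets_and_bias(a=30, b=10, c=20, d=40)
```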
CMB hemispherical asymmetry from non-linear isocurvature perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan
2015-04-01
We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.
Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation
2016-08-02
Tamer M. Wasfy, Paramsothy Jayakumar, Dave... Presentation outline: NRMM; Objectives; Soft Soils; Review of Physics-Based Soil Models; MBD/DEM Modeling Formulation (Joint & Contact Constraints; DEM Cohesive... Soil Model); Cone Penetrometer Experiment; Vehicle-Soil Model; Vehicle Mobility DOE Procedure; Simulation Results; Concluding Remarks.
Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert Moulton
2001-01-01
The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...
Intra-reach headwater fish assemblage structure
McKenna, James E.
2017-01-01
Large-scale conservation efforts can take advantage of modern large databases and regional modeling and assessment methods. However, these broad-scale efforts often assume uniform average habitat conditions and/or species assemblages within stream reaches.
NASA Astrophysics Data System (ADS)
Li, Jianping; Xia, Xiangsheng
2015-09-01
In order to improve understanding of the hot deformation and dynamic recrystallization (DRX) behaviors of large-scale AZ80 magnesium alloy fabricated by semi-continuous casting, compression tests were carried out over the temperature range 250-400 °C and the strain-rate range 0.001-0.1 s⁻¹ on a Gleeble 1500 thermo-mechanical machine. The effects of temperature and strain rate on the hot deformation behavior have been expressed by means of the conventional hyperbolic-sine equation, and the influence of strain has been incorporated into the equation by considering its effect on the different material constants for the large-scale AZ80 magnesium alloy. In addition, the DRX behavior has been discussed. The results show that the deformation temperature and strain rate exert a remarkable influence on the flow stress. The constitutive equation of the large-scale AZ80 magnesium alloy for hot deformation at the steady-state stage (ε = 0.5) was established in hyperbolic-sine (Arrhenius) form. The true stress-true strain curves predicted by the extracted model were in good agreement with the experimental results, thereby confirming the validity of the developed constitutive relation. The DRX kinetic model of the large-scale AZ80 magnesium alloy was established as X_d = 1 - exp[-0.95((ε - ε_c)/ε*)^2.4904]. The rate of DRX increases with increasing deformation temperature, and high temperature is beneficial for achieving complete DRX in the large-scale AZ80 magnesium alloy.
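The quoted DRX kinetic model is an Avrami-type expression that is straightforward to evaluate. In the sketch below, the critical strain ε_c and characteristic strain ε* are illustrative placeholders, since the abstract does not report their values:

```python
# The DRX kinetic model quoted in the abstract,
#   X_d = 1 - exp[-0.95 ((eps - eps_c)/eps*)^2.4904]   for eps > eps_c,
# evaluated as a function of true strain. The values of eps_c and eps*
# below are illustrative placeholders only.
import math

def drx_fraction(eps, eps_c, eps_star):
    """Dynamically recrystallized volume fraction (Avrami-type kinetics)."""
    if eps <= eps_c:
        return 0.0
    return 1.0 - math.exp(-0.95 * ((eps - eps_c) / eps_star) ** 2.4904)

# Illustrative parameters: eps_c = 0.05, eps* = 0.2
fractions = [drx_fraction(e / 100.0, 0.05, 0.2) for e in range(0, 81, 10)]
```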
Some aspects of control of a large-scale dynamic system
NASA Technical Reports Server (NTRS)
Aoki, M.
1975-01-01
Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet particularly difficult for filtering because of its non-Gaussian extreme-event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale-dependent predictions of the small-scale statistics.
Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
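The role of representation error in the Kalman gain can be illustrated with a scalar update. This toy example, in which all variances are assumed numbers and the evolving small-scale statistics of stochastic superparameterization are omitted, shows how ignoring the unresolved-scale variance inflates the gain:

```python
# Scalar illustration of representation error in a Kalman update.
# The observation y sees the large scales x plus unresolved small scales s:
#   y = x + s + measurement noise.
# If the small-scale variance sigma_s^2 is left out of the observation-error
# variance, the gain K over-trusts the observation. All variances below are
# illustrative; the paper's framework evolves the small-scale statistics
# in time rather than fixing them.
def kalman_gain(forecast_var, obs_var):
    return forecast_var / (forecast_var + obs_var)

p_forecast = 1.0     # forecast (background) error variance
r_measure = 0.1      # raw instrument-error variance
sigma_s2 = 2.0       # variance of the unresolved small scales

k_naive = kalman_gain(p_forecast, r_measure)             # ignores s
k_repr = kalman_gain(p_forecast, r_measure + sigma_s2)   # includes s
```

With these numbers the naive gain is ~0.91 while the representation-aware gain drops to ~0.32, i.e. the filter correctly weights the observation less when much of its variance comes from scales the model cannot represent.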
Using MHD Models for Context for Multispacecraft Missions
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Sazykin, S. Y.; Webster, J.; Daou, A.; Welling, D. T.; Giles, B. L.; Pollock, C.
2016-12-01
The use of global MHD models such as BATS-R-US to provide context for data from widely spaced multispacecraft mission platforms is gaining in popularity and effectiveness. Examples are shown, primarily comparing data from the Magnetospheric Multiscale (MMS) mission with BATS-R-US. We present several examples of large-scale magnetospheric configuration changes, such as tail dipolarization events and reconfigurations after a sector boundary crossing, which are made much easier to understand by placing the spacecraft in the model fields. In general, the models can reproduce the large-scale changes observed by the various spacecraft but sometimes miss small-scale or rapid time changes.
Single-trabecula building block for large-scale finite element models of cancellous bone.
Dagan, D; Be'ery, M; Gefen, A
2004-07-01
Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.
NASA Astrophysics Data System (ADS)
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed, and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units, or 'subwatersheds', as defined by the US Geological Survey. The model setup, calibration, and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results.
The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.
Bellamy, Chloe; Altringham, John
2015-01-01
Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. Pipistrellus were positively related to built environments at small scales, and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine resolution, large extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. 
Habitat suitability modelling has the potential to become a standard tool for supporting landscape-scale decision-making as relevant data and open source, user-friendly, and peer-reviewed software become widely available.
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure-function scaling supports the predicted l^3 scaling; it corresponds to a k^(−1) scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k^(+1). This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^(−1)] scaling is subdominant to k^(+1) in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We verify the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of a large-eddy simulation that is optimal in the context of the α models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α instead of the primitive equations. 
Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from correctly reproducing the intermittency properties of high-Reynolds-number flow.
Schmoldt, D.L.; Peterson, D.L.; Keane, R.E.; Lenihan, J.M.; McKenzie, D.; Weise, D.R.; Sandberg, D.V.
1999-01-01
A team of fire scientists and resource managers convened 17-19 April 1996 in Seattle, Washington, to assess the effects of fire disturbance on ecosystems. Objectives of this workshop were to develop scientific recommendations for future fire research and management activities. These recommendations included a series of numerically ranked scientific and managerial questions and responses focusing on (1) links among fire effects, fuels, and climate; (2) fire as a large-scale disturbance; (3) fire-effects modeling structures; and (4) managerial concerns, applications, and decision support. At the present time, understanding of fire effects and the ability to extrapolate fire-effects knowledge to large spatial scales are limited, because most data have been collected at small spatial scales for specific applications. Although we clearly need more large-scale fire-effects data, it will be more expedient to concentrate efforts on improving and linking existing models that simulate fire effects in a georeferenced format while integrating empirical data as they become available. A significant component of this effort should be improved communication between modelers and managers to develop modeling tools to use in a planning context. Another component of this modeling effort should improve our ability to predict the interactions of fire and potential climatic change at very large spatial scales. The priority issues and approaches described here provide a template for fire science and fire management programs in the next decade and beyond.
Duvvuri, Subrahmanyam; McKeon, Beverley
2017-03-13
Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Structural Similitude and Scaling Laws for Plates and Shells: A Review
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.
2000-01-01
This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.
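As a concrete instance of a scaling law of the kind reviewed here, consider the fundamental frequency of a thin isotropic plate, which for a given material scales as h/a²; under complete similitude a small-model test then predicts the prototype frequency directly. The dimensions and measured frequency below are illustrative, not taken from the paper:

```python
# Scaling-law sketch: the fundamental frequency of a thin, isotropic
# plate scales as f ~ (h / a^2) * sqrt(E / rho), up to a constant that
# cancels between model and prototype. Under complete similitude (same
# material, all lengths scaled by the same factor), a model test gives
# the prototype frequency via f_p = f_m * (h_p/h_m) * (a_m/a_p)^2.
# The numbers below are illustrative only.
def predict_prototype_frequency(f_model, h_m, a_m, h_p, a_p):
    """Same-material similitude: frequency scales as h / a^2."""
    return f_model * (h_p / h_m) * (a_m / a_p) ** 2

# Model plate (a = 0.25 m, h = 2 mm) measured at 120 Hz;
# the prototype is a 4x scale-up of every dimension, so f_p = f_m / 4.
f_p = predict_prototype_frequency(120.0, h_m=0.002, a_m=0.25,
                                  h_p=0.008, a_p=1.0)
```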
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick
2017-04-01
Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding vulnerability to extreme hydrological events and for providing early warnings. It can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling large-scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. A novel science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas.
Using the ERA-Interim public dataset as forcing and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large-scale Murray-Darling catchment in Australia (~1 million km²). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70% and root mean squared errors below 1%. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
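The Nash-Sutcliffe efficiency used to judge the discharge predictions can be sketched as follows; the observed and simulated series are illustrative, not data from the study:

```python
# Nash-Sutcliffe efficiency (NSE), the discharge skill score quoted above:
#   NSE = 1 - SSE / (variance of observations about their mean).
# NSE = 1 is a perfect fit; NSE = 0 means no better than the observed mean.
# The sample series below are illustrative only.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [10.0, 12.0, 9.0, 14.0, 11.0]   # e.g., observed discharge (m^3/s)
sim = [9.5, 12.5, 9.0, 13.0, 11.5]    # e.g., simulated discharge (m^3/s)
nse = nash_sutcliffe(obs, sim)
```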
Large-scale functional models of visual cortex for remote sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.
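The "~1 petapixel per year" figure can be checked with back-of-envelope arithmetic; the retinal "pixel" count and frame rate below are rough assumptions, not values from the paper:

```python
# Back-of-envelope check of the "~1 petapixel per year" figure.
# All inputs are rough, assumed values: ~1e6 retinal ganglion cell
# outputs ("pixels") sampled at ~30 frames per second for one year.
pixels_per_frame = 1.0e6
frames_per_second = 30.0
seconds_per_year = 3.15e7

pixels_per_year = pixels_per_frame * frames_per_second * seconds_per_year
# ~9.5e14, i.e. on the order of 1 petapixel (1e15), consistent with the text
```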
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating the coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high-resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled to the near-field FSI solver in a one-way fashion by feeding waves into the latter via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves, and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
A Universal Model for Solar Eruptions
NASA Astrophysics Data System (ADS)
Wyper, Peter; Antiochos, Spiro K.; DeVore, C. Richard
2017-08-01
We present a universal model for solar eruptions that encompasses coronal mass ejections (CMEs) at one end of the scale and coronal jets at the other. The model is a natural extension of the Magnetic Breakout model for large-scale fast CMEs. Using high-resolution adaptive-mesh MHD simulations conducted with the ARMS code, we show that so-called blowout or mini-filament coronal jets can be explained as one realisation of the breakout process. We also demonstrate the robustness of this “breakout-jet” model by studying three realisations in simulations with different ambient field inclinations. We conclude that magnetic breakout supports both large-scale fast CMEs and small-scale coronal jets, and by inference eruptions at scales in between. Thus, magnetic breakout provides a unified model for solar eruptions. P.F.W. was supported in this work by an award of a RAS Fellowship and an appointment to the NASA Postdoctoral Program. C.R.D. and S.K.A. were supported by NASA's LWS TR&T and H-SR programs.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance for the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches, a 'fair' comparison has to take into account the different factors that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the uneven spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution, some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales over the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe, together with climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target for assessing the skill of global climate models and of proxy-based reconstructions, even at small spatial scales.
However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
NASA Astrophysics Data System (ADS)
van der Molen, Johan
2015-04-01
Tidal power generation through submerged turbine-type devices is in an advanced stage of testing, and large-scale applications are being planned in areas with high tidal current speeds. The potential impact of such large-scale applications on the hydrography can be investigated using hydrodynamical models. In addition, aspects of the potential impact on the marine ecosystem can be studied using biogeochemical models. In this study, the coupled hydrodynamics-biogeochemistry model GETM-ERSEM is used in a shelf-wide application to investigate the potential impact of large-scale tidal power generation in the Pentland Firth. A scenario representing the currently licensed power extraction suggested i) an average reduction in M2 tidal current velocities of several cm/s within the Pentland Firth, ii) changes in the residual circulation of several mm/s in the vicinity of the Pentland Firth, iii) an increase in M2 tidal amplitude of up to 1 cm to the west of the Pentland Firth, and iv) a reduction of several mm in M2 tidal amplitude along the east coast of the UK. A second scenario representing 10 times the currently licensed power extraction resulted in changes that were approximately 10 times as large. Simulations including the biogeochemistry model for these scenarios are currently in preparation, and first results will be presented at the conference, focusing on impacts on primary production and benthic production.
Using LISREL to Evaluate Measurement Models and Scale Reliability.
ERIC Educational Resources Information Center
Fleishman, John; Benson, Jeri
1987-01-01
The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- through sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, and (2) scale invariance across…
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada, and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reducing the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Cosmic microwave background probes models of inflation
NASA Technical Reports Server (NTRS)
Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.
1992-01-01
Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of DeltaT/T predicted on 1 deg is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (˜100 Gpc3) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts for most galaxies detected in the imaging survey.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
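The spectral comparison described above can be sketched with a simple periodogram of a sea-level anomaly time series. The signal below is synthetic (an assumed annual cycle plus noise at daily sampling), standing in for the model output the abstract refers to:

```python
import numpy as np
from scipy.signal import periodogram

# Hypothetical daily sea-level anomaly series (cm), roughly four years long
rng = np.random.default_rng(0)
t = np.arange(4 * 365)
eta = 5.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1.0, t.size)

# Power spectral density; frequencies in cycles per day
freqs, psd = periodogram(eta, fs=1.0)

# Locate the dominant spectral peak (skipping the zero frequency)
peak_freq = freqs[np.argmax(psd[1:]) + 1]
print(f"dominant period ~ {1.0 / peak_freq:.0f} days")
```

The same machinery, applied to model output and to altimetry or tide-gauge records, allows the band-by-band comparison of variability that the study describes.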
2013-09-30
flow models, such as Delft3D, with our developed Boussinesq-type model. The vision of this project is to develop an operational tool for the...situ measurements or large-scale wave models. This information will be used to drive the offshore wave boundary condition. • Execute the Boussinesq ...model to match with the Boussinesq-type theory would be one which can simulate sheared and stratified currents due to large-scale (non-wave) forcings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorski, K.M.
1991-03-01
The relation between cosmic microwave background (CMB) anisotropies and large-scale galaxy streaming motions is examined within the framework of inflationary cosmology. The minimal Sachs and Wolfe (1967) CMB anisotropies at large angular scales in models with an initial Harrison-Zel'dovich spectrum of inhomogeneity normalized to the local large-scale bulk flow, which are independent of the Hubble constant and of the specific nature of dark matter, are found to be within the anticipated ultimate sensitivity limits of COBE's Differential Microwave Radiometer experiment. For example, the most likely value of the quadrupole coefficient is predicted to be a2 not less than 7 x 10 to the -6th, where equality applies to the limiting minimal model. If (1) COBE's DMR instruments perform well throughout the two-year period; (2) the anisotropy data are not marred by systematic errors; (3) the large-scale motions retain their present observational status; (4) there is no statistical conspiracy in the sense of the measured bulk flow being of atypically high and the large-scale anisotropy of atypically low amplitude; and (5) the low-order multipoles in the all-sky primordial fireball temperature map are not detected, then the inflationary paradigm will have to be questioned. 19 refs.
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional global climate models (GCMs) while remaining computationally viable for climate-length time scales. The Multiscale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but they are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale 'Super-Parameterized' CESM (SP-CESM), in which large-scale parameterizations have been replaced with a 2D cloud-permitting model (CRM); and a multi-instance land version of the SP-CESM, in which each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua
2016-10-01
Tropical Instability Waves (TIWs) and the El Niño-Southern Oscillation (ENSO) are two air-sea coupling phenomena that are prominent in the tropical Pacific, occurring at vastly different space-time scales. It has been challenging to adequately represent both of these processes within a large-scale coupled climate model, which has led to a poor understanding of the interactions between TIW-induced feedback and ENSO. In this study, a novel modeling system was developed that allows representation of TIW-scale air-sea coupling and its interaction with ENSO. Satellite data were first used to derive an empirical model for TIW-induced sea surface wind stress perturbations (τTIW). The model was then embedded in a basin-wide hybrid-coupled model (HCM) of the tropical Pacific. Because τTIW were internally determined from TIW-scale sea surface temperatures (SSTTIW) simulated in the ocean model, the wind-SST coupling at TIW scales was interactively represented within the large-scale coupled model. Because the τTIW-SSTTIW coupling part of the model can be turned on or off in the HCM simulations, the related TIW wind feedback effects can be isolated and examined in a straightforward way. Then, the TIW-scale wind feedback effects on the large-scale mean ocean state and interannual variability in the tropical Pacific were investigated based on this embedded system. The interactively represented TIW-scale wind forcing exerted an asymmetric influence on SSTs in the HCM, characterized by a mean-state cooling and by a positive feedback on interannual variability, acting to enhance ENSO amplitude. Roughly speaking, the feedback tends to increase interannual SST variability by approximately 9%. Additionally, there is a tendency for TIW wind to have an effect on the phase transition during ENSO evolution, with slightly shortened interannual oscillation periods. 
Additional sensitivity experiments were performed to elucidate the details of TIW wind effects on SST evolution during ENSO cycles.
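The empirical TIW wind-stress coupling described above can be caricatured as a linear response of stress perturbations to TIW-scale SST structure. Everything below (the proportionality constant, the gradient form, the wave parameters) is an invented illustration, not the satellite-derived model of the study:

```python
import numpy as np

def tiw_wind_stress(sst_tiw, alpha=0.01):
    """Toy empirical model: wind-stress perturbation proportional to the
    along-track gradient of TIW-scale SST. alpha is a placeholder
    coefficient, not a fitted value from the paper."""
    return alpha * np.gradient(sst_tiw, axis=-1)

# A TIW-like SST perturbation along the equator (degC), wavelength ~1000 km
x = np.linspace(0, 4000, 400)                 # distance (km)
sst_tiw = 0.5 * np.sin(2 * np.pi * x / 1000.0)
tau_tiw = tiw_wind_stress(sst_tiw)

# Since tau follows the SST gradient, it leads the SST wave by a quarter
# wavelength, a phase relationship characteristic of TIW wind coupling.
print(tau_tiw.shape)
```

Because the stress is diagnosed internally from the simulated SST field, turning this feedback on or off in a coupled model isolates its effect, which is the strategy the abstract describes.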
NASA Astrophysics Data System (ADS)
Burov, E.; Guillou-Frottier, L.
2005-05-01
Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by elastic-brittle-ductile deformation, by a free upper surface, and by the layered structure of the lithosphere. We make a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model into large-scale plume-lithosphere interaction. This model includes (i) a natural free-surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of the continental lithosphere. The numerical experiments demonstrate a number of important differences from the predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers, and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scales (300-400, 200-300 and 50-100 km). Mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporally spaced events observed in the East African rift system, as well as over the French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without invoking complex asthenospheric upwelling.
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1986-01-01
The existence of large scale coherent structures in turbulent shear flows has been well documented. Discrepancies between experimental and computational data suggest a necessity to understand the roles they play in mass and momentum transport. Using conditional sampling and averaging on coincident two-component velocity and concentration velocity experimental data for swirling and nonswirling coaxial jets, triggers for identifying the structures were examined. Concentration fluctuation was found to be an adequate trigger or indicator for the concentration-velocity data, but no suitable detector was located for the two-component velocity data. The large scale structures are found in the region where the largest discrepancies exist between model and experiment. The traditional gradient transport model does not fit in this region as a result of these structures. The large scale motion was found to be responsible for a large percentage of the axial mass transport. The large scale structures were found to convect downstream at approximately the mean velocity of the overall flow in the axial direction. The radial mean velocity of the structures was found to be substantially greater than that of the overall flow.
MHD Modeling of the Solar Wind with Turbulence Transport and Heating
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Usmanov, A. V.; Matthaeus, W. H.; Breech, B.
2009-01-01
We have developed a magnetohydrodynamic model that describes the global axisymmetric steady-state structure of the solar wind near solar minimum, accounting for the transport of small-scale turbulence and the associated heating. The Reynolds-averaged mass, momentum, induction, and energy equations for the large-scale solar wind flow are solved simultaneously with the turbulence transport equations in the region from 0.3 to 100 AU. The large-scale equations include subgrid-scale terms due to turbulence, and the turbulence (small-scale) equations describe the effects of transport and (phenomenologically) dissipation of the MHD turbulence based on a few statistical parameters (turbulence energy, normalized cross-helicity, and correlation scale). The coupled set of equations is integrated numerically for a source dipole field on the Sun by a time-relaxation method in the corotating frame of reference. We present results on the plasma, magnetic field, and turbulence distributions throughout the heliosphere and on the role of turbulence in the large-scale structure and temperature distribution of the solar wind.
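As a schematic of the Reynolds-averaging step described above (generic notation, not the authors' exact closure), each field is split into a mean and a fluctuating part; averaging the induction equation then leaves a turbulent electromotive force that the small-scale turbulence model must supply:

```latex
\mathbf{u} = \langle\mathbf{u}\rangle + \mathbf{u}', \qquad
\mathbf{B} = \langle\mathbf{B}\rangle + \mathbf{b}, \qquad
\langle\mathbf{u}'\rangle = \langle\mathbf{b}\rangle = 0,
\qquad
\frac{\partial \langle\mathbf{B}\rangle}{\partial t}
 = \nabla \times \Bigl( \langle\mathbf{u}\rangle \times \langle\mathbf{B}\rangle
 + \underbrace{\langle \mathbf{u}' \times \mathbf{b} \rangle}_{\text{turbulent EMF}} \Bigr)
```

The subgrid-scale terms mentioned in the abstract are closures for correlations of this kind, parameterized through the listed turbulence statistics.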
Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR
NASA Astrophysics Data System (ADS)
Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.
2017-12-01
Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of the large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and with associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and their associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture the key patterns associated with extreme precipitation over Portland and in better interpreting projections of future climate at impact-relevant scales.
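The composite-analysis step described above can be sketched as averaging a large-scale field over extreme-precipitation days only. The arrays and the 95th-percentile threshold below are invented for illustration; the study's data and threshold choice may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
ndays, nlat, nlon = 1000, 20, 30
precip = rng.gamma(shape=0.5, scale=4.0, size=ndays)       # daily precip at a station (mm)
z500 = rng.normal(5500.0, 50.0, size=(ndays, nlat, nlon))  # 500-hPa height field (m)

# Define extreme days as those above the 95th percentile of daily precipitation
threshold = np.percentile(precip, 95)
extreme = precip > threshold

# Composite anomaly: mean large-scale pattern on extreme days minus the all-day mean
composite_anomaly = z500[extreme].mean(axis=0) - z500.mean(axis=0)
print(composite_anomaly.shape)  # one anomaly map on the (nlat, nlon) grid
```

The self-organizing maps step then clusters the individual extreme-day fields rather than averaging them, exposing the within-composite variability the abstract mentions.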
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Hong S. He; Robert E. Keane; Louis R. Iverson
2008-01-01
Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...
ERIC Educational Resources Information Center
Li, Ying; Jiao, Hong; Lissitz, Robert W.
2012-01-01
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.
Kusenko, Alexander; Pearce, Lauren; Yang, Louis
2015-02-13
The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crater, Jason; Galleher, Connor; Lievense, Jeff
NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment-model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred-tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
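The aggregation step (e) above can be sketched as averaging gridded simulation output within arbitrary region masks. The grid, region labels, and function name below are made up for illustration; pSIMS's actual tooling and data formats differ:

```python
import numpy as np

def aggregate_by_region(field, region_ids):
    """Average a gridded field over each labeled region.
    field, region_ids: 2-D arrays of the same shape; region id 0 = unassigned."""
    out = {}
    for rid in np.unique(region_ids):
        if rid == 0:
            continue
        out[int(rid)] = float(field[region_ids == rid].mean())
    return out

# Toy 4x4 crop-yield grid (t/ha) and two administrative regions
yields = np.array([[1.0, 1.0, 2.0, 2.0],
                   [1.0, 1.0, 2.0, 2.0],
                   [3.0, 3.0, 4.0, 4.0],
                   [3.0, 3.0, 4.0, 4.0]])
regions = np.array([[1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2]])
print(aggregate_by_region(yields, regions))  # {1: 2.0, 2: 3.0}
```

In practice the masks would come from administrative or environmental demarcations, and the reduction might be an area-weighted mean rather than the simple average shown here.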
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. 
This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes applying the simulation models to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
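The Cournot competition at the heart of these models can be illustrated with a two-firm, linear-demand toy problem solved by best-response iteration. The numbers are invented; the dissertation's models add network constraints, arbitrage firms, and the LCP structure on top of this basic game:

```python
# Inverse demand p = a - b*(q1 + q2); constant marginal cost c for both firms.
# Best response of firm i to its rival's output q_j: q_i = (a - c - b*q_j) / (2b).
a, b, c = 100.0, 1.0, 10.0

q1 = q2 = 0.0
for _ in range(100):  # fixed-point iteration; contracts toward the equilibrium
    q1 = (a - c - b * q2) / (2 * b)
    q2 = (a - c - b * q1) / (2 * b)

# Analytic symmetric Cournot equilibrium: q_i = (a - c) / (3b) = 30
print(round(q1, 3), round(q2, 3))  # 30.0 30.0
```

The mixed-LCP formulation replaces this iteration with complementarity conditions (first-order conditions of all firms plus market clearing) solved simultaneously, which is what makes large networked instances tractable.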
Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations that show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
Skin Friction Reduction Through Large-Scale Forcing
NASA Astrophysics Data System (ADS)
Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer
2017-11-01
Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large scales, interact with the finer scales in a non-linear manner. By targeting these large scales and exploiting this non-linear interaction, wall shear stress (WSS) reductions of over 10% have been achieved. The plane wall jet (PWJ), a boundary layer whose highly energetic large scales become turbulent independently of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for independent control of the large scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating mixed scaling in the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large scale. The maximum reduction in WSS occurs when the introduced large scale acts so as to reduce the turbulent activity in the very-near-wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194, monitored by Dr. Douglas Smith.
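The triple decomposition mentioned above splits a velocity signal into a time mean, a phase-locked (coherent) part tied to the forcing, and residual turbulence. A minimal sketch with a synthetic signal follows; the forcing period and signal parameters are invented, not measured values from the experiment:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 64            # samples per forcing cycle (assumed known from the forcing)
ncycles = 200
n = period * ncycles

t = np.arange(n)
u = 10.0 + 2.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.5, n)

U = u.mean()                                           # time mean
phase_avg = u.reshape(ncycles, period).mean(axis=0)    # phase (ensemble) average
u_coherent = np.tile(phase_avg, ncycles) - U           # phase-locked coherent part
u_prime = u - U - u_coherent                           # residual fine-scale turbulence

# By construction the three parts reconstruct the signal exactly
print(np.allclose(U + u_coherent + u_prime, u))  # True
```

Statistics of `u_prime` conditioned on the phase of `u_coherent` reveal the organization of the small-scale turbulence relative to the introduced large scale, which is the diagnostic the abstract describes.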
A holistic approach for large-scale derived flood frequency analysis
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno
2017-04-01
Spatial consistency, which has usually been disregarded because of methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims to develop a holistic approach for consistently deriving flood frequencies at large scales. A large-scale two-component model has been established for simulating very long-term, multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, thereby reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2 comprising Germany and parts of neighboring countries. The model performance has been examined against multiple objectives, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
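Once very long synthetic discharge series have been simulated, flood quantiles follow from the annual maxima. A minimal sketch of that final step using empirical Weibull plotting positions; the helper name and the purely empirical estimator are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def flood_quantile(annual_maxima, return_period):
    """Empirical flood quantile from a long synthetic record of annual
    maximum discharges, via Weibull plotting positions."""
    x = np.sort(np.asarray(annual_maxima, dtype=float))
    n = x.size
    p = np.arange(1, n + 1) / (n + 1.0)   # non-exceedance probability of rank i
    target = 1.0 - 1.0 / return_period    # e.g. T = 100 yr -> p = 0.99
    return float(np.interp(target, p, x))
```

The appeal of continuous simulation is that records of thousands of synthetic years make even the 100-year quantile an interpolation rather than an extrapolation.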
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
2014-10-10
The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α² dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.
Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies
NASA Astrophysics Data System (ADS)
Xie, S.; Zhang, Y.
2011-12-01
The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), the two key modeling frameworks widely used to link field data to climate model development. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign, conducted from 22 April 2011 to 6 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and to evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount so that column-integrated mass, moisture, and static energy are conserved and the final analysis data are dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems of various scales. At the meeting, we will provide more details about the forcing development and present a preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
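The constraint step can be illustrated with the simplest possible case: adjust a vector of sounding-derived values by the smallest amount so that one linear column budget closes exactly. This toy stand-in (all names hypothetical) uses the closed-form Lagrange-multiplier solution; the real method balances several nonlinear budgets simultaneously:

```python
import numpy as np

def constrain_column(x0, a, b):
    """Minimally adjust sounding-derived values x0 so that the linear
    budget constraint a . x = b holds exactly, i.e. solve
    min ||x - x0||^2  subject to  a . x = b."""
    x0 = np.asarray(x0, dtype=float)
    a = np.asarray(a, dtype=float)
    residual = b - a @ x0          # how far the raw sounding misses the budget
    return x0 + a * residual / (a @ a)   # Lagrange-multiplier correction
```

The correction is spread along the constraint direction, so each level is nudged in proportion to its weight in the budget and the total adjustment is as small as possible.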
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. 
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
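A typical way the autocorrelation enters such an assessment is through the integral time scale, which indicates whether the time step and averaging window resolve the energy-containing motions. A minimal sketch assuming a uniformly sampled signal; the function name and the truncation-at-first-zero-crossing rule are illustrative choices, not taken from the study:

```python
import numpy as np

def integral_time_scale(u, dt):
    """Integral time scale from the autocorrelation of a signal,
    integrating (trapezoidal rule, written out explicitly) up to the
    first zero crossing of the autocorrelation."""
    f = np.asarray(u, dtype=float)
    f = f - f.mean()
    acf = np.correlate(f, f, mode="full")[f.size - 1:]
    acf = acf / acf[0]                       # normalize so acf[0] = 1
    neg = np.flatnonzero(acf < 0)
    cut = neg[0] if neg.size else acf.size   # first zero crossing
    lobe = acf[:cut]
    return dt * (lobe.sum() - 0.5 * (lobe[0] + lobe[-1]))
```

For a sinusoid of period P the estimate approaches P/(2π), the area under the first positive lobe of its cosine autocorrelation.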
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can represent all the hydrologic processes at play during a flood event and thereby provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model (NSE = 0.87) performs similarly to the 2D integrated model (NSE = 0.88) while the computational time is cut in half. The results suggest that, by using hybrid approaches for model creation, significant computational efficiency can be obtained for large-scale flood models while maintaining accuracy.
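The NSE values quoted above are the standard Nash-Sutcliffe efficiency, which compares model error against the variance of the observations (1 is a perfect fit, 0 means no better than predicting the observed mean). A minimal implementation, not code from the study:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed discharge:
    1 - (sum of squared errors) / (variance of observations about their mean)."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

On this scale the quoted 0.87 vs. 0.88 difference between the hybrid and fully integrated models is marginal, which is the basis for the paper's efficiency claim.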
Downscaling Ocean Conditions: Initial Results using a Quasigeostrophic and Realistic Ocean Model
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2014-05-01
Previous theoretical work (Henshaw et al., 2003) has shown that the small-scale modes of variability of solutions of the unforced, incompressible Navier-Stokes equation, and of Burgers' equation, can be reconstructed with surprisingly high accuracy from the time history of a few of the large-scale modes. Motivated by this theoretical work, we first describe a straightforward method for assimilating information on the large scales in order to recover the small-scale oceanic variability. The method is based on nudging in specific wavebands and frequencies and is similar to the spectral nudging method that has been used successfully for atmospheric downscaling with limited-area models (e.g., von Storch et al., 2000). The validity of the method is tested using a quasigeostrophic model configured to simulate a double ocean gyre separated by an unstable mid-ocean jet. It is shown that important features of the ocean circulation, including the position of the meandering mid-ocean jet and associated pinch-off eddies, can indeed be recovered from the time history of a small number of large-scale modes. The benefit of assimilating additional time series of observations from a limited number of locations, which alone are too sparse to significantly improve the recovery of the small scales using traditional assimilation techniques, is also demonstrated using several twin experiments. The final part of the study outlines the application of the approach using a realistic high-resolution (1/36 degree) model, based on the NEMO (Nucleus for European Modelling of the Ocean) modeling framework, configured for the Scotian Shelf off the east coast of Canada. The large-scale conditions used in this application are obtained from the HYCOM (HYbrid Coordinate Ocean Model) + NCODA (Navy Coupled Ocean Data Assimilation) global 1/12 degree analysis product. Henshaw, W., Kreiss, H.-O., Ystrom, J., 2003.
Numerical experiments on the interaction between the large- and small-scale motions of the Navier-Stokes equations. Multiscale Modeling and Simulation 1, 119-149. von Storch, H., Langenberg, H., Feser, F., 2000. A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review 128, 3664-3673.
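The waveband nudging described above can be illustrated in one dimension with Fourier modes standing in for the wavebands: only the low-wavenumber modes are relaxed toward the driving field, leaving the small scales free to evolve. A minimal sketch assuming a periodic 1-D field; `k_max` and `alpha` are illustrative parameters, not values from the study:

```python
import numpy as np

def spectral_nudge(model, driver, k_max, alpha=0.5):
    """Relax only the large-scale (wavenumber <= k_max) Fourier modes of a
    periodic 1-D model state toward a driving field; small scales pass
    through untouched."""
    fm = np.fft.rfft(model)
    fd = np.fft.rfft(driver)
    fm[: k_max + 1] += alpha * (fd[: k_max + 1] - fm[: k_max + 1])  # nudge large scales
    return np.fft.irfft(fm, n=len(model))
```

With `alpha = 1` the retained large-scale modes are replaced outright by the driver's; in practice a weak relaxation (small `alpha` applied every time step) is used so the model's own dynamics generate the small-scale variability consistent with the imposed large scales.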
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model incorporates cooperative learning and inquiry-guided learning in large-enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, has smaller enrollments, and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PSTE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PSTE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction.
Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
The Neutral Islands during the Late Epoch of Reionization
NASA Astrophysics Data System (ADS)
Xu, Yidong; Yue, Bin; Chen, Xuelei
2018-05-01
The large-scale structure of the ionization field during the epoch of reionization (EoR) can be modeled by the excursion set theory. While the growth of ionized regions during the early stage is described by the "bubble model", the shrinking of neutral regions after the percolation of the ionized regions calls for an "island model". An excursion-set-based analytical model and a semi-numerical code (islandFAST) have been developed. The ionizing background and the bubbles inside the islands are also included in the treatment. With two kinds of absorbers of ionizing photons, i.e., the large-scale underdense neutral islands and the small-scale overdense clumps, the ionizing background is self-consistently evolved in the model.
Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane
NASA Technical Reports Server (NTRS)
Syberg, J.; Koncsek, J. L.
1977-01-01
Test data were obtained with a 1/4-scale cold-flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large-scale model was tested with a high-bypass-ratio turbofan in the NASA/ARC wind tunnel. The results indicate that a fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large-scale model has a significantly improved range of separation-free operation.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and are connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward the original positions, i.e., the true node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network size. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network size. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network size.
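The spring analogy can be sketched for a single blind node in two dimensions: each neighbor with a measured distance contributes a virtual spring whose natural length is that distance, and the node is iteratively moved along the net spring force until it settles. A minimal sketch of the idea only; the gain `k`, the step count, and the function names are illustrative choices, not values from the paper:

```python
import numpy as np

def lasm_localize(anchors, measured_d, x0, k=0.2, steps=1000):
    """Estimate a blind node's 2-D position by relaxing virtual springs
    whose natural lengths are the measured distances to neighbor nodes."""
    x = np.array(x0, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    for _ in range(steps):
        force = np.zeros(2)
        for a, d in zip(anchors, measured_d):
            v = a - x
            r = np.linalg.norm(v)
            force += k * (r - d) * v / r  # stretched spring pulls, compressed pushes
        x += force                        # move the particle along the net force
    return x
```

Because each node only exchanges forces with its own neighbors, the per-node cost stays constant as the network grows, which is the source of the O(1) complexity claim.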
NASA Astrophysics Data System (ADS)
Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.
2017-03-01
In the present context of global change, considerable efforts have been deployed by the hydrological community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals, and (ii) defining those hydrology/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant across time scales (i.e., for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach that integrates discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) from a large-scale predictor (SLP over the Euro-Atlantic sector).
This approach consists of three steps: (1) decompose the large-scale climate and hydrological signals (SLP field, precipitation, or streamflow) using discrete wavelet multiresolution analysis; (2) generate a statistical downscaling model per time scale; (3) sum all the scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in both precipitation and streamflow. For instance, the post-1980 period, which was characterized by particularly high amplitudes of interannual-to-interdecadal variability associated with alternating flood and extreme low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation, and streamflow on interannual to interdecadal time scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
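The three steps can be caricatured with a two-band split, using a moving average as a crude stand-in for the discrete wavelet decomposition, one linear least-squares model per band, and a sum over bands. All names are illustrative assumptions; this is not the authors' code:

```python
import numpy as np

def multiband_esd(predictor, predictand, window=12):
    """Toy two-band analogue of multiresolution ESD: split each monthly
    series into a slow band (moving average) and a fast residual, fit one
    linear regression per band, then sum the band-wise reconstructions."""
    def split(x):
        slow = np.convolve(x, np.ones(window) / window, mode="same")
        return slow, x - slow              # (slow band, fast residual)

    recon = np.zeros(len(predictand))
    for px, py in zip(split(np.asarray(predictor, dtype=float)),
                      split(np.asarray(predictand, dtype=float))):
        A = np.column_stack([px, np.ones_like(px)])
        coef, *_ = np.linalg.lstsq(A, py, rcond=None)
        recon += A @ coef                  # scale-dependent regression model
    return recon
```

Fitting per band lets slow and fast variability carry different regression coefficients, which is the essential gain of the multiresolution scheme over a single regression on the raw series.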
NASA Astrophysics Data System (ADS)
Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.
2015-12-01
Global re-analyses suffer from inhomogeneities, as they process data from networks under development. However, the large-scale component of such re-analyses is mostly homogeneous; additional observational data contribute in most cases to a better description of regional details and less to the large-scale states. Therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the re-analyses with regional detail, wherever the condition of homogeneity of the large scales is fulfilled. Technically this can be done by using a regional climate model, or a global climate model, which is constrained on the large scale by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional risks, in particular marine risks, was identified. While the data density in Europe is considerably better than in most other regions of the world, even here insufficient spatial and temporal coverage limits risk assessments. Therefore, downscaled data sets are frequently used by off-shore industries. We have run this system also in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia and the adjacent Atlantic Ocean. A global (large-scale constrained) simulation has also been performed. It turns out that a spatially detailed reconstruction of the state and change of climate over the last three to six decades is feasible for any region of the world. The different data sets are archived and may be freely used for scientific purposes. Of course, before application, a careful analysis of the quality for the intended application is needed, as sometimes unexpected changes in the quality of the description of the large-scale driving states prevail.
A comprehensive study on urban true orthorectification
Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao
2005-01-01
To provide advanced technical bases (algorithms and procedures) and the experience needed for national large-scale digital orthophoto generation and for revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on the theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merger for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, optimization of seamlines for automatic mosaicking, and radiometric balancing of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and the relative location of the street to the imaging center, is presented for complete true orthoimage generation. The experimental results demonstrate that our method can effectively and correctly rectify the displacements caused by terrain and buildings in large-scale urban aerial images. © 2005 IEEE.
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
Structural similitude and design of scaled down laminated models
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Rezaeepazhand, J.
1993-01-01
The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical, and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance, and safety. However, experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and identifies the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting certain types of distortion from exact duplication of the prototype (partial similarity).
Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under individually applied destabilizing loads, the vibrational characteristics of the same plates, and the cylindrical bending of beam-plates.
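As a textbook illustration of the kind of response scaling law similitude theory yields (a standard isotropic-plate example, not one of the laminated cases treated in the paper): if every length of the prototype is scaled by the same factor λ and the material is unchanged, the flexural natural frequencies scale as

```latex
% Natural frequency of a thin elastic plate of thickness h and span a,
% with complete similarity h_m = \lambda h_p, a_m = \lambda a_p and the
% same material (E, \rho, \nu unchanged):
\omega \;\propto\; \frac{h}{a^{2}}\sqrt{\frac{E}{\rho\,(1-\nu^{2})}}
\qquad\Longrightarrow\qquad
\omega_{m} \;=\; \frac{h_{m}}{h_{p}}\!\left(\frac{a_{p}}{a_{m}}\right)^{2}\omega_{p}
\;=\; \frac{\omega_{p}}{\lambda}.
```

So a half-scale model (λ = 1/2) vibrates at twice the prototype frequency, and measured model frequencies are converted to prototype predictions by multiplying by λ; partial similarity arises precisely when no single λ can satisfy all such relations at once.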
Large-Scale Traffic Microsimulation From An MPO Perspective
DOT National Transportation Integrated Search
1997-01-01
One potential advancement of the four-step travel model process is the forecasting and simulation of individual activities and travel. A common concern with such an approach is that the data and computational requirements for a large-scale, regional ...
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs under fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which yields significant reductions in solution runtime. To benchmark the proposed heuristic, results are compared with the exact solutions via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
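For intuition about the objective involved, a single-facility grid location problem on a small grid can be solved exhaustively. This brute-force sketch (all names illustrative, and rectilinear distance assumed) only shows what is being optimized; the paper's ILP and decomposition heuristic address far larger instances:

```python
from itertools import product

def best_grid_site(demands, fixed_cost, n):
    """Exhaustive single-facility grid location: choose the cell on an
    n-by-n grid minimizing fixed cost plus weighted rectilinear distance
    to all demand points. demands = [((i, j), weight), ...]."""
    best_cost, best_cell = float("inf"), None
    for cell in product(range(n), range(n)):
        cost = fixed_cost(cell) + sum(
            w * (abs(cell[0] - i) + abs(cell[1] - j)) for (i, j), w in demands
        )
        if cost < best_cost:               # keep the cheapest cell seen so far
            best_cost, best_cell = cost, cell
    return best_cell, best_cost
```

Enumeration is O(n^2) per facility here; the combinatorial explosion of multi-facility, fixed-cost variants is what motivates the ILP formulation and, beyond that, the decomposition heuristic.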
Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.;
2001-01-01
It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on interannual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time-scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.
Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; Li, P. Peggy
2012-01-01
A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated, and the associated software system has been developed, to improve high-resolution coastal ocean prediction. The system improves coastal ocean prediction skill and has been used in support of operational coastal ocean forecasting systems and field experiments. It was developed to improve the capability of data assimilation to assimilate, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as to constrain model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows the large scales and model bias to be effectively constrained by assimilating sparse vertical profiles, and the small scales by assimilating high-resolution surface measurements. MS-3DVAR thus enhances the capability of traditional 3DVAR to assimilate highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from a limited number of vertical profile observations.
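The large-to-small sequence can be illustrated in one dimension: two successive analysis passes, each using a Gaussian background covariance with a different length scale, so sparse observations first correct the large-scale field and a second pass sharpens the fit near the data. This is purely illustrative (an optimal-interpolation update with made-up length scales and observation error standing in for the two MS-3DVAR cost-function components):

```python
import numpy as np

def two_scale_analysis(xb, obs_idx, obs_val, grid, scales=(20.0, 3.0), r=0.1):
    """Sequential large-to-small analysis on a 1-D grid: one optimal-
    interpolation pass per background-error length scale in `scales`."""
    x = np.asarray(xb, dtype=float)
    obs_idx = np.asarray(obs_idx)
    for L in scales:
        # Gaussian background covariance with length scale L
        B = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2.0 * L**2))
        H = np.zeros((obs_idx.size, grid.size))   # point-observation operator
        H[np.arange(obs_idx.size), obs_idx] = 1.0
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + r * np.eye(obs_idx.size))
        x = x + K @ (obs_val - H @ x)             # correct this scale's component
    return x
```

The first pass spreads each innovation broadly (constraining the large scales with few observations); the second pass, with its short covariance, removes the remaining misfit only in the immediate neighborhood of the data.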
Cosmic strings and the large-scale structure
NASA Technical Reports Server (NTRS)
Stebbins, Albert
1988-01-01
A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.
Visualization, documentation, analysis, and communication of large scale gene regulatory networks
Longabaugh, William J.R.; Davidson, Eric H.; Bolouri, Hamid
2009-01-01
Genetic regulatory networks (GRNs) are complex, large-scale, and spatially and temporally distributed. These characteristics impose challenging demands on computational GRN modeling tools, and there is a need for custom modeling tools. In this paper, we report on our ongoing development of BioTapestry, an open source, freely available computational tool designed specifically for GRN modeling. We also outline our future development plans and give some examples of current applications of BioTapestry. PMID:18757046
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high-Reynolds-number flows is a challenging task, since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy-viscosity-based subgrid models. This is acceptable for the moment/energy closure, since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is thus uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI). In this paper, timing data on these machines are reported along with some characteristic results.
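The abstract does not name a specific eddy-viscosity subgrid model. As a minimal illustration of the idea that unresolved scales act on the resolved field through an eddy viscosity, here is a sketch of the classical Smagorinsky model for a 2-D resolved velocity field; it is a standard LES choice, not necessarily the one used in this work.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| for a 2-D
    resolved velocity field on a uniform grid of spacing dx, where
    |S| = sqrt(2 S_ij S_ij) is the resolved strain-rate magnitude."""
    dudy, dudx = np.gradient(u, dx)   # np.gradient returns axis-0, axis-1 derivatives
    dvdy, dvdx = np.gradient(v, dx)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11 ** 2 + S22 ** 2 + 2.0 * S12 ** 2))
    return (Cs * dx) ** 2 * S_mag

# Example: a uniform shear u = y gives |S| = 1 and hence a constant nu_t.
n, dx = 16, 1.0
yy, xx = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
nu_t = smagorinsky_nu_t(yy, np.zeros((n, n)), dx)
```

The eddy viscosity is largest where the resolved strain is strongest, which is how the model drains energy from the resolved scales in place of the unresolved dissipation.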
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (at three resolutions of 1, 5, and 10 km) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface stations and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71 x 74 km, which is crucial for land-surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 km, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework
NASA Astrophysics Data System (ADS)
Jung, J.; Konor, C. S.; Randall, D. A.
2017-12-01
The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF, which has the following advances over the first-generation MMF: 1) the cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells and are seamlessly connected to each other; 2) the CRMs sense the three-dimensional large- and cloud-scale environment; 3) two perpendicular sets of CRM channels are used; and 4) the CRMs can resolve steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and also easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we have also modified the tracer advection scheme to yield positive-definite results, and we plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface for coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.
Cosmic homogeneity: a spectroscopic and model-independent measurement
NASA Astrophysics Data System (ADS)
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
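As a toy numerical illustration of "counts approaching a homogeneous scaling with volume" (not the estimators used in the paper, which work with the angular scale θh), the sketch below measures the counts-in-spheres scaling index D2(r) = d ln N(<r) / d ln r for a random point set; homogeneity corresponds to D2 approaching the ambient dimension, 3 in this spatial example.

```python
import numpy as np

def counts_in_spheres_dimension(points, centers, radii):
    """Mean counts-in-spheres N(<r) (excluding the center itself) and the
    local scaling index D2(r) = d ln N / d ln r.  For a homogeneous 3-D
    distribution, N(<r) grows like r**3, so D2 -> 3."""
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1)
    counts = np.array([np.mean(np.sum(d < r, axis=1)) - 1.0 for r in radii])
    D2 = np.gradient(np.log(counts), np.log(radii))
    return counts, D2

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(2000, 3))          # homogeneous point set
# Use only centers well inside the unit box so spheres stay within the volume.
centers = pts[np.all((pts > 0.25) & (pts < 0.75), axis=1)]
radii = np.geomspace(0.08, 0.2, 5)
counts, D2 = counts_in_spheres_dimension(pts, centers, radii)
```

For a clustered (inhomogeneous) point set the same estimator would give D2 below 3 on the clustered scales, which is the qualitative signature the homogeneity tests look for.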
Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7
NASA Astrophysics Data System (ADS)
Walker, R.
1984-12-01
The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model, using mechanical and structural considerations rather than purely acoustic absorption criteria, are described, and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. They also show that, at least for the relatively well-understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large-scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details, because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the materials' performance at both scales.
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
USDA-ARS?s Scientific Manuscript database
Relevant data about subsurface water flow and solute transport at relatively large scales that are of interest to the public are inherently laborious and in most cases simply impossible to obtain. Upscaling in which fine-scale models and data are used to predict changes at the coarser scales is the...
Large Eddy Simulation of a Turbulent Jet
NASA Technical Reports Server (NTRS)
Webb, A. T.; Mansour, Nagi N.
2001-01-01
Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As noted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning, and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models, and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, namely high-dimensional features, a large variety of visual classes, and a large number of examples.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Susan Will-Wolf; Peter Neitlich
2010-01-01
Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...
Susan Will-Wolf; Sarah Jovan; Michael C. Amacher
2017-01-01
Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for the flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer; hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
Closing in on the large-scale CMB power asymmetry
NASA Astrophysics Data System (ADS)
Contreras, D.; Hutchinson, J.; Moss, A.; Scott, D.; Zibin, J. P.
2018-03-01
Measurements of the cosmic microwave background (CMB) temperature anisotropies have revealed a dipolar asymmetry in power at the largest scales, in apparent contradiction with the statistical isotropy of standard cosmological models. The significance of the effect is not very high, and is dependent on a posteriori choices. Nevertheless, a number of models have been proposed that produce a scale-dependent asymmetry. We confront several such models for a physical, position-space modulation with CMB temperature observations. We find that, while some models that maintain the standard isotropic power spectrum are allowed, others, such as those with modulated tensor or uncorrelated isocurvature modes, can be ruled out on the basis of the overproduction of isotropic power. This remains the case even when an extra isocurvature mode fully anticorrelated with the adiabatic perturbations is added to suppress power on large scales.
Large-Scale Brain Systems in ADHD: Beyond the Prefrontal-Striatal Model
Castellanos, F. Xavier; Proal, Erika
2012-01-01
Attention-deficit/hyperactivity disorder (ADHD) has long been thought to reflect dysfunction of prefrontal-striatal circuitry, with involvement of other circuits largely ignored. Recent advances in systems neuroscience-based approaches to brain dysfunction enable the development of models of ADHD pathophysiology that encompass a number of different large-scale “resting state” networks. Here we review progress in delineating large-scale neural systems and illustrate their relevance to ADHD. We relate frontoparietal, dorsal attentional, motor, visual, and default networks to the ADHD functional and structural literature. Insights emerging from mapping intrinsic brain connectivity networks provide a potentially mechanistic framework for understanding aspects of ADHD, such as neuropsychological and behavioral inconsistency, and the possible role of primary visual cortex in attentional dysfunction in the disorder. PMID:22169776
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large-scale circulation and tropical modes. The model was developed based on the NCAR Weather Research and Forecasting (WRF) model, configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel mode alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large-scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed the big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are then used to drive a set of four simulations (LBs, for little brothers), with the same model but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with the BB over a common region. The time-average (stationary) and transient-eddy standard-deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) as the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detection of important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
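The "degrade with a low-pass filter" step of the big-brother protocol can be sketched with a simple spectral filter. The sharp isotropic Fourier cutoff below is an illustrative stand-in, not the filter used in the study:

```python
import numpy as np

def lowpass(field, k_keep):
    """Emulate coarse-resolution driving data (the 'filtered big brother'):
    keep Fourier modes with total wavenumber <= k_keep, zero the rest."""
    F = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0], d=1.0 / field.shape[0])
    ky = np.fft.fftfreq(field.shape[1], d=1.0 / field.shape[1])
    K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    F[K > k_keep] = 0.0
    return np.real(np.fft.ifft2(F))

n = 64
x = np.arange(n)
large = np.sin(2 * np.pi * 2 * x / n)[None, :] * np.ones((n, 1))
small = 0.5 * np.sin(2 * np.pi * 20 * x / n)[None, :] * np.ones((n, 1))
bb = large + small              # "big brother" high-resolution field
fbb = lowpass(bb, k_keep=8)     # filtered driving data: small scales removed
```

Driving the little brothers with `fbb` rather than `bb` tests whether the nested model can regenerate the removed small scales on its own.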
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution incorporating two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select the appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices of each gauge are then selected by Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models which forecast transition probabilities from a current to a future SPI class.
The results show that the three proposed models outperform the two traditional models, and that incorporating large-scale climatic indices can improve forecasting accuracy.
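The core computation, forecasting the probability of each future SPI class from the conditional distribution of a multivariate normal, can be sketched as follows. The variable ordering, the example covariance, and the class boundaries are illustrative assumptions, not values from the paper:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def spi_transition_probs(mu, cov, x_obs, class_bounds):
    """Forecast P(future SPI class | current SPI, climatic index) from a
    trivariate normal with mean `mu` and covariance `cov`; variable order
    is (SPI_now, index_now, SPI_future).  Returns one probability per class."""
    mu12, mu3 = mu[:2], mu[2]
    S11, S13 = cov[:2, :2], cov[:2, 2]
    w = np.linalg.solve(S11, S13)           # regression weights
    m = mu3 + w @ (x_obs - mu12)            # conditional mean
    s = np.sqrt(cov[2, 2] - S13 @ w)        # conditional standard deviation
    cdf = np.array([0.0] + [norm_cdf((b - m) / s) for b in class_bounds] + [1.0])
    return np.diff(cdf)

mu = np.zeros(3)
cov = np.array([[1.0, 0.3, 0.5],            # hypothetical correlations
                [0.3, 1.0, 0.4],
                [0.5, 0.4, 1.0]])
bounds = [-1.5, -1.0, 1.0, 1.5]             # illustrative SPI class edges
probs = spi_transition_probs(mu, cov, np.array([-1.2, 0.8]), bounds)
```

A second climatic index would simply extend the conditioning vector to three observed variables; the conditional-normal algebra is unchanged.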
Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.
2016-01-01
Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
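A minimal sketch of the linear-kernel SVM classification step described above, using synthetic Gaussian "absorption bands" as a stand-in for MicroNIR spectra. The class structure, noise level, and 125-point wavelength grid are illustrative assumptions, and scikit-learn's SVC stands in for whatever implementation the authors used:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per, n_wavelengths = 5, 40, 125      # toy stand-in for NIR spectra

# Synthetic "spectra": one Gaussian absorption band per class, plus noise.
grid = np.linspace(0.0, 1.0, n_wavelengths)
X, y = [], []
for c in range(n_classes):
    band = np.exp(-0.5 * ((grid - (0.15 + 0.15 * c)) / 0.05) ** 2)
    X.append(band + 0.05 * rng.standard_normal((n_per, n_wavelengths)))
    y.append(np.full(n_per, c))
X, y = np.vstack(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

For a 253-compound library, the hierarchical scheme mentioned in the abstract would first classify into coarse groups and then run a separate SVM within each group, which keeps each decision boundary simpler than one flat 253-way classifier.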
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not directly observable, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, relative to the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive the response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of a large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter from the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than as an additional source of statistical error, and show that the degradation in the parameters is recovered if we can employ a prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.
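The tidal response derived in this paper perturbs the standard linear redshift-space power spectrum. As background only (this is the textbook Kaiser 1987 formula, not the paper's tide response function), that unperturbed baseline can be sketched as:

```python
def kaiser_power(P_real, beta, mu):
    """Linear (Kaiser 1987) redshift-space power spectrum:
    P_s(k, mu) = (1 + beta * mu^2)^2 * P_real(k),
    where mu is the cosine of the angle between the wave vector and the
    line of sight, and beta = f/b is the redshift-space distortion
    parameter."""
    return (1.0 + beta * mu**2) ** 2 * P_real

# Toy real-space power at a single k; the numbers are purely illustrative
P = 1000.0
for mu in (0.0, 0.5, 1.0):
    print(mu, kaiser_power(P, beta=0.5, mu=mu))
```

Modes along the line of sight (mu = 1) are boosted most, which is the anisotropy that a large-scale tide further distorts in an alignment-dependent way.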
The Effect of Large Scale Salinity Gradient on Langmuir Turbulence
NASA Astrophysics Data System (ADS)
Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.
2017-12-01
Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics of the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations or turbulence closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations apply periodic boundary conditions in the horizontal directions, which assumes that the physical properties (i.e., temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Periodic boundary conditions significantly reduce the computational effort, and they are a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al. 1997) and has been observed to be modulated by crosswind tidal currents (Kukulka et al. 2011). Using symmetrical domains, idealized LES studies also indicate that LC can interact with oceanic fronts (Hamlington et al. 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion featuring fresh water inflow into the study region. Large-scale gradient forcing is introduced to the NCAR LES model through scale separation analysis.
The model is applied to a field observation in the Gulf of Mexico in July, 2016 when the measurement site was impacted by large fresh water inflow due to flooding from the Mississippi river. Model results indicate that the strong salinity gradient can reduce the mean flow in the ML and inhibit the turbulence in the planetary boundary layer. The Langmuir cells are also rotated clockwise by the pressure gradient.
1981-11-01
Large-Scale Operations Management Test of the Use of the White Amur for Control of Problem Aquatic Plants; Report 2 of a series, First Year Poststock… 1981. A Model for Evaluation of …
Water balance model for Kings Creek
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1990-01-01
Particular attention is given to the spatial variability that affects the representation of water balance at the catchment scale in the context of macroscale water-balance modeling. Remotely sensed data are employed for parameterization, and the resulting model is developed so that subgrid spatial variability is preserved and therefore influences the grid-scale fluxes of the model. The model permits the quantitative evaluation of the surface-atmospheric interactions related to the large-scale hydrologic water balance.
NASA Astrophysics Data System (ADS)
Flint, A. L.; Flint, L. E.
2010-12-01
The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimates of changes due to future climate projections. Unfortunately, many regions in the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in the countries of Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data), and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through world-scale databases and the finer-scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was derived from geomorphologic interpretation and maps of Quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of literature and data from drillers' logs and adjusted during calibration of the model to streamflow measurements where available.
Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown, along with details of processing input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily available from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
Large Eddy Simulation Study for Fluid Disintegration and Mixing
NASA Technical Reports Server (NTRS)
Bellan, Josette; Taskinoglu, Ezgi
2011-01-01
A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, unless the additional term in the momentum equation is modeled, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was likewise inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines the necessary wall material properties. The present work involves situations where only the term in the momentum equation is important.
Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not totally satisfactory. With a model for the additional term in the momentum equation, the predictions of the constant-coefficient Smagorinsky and constant-coefficient Scale-Similarity models were improved to a certain extent; however, most of the improvement was obtained for the Gradient model. The previously derived model and a newly developed model for the additional term in the momentum equation were both tested, with the new model proving even more successful than the previous model at reproducing the high density-gradient magnitude regions. Several dynamic SGS-flux models, in which the SGS-flux model coefficient is computed as part of the simulation, were tested in conjunction with the new model for this additional term in the momentum equation. The most successful dynamic model was a "mixed" model combining the Smagorinsky and Gradient models. This work is directly applicable to simulations of gas turbine engines (aeronautics) and rocket engines (astronautics).
He, Xinhua; Hu, Wenfa
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
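The "universal dependence on threshold density" noted above has a closed form: for a Gaussian (random-phase) field, the genus of density contours at a threshold of ν standard deviations is proportional to (1 − ν²)exp(−ν²/2). A minimal sketch of that curve follows; the amplitude A, which in practice depends on the power spectrum and smoothing length, is left as a free normalization here.

```python
import math

def genus_curve(nu, A=1.0):
    """Genus of density contours of a Gaussian (random-phase) field at
    threshold nu standard deviations: G(nu) = A * (1 - nu^2) * exp(-nu^2/2).
    Positive genus means a sponge-like (multiply connected) topology;
    negative genus means isolated clusters or voids."""
    return A * (1.0 - nu**2) * math.exp(-nu**2 / 2.0)

# Sponge-like near the median contour, isolated structures beyond |nu| = 1
for nu in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(nu, round(genus_curve(nu), 4))
```

The curve is symmetric in ν (clusters and voids behave alike), which is the signature of random-phase initial conditions that the smoothed galaxy distribution is tested against.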
Scaling and criticality in a stochastic multi-agent model of a financial market
NASA Astrophysics Data System (ADS)
Lux, Thomas; Marchesi, Michele
1999-02-01
Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
Stream Flow Prediction by Remote Sensing and Genetic Programming
NASA Technical Reports Server (NTRS)
Chang, Ni-Bin
2009-01-01
A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.
NASA Astrophysics Data System (ADS)
Tang, G.; Bartlein, P. J.
2012-01-01
Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying land surface water balance at large spatial scales in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large-spatial-scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture, and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the development of the "LPJ-Hydrology" (LH) model, which incorporates satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture, and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois agrees well (R2 = 0.79, p < 0.01) with observations over the years 1984-2001. The modeled monthly stream flow for most of the 12 major rivers in the US is consistent (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficients > 0.52) with observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly stream flow in winter and early spring by incorporating the effects of solar radiation on snowmelt. Overall, this study demonstrates the feasibility of incorporating satellite-based land covers into a DGVM for simulating land surface water balance at large spatial scales.
LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
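The Nash-Sutcliffe coefficients quoted for the stream-flow comparison measure how much better the model does than simply predicting the observed mean. A minimal sketch of the statistic follows; the flow values are made-up placeholders, not the study's data.

```python
def nash_sutcliffe(observed, modeled):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squares about the
    observed mean). 1.0 is a perfect fit; 0.0 means the model is no
    better than the mean of the observations; negative is worse."""
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    ss_tot = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

# Hypothetical monthly flows (arbitrary units)
obs = [3.1, 4.5, 6.2, 5.0, 4.1]
mod = [3.0, 4.7, 6.0, 5.2, 4.3]
print(round(nash_sutcliffe(obs, mod), 3))
```

A value above roughly 0.5, as reported for the rivers here, is commonly taken as a satisfactory monthly stream-flow simulation.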
Large-scale shell-model calculation with core excitations for neutron-rich nuclei beyond 132Sn
NASA Astrophysics Data System (ADS)
Jin, Hua; Hasegawa, Munetake; Tazaki, Shigeru; Kaneko, Kazunari; Sun, Yang
2011-10-01
The structure of neutron-rich nuclei with a few nucleons beyond 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including neutron core excitations, a new effective interaction is determined by employing the extended pairing-plus-quadrupole model with monopole corrections. The model provides a systematic description of the energy levels of A=133-135 nuclei up to high spins and reproduces the available data on electromagnetic transitions. The structure of these nuclei is analyzed in detail, with emphasis on effects associated with core excitations. The results show evidence of hexadecupole correlation in addition to octupole correlation in this mass region. The suggested feature of magnetic rotation in 135Te emerges in the present shell-model calculation.
Driving terrestrial ecosystem models from space
NASA Technical Reports Server (NTRS)
Waring, R. H.
1993-01-01
Regional air pollution, land-use conversion, and projected climate change all affect ecosystem processes at large scales. Changes in vegetation cover and growth dynamics can impact the functioning of ecosystems, carbon fluxes, and climate. As a result, there is a need to assess and monitor vegetation structure and function comprehensively at regional to global scales. To provide a test of our present understanding of how ecosystems operate at large scales we can compare model predictions of CO2, O2, and methane exchange with the atmosphere against regional measurements of interannual variation in the atmospheric concentration of these gases. Recent advances in remote sensing of the Earth's surface are beginning to provide methods for estimating important ecosystem variables at large scales. Ecologists attempting to generalize across landscapes have made extensive use of models and remote sensing technology. The success of such ventures is dependent on merging insights and expertise from two distinct fields. Ecologists must provide the understanding of how well models emulate important biological variables and their interactions; experts in remote sensing must provide the biophysical interpretation of complex optical reflectance and radar backscatter data.
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate over whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log-law, or the power-law of George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Re_θ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient, and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
Avalanches and scaling collapse in the large-N Kuramoto model
NASA Astrophysics Data System (ADS)
Coleman, J. Patrick; Dahmen, Karin A.; Weaver, Richard L.
2018-04-01
We study avalanches in the Kuramoto model, defined as excursions of the order parameter due to ephemeral episodes of synchronization. We present scaling collapses of the avalanche sizes, durations, heights, and temporal profiles, extracting scaling exponents, exponent relations, and scaling functions that are shown to be consistent with the scaling behavior of the power spectrum, a quantity independent of our particular definition of an avalanche. A comprehensive scaling picture of the noise in the subcritical finite-N Kuramoto model is developed, linking this undriven system to a larger class of driven avalanching systems.
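The order parameter whose excursions define avalanches in this study can be illustrated with a minimal mean-field integration of the Kuramoto model. This is a rough sketch only: the system size, coupling, time step, and Cauchy frequency distribution below are arbitrary choices, not the parameters of the study, and no avalanche extraction is performed here.

```python
import numpy as np

def kuramoto_order_parameter(N=200, K=1.0, T=2000, dt=0.05, seed=0):
    """Euler integration of the Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    using the equivalent mean-field form of the coupling. Returns the
    time series of the order parameter r(t) = |<exp(i*theta)>|."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)          # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    r = np.empty(T)
    for t in range(T):
        z = np.exp(1j * theta).mean()       # complex order parameter
        r[t] = abs(z)
        # (K/N) * sum_j sin(theta_j - theta_i) == K*|z|*sin(arg(z) - theta_i)
        theta += dt * (omega + K * abs(z) * np.sin(np.angle(z) - theta))
    return r

r = kuramoto_order_parameter()
# Avalanches in the sense above would be excursions of r(t) above a baseline
print(round(float(r.mean()), 3), round(float(r.max()), 3))
```

In the subcritical finite-N regime studied in the paper, r(t) fluctuates about a small baseline, and the ephemeral synchronization episodes appear as excursions of this time series.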
NASA Technical Reports Server (NTRS)
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
Inner-outer predictive wall model for wall-bounded turbulence in hypersonic flow
NASA Astrophysics Data System (ADS)
Martin, M. Pino; Helm, Clara M.
2017-11-01
The inner-outer predictive wall model of Mathis et al. is modified for hypersonic turbulent boundary layers. The model is based on a modulation of the energized motions in the inner layer by large-scale momentum fluctuations in the logarithmic layer. Using direct numerical simulation (DNS) data of turbulent boundary layers with free-stream Mach numbers 3 to 10, it is shown that the variation of the fluid properties in compressible flows leads to large Reynolds number (Re) effects in the outer layer and facilitates the modulation observed in high-Re incompressible flows. The modulation effect by the large scales increases with increasing free-stream Mach number. The model is extended to include spanwise and wall-normal velocity fluctuations and is generalized through Morkovin scaling. Temperature fluctuations are modeled using an appropriate Reynolds analogy. Density fluctuations are calculated using an equation of state and a scaling with Mach number. DNS data are used to obtain the universal signal and parameters. The model is tested by using the universal signal to reproduce the flow conditions of Mach 3 and Mach 7 turbulent boundary layer DNS data and comparing turbulence statistics between the modeled flow and the DNS data. This work is supported by the Air Force Office of Scientific Research under Grant FA9550-17-1-0104.
ERIC Educational Resources Information Center
Johnson, LeAnne D.
2017-01-01
Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…
Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn
2011-01-01
Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snowcovered area (SCA) against normalized decreases in snow water equivalent (SWE), are a common approach to scale-up snowmelt models....
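A depletion curve of the kind described relates the fractional snow-covered area (SCA) to the normalized snow water equivalent (SWE). The sketch below uses a simple power-law shape purely as a placeholder; operational depletion curves are fit to basin observations, and neither the functional form nor the exponent here is taken from this study.

```python
def depletion_curve(swe_frac, shape=2.0):
    """Fractional snow-covered area as a function of normalized SWE
    (SWE / maximum SWE). A shape parameter > 1 makes SCA decline more
    slowly than SWE early in the melt, reflecting a heterogeneous
    snowpack. Both the form and the default exponent are illustrative."""
    if not 0.0 <= swe_frac <= 1.0:
        raise ValueError("normalized SWE must lie in [0, 1]")
    return swe_frac ** (1.0 / shape)

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, round(depletion_curve(f), 3))
```

Scaling up a point snowmelt model then amounts to weighting the melt flux by SCA as the normalized SWE declines through the season.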
Interactive, graphics-processing-unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capability. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford an interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described, based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
NASA Astrophysics Data System (ADS)
Garmay, Yu.; Shvetsov, A.; Karelov, D.; Lebedev, D.; Radulescu, A.; Petukhov, M.; Isaev-Ivanov, V.
2012-02-01
Based on X-ray crystallographic data available in the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, known to be a long helical filament, was approximated by a trimer simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of the C-domain is correlated with conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with the large-scale filament conformation. PCA of the trajectories in the main-chain dihedral angle space implies the co-existence of several different large-scale conformations of the modeled trimer. To clarify the relationship between independent domain orientation and large-scale filament conformation, we have analyzed independent domain motion and its implications for the filament geometry.
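PCA of a trajectory in atom coordinate space, as used above, amounts to diagonalizing the covariance of the mean-subtracted frames, conveniently done via an SVD. The sketch below runs on a synthetic "trajectory" with one planted collective motion, not the RecA data; the sizes and noise level are arbitrary.

```python
import numpy as np

def pca_modes(traj, n_modes=3):
    """PCA of an MD-style trajectory: traj has shape (frames, 3*atoms).
    Returns the explained-variance fractions of the leading modes and
    the corresponding mode vectors (rows of Vt)."""
    X = traj - traj.mean(axis=0)            # subtract the mean structure
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (s**2).sum()               # explained-variance fractions
    return var[:n_modes], Vt[:n_modes]

# Synthetic trajectory: one dominant collective motion plus small noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0 * np.pi, 500)
mode = rng.standard_normal(30)              # a fixed displacement pattern
traj = np.outer(np.sin(t), mode) + 0.1 * rng.standard_normal((500, 30))
var, vecs = pca_modes(traj)
print([round(float(v), 3) for v in var])
```

With a single planted motion, the first principal component captures almost all the variance, mirroring how the C-domain rotation dominates the monomer dynamics in the analysis above.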
Project Management Life Cycle Models to Improve Management in High-rise Construction
NASA Astrophysics Data System (ADS)
Burmistrov, Andrey; Siniavina, Maria; Iliashenko, Oksana
2018-03-01
The paper describes how project management in high-rise building construction can be improved through the use of various Project Management Life Cycle Models (PMLC models) based on traditional and agile project management approaches. Moreover, the paper describes how splitting a whole large-scale project into a "project chain" makes large-scale building projects more manageable and increases the efficiency of the activities of all participants in such projects.
NASA Astrophysics Data System (ADS)
Best, J.
2004-05-01
The origin and scaling of large-scale coherent flow structures have been of central interest in furthering understanding of the nature of turbulent boundary layers, and recent work has shown the presence of large-scale turbulent flow structures that may extend through the whole flow depth. Such structures may dominate the entrainment of bedload sediment and the advection of fine sediment in suspension. However, we still know remarkably little about the interactions between the dynamics of coherent flow structures and sediment transport, and their implications for ecosystem dynamics. This paper will discuss the first results of two-phase particle imaging velocimetry (PIV) that has been used to visualize large-scale turbulent flow structures moving over a flat bed in a water channel, and the motion of sand particles within these flows. The talk will outline the methodology, involving the fluorescent tagging of sediment and its discrimination from the fluid phase, and show results that illustrate the key role of these large-scale structures in the transport of sediment. Additionally, the presence of these structures will be discussed in relation to the origin of vorticity within flat-bed boundary layers and recent models that envisage these large-scale motions as being linked to whole-flow-field structures. Discussion will focus on whether these recent models simply reflect the organization of turbulent boundary layer structure and vortex packets, some of which are amply visualised at the laminar-turbulent transition.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis, and is a powerful tool in many visual applications. Some recent work has shown that incorporating properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of the dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.
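The hierarchical-dictionary idea above — each node of the category tree owns some atoms, and a lower node inherits its parent's dictionary so that leaves see multi-scale atoms — can be sketched in a few lines. This is a toy illustration, not the authors' method: the two-level hierarchy, the random atoms, and the use of plain least squares in place of true sparse coding are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                                   # feature dimension

# Toy category hierarchy: root -> {animals, vehicles}. Each node owns a
# few atoms; a child node *inherits* its parent's dictionary, so lower
# nodes are described with multi-scale atoms.
def make_atoms(k):
    A = rng.standard_normal((d, k))
    return A / np.linalg.norm(A, axis=0)   # unit-norm atoms

root = make_atoms(4)
animals = np.hstack([root, make_atoms(4)])     # inherits the root atoms
vehicles = np.hstack([root, make_atoms(4)])    # inherits the root atoms

def recon_error(x, D):
    """Code x on dictionary D (plain least squares stands in for sparse
    coding in this sketch) and return the reconstruction error."""
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coef)

# Classify a sample generated from the 'animals' dictionary by comparing
# reconstruction errors at the two child nodes.
x = animals @ rng.standard_normal(animals.shape[1])
label = "animals" if recon_error(x, animals) < recon_error(x, vehicles) else "vehicles"
print(label)
```

The inheritance step (`np.hstack([root, ...])`) is the point of the sketch: shared coarse-scale atoms plus node-specific fine-scale atoms.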
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, E. P.; Rosner, R., E-mail: eph2001@columbia.edu
In this paper, we provide support for the Rayleigh-Taylor-(RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernovae explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT unstable model flame to distinguish between the two main types of subgrid models (RT or turbulence dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model. Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, in which an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by a standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also to other areas of research that lead to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
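The abstract gives no formulas, but the physics projection it generalizes (Löwdin/Feshbach partitioning: project out the high-energy block and solve a small, energy-dependent effective model self-consistently) can be sketched numerically. A minimal sketch for a symmetric matrix with a dominant diagonal; the block split, the fixed-point iteration, and the test matrix are illustrative assumptions, not details from the paper:

```python
import numpy as np

def downfolded_lowest_eig(H, m=10, tol=1e-10, max_iter=100):
    """Lowest eigenvalue of a symmetric H via Loewdin partitioning:
    project out the high-energy block Q and solve the small effective
    model on the low-energy block P self-consistently in the energy E."""
    order = np.argsort(np.diag(H))          # low-energy states first
    P, Q = order[:m], order[m:]
    Hpp = H[np.ix_(P, P)]
    Hpq = H[np.ix_(P, Q)]
    Hqq = H[np.ix_(Q, Q)]
    E = np.linalg.eigvalsh(Hpp)[0]          # initial guess: bare P block
    for _ in range(max_iter):
        # Energy-dependent effective model: Hpp + Hpq (E - Hqq)^(-1) Hqp
        Heff = Hpp + Hpq @ np.linalg.solve(E * np.eye(len(Q)) - Hqq, Hpq.T)
        E_new = np.linalg.eigvalsh(Heff)[0]
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

# Test matrix: dominant diagonal plus weak symmetric coupling
rng = np.random.default_rng(1)
n = 120
noise = 0.05 * rng.standard_normal((n, n))
H = np.diag(np.arange(n, dtype=float)) + (noise + noise.T) / 2
print(abs(downfolded_lowest_eig(H) - np.linalg.eigvalsh(H)[0]))
```

At the fixed point the downfolded eigenvalue is exact, which matches the abstract's claim that the numerical version converges in principle to the exact eigenvalues; the weak off-diagonal coupling is what makes the iteration converge quickly here.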
A numerical study of the string function using a primitive equation ocean model
NASA Astrophysics Data System (ADS)
Tyler, R. H.; Käse, R.
We use results from a primitive-equation ocean numerical model (SCRUM) to test a theoretical 'string function' formulation put forward by Tyler and Käse in another article in this issue. The string function acts as a stream function for the large-scale potential energy flow under the combined beta and topographic effects. The model results verify that large-scale anomalies propagate along the string function contours with a speed correctly given by the cross-string gradient. For anomalies having a scale similar to the Rossby radius, material rates of change in the layer mass following the string velocity are balanced by material rates of change in relative vorticity following the flow velocity. It is shown that large-amplitude anomalies can be generated when wind stress is resonant with the string function configuration.
A 100,000 Scale Factor Radar Range.
Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser
2017-12-19
The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000 and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
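The scaling logic can be made concrete with a few lines of arithmetic: shrinking every length by the scale factor s means the illuminating wavelength must shrink by s as well, and the radar cross section, being an area, shrinks by s squared. Only the factor of 100,000 and the millimetre-scale ship model come from the article; the full-scale radar frequency, ship length, and RCS below are assumed illustrative values.

```python
# Scale-model radar arithmetic: all lengths shrink by s, so the
# wavelength must shrink by s too, and the radar cross section
# (an area) shrinks by s**2.
s = 100_000                      # scale factor from the article

c = 3.0e8                        # speed of light, m/s
f_radar = 3.0e9                  # assumed full-scale radar frequency (S band), Hz
lam_full = c / f_radar           # 0.1 m wavelength at full scale
lam_model = lam_full / s         # 1e-6 m = 1 micron, i.e. near infrared

ship_length = 100.0              # assumed full-scale ship length, m
model_length = ship_length / s   # 1e-3 m: a 1 mm model, as in the article

rcs_full = 1.0e4                 # assumed full-scale RCS, m^2
rcs_model = rcs_full / s**2      # 1e-6 m^2 measured on the tabletop model

print(lam_model, model_length, rcs_model)
```

The wavelength result explains the choice of the near infrared: a microwave radar band divided by 100,000 lands at roughly a micron.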
Pesaran, Bijan; Vinck, Martin; Einevoll, Gaute T; Sirota, Anton; Fries, Pascal; Siegel, Markus; Truccolo, Wilson; Schroeder, Charles E; Srinivasan, Ramesh
2018-06-25
New technologies to record electrical activity from the brain on a massive scale offer tremendous opportunities for discovery. Electrical measurements of large-scale brain dynamics, termed field potentials, are especially important to understanding and treating the human brain. Here, our goal is to provide best practices on how field potential recordings (electroencephalograms, magnetoencephalograms, electrocorticograms and local field potentials) can be analyzed to identify large-scale brain dynamics, and to highlight critical issues and limitations of interpretation in current work. We focus our discussion of analyses around the broad themes of activation, correlation, communication and coding. We provide recommendations for interpreting the data using forward and inverse models. The forward model describes how field potentials are generated by the activity of populations of neurons. The inverse model describes how to infer the activity of populations of neurons from field potential recordings. A recurring theme is the challenge of understanding how field potentials reflect neuronal population activity given the complexity of the underlying brain systems.
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates, which allows the statistical variability of the experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
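As a point of reference for the Fickian baseline the paper benchmarks against, the classical one-dimensional advection-dispersion breakthrough curve for a continuous step injection has a closed form (the leading term of the Ogata-Banks solution). A minimal sketch with assumed column parameters, not the paper's experimental values:

```python
import math

def ade_breakthrough(t, x, v, D):
    """C/C0 at distance x and time t for a continuous step injection in a
    semi-infinite column: leading term of the Ogata-Banks solution to the
    Fickian advection-dispersion equation."""
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

# Example: 1 m column, pore velocity 1e-4 m/s, dispersion coeff. 1e-6 m^2/s
for t in (3600.0, 10000.0, 20000.0):
    print(t, ade_breakthrough(t, x=1.0, v=1e-4, D=1e-6))
# At t = x/v = 1e4 s the front is halfway through: C/C0 = 0.5
```

Note the Fickian artifact the authors criticize is visible in the formula: C/C0 is nonzero for any t > 0, however small, i.e. the solute front formally propagates at infinite speed, unlike the ballistic small-time behaviour of their purely advective model.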
The use of large scale datasets for understanding traffic network state.
DOT National Transportation Integrated Search
2013-09-01
The goal of this proposal is to develop novel modeling techniques to infer individual activity patterns from the large scale cell phone : datasets and taxi data from NYC. As such this research offers a paradigm shift from traditional transportation m...
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
A study on large-scale nudging effects in regional climate model simulation
NASA Astrophysics Data System (ADS)
Yhang, Yoo-Bin; Hong, Song-You
2011-05-01
The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data are used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia centered on the Korean peninsula. The RSM with a variant of spectral nudging, the scale selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated in detail to demonstrate the impact of SSBC on precipitation. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral, without a discernible advantage. Although errors in the large-scale circulation for both 2000 and 2004 are reduced by using the SSBC method, the impact on simulated precipitation is found to be negative in the summer of 2000 and positive in the summer of 2004. One possible reason for the differing effects is that precipitation in the summer of 2004 is characterized by strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in the modeled atmosphere.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are at present the primary tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well, and those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based and therefore does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods, with uncertainty estimation.
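The scale-extrapolation idea — estimate the large basin's discharge from the specific discharge of small gauged sub-basins, and use many alternative sub-basin sets to bracket the uncertainty — can be sketched with synthetic numbers. Everything below (areas, specific discharges, set size, number of sets) is an illustrative assumption, not data from the Baltic study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: specific discharge (mm/yr) for many gauged sub-basins,
# each covering roughly 2-4% of the large basin's area.
n_sub = 40
area = rng.uniform(2e3, 4e3, n_sub)            # sub-basin areas, km^2
spec_q = rng.normal(250.0, 30.0, n_sub)        # specific discharge, mm/yr

basin_area = 1.0e5                             # large basin area, km^2

# Scale-extrapolation: each randomly chosen set of sub-basins yields one
# purely data-based estimate of the basin discharge; the spread across
# sets brackets the uncertainty without any hydrological model.
estimates = []
for _ in range(200):
    pick = rng.choice(n_sub, size=5, replace=False)
    spec = np.average(spec_q[pick], weights=area[pick])   # mm/yr
    estimates.append(spec * basin_area)                    # mm/yr * km^2
estimates = np.array(estimates)

print("mean estimate:", estimates.mean())
print("relative spread:", estimates.std() / estimates.mean())
```

The relative spread across the alternative sets plays the role of the uncertainty estimate the abstract describes.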
2018-01-01
We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
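A minimal simulation illustrates the kind of decomposition the abstract reports: an observed score is the essay's true quality plus a systematic rater bias plus rater unreliability (noise). The generative model and all variances below are assumptions chosen for illustration, not the paper's fitted estimates; with these numbers the median rater-induced impact happens to come out near ±2 points on a 50-point scale:

```python
import numpy as np

rng = np.random.default_rng(42)
n_essays, n_raters = 2000, 50

# Hypothetical generative model: observed score = quality + rater bias + noise
quality = rng.normal(30.0, 6.0, n_essays)     # true score on a 50-pt scale
bias = rng.normal(0.0, 1.0, n_raters)         # systematic per-rater bias
rater = rng.integers(0, n_raters, n_essays)   # one rater per essay
noise = rng.normal(0.0, 2.5, n_essays)        # rater unreliability
score = quality + bias[rater] + noise

impact = score - quality                      # rater-induced error
print("median |impact|:", np.median(np.abs(impact)))
print("90th percentile |impact|:", np.percentile(np.abs(impact), 90))
# With these assumed variances, most of the impact comes from the
# unreliability term (sd 2.5) rather than the bias term (sd 1.0),
# mirroring the abstract's conclusion.
```

In the paper itself these components are estimated from data with a Bayesian hierarchical model and posterior simulation; this sketch only shows the forward direction of such a model.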
A novel representation of groundwater dynamics in large-scale land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan
2017-04-01
Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSMs) generally consider isolated soil columns with a free-drainage lower boundary condition for simulating hydrology, mainly because incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet these simplifications neglect the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach for representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental scale.
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on representing the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the model dynamics to transport emissions at high concentrations in plume form. Results from model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation.
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
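The core mechanism described above — a fuel tracer that decays with a characteristic plume lifetime while handing converted emissions over to the grid-scale chemistry — can be sketched as a simple decay integration. The lifetime, time step, and conversion fraction are illustrative assumptions, not the paper's fitted parameters:

```python
import math

# Minimal sketch of the plume parameterization idea: emissions are carried
# by a "fuel tracer" that decays with a characteristic plume lifetime tau;
# as it decays, the emitted NOx is handed over to the grid-scale chemistry
# with an assumed surviving fraction. All numbers are illustrative.
tau = 3.0 * 3600.0          # plume lifetime, s (assumed)
dt = 600.0                  # model time step, s
beta = 0.7                  # fraction of NOx surviving plume chemistry (assumed)

fuel = 1.0                  # normalized fuel-tracer burden at emission time
released = 0.0              # NOx already transferred to the large-scale model
for step in range(36):      # integrate 6 hours
    dF = fuel * (1.0 - math.exp(-dt / tau))   # exact decay over one step
    fuel -= dF
    released += beta * dF   # the remainder was converted inside the plume
print(f"fuel tracer left: {fuel:.3f}, NOx released to grid: {released:.3f}")
```

The key property mirrored here is mass accounting: the tracer burden plus the released fraction evolve consistently, so the grid-scale model receives emissions gradually, in plume form, rather than instantaneously diluted.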
ERIC Educational Resources Information Center
Johnson, Matthew S.; Jenkins, Frank
2005-01-01
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; climate and soil data are therefore often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with large variation across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. By illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
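One plausible reading of the error metric is the mean absolute deviation of aggregated-input yields from the high-resolution reference simulation, normalized by the mean reference yield. A minimal sketch under that assumption (the paper may define rMAE differently, and all yield numbers below are invented):

```python
import numpy as np

def rmae(y_aggregated, y_reference):
    """Relative mean absolute error of yields simulated with aggregated
    input data against the high-resolution reference simulation
    (assumed definition: mean |error| / mean reference yield)."""
    y_aggregated = np.asarray(y_aggregated, float)
    y_reference = np.asarray(y_reference, float)
    return np.mean(np.abs(y_aggregated - y_reference)) / np.mean(y_reference)

# Hypothetical yields (t/ha) on a few grid cells: 100 km inputs vs 1 km
y_ref = [7.2, 6.8, 8.1, 5.9, 7.5]
y_agg = [6.5, 7.4, 7.3, 6.6, 7.9]
print(f"rMAE = {rmae(y_agg, y_ref):.1%}")
```

A value like this would then be compared against the inter-annual and inter-model variability in yields, as the abstract describes.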
An increase in aerosol burden due to the land-sea warming contrast
NASA Astrophysics Data System (ADS)
Hassan, T.; Allen, R.; Randles, C. A.
2017-12-01
Climate models simulate an increase in most aerosol species in response to warming, particularly over the tropics and Northern Hemisphere midlatitudes. This increase in aerosol burden is related to a decrease in wet removal, primarily due to reduced large-scale precipitation. Here, we show that the increase in aerosol burden, and the decrease in large-scale precipitation, is related to a robust climate change phenomenon: the land-sea warming contrast. Idealized simulations with two state-of-the-art climate models, the National Center for Atmospheric Research Community Atmosphere Model version 5 (NCAR CAM5) and the Geophysical Fluid Dynamics Laboratory Atmospheric Model 3 (GFDL AM3), show that muting the land-sea warming contrast negates the increase in aerosol burden under warming. This is related to smaller decreases in near-surface relative humidity over land and, in turn, smaller decreases in large-scale precipitation over land, especially in the NH midlatitudes. Furthermore, additional idealized simulations with an enhanced land-sea warming contrast lead to the opposite result: larger decreases in relative humidity over land, larger decreases in large-scale precipitation, and larger increases in aerosol burden. Our results, which relate the increase in aerosol burden to the robust climate projection of enhanced land warming, add confidence that a warmer world will be associated with a larger aerosol burden.
NASA Astrophysics Data System (ADS)
Kashid, Satishkumar S.; Maity, Rajib
2012-08-01
Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climate system. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and the tropical Indian Ocean (EQUINOO) are established influences on the Indian Summer Monsoon Rainfall. The information in these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been made for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India' defined by the Indian Institute of Tropical Meteorology. The 'Genetic Programming' (GP) artificial-intelligence tool has been employed for modeling this problem. The Genetic Programming approach is found to capture the complex relationship between monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices ENSO and EQUINOO. The findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall, with a correlation coefficient as high as 0.866, which is attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall for India as one unit, and for the five homogeneous monsoon regions, based on the ENSO and EQUINOO indices of March, April and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower correlation coefficient values for the different 'homogeneous monsoon regions'.
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. 
Interested groups will be invited to join, and preliminary results will be presented.
USDA-ARS?s Scientific Manuscript database
Soil hydraulic properties can be retrieved from physical sampling of soil, via surveys, but this is time consuming and only as accurate as the scale of the sample. Remote sensing provides an opportunity to get pertinent soil properties at large scales, which is very useful for large scale modeling....
De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul
2015-01-01
This article describes a four virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike(TM) viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m(2) filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. All virus log reduction values were between ≥3.66 and ≥5.60. The use of TrueSpike(TM) at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance from the PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at the downstream stages in the process is expected to be and is typically low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small version of the larger manufacturing size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process. 
However, the virus preparation used to spike the feed material often contains impurities that contribute adversely to virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and put into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is the unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage. © PDA, Inc. 2015.
Large-angle cosmic microwave background anisotropies in an open universe
NASA Technical Reports Server (NTRS)
Kamionkowski, Marc; Spergel, David N.
1994-01-01
If the universe is open, scales larger than the curvature scale may be probed by observation of large-angle fluctuations in the cosmic microwave background (CMB). We consider primordial adiabatic perturbations and discuss power spectra that are power laws in volume, wavelength, and eigenvalue of the Laplace operator. Such spectra may have arisen if, for example, the universe underwent a period of 'frustrated' inflation. The resulting large-angle anisotropies of the CMB are computed. The amplitude generally increases as Omega is decreased but decreases as h is increased. Interestingly enough, for all three Ansaetze, anisotropies on angular scales larger than the curvature scale are suppressed relative to the anisotropies on scales smaller than the curvature scale, but cosmic variance makes discrimination between various models difficult. Models with 0.2 approximately less than Omega h approximately less than 0.3 appear compatible with CMB fluctuations detected by the Cosmic Background Explorer (COBE) satellite and the Tenerife experiment and with the amplitude and spectrum of fluctuations of galaxy counts in the APM, CfA, and 1.2 Jy IRAS surveys. COBE normalization for these models yields sigma(sub 8) approximately = 0.5 - 0.7. Models with smaller values of Omega h, when normalized to COBE, require bias factors in excess of 2 to be compatible with the observed galaxy counts on the 8/h Mpc scale. Requiring that the age of the universe exceed 10 Gyr implies that Omega approximately greater than 0.25. Apart from the last-scattering term in the Sachs-Wolfe formula, large-angle anisotropies come primarily from the decay of potential fluctuations at z approximately less than 1/Omega. Thus, if the universe is open, COBE has been detecting temperature fluctuations produced at moderate redshift rather than at z approximately 1300.
NASA Astrophysics Data System (ADS)
Tang, S.; Xie, S.; Tang, Q.; Zhang, Y.
2017-12-01
Two types of instruments, the eddy correlation flux measurement system (ECOR) and the energy balance Bowen ratio system (EBBR), are used at the Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site to measure surface latent and sensible fluxes. ECOR and EBBR typically sample different land surface types, and the domain-mean surface fluxes derived from ECOR and EBBR are not always consistent. The uncertainties of the surface fluxes will have impacts on the derived large-scale forcing data and further affect the simulations of single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulation models (LES), especially for the shallow-cumulus clouds which are mainly driven by surface forcing. This study aims to quantify the uncertainties of the large-scale forcing caused by surface turbulence flux measurements and investigate the impacts on cloud simulations using long-term observations from the ARM SGP site.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
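As a rough illustration of the design criterion described above, the following sketch (entirely synthetic numbers, not the authors' code or groundwater model) evaluates the maximal-information criterion, the sum of squared sensitivities of observed heads to the unknown pumping parameters, for every candidate set of wells. On a small test case the exhaustive combinatorial search is feasible, which is exactly the baseline the GA is checked against in the abstract.

```python
# Illustrative sketch: selecting k observation wells that maximize the
# sum of squared sensitivities. The sensitivity matrix J is a random
# stand-in for what would come from the (POD-reduced) groundwater model.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 8   # candidate well locations (hypothetical)
n_params = 3       # unknown pumping parameters (hypothetical)
k = 3              # number of wells to select

# J[i, j] = d(head at candidate well i) / d(pumping parameter j)
J = rng.normal(size=(n_candidates, n_params))

def criterion(wells):
    """Maximal-information criterion: sum of squared sensitivities."""
    return float(np.sum(J[list(wells), :] ** 2))

# Exhaustive combinatorial search -- only feasible for small test cases,
# which motivates the GA (plus model reduction) for realistic problems.
best = max(itertools.combinations(range(n_candidates), k), key=criterion)
print("best wells:", best)
```

The integer, non-convex character of the problem is visible here: the search space is the set of k-subsets, which grows combinatorially with the number of candidate locations.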
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1997-01-01
This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plane asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeroacoustics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Wang, Minghuai; Ghan, Steven J.
Aerosol-cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < -25 hPa/d) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is as large as that in stratocumulus regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm/d) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes than that globally, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
NASA Technical Reports Server (NTRS)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.;
2017-01-01
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.
Is There Any Real Observational Contradiction to the LCDM Model?
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe
2011-01-01
In this talk, I question two apparent observational contradictions to LCDM cosmology: the lack of large-angle correlations in the cosmic microwave background, and the very large bulk flow of galaxy peculiar velocities. On super-horizon scales, Copi et al. (2009) have argued that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, LCDM cosmology. I argue that this apparent discrepancy is due to a sub-optimal estimator of the low-l multipoles and to a posteriori statistics, which exaggerate the statistical significance. On galactic scales, Watkins et al. (2008) show that the very large bulk flow prefers a very large density fluctuation, which seems to contradict the LCDM model. I show that these results are due to an underestimation of the small-scale velocity dispersion and an arbitrary way of combining catalogues. With an appropriate way of combining catalogue data, and treating the small-scale velocity dispersion as a free parameter, the peculiar velocity field provides unconvincing evidence against LCDM cosmology.
NASA Technical Reports Server (NTRS)
Brondum, D. C.; Bennett, J. C.
1986-01-01
The existence of large-scale coherent structures in turbulent shear flows has been well documented. Discrepancies between experimental and computational data suggest a necessity to understand the roles they play in mass and momentum transport. Using conditional sampling and averaging on coincident two-component velocity and concentration-velocity experimental data for swirling and nonswirling coaxial jets, triggers for identifying the structures were examined. Concentration fluctuation was found to be an adequate trigger or indicator for the concentration-velocity data, but no suitable detector was located for the two-component velocity data. The large-scale structures are found in the region where the largest discrepancies exist between model and experiment. The traditional gradient transport model does not fit in this region as a result of these structures. The large-scale motion was found to be responsible for a large percentage of the transport and to travel downstream at approximately the mean velocity of the overall flow in the axial direction. The radial mean velocity of the structures was found to be substantially greater than that of the overall flow.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
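The core loop of a scatter-search metaheuristic can be sketched in a few lines. The following toy example is entirely synthetic and is not saCeSS (which adds asynchronous parallel cooperation, fine-grained parallelism, and self-tuning); it estimates one rate constant of a nonlinear dynamic model from noisy data by maintaining a small reference set of candidate parameters and repeatedly combining pairs.

```python
# Toy scatter-search sketch: estimate k in dx/dt = -k * x**2 from noisy
# observations of the analytic solution x(t) = x0 / (1 + k * x0 * t).
import numpy as np

rng = np.random.default_rng(1)

k_true = 0.7
x0 = 1.0
t = np.linspace(0.0, 5.0, 20)
data = x0 / (1.0 + k_true * x0 * t) + rng.normal(0.0, 0.01, t.size)

def cost(k):
    """Sum-of-squares misfit between model and data."""
    model = x0 / (1.0 + k * x0 * t)
    return float(np.sum((model - data) ** 2))

# Reference set: the best few members of a random initial population.
refset = sorted(rng.uniform(0.0, 5.0, 20), key=cost)[:5]
for _ in range(50):
    a, b = rng.choice(refset, 2, replace=False)
    child = 0.5 * (a + b) + rng.normal(0.0, 0.05)    # combine + perturb
    refset = sorted(refset + [child], key=cost)[:5]  # keep the best

k_est = refset[0]
print("estimated k:", round(float(k_est), 3))
```

In a realistic setting the analytic solution would be replaced by a numerical ODE integration, which is what makes each cost evaluation, and hence the whole search, expensive enough to justify parallelization.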
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
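The trust-region subproblem the abstract refers to can be written, in standard notation (our reconstruction, not the paper's exact formulation):

```latex
\min_{d \in \mathbb{R}^n} \; q_k(d) \;=\; g_k^{\top} d \;+\; \tfrac{1}{2}\, d^{\top} B_k d
\qquad \text{subject to} \qquad \|d\| \le \Delta_k ,
```

where, for the merit function $f(x) = \tfrac{1}{2}\|F(x)\|^2$ of the nonlinear system $F(x) = 0$, $g_k = \nabla f(x_k)$, $\Delta_k$ is the trust-region radius, and $B_k$ is the L-M-BFGS approximation, stored as a handful of vector pairs rather than an explicit $n \times n$ matrix, which is what keeps the subproblem tractable for large-scale problems.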
NASA Astrophysics Data System (ADS)
Guervilly, C.; Cardin, P.
2017-12-01
Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ≈ 10^4 and Ro ≈ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ≈ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.
LES with and without explicit filtering: comparison and assessment of various models
NASA Astrophysics Data System (ADS)
Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele
2000-11-01
The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model (LAM) simulation. The LAM is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
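The Lyapunov-exponent measurement mentioned above, the mean exponential rate at which nearby trajectories diverge, can be illustrated on a one-dimensional chaotic map, used here purely as a stand-in for the quasi-geostrophic model (which is far too large to reproduce):

```python
# Lyapunov exponent of the logistic map x -> r x (1 - x), estimated as
# the time-average of the log of the local stretching factor |f'(x)|.
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def lyapunov(x0=0.2, r=4.0, n=50000):
    x, total = x0, 0.0
    for _ in range(n):
        # derivative of r x (1 - x) is r (1 - 2x): local stretching
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic(x, r)
    return total / n

lam = lyapunov()
print("estimated Lyapunov exponent:", round(lam, 3))  # theory for r = 4: ln 2
```

A positive exponent means initial-condition errors grow exponentially; its inverse sets the predictability time that the abstract compares the optimal nudging time against.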
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
NASA Astrophysics Data System (ADS)
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
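As a much-simplified stand-in for the BHM (no hierarchy, no change-of-support handling, all numbers synthetic), the following sketch shows the kriging-style Gaussian-process prediction step that the paper describes extending: predicting soil moisture on a fine grid from a few point observations under an assumed covariance model.

```python
# Simple-kriging / GP prediction with a squared-exponential covariance.
# Locations, moisture values, and covariance parameters are invented.
import numpy as np

obs_x = np.array([0.0, 1.0, 2.5, 4.0])       # observation locations (km)
obs_y = np.array([0.30, 0.28, 0.22, 0.25])   # volumetric soil moisture
noise = 1e-4                                  # measurement-error variance
mu = 0.26                                     # assumed prior mean

def cov(a, b, ell=1.0, s2=0.01):
    """Squared-exponential covariance between location vectors a and b."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

grid = np.linspace(0.0, 4.0, 9)              # fine prediction grid
K = cov(obs_x, obs_x) + noise * np.eye(obs_x.size)
Ks = cov(grid, obs_x)
pred = mu + Ks @ np.linalg.solve(K, obs_y - mu)
print(np.round(pred, 3))
```

The computational bottleneck is the linear solve against K, which scales cubically with the number of observations; the basis-function and likelihood approximations mentioned in the abstract exist precisely to tame that cost for large datasets.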
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Validation of the RAGE Hydrocode for Impacts into Volatile-Rich Targets
NASA Astrophysics Data System (ADS)
Plesko, C. S.; Asphaug, E.; Coker, R. F.; Wohletz, K. H.; Korycansky, D. G.; Gisler, G. R.
2007-12-01
In preparation for a detailed study of large-scale impacts into the Martian surface, we have validated the RAGE hydrocode (Gittings et al., in press, CSD) against a suite of experiments and statistical models. We present comparisons of hydrocode models to centimeter-scale gas gun impacts (Nakazawa et al. 2002), an underground nuclear test (Perret, 1971), and crater scaling laws (Holsapple 1993, O'Keefe and Ahrens 1993). We have also conducted model convergence and uncertainty analyses which will be presented. Results to date are encouraging for our current model goals, and indicate areas where the hydrocode may be extended in the future. This validation work is focused on questions related to the specific problem of large impacts into volatile-rich targets. The overall goal of this effort is to be able to realistically model large-scale Noachian, and possibly post- Noachian, impacts on Mars not so much to model the crater morphology as to understand the evolution of target volatiles in the post-impact regime, to explore how large craters might set the stage for post-impact hydro- geologic evolution both locally (in the crater subsurface) and globally, due to the redistribution of volatiles from the surface and subsurface into the atmosphere. This work is performed under the auspices of IGPP and the DOE at LANL under contracts W-7405-ENG-36 and DE-AC52-06NA25396. Effort by DK and EA is sponsored by NASA's Mars Fundamental Research Program.
Regional atmospheric models simulate their pertinent processes over a limited portion of the global atmosphere. This portion of the atmosphere can be a large fraction, as in the case of continental-scale modeling, or small fraction, as in the case of urban-scale modeling. Regio...
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D ~ O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are therefore necessary and essential for policy review and future planning. This study investigates the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time-series models with intervention effects: Model (1) assumes no correlation within or across variables; Model (2) assumes no correlation across variables but allows correlations within each variable across different sites; and Model (3) allows all possible correlations among variables (i.e., an unrestricted model). The results suggest that the unrestricted SUR model is the most reliable, consistently having the smallest variation in the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration.
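The three SUR variants above differ only in which entries of the cross-equation error covariance are constrained. As an illustrative sketch (not the authors' code), the unrestricted Model (3) can be estimated by feasible GLS: equation-by-equation OLS, a residual covariance estimate, then GLS on the stacked system. All data below are synthetic; the function name `sur_fgls` and the step-intervention dummy are hypothetical.

```python
import numpy as np

def sur_fgls(X_list, y_list):
    """Feasible GLS for a seemingly unrelated regression (SUR) system.

    X_list, y_list: per-equation design matrices and responses, all with
    the same number of observations T. Returns the stacked coefficient
    vector and the estimated cross-equation residual covariance."""
    T = len(y_list[0])
    # Stage 1: equation-by-equation OLS to obtain residuals.
    resid = []
    for X, y in zip(X_list, y_list):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        resid.append(y - X @ b)
    Sigma = np.cov(np.column_stack(resid), rowvar=False, bias=True)
    # Stage 2: GLS on the stacked system; Cov(errors) = Sigma kron I_T.
    k_total = sum(X.shape[1] for X in X_list)
    Xs = np.zeros((T * len(X_list), k_total))
    col = 0
    for i, X in enumerate(X_list):
        Xs[i * T:(i + 1) * T, col:col + X.shape[1]] = X
        col += X.shape[1]
    ys = np.concatenate(y_list)
    W = np.kron(np.linalg.inv(Sigma), np.eye(T))   # inverse error covariance
    beta = np.linalg.solve(Xs.T @ W @ Xs, Xs.T @ W @ ys)
    return beta, Sigma

# Synthetic two-site example with a step intervention at t = 100 and
# errors correlated across the two response series.
rng = np.random.default_rng(0)
T = 200
step = (np.arange(T) >= 100).astype(float)        # intervention dummy
shocks = rng.normal(size=(T, 2))
errors = shocks @ np.array([[1.0, 0.6], [0.0, 0.8]])
X = np.column_stack([np.ones(T), step])
y1 = X @ np.array([1.0, -0.5]) + errors[:, 0]
y2 = X @ np.array([2.0, -1.0]) + errors[:, 1]
beta, Sigma = sur_fgls([X, X], [y1, y2])
```

Restricting `Sigma` to a diagonal before the GLS step recovers the "no correlation across variables" variants; with different regressor sets per equation, the unrestricted GLS step gains efficiency over equation-by-equation OLS.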
The X-ray emission mechanism of large scale powerful quasar jets: Fermi rules out IC/CMB for 3C 273.
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen T.
2013-12-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off the cosmic microwave background photons (IC/CMB) and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the IC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the γ-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to δ < 9, assuming equipartition fields, and possibly as low as δ < 5 assuming no major deceleration of the jet from knots A through D1.
1982-08-01
AD-AIA 700, Florida Univ., Gainesville, Dept. of Environmental Engineering. Large-Scale Operations Management Test of Use of the White Amur... The study covers the Lake Conway ecosystem and is part of the Large-Scale Operations Management Test (LSOMT) of the Aquatic Plant Control Research Program (APCRP) at the WES. The report should be cited as follows: Blancher, E. C., II, and Fellows, C. R. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control..."
Scale-Similar Models for Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Sarghini, F.
1999-01-01
Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, such nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting, or model nudging, but these approaches remain relatively costly. We present here the development and results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows bridging the spatial scales online from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows a significant feedback from the refined domain to the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for studying the hot topic of megacities within their continental environment.
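The "zooming" idea — a single grid whose horizontal spacing tightens from ∼50 km to ∼1 km near a city — can be sketched in one dimension. This is a generic illustration, not CHIMERE's actual stretching function; the Gaussian refinement shape and all parameter values are assumptions.

```python
import numpy as np

def zoomed_grid(length, x_city, zoom_width, dx_fine, dx_coarse):
    """Build a 1-D stretched ("zoomed") grid whose spacing tightens
    smoothly from dx_coarse far away to dx_fine near x_city.
    A Gaussian refinement shape is one arbitrary choice."""
    nodes = [0.0]
    while nodes[-1] < length:
        x = nodes[-1]
        dx = dx_coarse - (dx_coarse - dx_fine) * np.exp(-((x - x_city) / zoom_width) ** 2)
        nodes.append(x + dx)
    return np.array(nodes)

# 3000 km "continental" domain with a zoom around a city at 1500 km:
# ~1 km cells in the zoom relaxing to ~50 km cells far away (all km).
grid = zoomed_grid(3000.0, 1500.0, 100.0, 1.0, 50.0)
spacing = np.diff(grid)
```

The point of such a grid is that fine and coarse regions live in one simulation, so — unlike one-way nesting — the zoomed area can feed back on the large-scale solution.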
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global- and continental-scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution or water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting.
This paper introduces the newly developed river system model, describing its conceptual hydrological framework, the methods used to represent different hydrological processes, and the results and evaluation of model performance. The operational implementation of the model for water accounting is discussed.
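The auto-calibration step — minimising a user-defined objective over model parameters with a global optimiser — can be sketched on a toy one-parameter storage-routing model. SciPy's differential evolution stands in here for the Shuffled Complex Evolution optimiser used in the actual system; the linear-reservoir model and all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def linear_reservoir(k, inflow, dt=1.0):
    """Toy storage-routing model: dS/dt = inflow - S/k, outflow = S/k."""
    storage, outflow = 0.0, []
    for q_in in inflow:
        storage += dt * (q_in - storage / k)
        outflow.append(storage / k)
    return np.array(outflow)

def objective(params, inflow, q_obs):
    """1 - NSE (Nash-Sutcliffe efficiency); minimising it maximises NSE."""
    q_sim = linear_reservoir(params[0], inflow)
    return np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Synthetic daily inflow and "observed" outflow generated with k = 3.5;
# the calibration should recover the storage constant.
rng = np.random.default_rng(2)
inflow = np.clip(rng.normal(5.0, 2.0, 365), 0.0, None)
q_obs = linear_reservoir(3.5, inflow)
result = differential_evolution(objective, bounds=[(0.6, 10.0)],
                                args=(inflow, q_obs), seed=0)
k_hat = result.x[0]
```

In the real system the parameter vector spans many reaches and the objective is user-defined, but the calibrate-against-observed-flow loop has this shape.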
Anisotropic modulus stabilisation: strings at LHC scales with micron-sized extra dimensions
NASA Astrophysics Data System (ADS)
Cicoli, M.; Burgess, C. P.; Quevedo, F.
2011-10-01
We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale, in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exists having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3 or T^4 fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are likely to be present on K3- or T^4-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions.
For several scenarios we identify the low-energy spectrum and briefly discuss some of their astrophysical, cosmological and phenomenological implications.
Flagellum synchronization inhibits large-scale hydrodynamic instabilities in sperm suspensions
NASA Astrophysics Data System (ADS)
Schöller, Simon F.; Keaveny, Eric E.
2016-11-01
Sperm in suspension can exhibit large-scale collective motion and form coherent structures. Our picture of such coherent motion is largely based on reduced models that treat the swimmers as self-locomoting rigid bodies that interact via steady dipolar flow fields. Swimming sperm, however, have many more degrees of freedom due to elasticity, have a more exotic shape, and generate spatially-complex, time-dependent flow fields. While these complexities are known to lead to phenomena such as flagellum synchronization and attraction, how these effects impact the overall suspension behaviour and coherent structure formation is largely unknown. Using a computational model that captures both flagellum beating and elasticity, we simulate suspensions on the order of 10^3 individual swimming sperm cells whose motion is coupled through the surrounding Stokesian fluid. We find that the tendency for flagella to synchronize and sperm to aggregate inhibits the emergence of the large-scale hydrodynamic instabilities often associated with active suspensions. However, when synchronization is repressed by adding noise in the flagellum actuation mechanism, the picture changes and structures that resemble large-scale vortices appear to re-emerge. Supported by an Imperial College PhD scholarship.
Does lower Omega allow a resolution of the large-scale structure problem?
NASA Technical Reports Server (NTRS)
Silk, Joseph; Vittorio, Nicola
1987-01-01
The intermediate angular scale anisotropy of the cosmic microwave background, peculiar velocities, density correlations, and mass fluctuations for both neutrino- and baryon-dominated universes with Omega less than one are evaluated. The large coherence length associated with a low-Omega, hot dark matter-dominated universe provides substantial density fluctuations on scales up to 100 Mpc: there is a range of acceptable models that are capable of producing large voids and superclusters of galaxies and the clustering of galaxy clusters, with Omega roughly 0.3, without violating any observational constraint. Low-Omega, cold dark matter-dominated cosmologies are also examined. All of these models may be reconciled with the inflationary requirement of a flat universe by introducing a cosmological constant with density parameter 1 - Omega.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large-scale, global natural phenomena needs to be improved, and new observing platforms are needed. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, and solid-Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.
Investigating a link between large and small-scale chaos features on Europa
NASA Astrophysics Data System (ADS)
Tognetti, L.; Rhoden, A.; Nelson, D. M.
2017-12-01
Chaos is one of the most recognizable, and studied, features on Europa's surface. Most models of chaos formation invoke liquid water at shallow depths within the ice shell; the liquid destabilizes the overlying ice layer, breaking it into mobile rafts and destroying pre-existing terrain. This class of model has been applied to both large-scale chaos like Conamara and small-scale features (i.e. microchaos), which are typically <10 km in diameter. Currently unknown, however, is whether both large-scale and small-scale features are produced together, e.g. through a network of smaller sills linked to a larger liquid water pocket. If microchaos features do form as satellites of large-scale chaos features, we would expect a drop-off in the number density of microchaos with increasing distance from the large chaos feature; the trend should not be observed in regions without large-scale chaos features. Here, we test the hypothesis that large chaos features create "satellite" systems of smaller chaos features. Either outcome will help us better understand the relationship between large-scale chaos and microchaos. We focus first on regions surrounding the large chaos features Conamara and Murias (e.g. the Mitten). We map all chaos features within 90,000 sq km of the main chaos feature and assign each one a ranking (High Confidence, Probable, or Low Confidence) based on the observed characteristics of each feature. In particular, we look for a distinct boundary, loss of preexisting terrain, the existence of rafts or blocks, and the overall smoothness of the feature. We also note features that are chaos-like but lack sufficient characteristics to be classified as chaos. We then apply the same criteria to map microchaos features in regions of similar area (~90,000 sq km) that lack large chaos features.
By plotting the distribution of microchaos with distance from the center point of the large chaos feature or the mapping region (for the cases without a large feature), we determine whether there is a distinct signature linking large-scale chaos features with nearby microchaos. We discuss the implications of these results on the process of chaos formation and the extent of liquid water within Europa's ice shell.
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
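The core of iterative Boltzmann inversion is a simple potential update driven by the mismatch between simulated and target radial distribution functions (RDFs). The sketch below shows just that update rule on synthetic RDFs — no actual MD simulation is run; the Gaussian-peaked RDFs and the damping factor are illustrative assumptions.

```python
import numpy as np

kT = 1.0  # work in units of k_B * T

def ibi_update(U, g_sim, g_target, alpha=0.5, eps=1e-12):
    """One iterative Boltzmann inversion step:
        U_new(r) = U(r) + alpha * kT * ln( g_sim(r) / g_target(r) ),
    so the potential deepens (becomes more attractive) where the
    simulated RDF undershoots the target. alpha < 1 damps the update."""
    return U + alpha * kT * np.log((g_sim + eps) / (g_target + eps))

r = np.linspace(0.8, 3.0, 200)                           # pair-distance grid
g_target = 1.0 + 0.5 * np.exp(-((r - 1.2) ** 2) / 0.02)  # synthetic target RDF
U0 = -kT * np.log(g_target)                              # usual first guess: the PMF
g_sim = 1.0 + 0.3 * np.exp(-((r - 1.2) ** 2) / 0.02)     # stand-in for a simulated RDF
U1 = ibi_update(U0, g_sim, g_target)
```

In a real workflow each update is followed by a new coarse-grained simulation to regenerate `g_sim`, iterating until the RDFs match.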
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
NASA Astrophysics Data System (ADS)
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster than projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally there is no PSW heat entrained under the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. Present results highlight the need for large-scale climate models to account for the NSTM layer.
Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan
2015-04-21
Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global-scale simulations due to the high calibration effort required. The results obtained in this study show that global assessments miss key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfactory results.
Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne
2017-07-01
In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions, the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modelling energy dissipation in realistic turbulent plasmas.
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
ERIC Educational Resources Information Center
Andrich, David; Marais, Ida; Humphry, Stephen Mark
2016-01-01
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows the estimated parameters themselves to be modeled, in an effort to understand large-scale drivers of variability in the effects of land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.
Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
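The threshold idea — occupancy probability changing abruptly at a change-point value of a landscape covariate — can be illustrated with a deliberately simplified, non-hierarchical profile-likelihood fit: piecewise-constant occupancy below and above a candidate change point, with the change point chosen to maximise the Bernoulli likelihood. The paper's actual model is a hierarchical Bayesian logistic formulation; everything below, including the data, is synthetic.

```python
import numpy as np

def fit_change_point(x, y):
    """Profile-likelihood estimate of a single occurrence change point.

    Occupancy probability is a constant below the candidate change point
    and a different constant above it; the candidate maximising the
    Bernoulli log-likelihood is returned."""
    best_ll, best_cp = -np.inf, None
    for cp in np.unique(x)[1:-1]:
        ll = 0.0
        for segment in (y[x < cp], y[x >= cp]):
            p = np.clip(segment.mean(), 1e-6, 1 - 1e-6)
            ll += np.sum(segment * np.log(p) + (1 - segment) * np.log(1 - p))
        if ll > best_ll:
            best_ll, best_cp = ll, cp
    return best_cp

# Synthetic detection/nondetection data: occupancy drops sharply once
# the proportion of urban land use exceeds 0.3.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 500)
y = rng.binomial(1, np.where(x < 0.3, 0.8, 0.2))
cp_hat = fit_change_point(x, y)
```

The hierarchical version replaces the two constants with logistic regressions and lets the change point itself vary spatially, with regional covariates (such as mean stream temperature) modeling that variation.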
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, a difficulty compounded by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses appropriate for climate assessment at regional scales, a regional spectral model was run with a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
NASA Astrophysics Data System (ADS)
Choi, Hyun-Jung; Lee, Hwa Woon; Sung, Kyoung-Hee; Kim, Min-Jung; Kim, Yoo-Keun; Jung, Woo-Sik
In order to correctly incorporate large- or local-scale circulations in the model, a nudging term is introduced into the equation of motion. Nudging effects should be included properly in the model to reduce uncertainties and improve the simulated air flow field. To improve the meteorological components, the nudging coefficient should exert adequate influence over complex areas in the model initialization technique, which is related to data reliability and error suppression. Several numerical experiments were undertaken to evaluate the effects on air quality modeling by comparing the performance of the meteorological results across experiments with variable nudging coefficients. All experiments were run under the respective upper-wind conditions (synoptic or asynoptic). Consequently, it is important to examine the model response to the nudging of wind and mass information. The MM5-CMAQ model was used to assess the ozone differences in each case during an episode day in Seoul, Korea, and we found large differences in ozone concentration between runs. These results suggest that, for the appropriate simulation of large- or small-scale circulations, nudging with coefficients chosen for the synoptic or asynoptic conditions has a clear advantage over dynamic initialization, so appropriate limitation of the nudging coefficient values according to the upper-wind conditions is necessary before making an assessment. Statistical verification showed that adequate nudging coefficients for both wind and temperature data had a consistently positive impact on the atmospheric and air quality fields. In cases dominated by large-scale circulation, a large nudging coefficient shows only a minor improvement in the atmospheric and air quality fields. However, when small-scale convection is present, a large nudging coefficient produces consistent improvement in the atmospheric and air quality fields.
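Nudging (Newtonian relaxation) adds a term G·(u_obs − u) to the model tendency, relaxing the state toward driving data with strength set by the nudging coefficient G. A minimal scalar sketch, with a toy tendency and made-up coefficient values, shows how the choice of G controls how tightly the model tracks the large-scale value:

```python
def integrate_nudged(u0, u_obs, G, tendency, dt=60.0, nsteps=1440):
    """Forward-Euler integration of du/dt = tendency(u) + G*(u_obs - u).

    G is the nudging coefficient (s^-1): the larger G, the more strongly
    the model state is relaxed toward the driving value u_obs."""
    u = u0
    for _ in range(nsteps):
        u += dt * (tendency(u) + G * (u_obs - u))
    return u

drift = lambda u: -1e-5 * u + 2e-4   # toy model tendency with its own bias
u_obs = 10.0                          # large-scale ("observed") value
# One model day (1440 steps of 60 s) with weak vs. strong nudging:
weak = integrate_nudged(0.0, u_obs, G=1e-5, tendency=drift)
strong = integrate_nudged(0.0, u_obs, G=3e-4, tendency=drift)
```

With G = 3e-4 s^-1 the relaxation timescale 1/G is under an hour, so the state ends close to u_obs despite the biased tendency; with G = 1e-5 s^-1 the model largely follows its own drift. Tuning this trade-off per variable and per synoptic regime is the issue the abstract examines.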
The relativistic feedback discharge model of terrestrial gamma ray flashes
NASA Astrophysics Data System (ADS)
Dwyer, Joseph R.
2012-02-01
As thunderclouds charge, the large-scale fields may approach the relativistic feedback threshold, above which the production of relativistic runaway electron avalanches becomes self-sustaining through the generation of backward propagating runaway positrons and backscattered X-rays. Positive intracloud (IC) lightning may force the large-scale electric fields inside thunderclouds above the relativistic feedback threshold, causing the number of runaway electrons, and the resulting X-ray and gamma ray emission, to grow exponentially, producing very large fluxes of energetic radiation. As the flux of runaway electrons increases, ionization eventually causes the electric field to discharge, bringing the field below the relativistic feedback threshold again and reducing the flux of runaway electrons. These processes are investigated with a new model that includes the production, propagation, diffusion, and avalanche multiplication of runaway electrons; the production and propagation of X-rays and gamma rays; and the production, propagation, and annihilation of runaway positrons. In this model, referred to as the relativistic feedback discharge model, the large-scale electric fields are calculated self-consistently from the charge motion of the drifting low-energy electrons and ions, produced from the ionization of air by the runaway electrons, including two- and three-body attachment and recombination. Simulation results show that when relativistic feedback is considered, bright gamma ray flashes are a natural consequence of upward +IC lightning propagating in large-scale thundercloud fields. Furthermore, these flashes have the same time structures, including both single and multiple pulses, intensities, angular distributions, current moments, and energy spectra as terrestrial gamma ray flashes, and produce large current moments that should be observable in radio waves.
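The exponential growth of the runaway-electron flux under relativistic feedback, described above, can be illustrated with a toy multiplication law; the feedback factor gamma, cycle time tau, and seed flux below are hypothetical numbers, not parameters from the model:

```python
def runaway_flux(t, n0, gamma, tau):
    """Toy illustration of relativistic feedback: each feedback cycle of
    duration tau (backward-propagating positrons and backscattered X-rays
    seeding new avalanches) multiplies the flux by a factor gamma, so the
    flux grows as n0 * gamma**(t / tau).  The process is self-sustaining
    when gamma >= 1, i.e. above the relativistic feedback threshold."""
    return n0 * gamma ** (t / tau)

# with gamma = 10 per cycle, three cycles multiply the seed flux by 1000
flux = runaway_flux(t=30e-6, n0=1.0, gamma=10.0, tau=10e-6)
```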
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.
Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian
2014-07-01
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of existing state-of-the-art datasets by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time-of-flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed-reality evaluation scenarios where 3D human models are animated using motion capture and inserted, using correct 3D geometry, into complex real environments viewed with moving cameras and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset, illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher-capacity, more complex models with our large dataset is substantially greater and should stimulate future research. The dataset, together with code for the associated large-scale learning models, features, visualization tools, and the evaluation server, is available online at http://vision.imar.ro/human3.6m.
Compensation for large tensor modes with iso-curvature perturbations in CMB anisotropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yokoyama, Shuichiro, E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: shu@icrr.u-tokyo.ac.jp
Recently, BICEP2 reported a large tensor-to-scalar ratio, r = 0.2 (+0.07/−0.05), from observations of the cosmic microwave background (CMB) B-mode at degree scales. Since tensor modes induce not only the CMB B-mode but also temperature fluctuations on large scales, obtaining temperature fluctuations consistent with the Planck result requires suppressing the scalar perturbations on the corresponding large scales. To realize such a suppression, we consider anti-correlated iso-curvature perturbations, which could arise in the simple curvaton model.
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition (whether functional, physical, or discipline-based) that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System-of-systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework, based on a state-, model-, and goal-based architecture for semi-autonomous control systems, that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
NASA Astrophysics Data System (ADS)
Peruani, Fernando
2016-11-01
Bacteria, chemically driven rods, and motility assays are examples of active (i.e., self-propelled) Brownian rods (ABR). The physics of ABR, despite their ubiquity in experimental systems, remains poorly understood. Here, we review the large-scale properties of collections of ABR moving in a dissipative medium. We address the problem by presenting three models of decreasing complexity, which we refer to as models I, II, and III. By comparing the three models, we disentangle the roles of activity and interactions. In particular, we learn that in two dimensions, when steric or volume-exclusion effects are ignored, large-scale nematic order seems possible, whereas steric interactions prevent the formation of orientational order at large scales. The macroscopic behavior of ABR results from the interplay between active stresses and local alignment. Depending on the location in parameter space, ABR exhibit a zoology of macroscopic patterns ranging from polar and nematic bands to dynamic aggregates.
A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT
2007-01-30
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Large-scale deformation associated with ridge subduction
Geist, E.L.; Fisher, M.A.; Scholl, D. W.
1993-01-01
Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) is required to explain the observed deformation.
NASA Technical Reports Server (NTRS)
Weiberg, James A.; Holzhauser, Curt A.
1961-01-01
Tests were made of a large-scale tilt-wing deflected-slipstream VTOL airplane with blowing-type BLC trailing-edge flaps. The model was tested with flap deflections of 0 deg. without BLC, 50 deg. with and without BLC, and 80 deg. with BLC for wing-tilt angles of 0, 30, and 50 deg. Included are results of tests of the model equipped with a leading-edge flap and the results of tests of the model in the presence of a ground plane.
NASA Technical Reports Server (NTRS)
Kubat, Gregory
2016-01-01
This report provides a description and performance characterization of the large-scale, Relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC Flight-Test Radio. Contained in the report are a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data, with observations.
Understanding the k^-5/3 to k^-2.4 spectral break in aircraft wind data
NASA Astrophysics Data System (ADS)
Pinel, J.; Lovejoy, S.; Schertzer, D. J.; Tuck, A.
2010-12-01
A fundamental issue in atmospheric dynamics is to understand how the statistics of fluctuations of various fields vary with their space-time scale. The classical - and still "standard" - model dates back to Kraichnan and Charney's work on 2-D and geostrophic (quasi-2-D) turbulence at the end of the 1960s and early 1970s. It postulates an isotropic 2-D turbulent regime at large scales and an isotropic 3-D regime at small scales, separated by a "dimensional transition" (once called a "mesoscale gap") near the pressure scale height of ≈10 km. By the early 1980s a quite different model emerged, the 23/9-D scaling model, in which the dynamics were postulated to be dominated (over wide ranges of scale) by a strongly anisotropic, scale-invariant cascade mechanism, with structures becoming flatter and flatter at larger and larger scales in a scaling manner: the isotropy assumptions were discarded but the scaling and cascade assumptions retained. Today, thanks to the revolution in geodata and atmospheric models - in both quality and quantity - the 23/9-D model can explain the observed horizontal cascade structures in remotely sensed radiances, in meteorological "reanalyses", in meteorological models, in high-resolution dropsonde vertical analyses, in lidar vertical sections, etc. All of these analyses directly contradict the standard model, which predicts drastic "dimensional transitions" for scalar quantities. Indeed, until recently the only unexplained feature was a scale break in aircraft spectra of the (vector) horizontal wind somewhere between about 40 and 200 km. However - contrary to repeated claims, and thanks to a reanalysis of the historical papers - the transition that had been observed since the 1980s was not between k^-5/3 and k^-3 but rather between k^-5/3 and k^-2.4. By 2009, the standard model was thus hanging by a thread.
This thread was cut when careful analysis of scientific aircraft data allowed the 23/9-D model to explain the large-scale k^-2.4 regime as an artefact of the aircraft following a sloping trajectory: at large enough scales, the spectrum is simply dominated by vertical rather than horizontal fluctuations, which have the required k^-2.4 form. Since aircraft frequently follow gently sloping isobars, this removes the last obstacle to wide-range anisotropic scaling models, finally opening the door to an urgently needed consensus on the statistical structure of the atmosphere. However, objections remain: at large enough scales, do isobaric and isoheight spectra really have different exponents? In this presentation we study this issue in more detail than before by analyzing data measured by commercial aircraft through the Tropospheric Airborne Meteorological Data Reporting (TAMDAR) system over CONUS during 2009. The TAMDAR system allows us to calculate the statistical properties of the wind field on constant-pressure and constant-altitude levels. Various statistical exponents were calculated (velocity increments in terms of horizontal and vertical displacement, pressure, and time), and we show here what we learned and how this analysis can help resolve this question.
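The spectral exponents discussed above (k^-5/3 at small scales, k^-2.4 at large scales) are estimated by straight-line fits in log-log coordinates. A minimal sketch on a synthetic spectrum with a break; the break wavelength and wavenumber bands are illustrative assumptions, not TAMDAR values:

```python
import numpy as np

# Synthetic 1-D wind spectrum with a spectral break: E(k) ~ k^-5/3 at
# small scales (large k) and k^-2.4 at large scales (small k).
k_break = 1.0 / 100.0           # hypothetical break at ~100 km wavelength
k = np.logspace(-4, -1, 200)    # wavenumbers [1/km]
E = np.where(k > k_break,
             k ** (-5.0 / 3.0),
             k_break ** (-5.0 / 3.0 + 2.4) * k ** -2.4)  # continuous at break

def fit_slope(k, E, kmin, kmax):
    """Least-squares spectral exponent over a wavenumber band (log-log fit)."""
    m = (k >= kmin) & (k <= kmax)
    slope, _ = np.polyfit(np.log(k[m]), np.log(E[m]), 1)
    return slope

small_scale = fit_slope(k, E, 2e-2, 1e-1)   # recovers ~ -5/3
large_scale = fit_slope(k, E, 1e-4, 5e-3)   # recovers ~ -2.4
```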
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
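The sum-and-maximization operations in the log domain named above can be sketched on a minimal two-variable chain (the circuit's spiking-neuron populations approximate this computation); the potentials below are arbitrary illustrative numbers, not parameters from the paper:

```python
import numpy as np

# Max-sum (max-product in the log domain) on a two-variable chain:
# maximize log phi1(x1) + log psi(x1, x2) + log phi2(x2).
log_phi1 = np.log(np.array([0.7, 0.3]))        # unary potential on x1
log_phi2 = np.log(np.array([0.4, 0.6]))        # unary potential on x2
log_psi = np.log(np.array([[0.9, 0.1],
                           [0.2, 0.8]]))       # pairwise compatibility

# message from x1 to x2: for each value of x2, maximize over x1 (the
# "maximization" step); terms are summed because we work in log space
msg_1to2 = np.max(log_phi1[:, None] + log_psi, axis=0)
belief_x2 = log_phi2 + msg_1to2
x2_star = int(np.argmax(belief_x2))            # MAP value of x2
x1_star = int(np.argmax(log_phi1 + log_psi[:, x2_star]))  # backtrack x1
```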
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...
2018-05-14
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Paleoclimate diagnostics: consistent large-scale temperature responses in warm and cold climates
NASA Astrophysics Data System (ADS)
Izumi, Kenji; Bartlein, Patrick; Harrison, Sandy
2015-04-01
The CMIP5 model simulations of the large-scale temperature responses to increased radiative forcing include enhanced land-ocean contrast, a stronger response at high latitudes than in the tropics, and differential responses of warm- and cool-season climates to uniform forcing. Here we show that these patterns are also characteristic of CMIP5 model simulations of past climates. The differences in the responses over land as opposed to over the ocean, between high and low latitudes, and between summer and winter are remarkably consistent (proportional and nearly linear) across simulations of both cold and warm climates. Similar patterns also appear in historical observations and paleoclimatic reconstructions, implying that such responses are characteristic features of the climate system and not simply model artifacts, thereby increasing our confidence in the ability of climate models to correctly simulate different climatic states. We also show that a small set of common mechanisms may control these large-scale responses of the climate system across multiple states.
Can limited area NWP and/or RCM models improve on large scales inside their domain?
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model. This would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut cell scheme.
Accuracy in forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales. The average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run at about the same resolution for the driver global and the nested Eta during the first 10 days of the experiment, both verification measures generally demonstrate an advantage of the Eta, in particular during and after the time of a deep upper-tropospheric trough crossing the Rockies during days 2-6 of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to an even greater degree by the number of "wins" of one model vs. another. Thus, at day 4.5, when the trough had just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better on rms scores, losing to Eta/eta by 18 vs. 3 but winning over Eta/sigma by 12 to 9. Examples of the wind plots behind these results are shown, and additional reasons possibly helping or not helping the results are discussed.
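The Equitable Threat (Gilbert) skill score used above is computed from a 2x2 contingency table of forecast vs. observed events. A minimal sketch; the counts are made up for illustration, and the unit-bias adjustment of ETSa is only noted (adjusting the event threshold until the bias equals 1), not reimplemented:

```python
def equitable_threat_score(hits, false_alarms, misses, total):
    """Equitable Threat (Gilbert) skill score from a 2x2 contingency
    table of forecast vs. observed events (e.g. wind speed > 45 m/s
    at 250 hPa at each verification point); `total` includes the
    correct negatives."""
    forecasts = hits + false_alarms
    observed = hits + misses
    hits_random = forecasts * observed / total   # hits expected by chance
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

def frequency_bias(hits, false_alarms, misses):
    """Frequency bias; the ETSa score adjusts the event threshold so
    that this ratio equals 1 before computing the ETS."""
    return (hits + false_alarms) / (hits + misses)

# illustrative contingency table: 50 hits, 20 false alarms, 30 misses,
# 900 correct negatives (total = 1000 verification points)
ets = equitable_threat_score(hits=50, false_alarms=20, misses=30, total=1000)
bias_val = frequency_bias(50, 20, 30)
```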
Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective
NASA Astrophysics Data System (ADS)
Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik
2010-02-01
The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program’s Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.
McLaughlin, David; Shapley, Robert; Shelley, Michael
2003-01-01
A large-scale computational model of a local patch of the input layer 4Cα of the primary visual cortex (V1) of the macaque monkey, together with a coarse-grained reduction of the model, is used to understand potential effects of cortical architecture upon neuronal performance. Both the large-scale point-neuron model and its asymptotic reduction are described. The work focuses upon orientation preference and selectivity, and upon the spatial distribution of neuronal responses across the cortical layer. Emphasis is given to the role of cortical architecture (the geometry of synaptic connectivity, the ordered and disordered structure of input feature maps, and their interplay) as mechanisms underlying cortical responses within the model. Specifically: (i) distinct characteristics of model neuronal responses (firing rates and orientation selectivity) as they depend upon the neuron's location within the cortical layer relative to the pinwheel centers of the map of orientation preference; (ii) a time-independent (DC) elevation in cortico-cortical conductances within the model, in contrast to a "push-pull" antagonism between excitation and inhibition; (iii) the use of asymptotic analysis to unveil mechanisms which underlie the performance of the model; (iv) a discussion of emerging experimental data. The work illustrates that large-scale scientific computation, coupled together with analytical reduction, mathematical analysis, and experimental data, can provide significant understanding and intuition about the possible mechanisms of cortical response. It also illustrates that the idealization which is a necessary part of theoretical modeling can outline in sharp relief the consequences of differing alternative interpretations and mechanisms, with the final arbiter being a body of experimental evidence whose measurements address the consequences of these analyses.
NASA Astrophysics Data System (ADS)
Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.
2012-12-01
Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate change and increasing demand due to population growth and economic development would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Because irrigation management in response to subseasonal variability in weather and crop response varies by region and crop, we used the Markov chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields.
To demonstrate the usefulness of the integrated model, we compared two types of assessment of climate change impacts on crop productivity in a watershed. The first was carried out with the large-scale crop model alone; the second with the integrated model coupling the large-scale crop model and the H08 model. The former projected that changes in temperature and precipitation under future climate change would increase water stress in crops. In contrast, the latter projected that the increased agricultural water resources in the watershed would supply a sufficient amount of water for irrigation and consequently reduce the water stress. The integrated model thus demonstrates the importance of taking the water circulation in a watershed into account when predicting regional crop production.
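The Markov chain Monte Carlo calibration of region-specific parameters described above can be sketched with a random-walk Metropolis sampler. This is a minimal illustration, not the authors' code: the one-parameter "crop model" and the observed yields are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed yields for one region (t/ha); invented stand-ins.
obs = np.array([4.1, 4.4, 3.9, 4.6, 4.2])

def simulate_yield(theta):
    """Toy stand-in for a crop-growth model: yield as a function of a
    single region-specific growth parameter theta."""
    return 4.0 * theta

def log_posterior(theta, sigma=0.3):
    """Gaussian likelihood of the observations, flat prior on theta > 0."""
    if theta <= 0:
        return -np.inf
    resid = obs - simulate_yield(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the region-specific parameter.
theta = 1.0
lp = log_posterior(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])      # discard burn-in
```

In the actual study each region and crop would get its own posterior, with the full crop and irrigation sub-models in place of `simulate_yield`.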
Could the electroweak scale be linked to the large scale structure of the Universe?
NASA Technical Reports Server (NTRS)
Chakravorty, Alak; Massarotti, Alessandro
1991-01-01
We study a model where the domain walls are generated through a cosmological phase transition involving a scalar field. We assume the existence of a coupling between the scalar field and dark matter and show that the interaction between domain walls and dark matter leads to an energy-dependent reflection mechanism. For a simple Yukawa coupling, we find that the vacuum expectation value of the scalar field is theta approx. = 30 GeV - 1 TeV, in order for the model to successfully form large-scale 'pancake' structures.
A new framework to increase the efficiency of large-scale solar power plants.
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Kleissl, Jan P.
2015-11-01
A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (Kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. This framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements in several scenarios are observed.
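The spatio-temporal Kriging at the heart of such a framework can be sketched as Gaussian process regression over (x, y, t) coordinates. A minimal sketch with invented irradiance values, not the UCSD/WRF pipeline; the squared-exponential kernel and single length scale are simplifying assumptions:

```python
import numpy as np

def sq_exp_kernel(A, B, length=1.0):
    """Squared-exponential covariance over (x, y, t) coordinates."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def krige(X, y, X_new, noise=1e-4):
    """GP regression (simple Kriging on mean-removed data): posterior
    mean of the field at new spatio-temporal sites."""
    mu = y.mean()                       # zero-mean GP models residuals
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))
    K_star = sq_exp_kernel(X_new, X)
    return mu + K_star @ np.linalg.solve(K, y - mu)

# Toy irradiance samples (W/m^2) at four (x, y, t) sites; illustrative only.
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
y = np.array([800., 760., 790., 700.])
pred = krige(X, y, np.array([[0.5, 0.5, 0.0]]))
```

A production system would fit the kernel hyperparameters (separate spatial and temporal length scales) to the satellite and sensor data rather than fixing them.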
Formation of large-scale structure from cosmic-string loops and cold dark matter
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1987-01-01
Some results from a numerical simulation of the formation of large-scale structure from cosmic-string loops are presented. It is found that even though G x mu is required to be lower than 2 x 10 to the -6th (where mu is the mass per unit length of the string) to give a low enough autocorrelation amplitude, there is excessive power on smaller scales, so that galaxies would be denser than observed. The large-scale structure does not have a filamentary or connected appearance, and it shares with more conventional models based on Gaussian perturbations the lack of cluster-cluster correlation at the mean cluster separation scale, as well as excessively small bulk velocities at these scales.
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from its peak level as the helical tip Mach number increases. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test-bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system
Jensen, Tue V.; Pinson, Pierre
2017-01-01
Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degree of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation. PMID:29182600
NASA Technical Reports Server (NTRS)
Canuto, V. M.
1978-01-01
A review of big-bang cosmology is presented, emphasizing the big-bang model, hypotheses on the origin of galaxies, observational tests of the big-bang model that may be possible with the Large Space Telescope, and the scale-covariant theory of gravitation. Detailed attention is given to the equations of general relativity, the redshift-distance relation for extragalactic objects, expansion of the universe, the initial singularity, the discovery of the 3-K blackbody radiation, and measurements of the amount of deuterium in the universe. The curvature of the expanding universe is examined along with the magnitude-redshift relation for quasars and galaxies. Several models for the origin of galaxies are evaluated, and it is suggested that a model of galaxy formation via the formation of black holes is consistent with the model of an expanding universe. Scale covariance is discussed, a scale-covariant theory is developed which contains invariance under scale transformation, and it is shown that Dirac's (1937) large-numbers hypothesis finds a natural role in this theory by relating the atomic and Einstein units.
Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain
2015-09-30
Numerical Simulations of Vortical Mode Stirring: Effects of Large-Scale Shear and Strain. M.-Pascale Lelong, NorthWest Research Associates. ...can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local ambient conditions including latitude... talk at the Nonlinear Effects in Internal Waves Conference. (Distribution Statement A: approved for public release; distribution is unlimited.)
Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method
2010-01-25
Accepted: 19 January 2010 / Published: 25 January 2010. Abstract: Spatial and temporal soil moisture dynamics are critically needed to... scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large-scale variability is a crucial... dynamics is essential in hydrological and meteorological modeling and improves our understanding of land surface-atmosphere interactions.
Seeded hot dark matter models with inflation
NASA Technical Reports Server (NTRS)
Gratsias, John; Scherrer, Robert J.; Steigman, Gary; Villumsen, Jens V.
1993-01-01
We examine massive neutrino (hot dark matter) models for large-scale structure in which the density perturbations are produced by randomly distributed relic seeds and by inflation. Power spectra, streaming velocities, and the Sachs-Wolfe quadrupole fluctuation are derived for this model. We find that the pure seeded hot dark matter model without inflation produces Sachs-Wolfe fluctuations far smaller than those seen by COBE. With the addition of inflationary perturbations, fluctuations consistent with COBE can be produced. The COBE results set the normalization of the inflationary component, which determines the large-scale (about 50/h Mpc) streaming velocities. The normalization of the seed power spectrum is a free parameter, which can be adjusted to obtain the desired fluctuations on small scales. The power spectra produced are very similar to those seen in mixed hot and cold dark matter models.
NASA Technical Reports Server (NTRS)
Gradwohl, Ben-Ami
1991-01-01
The universe may have undergone a superfluid-like phase during its evolution, resulting from the injection of nontopological charge into the spontaneously broken vacuum. In the presence of vortices this charge is identified with angular momentum. This leads to turbulent domains on the scale of the correlation length. By restoring the symmetry at low temperatures, the vortices dissociate and push the charges to the boundaries of these domains. The model can be scaled (phenomenologically) to very low energies; it can be incorporated in a late-time phase transition and form large-scale structure in the boundary layers of the correlation volumes. The novel feature of the model lies in the fact that the dark matter is endowed with coherent motion. The possibility of identifying this flow around superfluid vortices with the observed large-scale bulk motion is discussed. If this identification is possible, then the definite prediction can be made that a more extended map of peculiar velocities would reveal large-scale circulations in the flow pattern.
Generation of a Large-scale Magnetic Field in a Convective Full-sphere Cross-helicity Dynamo
NASA Astrophysics Data System (ADS)
Pipin, V. V.; Yokoi, N.
2018-05-01
We study the effects of the cross-helicity in full-sphere large-scale mean-field dynamo models of a 0.3 M⊙ star rotating with a period of 10 days. In exploring several dynamo scenarios that stem from magnetic field generation by the cross-helicity effect, we found that the cross-helicity provides a natural generation mechanism for both axisymmetric and nonaxisymmetric large-scale magnetic fields. Therefore, rotating stars with convective envelopes can produce a large-scale magnetic field generated solely by the turbulent cross-helicity effect (we call it a γ² dynamo). Using mean-field models we compare the properties of the large-scale magnetic field organization that stems from dynamo mechanisms based on the kinetic helicity (associated with α² dynamos) and the cross-helicity. For fully convective stars, both generation mechanisms can maintain large-scale dynamos even for a solid-body rotation law inside the star. The nonaxisymmetric magnetic configurations become preferable when the cross-helicity and the α-effect operate independently of each other. This corresponds to situations with purely γ² or α² dynamos. The combination of these scenarios, i.e., the γ²α² dynamo, can generate preferentially axisymmetric, dipole-like magnetic fields with strengths of several kG. Thus, we found a new dynamo scenario that is able to generate an axisymmetric magnetic field even in the case of solid-body rotation of the star. We discuss possible applications of our findings to stellar observations.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines its speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
© 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
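The down-sampling idea, accumulating single-cell counts into per-cluster "index cell" transcriptomes, can be sketched as a pooling step. This is a simplification for illustration; bigSCale's actual clustering and directed convolution are considerably more involved, and the counts below are invented.

```python
import numpy as np

def index_cells(counts, labels):
    """Accumulate single-cell count vectors into 'index cell'
    transcriptomes by summing counts within each cluster."""
    return {lab: counts[labels == lab].sum(axis=0)
            for lab in np.unique(labels)}

# Toy matrix: 6 cells x 3 genes, assigned to two clusters.
counts = np.array([[1, 0, 2], [0, 1, 2], [3, 0, 0],
                   [0, 5, 1], [1, 4, 0], [0, 3, 1]])
labels = np.array([0, 0, 0, 1, 1, 1])
pooled = index_cells(counts, labels)
```

Pooling raises the effective sequencing depth per index cell, which is what lets rare populations stand out against the sparsity of individual cells.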
Higher-level simulations of turbulent flows
NASA Technical Reports Server (NTRS)
Ferziger, J. H.
1981-01-01
The fundamentals of large eddy simulation are considered and the approaches to it are compared. Subgrid-scale models and the development of models for the Reynolds-averaged equations are discussed, as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large-eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large eddy simulation and full simulation in meteorological and environmental contexts are included, along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.
Toward exascale production of recombinant adeno-associated virus for gene transfer applications.
Cecchini, S; Negrete, A; Kotin, R M
2008-06-01
To gain acceptance as a medical treatment, adeno-associated virus (AAV) vectors require a scalable and economical production method. Recent developments indicate that recombinant AAV (rAAV) production in insect cells is compatible with current good manufacturing practice production on an industrial scale. This platform can fully support development of rAAV therapeutics from tissue culture to small animal models, to large animal models, to toxicology studies, to Phase I clinical trials and beyond. Efforts to characterize, optimize and develop insect cell-based rAAV production have culminated in successful bioreactor-scale production of rAAV, with total yields potentially capable of approaching the exa- (10^18) scale. These advances in large-scale AAV production will allow us to address specific catastrophic, intractable human diseases such as Duchenne muscular dystrophy, for which large amounts of recombinant vector are essential for a successful outcome.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids
NASA Astrophysics Data System (ADS)
Abbà, Antonella; Campaniello, Dario; Nini, Michele
2017-06-01
The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation on irregular grids is still an open problem. We investigate several different approaches to the definition of the filter length for anisotropic subgrid-scale models and propose a tensorial formulation based on the inertia ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.
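One way to derive directional filter widths from the inertia ellipsoid of a grid element is from the second-moment tensor of its vertex positions. This is our reading of the general approach, not the authors' implementation; the normalization is chosen so the widths recover the edge lengths of an axis-aligned hexahedron.

```python
import numpy as np

def filter_widths(vertices):
    """Directional filter widths for an anisotropic grid element from
    the second-moment (inertia-ellipsoid) tensor of its vertices."""
    d = vertices - vertices.mean(axis=0)
    M = d.T @ d / len(vertices)          # 3x3 second-moment tensor
    evals, evecs = np.linalg.eigh(M)
    # For an axis-aligned hexahedron, 2*sqrt(eigenvalue) recovers the
    # edge length along each principal axis of the element.
    return 2.0 * np.sqrt(evals), evecs

# Hexahedral element stretched in x: dx = 4, dy = dz = 1.
verts = np.array([[x, y, z] for x in (0., 4.)
                            for y in (0., 1.)
                            for z in (0., 1.)])
widths, axes = filter_widths(verts)
```

The eigenvectors give the principal directions of the element, so the widths can be applied as a tensorial filter size rather than collapsed into a single scalar such as the cube root of the volume.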
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
NASA Astrophysics Data System (ADS)
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling to a large database online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future values of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
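The Just-In-Time idea underlying LOM, retrieving neighboring database samples at query time and fitting a local model on demand, can be sketched as follows. This is a minimal illustration with an invented plant response, not the LOM/ESP-LOM algorithms themselves (which add stepwise selection and quantization to speed up the neighbor retrieval).

```python
import numpy as np

def jit_predict(query, X_db, y_db, k=5):
    """Just-In-Time (local) modeling: retrieve the k nearest database
    samples to the query and fit a local linear model on them."""
    dist = np.linalg.norm(X_db - query, axis=1)
    idx = np.argsort(dist)[:k]                # neighbor retrieval
    A = np.c_[np.ones(k), X_db[idx]]          # local linear design matrix
    coef, *_ = np.linalg.lstsq(A, y_db[idx], rcond=None)
    return coef[0] + coef[1:] @ query

# Database sampled from a nonlinear toy plant response y = x0^2 + x1.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 2))
y = X[:, 0] ** 2 + X[:, 1]
pred = jit_predict(np.array([0.5, 0.5]), X, y)
```

Because the model is rebuilt around each query, the nonlinearity of the plant only has to be approximated locally, which is the appeal of JIT modeling for large operating-condition databases.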
Structure and modeling of turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, E.A.
The "vortex strings" scale l_s ~ L Re^(-3/10) (L = external scale, Re = Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).
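As a worked example, the suggested grid scale is a one-line formula; this helper is an illustration added here, not part of the original report, and the result carries the same units as the external scale L.

```python
def vortex_string_scale(L, Re):
    """Suggested LES grid scale l_s ~ L * Re^(-3/10)."""
    return L * Re ** -0.3
```

For L = 1 m and Re = 10^6 this gives l_s on the order of 1.6 cm, illustrating how weakly the scale shrinks with Reynolds number compared to the Kolmogorov scale (which goes as Re^(-3/4)).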
NASA Astrophysics Data System (ADS)
Agrawal, Ankit; Ganai, Nirmalendu; Sengupta, Surajit; Menon, Gautam I.
2017-01-01
Active matter models describe a number of biophysical phenomena at the cell and tissue scale. Such models explore the macroscopic consequences of driving specific soft condensed matter systems of biological relevance out of equilibrium through ‘active’ processes. Here, we describe how active matter models can be used to study the large-scale properties of chromosomes contained within the nuclei of human cells in interphase. We show that polymer models for chromosomes that incorporate inhomogeneous activity reproduce many general, yet little understood, features of large-scale nuclear architecture. These include: (i) the spatial separation of gene-rich, low-density euchromatin, predominantly found towards the centre of the nucleus, vis-à-vis gene-poor, denser heterochromatin, typically enriched in proximity to the nuclear periphery, (ii) the differential positioning of individual gene-rich and gene-poor chromosomes, (iii) the formation of chromosome territories, as well as (iv) the weak size-dependence of the positions of individual chromosome centres-of-mass relative to the nuclear centre that is seen in some cell types. Such structuring is induced purely by the combination of activity and confinement and is absent in thermal equilibrium. We systematically explore active matter models for chromosomes, discussing how our model can be generalized to study variations in chromosome positioning across different cell types. The approach and model we outline here represent a preliminary attempt towards a quantitative, first-principles description of the large-scale architecture of the cell nucleus.
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2004-07-01
It is well known that regional climate simulations are sensitive to the size and position of the domain chosen for calculations. Here we study the physical mechanisms of this sensitivity. We conducted simulations with the Regional Atmospheric Modeling System (RAMS) for June 2000 over North America at 50 km horizontal resolution using a 7500 km × 5400 km grid and NCEP/NCAR reanalysis as boundary conditions. The position of the domain was displaced in several directions, always maintaining the U.S. in the interior, out of the buffer zone along the lateral boundaries. Circulation biases developed a large scale structure, organized by the Rocky Mountains, resulting from a systematic shifting of the synoptic wave trains that crossed the domain. The distortion of the large-scale circulation was produced by interaction of the modeled flow with the lateral boundaries of the nested domain and varied when the position of the grid was altered. This changed the large-scale environment among the different simulations and translated into diverse conditions for the development of the mesoscale processes that produce most of precipitation for the Great Plains in the summer season. As a consequence, precipitation results varied, sometimes greatly, among the experiments with the different grid positions. To eliminate the dependence of results on the position of the domain, we used spectral nudging of waves longer than 2500 km above the boundary layer. Moisture was not nudged at any level. This constrained the synoptic scales to follow reanalysis while allowing the model to develop the small-scale dynamics responsible for the rainfall. Nudging of the large scales successfully eliminated the variation of precipitation results when the grid was moved. We suggest that this technique is necessary for all downscaling studies with regional models with domain sizes of a few thousand kilometers and larger embedded in global models.
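A one-dimensional analogue of the spectral-nudging technique described above, relaxing only wavelengths longer than a cutoff toward the driving reanalysis while leaving small scales free, might look like the following. The array sizes and relaxation coefficient are illustrative, not values from the RAMS experiments.

```python
import numpy as np

def spectral_nudge(field, reference, dx_km, cutoff_km=2500.0, alpha=0.1):
    """Relax only wavelengths longer than cutoff_km toward a reference
    field; shorter wavelengths pass through unchanged (1D analogue)."""
    n = field.size
    k = np.fft.rfftfreq(n, d=dx_km)          # spatial frequency, cycles/km
    long_wave = k < 1.0 / cutoff_km          # modes to be nudged
    f_hat = np.fft.rfft(field)
    r_hat = np.fft.rfft(reference)
    f_hat[long_wave] += alpha * (r_hat[long_wave] - f_hat[long_wave])
    return np.fft.irfft(f_hat, n)
```

Applied every time step above the boundary layer (and not to moisture), this kind of constraint keeps the synoptic waves locked to the reanalysis while the model remains free to develop the mesoscale dynamics that produce the rainfall.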
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
NASA Astrophysics Data System (ADS)
Coon, E.; Jan, A.; Painter, S. L.; Moulton, J. D.; Wilson, C. J.
2017-12-01
Many permafrost-affected regions in the Arctic manifest a polygonal patterned ground, which contains large carbon stores and is vulnerable to climate change as warming temperatures drive melting ice wedges, polygon degradation, and thawing of the underlying carbon-rich soils. Understanding the fate of this carbon is difficult. The system is controlled by complex, nonlinear physics coupling biogeochemistry, thermal-hydrology, and geomorphology, and there is a strong spatial scale separation between microtopography (at the scale of an individual polygon) and the scale of landscape change (at the scale of many thousands of polygons). Physics-based models have come a long way, and are now capable of representing the diverse set of processes, but only on individual polygons or a few polygons. Empirical models have been used to upscale across land types, including ecotypes evolving from low-centered (pristine) polygons to high-centered (degraded) polygons, and do so over large spatial extent, but are limited in their ability to discern causal process mechanisms. Here we present a novel strategy that looks to use physics-based models across scales, bringing together multiple capabilities to capture polygon degradation under a warming climate and its impacts on thermal-hydrology. We use fine-scale simulations on individual polygons to motivate a mixed-dimensional strategy that couples one-dimensional columns representing each individual polygon through two-dimensional surface flow. A subgrid model is used to incorporate the effects of surface microtopography on surface flow; this model is described and calibrated to fine-scale simulations. And critically, a subsidence model that tracks volume loss in bulk ice wedges is used to alter the subsurface structure and subgrid parameters, enabling the inclusion of the feedbacks associated with polygon degradation.
This combined strategy results in a model that is able to capture the key features of polygon permafrost degradation, but in a simulation across a large spatial extent of polygonal tundra.
Multi-scale modeling of multi-component reactive transport in geothermal aquifers
NASA Astrophysics Data System (ADS)
Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David
2014-05-01
In deep geothermal systems heat and chemical stresses can cause physical alterations, which may have a significant effect on flow and reaction rates. As a consequence it will lead to changes in permeability and porosity of the formations due to mineral precipitation and dissolution. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is the way in which the pore-scale information controlling the flow and reaction will behave at a larger scale. A possible choice is to use constitutive relationships relating, for example the permeability and porosity evolutions to the change in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid Finite-Element Finite-Volume method [1,2] and a pore network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model is explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in porous media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: Numerical simulation of a reactive Henry problem. Journal of contaminant hydrology 145 (2012), 90-104. 3. Raoof A., et al.: PoreFlow: A Complex pore-network model for simulation of reactive transport in variably saturated porous media, Computers & Geosciences, 61, (2013), 160-174.
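A common form of the constitutive permeability-porosity closure discussed above is a Kozeny-Carman-type power law, in which permeability responds to porosity changes from mineral precipitation and dissolution. The sketch below is a generic textbook relation with an illustrative exponent, not the specific closure derived from the cited pore-network (PoreFlow) simulations.

```python
def permeability_update(k0, phi0, phi, n=3.0):
    """Kozeny-Carman-type power law: permeability k for a porosity phi
    that has evolved from (k0, phi0) by precipitation/dissolution."""
    return k0 * (phi / phi0) ** n * ((1.0 - phi0) / (1.0 - phi)) ** 2
```

In the workflow described in the abstract, a relation of this kind would be fitted to pore-network results and then evaluated inside each continuum-scale cell as the reactive-transport solution updates the local porosity.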
Linking an ecosystem model and a landscape model to study forest species response to climate warming
Hong S. He; David J. Mladenoff; Thomas R. Crow
1999-01-01
No single model can address forest change from single-tree to regional scales. We discuss a framework linking an ecosystem process model (LINKAGES) with a spatial landscape model (LANDIS) to examine forest species responses to climate warming for a large, heterogeneous landscape in northern Wisconsin, USA. Individual species response at the ecosystem scale was...
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the earlier BJ493ZLQ3 model, whose exhaust level was upgraded to the national GB5 standard through an improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its properties are analyzed in this paper. Based on the structure, technical parameters, and indices of the lubrication system, a model of the BJ493ZLQ4A lubrication system was constructed using the Flowmaster flow-simulation software. Properties of the lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil-channel pressure agree well with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil-channel pressure and maximum oil flow rate for the large-scale oil filter scheme satisfy the design requirements, whereas the small-scale scheme yields too low a main oil-channel pressure. Application of small-scale oil filters is therefore hazardous, and the large-scale scheme is recommended.
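The filter-size effect described above can be caricatured with a toy hydraulic network: a positive-displacement pump feeding the filter and the main oil channel in series, with a relief valve capping the pump outlet pressure. All component values below are invented for illustration and are not taken from the BJ493ZLQ4A study or from Flowmaster.

```python
def main_gallery_pressure(speed_rpm, disp_cc_per_rev, relief_bar, r_filter, r_gallery):
    """Toy lubrication circuit (illustrative only): linear laminar resistances
    in series, pump delivery proportional to speed, and a relief valve that
    caps the pump outlet pressure."""
    q = disp_cc_per_rev * speed_rpm / 60.0                # pump delivery, cm^3/s
    p_pump = min(q * (r_filter + r_gallery), relief_bar)  # relief valve limits outlet
    return p_pump - q * r_filter                          # pressure left for the gallery

# Hypothetical numbers: once the relief valve opens, a high-resistance (small)
# filter starves the main gallery, while a low-resistance (large) one does not.
p_large = main_gallery_pressure(3000, 10, relief_bar=5.0, r_filter=0.002, r_gallery=0.006)
p_small = main_gallery_pressure(3000, 10, relief_bar=5.0, r_filter=0.006, r_gallery=0.006)
```

With these made-up values the large-filter scheme keeps the gallery at 3.0 bar while the small-filter scheme drops it to 2.0 bar, mirroring the qualitative conclusion of the abstract.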
Visualization and modeling of smoke transport over landscape scales
Glenn P. Forney; William Mell
2007-01-01
Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for the construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest.
This is a central challenge in UQ, especially for large-scale models, and we developed the mathematical tools to address it in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus was on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
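The randomized maximum likelihood (RML) method mentioned under item 1 can be sketched on a toy linear-Gaussian inverse problem: each sample perturbs the data and the prior mean, then solves the resulting regularized least-squares problem. In this linear-Gaussian setting RML is known to sample the exact posterior; the matrix, noise levels, and dimensions below are illustrative only, not from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: d = G m + noise, with a Gaussian prior on m.
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
m_true = np.array([1.0, -0.5])
sigma_d, sigma_m = 0.1, 1.0
d_obs = G @ m_true + sigma_d * rng.normal(size=3)

def rml_sample():
    """One randomized-maximum-likelihood draw: perturb the data and the prior
    mean, then solve the resulting regularized least-squares problem exactly."""
    d_pert = d_obs + sigma_d * rng.normal(size=3)
    m_pert = sigma_m * rng.normal(size=2)          # prior mean is zero
    A = G.T @ G / sigma_d**2 + np.eye(2) / sigma_m**2
    b = G.T @ d_pert / sigma_d**2 + m_pert / sigma_m**2
    return np.linalg.solve(A, b)

samples = np.array([rml_sample() for _ in range(2000)])
```

At extreme scale the inner solve becomes a PDE-constrained optimization problem rather than a 2x2 linear system, which is precisely where the structure-exploiting algorithms described above come in.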
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity of such models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this reduces the accuracy of model results and restricts the physical realism of the problem. It is therefore imperative to have an integrated modeling strategy (a) that can be applied universally at various scales in order to study the trade-offs between computational complexity (determined by spatio-temporal resolution), accuracy, and predictive uncertainty in relation to various approximations of physical processes; (b) that can be applied adaptively at different spatial scales within the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) that is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy is all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions, and in which the types and time scales of the dominant hydrologic processes vary across the basin. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradients along the Wasatch front.
Here we present the aforesaid modeling strategy together with an associated hydrologic modeling framework that facilitates a seamless, computationally efficient, and accurate integration of the process model with the data model. The flexibility of this framework enables multiscale, multiresolution, adaptive refinement/de-refinement, and nested modeling simulations with the least computational burden. However, performing these simulations and the related model calibration over a large basin at high spatio-temporal resolution is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high performance can be achieved by parallelizing the existing serial integrated-hydrologic-model code, i.e., running the same model simulation on a network of many processors and thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, the model data structure, and parallel numerical algorithms for achieving high performance.
The Large-Scale Structure of Scientific Method
ERIC Educational Resources Information Center
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
Large-scale translocation strategies for reintroducing red-cockaded woodpeckers
Daniel Saenz; Kristen A. Baum; Richard N. Conner; D. Craig Rudolph; Ralph Costa
2002-01-01
Translocation of wild birds is a potential conservation strategy for the endangered red-cockaded woodpecker (Picoides borealis). We developed and tested 8 large-scale translocation strategy models for a regional red-cockaded woodpecker reintroduction program. The purpose of the reintroduction program is to increase the number of red-cockaded...
Various Numerical Applications on Tropical Convective Systems Using a Cloud Resolving Model
NASA Technical Reports Server (NTRS)
Shie, C.-L.; Tao, W.-K.; Simpson, J.
2003-01-01
In recent years, increasing attention has been given to cloud-resolving models (CRMs, or cloud ensemble models, CEMs) for their ability to simulate the radiative-convective system, which plays a significant role in determining the regional heat and moisture budgets in the Tropics. The growing popularity of CRMs can be credited to their inclusion of crucial and physically relatively realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit cloud-radiation interaction. On the other hand, impacts of environmental conditions (for example, the large-scale wind fields, heat and moisture advections, and sea surface temperature) on the convective system can also be plausibly investigated using CRMs with imposed explicit forcing. In this paper, using the Goddard Cumulus Ensemble (GCE) model, three different studies on tropical convective systems are briefly presented. Each study serves a different goal and uses a different approach. The first study, which takes an idealized approach, examines the respective impacts of large-scale horizontal wind shear and surface fluxes on the modeled tropical quasi-equilibrium states of temperature and water vapor. In this 2-D study, the imposed large-scale horizontal wind shear is either nudged (wind shear maintained strong) or mixed (wind shear weakened), while the minimum surface wind speed used for computing surface fluxes varies among the numerical experiments. In the second study, several real tropical episodes (TRMM Kwajalein Experiment, KWAJEX, 1999; TRMM South China Sea Monsoon Experiment, SCSMEX, 1998) have been simulated, and several major atmospheric characteristics, such as the rainfall amount with its associated stratiform contribution and the Q1 (heat) and Q2 (moisture) budgets, are investigated.
In this study, the observed large-scale heat and moisture advections are continuously applied to the 2-D model; the modeled cloud generated by such an approach is termed continuously forced convection, or continuous large-scale forced convection. A third study, which focuses on the respective impacts of atmospheric components on upper-ocean heat and salt budgets, is presented at the end. Unlike the two previous 2-D studies, this study employs the 3-D GCE-simulated diabatic source terms (using TOGA COARE observations), namely radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation, as input for the ocean mixed-layer (OML) model.
Development of Large-Eddy Interaction Model for inhomogeneous turbulent flows
NASA Technical Reports Server (NTRS)
Hong, S. K.; Payne, F. R.
1987-01-01
The objective of this paper is to demonstrate the applicability of a currently proposed model, with minimum empiricism, for calculation of the Reynolds stresses and other turbulence structural quantities in a channel. The current Large-Eddy Interaction Model not only yields Reynolds stresses but also presents an opportunity to illuminate typical characteristic motions of large-scale turbulence and the phenomenological aspects of engineering models for two Reynolds numbers.
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance and subset level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
New probes of Cosmic Microwave Background large-scale anomalies
NASA Astrophysics Data System (ADS)
Aiola, Simone
Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the LambdaCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple LambdaCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5 sigma. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from LambdaCDM at the 2.5 sigma level, which is somewhat smaller than what has been previously argued.
To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920s), the quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent) 1:40,000 aerial photographs, and 2.5-D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the forthcoming national large-scale digital orthophoto deployment and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for the generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.
NASA Technical Reports Server (NTRS)
El-Hady, Nabil M.
1993-01-01
The laminar-turbulent breakdown of a boundary-layer flow along a hollow cylinder at Mach 4.5 is investigated with large-eddy simulation. The subgrid scales are modeled dynamically, where the model coefficients are determined from the local resolved field. The behavior of the dynamic-model coefficients is investigated through both an a priori test with direct numerical simulation data for the same case and a complete large-eddy simulation. Both formulations proposed by Germano et al. and Lilly are used for the determination of unique coefficients for the dynamic model and their results are compared and assessed. The behavior and the energy cascade of the subgrid-scale field structure are investigated at various stages of the transition process. The investigations are able to duplicate a high-speed transition phenomenon observed in experiments and explained only recently by the direct numerical simulations of Pruett and Zang, which is the appearance of 'rope-like' waves. The nonlinear evolution and breakdown of the laminar boundary layer and the structure of the flow field during the transition process were also investigated.
Prediction of Indian Summer-Monsoon Onset Variability: A Season in Advance.
Pradhan, Maheswar; Rao, A Suryachandra; Srivastava, Ankur; Dakate, Ashish; Salunke, Kiran; Shameera, K S
2017-10-27
Monsoon onset is an inherently transient phenomenon of the Indian Summer Monsoon, and it was never envisaged that this transience could be predicted at long lead times. Though onset is precipitous, its variability exhibits strong teleconnections with large-scale forcing such as ENSO and the IOD and hence may be predictable. Despite the tremendous skill achieved by state-of-the-art models in predicting such large-scale processes, model prediction of monsoon onset variability is still limited to just 2-3 weeks in advance. Using an objective definition of onset in a global coupled ocean-atmosphere model, we show that skillful prediction of onset variability is feasible within a seasonal prediction framework. The improved representation not only of the large-scale processes but also of the synoptic and intraseasonal features during the evolution of monsoon onset underlies the skillful simulation of onset variability. The changes observed in convection, tropospheric circulation, and moisture availability prior to and after onset are evident in the model simulations, which results in a high hit rate for early/delayed monsoon onset in the high-resolution model.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and for other high-performance computing environments, including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. Across these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and are therefore designed to exploit the 10-petaflops speed. The software will be freely available to "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed with Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.
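Of the five model classes listed, the vector autoregressive model is the easiest to illustrate at toy scale. The sketch below recovers a hypothetical 3-gene VAR(1) network from simulated expression time series by least squares; the network, noise level, and series length are invented for illustration, and SiGN itself targets networks that are orders of magnitude larger.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy first-order vector autoregressive (VAR(1)) model of a 3-gene network:
# x_{t+1} = A x_t + noise, where A[i, j] encodes the influence of gene j on gene i.
A_true = np.array([[0.8, 0.0, 0.0],
                   [0.5, 0.6, 0.0],
                   [0.0, -0.4, 0.7]])
T = 500
x = np.zeros((T, 3))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.normal(size=3)

# Least-squares estimate of A: solve x[t]^T A^T ~ x[t+1]^T over all t.
A_hat, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = A_hat.T
```

The nonzero pattern of the estimated coefficient matrix is then read as a directed gene-regulation network, which is the basic idea behind the VAR option in packages of this kind.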
A new model for extinction and recolonization in two dimensions: quantifying phylogeography.
Barton, Nicholas H; Kelleher, Jerome; Etheridge, Alison M
2010-09-01
Classical models of gene flow fail in three ways: they cannot explain large-scale patterns; they predict much more genetic diversity than is observed; and they assume that loosely linked genetic loci evolve independently. We propose a new model that deals with these problems. Extinction events kill some fraction of individuals in a region. These are replaced by offspring from a small number of parents, drawn from the preexisting population. This model of evolution forwards in time corresponds to a backwards model, in which ancestral lineages jump to a new location if they are hit by an event, and may coalesce with other lineages that are hit by the same event. We derive an expression for the identity in allelic state, and show that, over scales much larger than the largest event, this converges to the classical value derived by Wright and Malécot. However, rare events that cover large areas cause low genetic diversity, large-scale patterns, and correlations in ancestry between unlinked loci.
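A minimal one-dimensional caricature of the forward-time model (an event kills the individuals within a region and replaces them with offspring of a small number of parents) can be simulated directly. The circular geometry and all parameter values below are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Individuals live on a circle of L sites, each initially carrying a unique
# allele label, so loss of labels measures loss of genetic diversity.
L, radius, n_parents, n_events = 200, 10, 2, 2000
alleles = np.arange(L)

for _ in range(n_events):
    c = rng.integers(L)
    # Sites within circular distance `radius` of the event centre are killed.
    hit = np.abs((np.arange(L) - c + L // 2) % L - L // 2) <= radius
    # Replacements descend from a few parents drawn from the preexisting
    # population (drawn globally here for simplicity).
    parents = rng.choice(alleles, size=n_parents)
    alleles[hit] = rng.choice(parents, size=hit.sum())

diversity = len(np.unique(alleles)) / L  # fraction of allele types surviving
```

Because each event replaces many individuals with copies of at most `n_parents` alleles, diversity is eroded far faster than under classical site-by-site drift, which is the qualitative point of the model.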
Local gravity and large-scale structure
NASA Technical Reports Server (NTRS)
Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.
1990-01-01
The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
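In linear theory, the peculiar velocity induced at the origin by mass fluctuations within radius R is commonly written as follows (a standard relation assumed here rather than quoted from the paper; b is the linear bias relating galaxy and mass fluctuations):

```latex
\mathbf{v}(R) \;=\; \frac{H_0\, f(\Omega)}{4\pi b}
\int_{|\mathbf{r}|<R} \delta_g(\mathbf{r})\,
\frac{\hat{\mathbf{r}}}{|\mathbf{r}|^{2}}\, \mathrm{d}^3 r,
\qquad f(\Omega) \simeq \Omega^{0.6}.
```

The convergence measures for v(R) discussed above probe how quickly this integral saturates with depth, which is what distinguishes models with different amounts of large-scale power.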
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
NASA Astrophysics Data System (ADS)
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
NASA Astrophysics Data System (ADS)
de Boer, D. H.; Hassan, M. A.; MacVicar, B.; Stone, M.
2005-01-01
Contributions by Canadian fluvial geomorphologists between 1999 and 2003 are discussed under four major themes: sediment yield and sediment dynamics of large rivers; cohesive sediment transport; turbulent flow structure and sediment transport; and bed material transport and channel morphology. The paper concludes with a section on recent technical advances. During the review period, substantial progress has been made in investigating the details of fluvial processes at relatively small scales. Examples of this emphasis are the studies of flow structure, turbulence characteristics and bedload transport, which continue to form central themes in fluvial research in Canada. Translating the knowledge of small-scale, process-related research to an understanding of the behaviour of large-scale fluvial systems, however, continues to be a formidable challenge. Models play a prominent role in elucidating the link between small-scale processes and large-scale fluvial geomorphology, and, as a result, a number of papers describing models and modelling results have been published during the review period. In addition, a number of investigators are now approaching the problem by directly investigating changes in the system of interest at larger scales, e.g. a channel reach over tens of years, and attempting to infer what processes may have led to the result. It is to be expected that these complementary approaches will contribute to an increased understanding of fluvial systems at a variety of spatial and temporal scales.
HAPEX-Sahel: A large-scale study of land-atmosphere interactions in the semi-arid tropics
NASA Technical Reports Server (NTRS)
Gutorbe, J-P.; Lebel, T.; Tinga, A.; Bessemoulin, P.; Brouwer, J.; Dolman, A.J.; Engman, E. T.; Gash, J. H. C.; Hoepffner, M.; Kabat, P.
1994-01-01
The Hydrologic Atmospheric Pilot EXperiment in the Sahel (HAPEX-Sahel) was carried out in Niger, West Africa, during 1991-1992, with an intensive observation period (IOP) in August-October 1992. It aims at improving the parameterization of land surface-atmosphere interactions at the Global Circulation Model (GCM) gridbox scale. The experiment combines remote sensing and ground-based measurements with hydrological and meteorological modeling to develop aggregation techniques for use in large-scale estimates of the hydrological and meteorological behavior of large areas in the Sahel. The experimental strategy consisted of a period of intensive measurements during the transition from the rainy to the dry season, backed up by a series of long-term measurements in a 1 by 1 deg square in Niger. Three 'supersites' were instrumented with a variety of hydrological and (micro)meteorological equipment to provide detailed information on the surface energy exchange at the local scale. Boundary layer and aircraft measurements were used to provide information at scales of 100-500 sq km. All relevant remote sensing images were obtained for this period. This program of measurements is now being analyzed, and an extensive modelling program is under way to aggregate the information at all scales up to the GCM grid box scale. The experimental strategy and some preliminary results of the IOP are described.
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013 detection data sets shows that the method can improve annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
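The latent-semantic-analysis step can be illustrated with a truncated SVD on a toy visual-word-by-image count matrix. The matrix and its two-category structure below are invented for illustration and are far smaller than the bag-of-words matrices such a method would use in practice.

```python
import numpy as np

# Toy visual-word-by-image count matrix: two latent "categories"
# (e.g., object words vs. background words) generate two groups of images.
X = np.array([[4, 3, 0, 0],
              [3, 4, 1, 0],
              [0, 1, 4, 3],
              [0, 0, 3, 4]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k latent reconstruction

# The sign of the second latent dimension separates the two image groups.
labels = Vt[1] > 0
```

Each retained singular direction plays the role of a latent category; deciding which of them is discriminative for the target object is the category selection step described in the abstract.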
Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik
2017-02-01
The utility and accuracy of computational modeling often requires direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, these measurements yield large data sets with microscopic resolution over macroscopic distances.
NASA Technical Reports Server (NTRS)
Canuto, V. M.
1994-01-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ~ 10^8 for the planetary boundary layer and Re ~ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves the largest scales numerically, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
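For readers unfamiliar with the Smagorinsky model discussed above, a minimal sketch of its eddy-viscosity computation on a 2D grid follows. The coefficient value, the finite-difference gradients, and the test field are illustrative assumptions, not values from the paper.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| for a 2D
    resolved velocity field, where |S| = sqrt(2 S_ij S_ij) and S_ij is
    the resolved strain-rate tensor. cs = 0.17 is a common textbook value."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag

# toy shear field: u = sin(x), v = 0, on a 32x32 grid
x = np.linspace(0.0, 2.0 * np.pi, 32)
u = np.sin(x)[None, :] * np.ones((32, 1))
v = np.zeros((32, 32))
dudy, dudx = np.gradient(u, x, x)      # axis 0 is y, axis 1 is x
dvdy, dvdx = np.gradient(v, x, x)
nu_t = smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta=x[1] - x[0])
```

The abstract's point is precisely that nu_t here depends only on resolved shear: buoyancy, rotation, and stratification do not enter.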
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1994-06-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ~ 10^8 for the planetary boundary layer and Re ~ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves the largest scales numerically, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The time scale used in the standard Lagrangian model contains an adjustable parameter θ; here, the Lagrangian time scale is instead computed dynamically from the solution, based on a "surrogate correlation" of the Germano-identity error (GIE), and does not need any adjustable parameter. Also, a simple material derivative relation is used to approximate the GIE at different events along a pathline, instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity; the present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers, and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model; the improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition: it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
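The Lagrangian averaging underlying both the standard and the dynamic model can be sketched as an exponential relaxation along a pathline (Meneveau-style weighting). The constant relaxation time T below stands in for the paper's dynamically computed time scale and is purely illustrative, as are the sample values.

```python
import numpy as np

def lagrangian_average(prev, new, dt, T):
    """Exponential Lagrangian average along a pathline:
    eps = (dt/T) / (1 + dt/T); a small dt/T weights history heavily."""
    eps = (dt / T) / (1.0 + dt / T)
    return eps * new + (1.0 - eps) * prev

def dynamic_coefficient(I_LM, I_MM, floor=1e-30):
    """Dynamic Smagorinsky coefficient C = <LM>/<MM>, clipped at zero
    to prevent negative eddy viscosity."""
    return np.maximum(I_LM, 0.0) / np.maximum(I_MM, floor)

# accumulate Germano-identity samples (made-up values) along a pathline
I_LM = I_MM = 0.0
for lm, mm in [(0.2, 1.0), (0.4, 1.2), (0.1, 0.9)]:
    I_LM = lagrangian_average(I_LM, lm, dt=1e-3, T=0.1)
    I_MM = lagrangian_average(I_MM, mm, dt=1e-3, T=0.1)
C = dynamic_coefficient(I_LM, I_MM)
```

The paper's contribution is, in effect, to compute T itself from the flow (via the GIE surrogate correlation) rather than prescribe it as above.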
The Saskatchewan River Basin - a large scale observatory for water security research (Invited)
NASA Astrophysics Data System (ADS)
Wheater, H. S.
2013-12-01
The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. 
And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and multiple jurisdictions. The SaskRB has therefore been developed as a large scale observatory, now a Regional Hydroclimate Project of the World Climate Research Programme's GEWEX project, and is available to contribute to the emerging North American Water Program. State-of-the-art hydro-ecological experimental sites have been developed for the key biomes, and a river and lake biogeochemical research facility, focussed on impacts of nutrients and exotic chemicals. Data are integrated at SaskRB scale to support the development of improved large scale climate and hydrological modelling products, the development of DSS systems for local, provincial and basin-scale management, and the development of related social science research, engaging stakeholders in the research and exploring their values and priorities for water security. The observatory provides multiple scales of observation and modelling required to develop: a) new climate, hydrological and ecological science and modelling tools to address environmental change in key environments, and their integrated effects and feedbacks at large catchment scale, b) new tools needed to support river basin management under uncertainty, including anthropogenic controls on land and water management and c) the place-based focus for the development of new transdisciplinary science.
Holographic renormalization group and cosmology in theories with quasilocalized gravity
NASA Astrophysics Data System (ADS)
Csáki, Csaba; Erlich, Joshua; Hollowood, Timothy J.; Terning, John
2001-03-01
We study the long distance behavior of brane theories with quasilocalized gravity. The five-dimensional (5D) effective theory at large scales follows from a holographic renormalization group flow. As intuitively expected, the graviton is effectively four dimensional at intermediate scales and becomes five dimensional at large scales. However, in the holographic effective theory the essentially 4D radion dominates at long distances and gives rise to scalar antigravity. The holographic description shows that at large distances the Gregory-Rubakov-Sibiryakov (GRS) model is equivalent to the model recently proposed by Dvali, Gabadadze, and Porrati (DGP), where a tensionless brane is embedded into 5D Minkowski space, with an additional induced 4D Einstein-Hilbert term on the brane. In the holographic description the radion of the GRS model is automatically localized on the tensionless brane, and provides the ghostlike field necessary to cancel the extra graviton polarization of the DGP model. Thus, there is a holographic duality between these theories. This analysis provides physical insight into how the GRS model works at intermediate scales; in particular it sheds light on the size of the width of the graviton resonance, and also demonstrates how the holographic renormalization group can be used as a practical tool for calculations.
Holographic renormalization group and cosmology in theories with quasilocalized gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Csaki, Csaba; Erlich, Joshua; Hollowood, Timothy J.
2001-03-15
We study the long distance behavior of brane theories with quasilocalized gravity. The five-dimensional (5D) effective theory at large scales follows from a holographic renormalization group flow. As intuitively expected, the graviton is effectively four dimensional at intermediate scales and becomes five dimensional at large scales. However, in the holographic effective theory the essentially 4D radion dominates at long distances and gives rise to scalar antigravity. The holographic description shows that at large distances the Gregory-Rubakov-Sibiryakov (GRS) model is equivalent to the model recently proposed by Dvali, Gabadadze, and Porrati (DGP), where a tensionless brane is embedded into 5D Minkowski space, with an additional induced 4D Einstein-Hilbert term on the brane. In the holographic description the radion of the GRS model is automatically localized on the tensionless brane, and provides the ghostlike field necessary to cancel the extra graviton polarization of the DGP model. Thus, there is a holographic duality between these theories. This analysis provides physical insight into how the GRS model works at intermediate scales; in particular it sheds light on the size of the width of the graviton resonance, and also demonstrates how the holographic renormalization group can be used as a practical tool for calculations.
Podolak, Charles J.
2013-01-01
An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce predictions at a level practical for managers concerned about persistent bank erosion, while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.
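A probabilistic channel-location map of the kind described can be sketched with a toy two-parameter avulsion model: at each reach the channel avulses with some probability and jumps laterally within some range. The parameter values, grid size, and random-walk rule below are invented for illustration and are not the study's calibrated model.

```python
import numpy as np

def occupancy_map(n_reaches=50, n_runs=2000, p_avulse=0.05, spread=3, seed=1):
    """Monte Carlo channel-belt occupancy probability. Each run walks the
    channel downstream through n_reaches cross-sections; with probability
    p_avulse it avulses, shifting laterally by up to `spread` cells."""
    rng = np.random.default_rng(seed)
    width = 21                                  # lateral cells across the valley
    counts = np.zeros((n_reaches, width))
    for _ in range(n_runs):
        pos = width // 2                        # start mid-valley
        for r in range(n_reaches):
            if rng.random() < p_avulse:
                jump = rng.integers(-spread, spread + 1)
                pos = int(np.clip(pos + jump, 0, width - 1))
            counts[r, pos] += 1
    return counts / n_runs                      # each row is a probability map

occ = occupancy_map()
```

With a low avulsion probability, the occupancy concentrates near the starting belt, mirroring the study's finding of a dynamically stable planform.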
Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing
2016-11-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.
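The frequency-dependent dynamics described arise, in much-simplified form, from interacting excitatory and inhibitory populations; a generic Wilson-Cowan-style rate model is sketched below. All parameters are textbook-style placeholders, not the connectivity-constrained values of the actual large-scale laminar model.

```python
import numpy as np

def ei_oscillator(w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                  tau_e=0.006, tau_i=0.015, drive=5.0,
                  dt=1e-4, steps=20000):
    """Minimal excitatory-inhibitory rate model (Wilson-Cowan form),
    integrated with forward Euler. Fast E and slower I time constants
    are the ingredients that can yield fast rhythmic activity."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 4.0)))   # sigmoid transfer
    rE, rI = 0.1, 0.1
    trace = np.empty(steps)
    for t in range(steps):
        rE += dt / tau_e * (-rE + f(w_ee * rE - w_ei * rI + drive))
        rI += dt / tau_i * (-rI + f(w_ie * rE - w_ii * rI))
        trace[t] = rE
    return trace

trace = ei_oscillator()
```

In the actual model, superficial and deep laminar circuits of this general kind generate the gamma versus alpha/low-beta bands, respectively.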
Mejias, Jorge F.; Murray, John D.; Kennedy, Henry; Wang, Xiao-Jing
2016-01-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions. PMID:28138530
Data-driven Climate Modeling and Prediction
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.
2016-12-01
Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability, with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing the large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing; it estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulations. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast, small scales ranked by layers, which interact with the macroscopic (observed) variables of large, slow scales to model the dynamics of the latter and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016].
In particular, new results by DAH-MSM modeling and prediction of Arctic Sea Ice, as well as decadal predictions of near-surface Earth temperatures will be presented.
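Shown below is a heavily simplified two-layer sketch of the inverse-modeling idea: a least-squares main layer plus an AR(1) hidden layer fitted to the main layer's residual, which is what carries the memory effects. The synthetic data, the layer count, and the purely linear form are assumptions; actual MSM uses more layers and richer (nonlinear, stochastically forced) structure.

```python
import numpy as np

def fit_msm_layers(x):
    """Fit x[t+1] ~ x[t] @ A (main layer), then r[t+1] ~ r[t] @ B
    (hidden layer) for the residual r; return both operators and the
    standard deviation of the remaining noise."""
    X0, X1 = x[:-1], x[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0]     # main-layer operator
    r = X1 - X0 @ A                                # unresolved dynamics
    B = np.linalg.lstsq(r[:-1], r[1:], rcond=None)[0]
    eps = r[1:] - r[:-1] @ B                       # final driving noise
    return A, B, eps.std(axis=0)

rng = np.random.default_rng(0)
x = np.zeros((500, 2))
for t in range(499):                               # synthetic AR(1) series
    x[t + 1] = 0.9 * x[t] + rng.normal(0.0, 0.1, 2)
A, B, noise_std = fit_msm_layers(x)
```

To forecast, one would integrate the fitted layered system forward, drawing the innovation from the estimated noise statistics.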
Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima
2017-01-01
We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere-ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled through basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scales. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.
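The propagation of climate uncertainty into a growth distribution can be caricatured by Monte Carlo sampling of stochastic bioenergetics coefficients applied to a downscaled temperature series. The quadratic growth rule and all coefficients below are invented for illustration; they are not the study's pallid sturgeon bioenergetics model.

```python
import numpy as np

def growth_distribution(temps, n_sim=5000, seed=0):
    """Propagate parameter uncertainty through a toy growth rule
    g(T) = a*T - b*T**2, with a and b drawn from assumed distributions.
    Returns total growth over the series for each parameter draw."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.8, 0.05, n_sim)[:, None]     # stochastic coefficients
    b = rng.normal(0.02, 0.002, n_sim)[:, None]
    g = a * temps[None, :] - b * temps[None, :] ** 2
    return g.sum(axis=1)

temps = 15.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, 30))  # toy downscaled series
growth = growth_distribution(temps)
```

Comparing `growth` distributions computed from different climate-model temperature series is the kind of contrast the study formalizes hierarchically.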
NASA Astrophysics Data System (ADS)
Wehner, Michael; Pall, Pardeep; Zarzycki, Colin; Stone, Daithi
2016-04-01
Probabilistic extreme event attribution is especially difficult for weather events that are caused by extremely rare large-scale meteorological patterns. Traditional modeling techniques have involved using ensembles of climate models, either fully coupled or with prescribed ocean and sea ice. Ensemble sizes for the latter case range from several hundred to tens of thousands. However, even if the simulations are constrained by the observed ocean state, the requisite large-scale meteorological pattern may not occur frequently enough, or even at all, in free-running climate model simulations. We present a method to ensure that simulated events similar to the observed event are modeled with enough fidelity that robust statistics can be determined given the large-scale meteorological conditions. By initializing suitably constrained short-term ensemble hindcasts of both the actual weather system and a counterfactual weather system in which human interference in the climate system is removed, the human contribution to the magnitude of the event can be determined. However, the change (if any) in the probability of an event of the observed magnitude is conditional not only on the state of the ocean/sea ice system but also on the prescribed initial conditions determined by the causal large-scale meteorological pattern. We will discuss the implications of this technique through two examples: the 2013 Colorado flood and the 2013 Typhoon Haiyan.
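The attribution statistics such factual/counterfactual hindcast ensembles yield are commonly summarized by the risk ratio and the fraction of attributable risk; the ensemble counts in the example are made up.

```python
def attribution_stats(p_actual, p_counterfactual):
    """Given event-exceedance probabilities estimated from factual and
    counterfactual ensembles, return the risk ratio RR = p1/p0 and the
    fraction of attributable risk FAR = 1 - p0/p1."""
    rr = p_actual / p_counterfactual
    far = 1.0 - p_counterfactual / p_actual
    return rr, far

# e.g. 12 of 400 factual vs 4 of 400 counterfactual members exceed the threshold
rr, far = attribution_stats(12 / 400, 4 / 400)   # rr = 3.0, far = 2/3
```

A conditional attribution of the kind the abstract describes simply computes these numbers given the prescribed large-scale meteorological pattern.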
Numerical dissipation vs. subgrid-scale modelling for large eddy simulation
NASA Astrophysics Data System (ADS)
Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos
2017-05-01
This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in standard as well as in dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.
1987-01-01
A quantitative measure of the topology of large-scale structure: the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
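For a Gaussian (random phase) field, the mean genus per unit volume as a function of threshold has the universal shape evaluated below; the spectrum-dependent normalizing amplitude is set to 1 here as a placeholder.

```python
import numpy as np

def genus_density(nu, amplitude=1.0):
    """Genus per unit volume of a 3D Gaussian random field versus
    threshold nu (density contrast in units of the standard deviation):
        g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2),
    where the amplitude A depends on the power spectrum."""
    return amplitude * (1.0 - nu**2) * np.exp(-(nu**2) / 2.0)

nu = np.linspace(-3.0, 3.0, 121)
g = genus_density(nu)
# g > 0 near nu = 0 (spongelike topology); g < 0 for |nu| > 1
# (isolated clusters at high thresholds, isolated voids at low ones)
```

Comparing a measured genus curve against this symmetric shape is the test the paper applies to the CfA sample and the simulations.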
Cloud computing for genomic data analysis and collaboration.
Langmead, Ben; Nellore, Abhinav
2018-04-01
Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
2002-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy-resolving large eddy simulation (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize boundary layer structure and cloud amount, type, and thickness as functions of the large-scale conditions predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model predicted most of the major precipitation events and synoptic variability observed over the year of observation at the SHEBA ice camp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@auecc.aichi-edu.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
We report the first successful simulation of spontaneous formation of surface magnetic structures from a large-scale dynamo by strongly stratified thermal convection in Cartesian geometry. The large-scale dynamo observed in our strongly stratified model has physical properties similar to those in earlier weakly stratified convective dynamo simulations, indicating that the α²-type mechanism is responsible for the dynamo. In addition to the large-scale dynamo, we find that large-scale structures of the vertical magnetic field are spontaneously formed at the convection zone (CZ) surface only in cases with a strongly stratified atmosphere. The organization of the vertical magnetic field proceeds in the upper CZ within tens of convective turnover times, and band-like bipolar structures recurrently appear in the dynamo-saturated stage. We consider several candidates for the origin of the surface magnetic structure formation, and then suggest the existence of an as-yet-unknown mechanism for the self-organization of the large-scale magnetic structure, which should be inherent in the strongly stratified convective atmosphere.
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics that make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases.
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
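The cooperation idea can be illustrated with a toy sketch. This is not the authors' eSS implementation: the crude random perturbation below stands in for a real metaheuristic, the benchmark function and schedule are illustrative, and the parallel threads are simulated sequentially; only the information-sharing step mirrors CeSS.

```python
import random

def rosenbrock(x):
    """Benchmark cost function standing in for a model-calibration fit."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def local_step(x, rng, step):
    """One crude perturbation move, a stand-in for an eSS iteration."""
    return [xi + rng.uniform(-step, step) for xi in x]

def cooperative_search(n_threads=4, epochs=50, iters=20, dim=2, seed=0):
    rng = random.Random(seed)
    # Different step sizes give the "threads" different search characters,
    # as in CeSS, where threads run with different settings.
    steps = [0.5 / (i + 1) for i in range(n_threads)]
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_threads)]
    best_x, best_f = min(((x, rosenbrock(x)) for x in xs), key=lambda t: t[1])
    for _ in range(epochs):
        for t in range(n_threads):
            for _ in range(iters):
                cand = local_step(xs[t], rng, steps[t])
                if rosenbrock(cand) < rosenbrock(xs[t]):
                    xs[t] = cand
            if rosenbrock(xs[t]) < best_f:
                best_x, best_f = xs[t], rosenbrock(xs[t])
        # Cooperation: each thread restarts from the global best so far
        # (the information-sharing step that changes systemic behavior).
        xs = [list(best_x) for _ in range(n_threads)]
    return best_x, best_f

x_best, f_best = cooperative_search()
```

Sharing the incumbent lets slow threads benefit from fast ones, which is why cooperation can outperform the same threads run independently.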
Contribution of peculiar shear motions to large-scale structure
NASA Technical Reports Server (NTRS)
Mueller, Hans-Reinhard; Treumann, Rudolf A.
1994-01-01
Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.
NASA Astrophysics Data System (ADS)
Afanasiev, N. T.; Markov, V. P.
2011-08-01
Approximate functional relationships for the calculation of a disturbed transionogram with a trace deformation caused by the influence of a large-scale irregularity in the electron density are obtained. Numerical and asymptotic modeling of disturbed transionograms at various positions of a spacecraft relative to a ground-based observation point is performed. The possibility of determining the intensity and dimensions of a single large-scale irregularity near the boundary of the radio transparency frequency range of the ionosphere is demonstrated.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, e.g. Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, involving small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust in both single and consecutive impact force reconstruction.
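The l1-penalized deconvolution model can be sketched with a simpler solver than the paper's PDIPM: iterative soft-thresholding (ISTA) minimizes the same objective 0.5·||Hf − y||² + λ·||f||₁. The impulse response and force history below are synthetic and illustrative.

```python
import numpy as np

def ista_deconvolve(H, y, lam, n_iter):
    """Sparse deconvolution via ISTA: a gradient step on the quadratic
    data-fit term followed by soft-thresholding for the l1 penalty.
    (A simple stand-in for the interior-point solver in the paper.)"""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = f - H.T @ (H @ f - y) / L        # gradient step
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

# Toy impact-force history: two isolated impulses filtered through a
# decaying-oscillation impulse response (all values illustrative).
n = 128
t = np.arange(n)
h = np.exp(-t / 10.0) * np.sin(t / 2.0)      # impulse response
H = np.zeros((n, n))
for k in range(n):                            # Toeplitz convolution matrix
    H[k:, k] = h[:n - k]
f_true = np.zeros(n)
f_true[20], f_true[70] = 1.0, 0.5
y = H @ f_true
f_hat = ista_deconvolve(H, y, lam=0.01, n_iter=2000)
```

The recovered force is sparse, with its mass concentrated at the true impact instants, which is the behavior l2 (Tikhonov) regularization cannot reproduce.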
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal-distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
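The prevalence problem with kappa-type measures is easy to demonstrate numerically. A minimal sketch (the two synthetic rating tables below are illustrative, not from the study): both have 90% observed agreement, yet kappa differs sharply because of category prevalence.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a two-rater confusion matrix:
    chance-corrected agreement (po - pe) / (1 - pe)."""
    c = np.asarray(confusion, float)
    n = c.sum()
    po = np.trace(c) / n                      # observed agreement
    pe = (c.sum(0) @ c.sum(1)) / n ** 2       # chance agreement from margins
    return (po - pe) / (1 - pe)

# Identical observed agreement (90%), different category prevalence:
balanced = [[45, 5], [5, 45]]   # kappa = 0.80
skewed   = [[85, 5], [5, 5]]    # kappa ~ 0.44 -- the "prevalence problem"
```

This sensitivity to the marginal distributions, rather than to rater behavior itself, is one of the flaws that motivates the model-based alternative proposed in the abstract.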
NASA Astrophysics Data System (ADS)
Gorokhovski, Mikhael; Zamansky, Rémi
2018-03-01
Consistent with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high Reynolds number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady-drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models, in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. The residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled from a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. For the small particles, the model is assessed by comparison with direct numerical simulation (DNS). Results showed that with this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a dependence of its variance consistent with experiments.
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
Large-scale structure from cosmic-string loops in a baryon-dominated universe
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1988-01-01
The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Omega(0) = 0.2 and h = 0.5 dominated by baryons in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but does rather poorly on smaller scales. There is no visual 'filamentary' structure, and the two-point correlation has too steep a logarithmic slope. The value of G mu = 4 x 10^-6 is significantly lower than previous estimates for the value of G mu in baryon-dominated cosmic string models.
Modeling near-wall turbulent flows
NASA Astrophysics Data System (ADS)
Marusic, Ivan; Mathis, Romain; Hutchins, Nicholas
2010-11-01
The near-wall region of turbulent boundary layers is a crucial region for turbulence production, but it is also a region that becomes increasingly difficult to access and make measurements in as the Reynolds number becomes very high. Consequently, it is desirable to model the turbulence in this region. Recent studies have shown that the classical description, with inner (wall) scaling alone, is insufficient to explain the behaviour of the streamwise turbulence intensities with increasing Reynolds number. Here we will review our recent near-wall model (Marusic et al., Science 329, 2010), where the near-wall turbulence is predicted given information from only the large-scale signature at a single measurement point in the logarithmic layer, considerably far from the wall. The model is consistent with the Townsend attached eddy hypothesis in that the large-scale structures associated with the log region are felt all the way down to the wall, but it also includes a non-linear amplitude modulation effect of the large structures on the near-wall turbulence. Detailed predicted spectra across the entire near-wall region will be presented, together with other higher order statistics over a large range of Reynolds numbers varying from laboratory to atmospheric flows.
Large-scale shell-model study of the Sn isotopes
NASA Astrophysics Data System (ADS)
Osnes, Eivind; Engeland, Torgeir; Hjorth-Jensen, Morten
2015-05-01
We summarize the results of an extensive study of the structure of the Sn isotopes using a large shell-model space and effective interactions evaluated from realistic two-nucleon potentials. For a fuller account, see ref. [1].
Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames
NASA Astrophysics Data System (ADS)
Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.
2017-11-01
Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed 'flamelet' approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.
The origin of the structure of large-scale magnetic fields in disc galaxies
NASA Astrophysics Data System (ADS)
Nixon, C. J.; Hands, T. O.; King, A. R.; Pringle, J. E.
2018-07-01
The large-scale magnetic fields observed in spiral disc galaxies are often thought to result from dynamo action in the disc plane. However, the increasing importance of Faraday depolarization along any line of sight towards the galactic plane suggests that the strongest polarization signal may come from well above (˜0.3-1 kpc) this plane, from the vicinity of the warm interstellar medium (WIM)/halo interface. We propose (see also Henriksen & Irwin 2016) that the observed spiral fields (polarization patterns) result from the action of vertical shear on an initially poloidal field. We show that this simple model accounts for the main observed properties of large-scale fields. We speculate as to how current models of optical spiral structure may generate the observed arm/interarm spiral polarization patterns.
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming to assess the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content.
In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast-velocity-weakening friction law will also be considered.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian
2016-01-01
A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process: the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
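The exponentiated-Gaussian construction can be sketched in one dimension. The sketch below (all parameter values illustrative; the paper's model uses a specific autocorrelation function, here replaced by a simple AR(1) chain) produces a lognormal AOT transect and a structure function that saturates at large separations:

```python
import numpy as np

def lognormal_aot_series(n, corr_len, sigma=0.5, mean_log=-1.5, seed=0):
    """Sample an AOT transect as tau = exp(g): g is a stationary Gaussian
    process, here an AR(1) chain whose lag-1 correlation encodes the
    autocorrelation length (parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-1.0 / corr_len)
    g = np.empty(n)
    g[0] = rng.normal()
    for i in range(1, n):
        g[i] = rho * g[i - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return np.exp(mean_log + sigma * g)

def structure_function(tau, lags):
    """D(r) = <(tau(x + r) - tau(x))^2>; for a stationary process it
    saturates at twice the variance beyond the correlation length,
    the 'correct asymptotic behavior at large scales'."""
    return np.array([np.mean((tau[r:] - tau[:-r]) ** 2) for r in lags])

tau = lognormal_aot_series(n=20000, corr_len=50)
D = structure_function(tau, lags=[1, 10, 100, 1000])
```

The saturation of D(r) (rather than unbounded fractal growth) is what makes this form convenient for parameterizing AOT statistics from satellite data.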
Numerical study of dynamo action at low magnetic Prandtl numbers.
Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A
2005-04-29
We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P_M. The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model, and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well-defined structure at large scales and turbulent fluctuations at small scales. Our main findings are: (i) dynamos are observed from P_M = 1 down to P_M = 10^-2; (ii) the critical magnetic Reynolds number increases sharply with 1/P_M as turbulence sets in and then saturates; and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P_M is decreased. The dynamo then grows at large scales and modifies the turbulent velocity fluctuations.
Lyons, James E.; Royle, J. Andrew; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
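The N-mixture idea, latent true abundance N with repeated imperfect counts, can be sketched with a likelihood evaluation. This is a minimal maximum-likelihood-style sketch of the model class, not the paper's Bayesian analysis; the count data below are synthetic.

```python
from math import comb, exp

def site_likelihood(counts, lam, p, n_max=100):
    """N-mixture likelihood for one site: marginalize the latent abundance
    N ~ Poisson(lam), with each repeated count y ~ Binomial(N, p)."""
    total, pois = 0.0, exp(-lam)      # pois = Poisson pmf at N = 0
    for N in range(n_max + 1):
        if N >= 1:
            pois *= lam / N           # pmf recurrence avoids factorials
        if N >= max(counts):          # Binomial needs N >= every count
            binom = 1.0
            for y in counts:
                binom *= comb(N, y) * p ** y * (1 - p) ** (N - y)
            total += pois * binom
    return total

# Three repeat visits to one site with imperfect detection (synthetic).
counts = [7, 8, 6]
good = site_likelihood(counts, lam=10.0, p=0.7)   # plausible parameters
bad = site_likelihood(counts, lam=10.0, p=0.2)    # implausible detection
```

Maximizing (or placing priors on) this likelihood over lam and p, with covariates on p such as rain or tidal stage, is how the framework separates true abundance from detection probability.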
Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya
NASA Astrophysics Data System (ADS)
Rounce, D.; McKinney, D. C.; Lala, J.
2016-12-01
The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten these lakes. This study discusses the merger of a large-scale, remotely-based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
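The core quantities can be sketched numerically. The sketch below computes the TOPMODEL topographic index and then bins it into classes with a capacity that grows linearly across the index range, scaled by a single parameter; the linear growth and all values are illustrative assumptions, not the paper's exact distribution function.

```python
import numpy as np

def topographic_index(spec_area, slope_rad):
    """TOPMODEL topographic index ln(a / tan(beta)), with a the specific
    upslope area per unit contour length and beta the local slope."""
    return np.log(spec_area / np.maximum(np.tan(slope_rad), 1e-6))

def trg_storage_classes(ti, n_classes, s_scale):
    """Sketch of the TRG idea as described above: bin cells by topographic
    index; each class gets a storage capacity growing across the TI range,
    scaled by the single free parameter s_scale (linear growth assumed)."""
    edges = np.linspace(ti.min(), ti.max(), n_classes + 1)
    classes = np.clip(np.digitize(ti, edges) - 1, 0, n_classes - 1)
    capacity = s_scale * (edges[1:] - edges[0]) / (edges[-1] - edges[0])
    return classes, capacity

# Wet valley bottoms (large area, gentle slope) get a high index.
ti = topographic_index(np.array([50.0, 500.0, 5000.0]),
                       np.radians(np.array([30.0, 10.0, 2.0])))
classes, capacity = trg_storage_classes(np.linspace(2, 12, 100), 10, s_scale=0.3)
```

Binning by topographic index is what lets grid cells with different terrain receive different storage-distribution curves while calibrating only one parameter.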
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2010-09-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics.
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
NASA Astrophysics Data System (ADS)
Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.
2004-11-01
Large eddy simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, where the subgrid-scale (SGS) model is expected to play a crucial role. An LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general, the scale-dependent Lagrangian model produces larger Smagorinsky coefficients than the scale-invariant one, leading to reduced resolved rms velocities, especially in the boundary layers near the bluff bodies.
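All three SGS variants above are eddy-viscosity closures built on the Smagorinsky form nu_t = (Cs * Delta)^2 * |S|; the dynamic models differ in how the coefficient Cs is computed. A minimal sketch of the constant-coefficient case, assuming a precomputed strain-rate magnitude and an illustrative coefficient value:

```python
def smagorinsky_viscosity(strain_rate_mag, delta, cs=0.16):
    """Smagorinsky SGS eddy viscosity nu_t = (Cs * Delta)^2 * |S|.

    strain_rate_mag : |S|, magnitude of the resolved strain-rate tensor [1/s]
    delta           : grid filter width [m]
    cs              : Smagorinsky coefficient (0.16 is a common static value;
                      the dynamic Lagrangian models compute it locally instead)
    """
    return (cs * delta) ** 2 * strain_rate_mag

# example: |S| = 10 1/s on a 2 m grid with a smaller coefficient
nu_t = smagorinsky_viscosity(10.0, 2.0, cs=0.1)
```

Since a larger Cs gives a larger nu_t, the abstract's observation that the scale-dependent model produces larger coefficients directly implies stronger SGS damping and hence the reduced resolved rms velocities.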
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, applying them to large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to contribute to running advanced hydrological models and simulations. The web-based approach allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
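The core distribution idea, a central queue of small spatial tiles that volunteer nodes pull from and compute independently, can be sketched as below. This is a toy stand-in, not the platform's code: the real system runs JavaScript in browsers against a relational database queue, whereas here threads play the role of volunteers and all names are illustrative.

```python
import queue
import threading

def run_distributed(tiles, model, n_workers=4):
    """Distribute per-tile model runs over 'volunteer' worker threads.

    tiles : dict mapping tile id -> input data for that spatial tile
    model : function applied independently to each tile's data
    """
    tasks, results = queue.Queue(), {}
    lock = threading.Lock()
    for tile_id, data in tiles.items():
        tasks.put((tile_id, data))

    def worker():
        while True:
            try:
                tile_id, data = tasks.get_nowait()
            except queue.Empty:
                return                      # queue drained: volunteer leaves
            out = model(data)               # run the model on this tile
            with lock:
                results[tile_id] = out      # report result back to the server

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# usage: 10 tiles, each a small list of inputs; the "model" just sums them
tiles = {i: list(range(i, i + 5)) for i in range(10)}
results = run_distributed(tiles, sum)
```

Because tiles are independent, stragglers or dropped volunteers only delay their own tiles; a real platform would additionally re-queue tiles whose results never arrive.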
Describing Ecosystem Complexity through Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs to a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes, where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.
Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.
Baars, W J; Hutchins, N; Marusic, I
2017-03-13
Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re τ ≈2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. 
With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
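Amplitude modulation as described above is commonly estimated by correlating an envelope of the small-scale fluctuations with the large-scale signal. A crude self-contained sketch with synthetic signals (the signal forms, modulation strength, and moving-average window are assumptions for illustration; real analyses typically use spectral filtering and a Hilbert-transform envelope):

```python
import math
import random
import statistics

random.seed(0)
n, T = 4000, 40.0
t = [i * T / n for i in range(n)]
u_large = [math.sin(2 * math.pi * x / 10.0) for x in t]        # large-scale motion
# synthetic small scales whose amplitude grows with the large-scale fluctuation
u_small = [(1.0 + 0.5 * ul) * random.gauss(0, 1) for ul in u_large]

def moving_avg(x, w):
    """Centered moving average with half-width w (crude low-pass filter)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - w), min(len(x), i + w + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

envelope = moving_avg([abs(v) for v in u_small], 100)  # small-scale envelope
R = corr(envelope, u_large)                            # amplitude-modulation coefficient
```

A positive R recovers the built-in coupling: the small-scale amplitude rises and falls with the large-scale velocity, the signature of amplitude modulation in the near-wall region.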
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, and thus subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function is used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach faithfully reproduces Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
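The spatial autocorrelation function at the heart of this approach can be sketched as follows, applied to a synthetic spatially correlated (AR(1)) subgrid field; the field model and lag range are illustrative assumptions, not the paper's cloud data.

```python
import random

def autocorrelation(field, max_lag):
    """Normalized spatial autocorrelation of a 1-D field for lags 0..max_lag."""
    n = len(field)
    mean = sum(field) / n
    var = sum((x - mean) ** 2 for x in field) / n
    acf = []
    for lag in range(max_lag + 1):
        c = sum((field[i] - mean) * (field[i + lag] - mean)
                for i in range(n - lag)) / (n - lag)
        acf.append(c / var)
    return acf

# synthetic correlated subgrid field: AR(1) process with persistence 0.9
random.seed(1)
field, x = [], 0.0
for _ in range(2000):
    x = 0.9 * x + random.gauss(0, 1)
    field.append(x)

acf = autocorrelation(field, 20)
```

The decay of the autocorrelation with lag encodes the spatial structure (e.g., a decorrelation length) that a pure probability distribution function discards, which is what lets this approach capture net subgrid cloud-radiation interactions.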
Large-scale mapping of hard-rock aquifer properties applied to Burkina Faso.
Courtois, Nathalie; Lachassagne, Patrick; Wyns, Robert; Blanchin, Raymonde; Bougaïré, Francis D; Somé, Sylvain; Tapsoba, Aïssata
2010-01-01
A country-scale (1:1,000,000) methodology has been developed for hydrogeologic mapping of hard-rock aquifers (granitic and metamorphic rocks) of the type that underlie a large part of the African continent. The method is based on quantifying the "useful thickness" and hydrodynamic properties of such aquifers and uses a recent conceptual model developed for this hydrogeologic context. This model links hydrodynamic parameters (transmissivity, storativity) to lithology and the geometry of the various layers constituting a weathering profile. The country-scale hydrogeological mapping was implemented in Burkina Faso, where a recent 1:1,000,000-scale digital geological map and a database of some 16,000 water wells were used to evaluate the methodology.
NASA Astrophysics Data System (ADS)
Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.
2015-12-01
Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to the lack of either detailed channel and floodplain geometry or hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model with its own simpler 1D routing scheme (DynRout), which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) a FM 2D flexible mesh forced with PCR output; (2) as in (1) but discriminating between 1D channels and 2D floodplains; and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory.
The present research constitutes a first step towards a globally applicable approach to fully couple hydrologic with hydrodynamic computations while discriminating between 1D channels and 2D floodplains. Such a fully-fledged set-up would be able to provide higher-order flood hazard information, e.g. time to flooding and flood duration, ultimately leading to improved flood risk assessment and management at the large scale.
Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.
2016-01-01
Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. 
The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial coverage of the study area, and lower small-scale sampling resolution, which introduces local data uncertainties but allows better spatial coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements distributed in a spatially nested sampling design over these fields, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The model results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64.
For larger numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described by a saturation curve: beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design and the assessment of model results as well as ecological inferences.
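The saturation behaviour described above can be reproduced in a toy virtual-sampling experiment: true field means plus nugget-scale noise, with the correlation between true and estimated field averages improving as more samples per field are averaged. The set-up mirrors the 65 fields of the study, but the mean range and nugget standard deviation below are invented for illustration.

```python
import math
import random
import statistics

random.seed(42)
N_FIELDS = 65
true_means = [random.uniform(10, 50) for _ in range(N_FIELDS)]  # per-field truth
NUGGET_SD = 15.0   # assumed within-field (small-scale) standard deviation

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def performance(n_per_field):
    """Correlation between true field means and averages of n virtual samples."""
    est = [statistics.fmean(random.gauss(mu, NUGGET_SD)
                            for _ in range(n_per_field))
           for mu in true_means]
    return corr(true_means, est)

curve = {n: performance(n) for n in (1, 2, 5, 50)}
```

Because averaging n samples shrinks the nugget noise by roughly 1/sqrt(n), the correlation rises quickly for the first few samples and then flattens, the saturation curve reported in the abstract.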
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media (the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations) show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases (the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents) we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents.
Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
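Under the hydrostatic (buoyant-segregation) assumption described above, the sub-grid saturation profile can be reconstructed from the capillary pressure at the cell bottom. The sketch below uses a single drainage-branch Brooks-Corey curve with assumed parameters and deliberately ignores hysteresis, which the study shows matters at intermediate vertical extents; all constants are illustrative.

```python
# illustrative constants (not from the paper)
DRHO_G = (1000.0 - 700.0) * 9.81   # density difference x gravity [Pa/m]
P_ENTRY = 2000.0                   # Brooks-Corey entry pressure [Pa]
LAMBDA = 2.0                       # Brooks-Corey pore-size index

def saturation(pc):
    """Invert the Brooks-Corey capillary pressure curve (drainage branch):
    wetting-phase saturation as a function of capillary pressure."""
    if pc <= P_ENTRY:
        return 1.0
    return (P_ENTRY / pc) ** LAMBDA

def coarse_saturation(pc_bottom, height, n=1000):
    """Vertically averaged wetting saturation in a grid cell of given height,
    assuming hydrostatic phase pressures: pc increases upward at rate DRHO_G."""
    dz = height / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        total += saturation(pc_bottom + DRHO_G * z)
    return total / n

s_tall = coarse_saturation(1000.0, 10.0)   # 10 m cell: mostly drained at top
s_short = coarse_saturation(1000.0, 2.0)   # 2 m cell: much wetter on average
```

Comparing cells of different heights shows why the vertical extent controls the coarse-scale behaviour: the taller the cell, the wider the range of capillary pressures (and hence saturations) hidden inside a single coarse value, which is where local hysteresis enters the effective quantities.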
NASA Astrophysics Data System (ADS)
Vanclooster, Marnik
2010-05-01
The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process description is valid, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. 'Moditoring' science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.
NASA Astrophysics Data System (ADS)
Yue, X.; Wang, W.; Schreiner, W. S.; Kuo, Y. H.; Lei, J.; Liu, J.; Burns, A. G.; Zhang, Y.; Zhang, S.
2015-12-01
Based on slant total electron content (TEC) observations from ~10 satellites and ~450 ground-based IGS GNSS stations, we constructed a 4-D ionospheric electron density reanalysis of the March 17, 2013 geomagnetic storm. Four main large-scale ionospheric disturbances are identified from the reanalysis: (1) the positive storm during the initial phase; (2) the SED (storm enhanced density) structure in both the northern and southern hemispheres; (3) the large positive storm in the main phase; and (4) the significant negative storm at middle and low latitudes during the recovery phase. We then ran the NCAR-TIEGCM model with the Heelis electric potential empirical model as the polar input. The TIEGCM reproduces three of the four large-scale structures (all except the SED) very well. We further analyzed the altitudinal variations of these large-scale disturbances and found several interesting features, such as the altitude variation of the SED and the rotation of the positive/negative storm phases with local time. These structures could not be identified clearly from traditionally used data sources, which either lack global coverage or vertical resolution. The drivers, such as neutral wind/density and the electric field from TIEGCM simulations, are also analyzed to self-consistently explain the identified disturbance features.
Potential climatic impacts and reliability of large-scale offshore wind farms
NASA Astrophysics Data System (ADS)
Wang, Chien; Prinn, Ronald G.
2011-04-01
The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts, and the inherent reliability of such a large-scale usage of intermittent wind power, would have to be carefully assessed, in addition to the need to lower the high current unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the follow-up study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to the lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines, which was not entirely offset by the concurrent reduction of mean wind kinetic energy.
We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land-based installations. However, the intermittency caused by the significant seasonal wind variations over several major offshore sites is substantial, and demands further options to ensure the reliability of large-scale offshore wind power. The method that we used to simulate the offshore wind turbine effect on the lower atmosphere involved simply increasing the ocean surface drag coefficient. While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity. New field observations of actual wind turbine arrays are definitely required to provide ultimate validation of the model predictions presented here.
Development and Validation of a Spanish Version of the Grit-S Scale
Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.
2018-01-01
This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single-factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705
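Internal consistency of the kind reported above is typically quantified with Cronbach's alpha; whether this study used alpha specifically is an assumption here. A self-contained sketch with made-up item responses (not the study's data):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of per-respondent item-score lists:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(item_scores[0])
    item_vars = [statistics.pvariance([resp[i] for resp in item_scores])
                 for i in range(k)]
    total_var = statistics.pvariance([sum(resp) for resp in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# made-up responses: 5 respondents x 4 items on a 1-5 scale
responses = [
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [3, 3, 3, 3],
]
alpha = cronbach_alpha(responses)
```

Items that move together inflate the total-score variance relative to the summed item variances, driving alpha toward 1; near-independent items give a low alpha, which is why short subscales (as in the abstract) often score poorly even when the full scale is acceptable.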
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, W.
High-resolution satellite data provide detailed, quantitative descriptions of land surface characteristics over large areas, so that objective scale linkage becomes feasible. With the aid of satellite data, Sellers et al. and Wood and Lakshmi examined the linearity of processes scaled up from 30 m to 15 km. If the phenomenon is scale invariant, then the aggregated value of a function or flux is equivalent to the function computed from aggregated values of controlling variables. The linear relation may be realistic for limited land areas having no large surface contrasts to cause significant horizontal exchange. However, for areas with sharp surface contrasts, horizontal exchange and different dynamics in the atmospheric boundary layer may induce nonlinear interactions, such as at interfaces of land-water, forest-farmland, and irrigated crops-desert steppe. The linear approach, however, represents the simplest scenario, and is useful for developing an effective scheme for incorporating subgrid land surface processes into large-scale models. Our studies focus on coupling satellite data and ground measurements with a satellite-data-driven land surface model to parameterize surface fluxes for large-scale climate models. In this case study, we used surface spectral reflectance data from satellite remote sensing to characterize spatial and temporal changes in vegetation and associated surface parameters in an area of about 350 × 400 km covering the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site of the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program.
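The scale-invariance test described above, whether the flux of the aggregated variables equals the aggregate of the fluxes, can be sketched with a linear and a nonlinear flux law over a sharp land-water contrast; the flux functions and moisture values are illustrative assumptions, not those of the cited studies.

```python
def aggregate_then_apply(f, values):
    """Flux computed from the aggregated (grid-mean) controlling variable."""
    return f(sum(values) / len(values))

def apply_then_aggregate(f, values):
    """Aggregated (grid-mean) value of the fluxes computed per pixel."""
    return sum(f(v) for v in values) / len(values)

# surface moisture over a sharp land-water contrast (illustrative values)
patch = [0.05, 0.05, 0.95, 0.95]

linear = lambda m: 2.0 * m + 1.0      # a linear (scale-invariant) flux law
nonlinear = lambda m: m ** 0.5        # e.g. a square-root flux response

lin_gap = (aggregate_then_apply(linear, patch)
           - apply_then_aggregate(linear, patch))
nonlin_gap = (aggregate_then_apply(nonlinear, patch)
              - apply_then_aggregate(nonlinear, patch))
```

For the linear law the two orderings agree exactly, which is the scale-invariance condition; the nonlinear law leaves a gap at the sharp contrast, illustrating why the linear approach is only the simplest scenario.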
A large meteorological wind tunnel was used to simulate a suburban atmospheric boundary layer. The model-prototype scale was 1:300 and the roughness length was approximately 1.0 m full scale. The model boundary layer simulated full scale dispersion from ground-level and elevated ...
Thick strings, the liquid crystal blue phase, and cosmological large-scale structure
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1992-01-01
A phenomenological model based on the liquid crystal blue phase is proposed as a model for a late-time cosmological phase transition. Topological defects, in particular thick strings and/or domain walls, are presented as seeds for structure formation. It is shown that the observed large-scale structure, including quasi-periodic wall structure, can be well fitted in the model without violating the microwave background isotropy bound or the limits from induced gravitational waves and the millisecond pulsar timing. Furthermore, such late-time transitions can produce objects such as quasars at high redshifts. The model appears to work with either cold or hot dark matter.
NASA Astrophysics Data System (ADS)
Rasera, L. G.; Mariethoz, G.; Lane, S. N.
2017-12-01
Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2017-07-01
A large eddy simulation (LES) methodology coupled with near-wall modeling has been implemented in the current study for high-Re neutral atmospheric boundary layer flows using an exponentially accurate spectral element method in the open-source research code Nek5000. The effect of artificial length scales due to subgrid scale (SGS) and near-wall modeling (NWM) on the scaling laws and structure of the inner and outer layer eddies is studied using varying SGS and NWM parameters in the spectral element framework. The study provides an understanding of the various length scales and dynamics of the eddies affected by the LES model, and also of the fundamental physics behind the inner and outer layer eddies which are responsible for the correct behavior of the mean statistics in accordance with the definition of equilibrium layers by Townsend. An economical and accurate LES model based on capturing the near-wall coherent eddies has been designed, which is successful in eliminating artificial length scale effects such as the log-layer mismatch or the secondary peak generation in the streamwise variance.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life-cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...
2015-06-19
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as the tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
Sensitivity simulations of superparameterised convection in a general circulation model
NASA Astrophysics Data System (ADS)
Rybka, Harald; Tost, Holger
2015-04-01
Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically representing cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach has been developed to connect these two types of models, which act on different scales, in order to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainty regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and types is examined while changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we also evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle.
Furthermore, the subgrid variability (individual CRM cell output) is analysed in order to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long late-time tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales, and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect.
We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a magnitude similar to the local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
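The temporal-moment matching described above can be sketched in a few lines; the moment definitions are standard, but the example arrival times are invented for illustration:

```python
import numpy as np

def temporal_moments(arrival_times):
    """Temporal moments of block-wise particle arrival times.

    Returns the mean arrival time (1st raw moment) and the 2nd central
    moment (spread of arrivals, related to effective dispersion).
    Illustrative only; the study matches such moments to an MRMT model.
    """
    t = np.asarray(arrival_times, dtype=float)
    m1 = t.mean()                 # mean arrival time
    mu2 = ((t - m1) ** 2).mean()  # 2nd central moment
    return m1, mu2

m1, mu2 = temporal_moments([1.0, 2.0, 3.0, 6.0])
print(m1, mu2)
```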
Large-Scale Machine Learning for Classification and Search
ERIC Educational Resources Information Center
Liu, Wei
2012-01-01
With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…
Architecture and Programming Models for High Performance Intensive Computation
2016-06-29
Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015, Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL..."The Mahali project," Communications Magazine, vol. 52, pp. 111–133, Aug 2014.
The potential for agricultural land use change to reduce flood risk in a large watershed
USDA-ARS?s Scientific Manuscript database
Effects of agricultural land management practices on surface runoff are evident at local scales, but evidence for watershed-scale impacts is limited. In this study, we used the Soil and Water Assessment Tool model to assess changes in downstream flood risks under different land uses for the large, ...
Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment
ERIC Educational Resources Information Center
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas
2015-01-01
Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…
Upper Washita River experimental watersheds: Multiyear stability of soil water content profiles
USDA-ARS?s Scientific Manuscript database
Scaling in situ soil water content time series data to a large spatial domain is a key element of watershed environmental monitoring and modeling. The primary method of estimating and monitoring large-scale soil water content distributions is via in situ networks. It is critical to establish the s...
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Current characterization requires large-scale testing to obtain steady detonation wave data for analysis, due to the relatively thick reaction zones. Use of a microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. The experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at the "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models.
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
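The allometric relationships at issue have the power-law form y = a·x^b, whose exponent is conventionally estimated by a fit in log-log space; below is a minimal sketch on synthetic data (the paper's hierarchical Bayesian fit requires an MCMC sampler and is not reproduced here):

```python
import numpy as np

# Synthetic allometric data y = a * x**b (illustrative values,
# not taken from the study's datasets).
a_true, b_true = 2.0, 0.75
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = a_true * x ** b_true

# A power law is linear in log-log space: log y = log a + b log x,
# so a degree-1 polynomial fit recovers the scaling exponent b.
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
print(round(b_hat, 3), round(float(np.exp(log_a_hat)), 3))
```

The hierarchical Bayesian approach generalizes this by letting the exponent vary across species under a shared prior, rather than fitting one global slope.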
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and could provide a better Pareto frontier.
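The non-dominated sorting at the core of NSGAII-type algorithms can be sketched for two minimized objectives; the weighting and crowding-distance refinements of WNSGAII are omitted, and the sample points are invented:

```python
def pareto_front(points):
    """Return the non-dominated subset for two objectives, both minimized.

    This is the first step of non-dominated sorting used by NSGAII-type
    algorithms; weighted crowding distance and later fronts are omitted.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Objective pairs, e.g. (cost, environmental-impact), both to minimize.
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(pareto_front(pts))
```

In the study one objective is maximized (power generation); maximization is handled by negating that objective before sorting.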
NASA Astrophysics Data System (ADS)
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, has suffered severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
Perspectives on integrated modeling of transport processes in semiconductor crystal growth
NASA Technical Reports Server (NTRS)
Brown, Robert A.
1992-01-01
The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.
LARGE-SCALE PREDICTIONS OF MOBILE SOURCE CONTRIBUTIONS TO CONCENTRATIONS OF TOXIC AIR POLLUTANTS
This presentation shows concentrations and deposition of toxic air pollutants predicted by a 3-D air quality model, the Community Multiscale Air Quality (CMAQ) modeling system. Contributions from both on-road and non-road mobile sources are analyzed.
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
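A minimal discrete-time SIR process on an explicit contact graph conveys the basic mechanism; this sketch is far simpler than the individual-based simulation described, and the hub-and-spoke graph and parameters are invented for illustration:

```python
import random

def sir_outbreak(contacts, seed, p_transmit=0.5, steps=10, rng=None):
    """Discrete-time SIR spread on an explicit contact network.

    contacts: dict mapping each node to a list of its neighbours.
    Each infected node infects each susceptible neighbour with
    probability p_transmit, then recovers. Illustrative sketch only.
    Returns the set of all nodes ever infected.
    """
    rng = rng or random.Random(0)
    infected = {seed}
    recovered = set()
    for _ in range(steps):
        new = set()
        for node in infected:
            for nb in contacts[node]:
                if nb not in infected and nb not in recovered:
                    if rng.random() < p_transmit:
                        new.add(nb)
        recovered |= infected
        infected = new
        if not infected:
            break
    return recovered

# A small hub-and-spoke network: vaccinating (removing) the hub "h"
# would disconnect the spokes, mirroring the targeted strategy above.
g = {"h": ["a", "b", "c"], "a": ["h"], "b": ["h"], "c": ["h"]}
print(len(sir_outbreak(g, "h", p_transmit=1.0)))
```

On a scale-free locations graph, removing a few such hubs disproportionately fragments transmission paths, which is why targeted vaccination can outperform mass vaccination.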
Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Stang, David B.
1987-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D-MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, occurs along loop-shaped structures, and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.
Robust large-scale parallel nonlinear solvers for simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's method: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model.
The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
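Broyden's rank-one Jacobian update, the core idea behind the limited-memory method compared above, can be sketched in a few lines. This is a minimal dense illustration, not Sandia's solver: the test problem, the finite-difference seeding of the initial Jacobian, and the tolerances are invented for the example.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """Forward-difference Jacobian used only to seed Broyden's method."""
    f0 = F(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's 'good' rank-one update, which
    avoids re-evaluating the Jacobian after the first iteration."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -f)          # quasi-Newton step
        x = x + s
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        # rank-one update enforcing the secant condition B_new @ s = f_new - f
        B += np.outer(f_new - f - B @ s, s) / (s @ s)
        f = f_new
    return x

# example: intersection of the circle x^2 + y^2 = 2 with the line x = y
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
root = broyden_solve(F, [2.0, 0.5])
```

A limited-memory variant would store the update pairs instead of the dense matrix B, which is what makes the approach viable at large scale.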
Active Exploration of Large 3D Model Repositories.
Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min
2015-12-01
With the broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes increasingly urgent. Existing model retrieval techniques do not scale well with the size of the database, since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100K models.
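The idea of combining query similarity with precomputed similarities among the database models themselves can be caricatured in a few lines. This is a hypothetical scoring rule on a toy distance matrix, not the paper's sparse metric or active-learning procedure; all names and weights are illustrative.

```python
import numpy as np

def rank_models(dist, liked, disliked, alpha=1.0, beta=1.0):
    """Rank database models by similarity to 'liked' examples minus
    similarity to 'disliked' ones, from a pairwise distance matrix."""
    sim = np.exp(-dist)                       # map distances to similarities
    score = alpha * sim[:, liked].mean(axis=1)
    if disliked:
        score -= beta * sim[:, disliked].mean(axis=1)
    return np.argsort(-score)                 # best candidates first

# toy 4-model repository: model 0 is near model 1, model 3 is near model 2
dist = np.array([[0., 1., 4., 5.],
                 [1., 0., 4., 5.],
                 [4., 4., 0., 1.],
                 [5., 5., 1., 0.]])
order = rank_models(dist, liked=[0], disliked=[3])
```

In the paper's setting the distance matrix is sparse and precomputed offline from shape descriptors, so each labeling round only requires cheap lookups.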
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume excessive memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can choose how much data to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient (e.g., 0.95) each iteration to reflect the decrease of uncertainty. No universal coefficient exists for all problems, and users are encouraged to experiment. The new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the scheme.
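A sketch of the storage idea, assuming SciPy is available: entries beyond a few correlation scales are dropped before the matrix is stored in CSC format, and the correlation scale shrinks by a factor each iteration. The exponential model, the three-scale cutoff, and the 0.95 shrink factor are illustrative choices in the spirit of the abstract, not the authors' code.

```python
import numpy as np
from scipy.sparse import csc_matrix

def exponential_covariance(coords, sigma2, corr_scale, cutoff=3.0):
    """Build a sparse covariance matrix, dropping entries beyond
    `cutoff` correlation scales (they carry little information)."""
    n = len(coords)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            d = abs(coords[i] - coords[j])
            if d <= cutoff * corr_scale:
                rows.append(i)
                cols.append(j)
                vals.append(sigma2 * np.exp(-d / corr_scale))
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

# 1D grid example; the correlation scale shrinks each iteration
x = np.linspace(0.0, 10.0, 51)
scale = 2.0
C = exponential_covariance(x, sigma2=1.0, corr_scale=scale)
scale *= 0.95   # shortened scale for the next inverse-modeling iteration
```

The cutoff makes the stored matrix banded for 1D problems, so memory grows linearly with the number of cells instead of quadratically.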
Impacts of Large-Scale Circulation on Convection: A 2-D Cloud Resolving Model Study
NASA Technical Reports Server (NTRS)
Li, X; Sui, C.-H.; Lau, K.-M.
1999-01-01
Studies of the impacts of large-scale circulation on convection, and of the roles of convection in heat and water balances over tropical regions, are fundamentally important for understanding global climate change. Heat and water budgets over a warm pool (SST = 29.5 C) and a cold pool (SST = 26 C) were analyzed based on simulations with a two-dimensional cloud-resolving model. Here the sensitivity of heat and water budgets to different sizes of the warm and cold pools is examined.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters as well as nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them using scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is identification of TFs for each model parameter: both functional forms and geophysical predictors. TFs used to estimate the parameters of hydrologic models are typically taken from previous studies or derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which and to what extent geophysical attributes are related to model parameters. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis using various hydrologic signatures as objectives to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
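The TF-then-scale pipeline of MPR can be illustrated with a deliberately simple sketch. The linear TF, the sand-fraction predictor, and the block-average scaling function below are all hypothetical stand-ins, not the calibrated forms used with VIC.

```python
import numpy as np

def transfer_function(sand_frac, a=0.5, b=1.5):
    """Hypothetical TF: parameter = a + b * sand_frac. In MPR the TF
    coefficients (a, b) are the calibration targets, not the gridded
    parameter values themselves."""
    return a + b * sand_frac

def upscale(fine_values, block):
    """Scaling function: here a simple block average from the native
    attribute resolution up to the model grid."""
    return fine_values.reshape(-1, block).mean(axis=1)

# sand fraction at 8 native-resolution cells -> 2 coarse model cells
sand = np.array([0.1, 0.3, 0.2, 0.4, 0.6, 0.8, 0.5, 0.7])
theta_fine = transfer_function(sand)       # parameter at native resolution
theta_model = upscale(theta_fine, block=4) # parameter at model resolution
```

Because only (a, b) are calibrated, the same TF transfers to any basin where the attribute field is available, which is the point of regionalization.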
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099
2015-03-15
A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models, as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate-of-deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate-of-deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study.
The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.
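The failure mode of the standard closure, and the scale-similarity remedy, can be illustrated a priori on a 1D toy field: for a nonlinear function p(rho), the filter of p(rho) differs from p applied to the filtered rho, and filtering once more yields a similarity estimate of that difference. The top-hat filter, the quadratic stand-in for the real-gas relation, and the synthetic field are all hypothetical.

```python
import numpy as np

def box_filter(u, w):
    """Periodic top-hat filter of width w (odd), standing in for an LES filter."""
    return np.mean([np.roll(u, k) for k in range(-(w // 2), w // 2 + 1)], axis=0)

# synthetic smooth density field on a periodic 1D grid
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
rho = (1.0 + 0.3 * np.sin(x) + 0.2 * np.sin(3 * x + 1.0)
       + 0.1 * np.sin(7 * x + 2.0))

w = 9
rho_f = box_filter(rho, w)
# exact subfilter term: filter(rho^2) - (filter(rho))^2
tau_exact = box_filter(rho**2, w) - rho_f**2
# scale-similarity estimate: apply the same filter once more as a test filter
tau_sim = box_filter(rho_f**2, w) - box_filter(rho_f, w)**2
corr = np.corrcoef(tau_exact, tau_sim)[0, 1]
```

The high correlation between the exact term and the similarity estimate is the a priori evidence that motivates scale-similarity closures; a full test would repeat this on the FC-DNS fields for each unclosed term.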
On the characteristics of aerosol indirect effect based on dynamic regimes in global climate models
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.; ...
2016-03-04
Aerosol–cloud interactions continue to constitute a major source of uncertainty in estimates of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS), and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on the liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < −25 hPa day−1) and low clouds (stratocumulus and trade wind cumulus) that the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with a high monthly large-scale surface precipitation rate (> 0.1 mm day−1) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). Results show that the uncertainty in AIE is even larger within specific dynamical regimes than in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
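The regime decomposition used here can be sketched as a simple binning of monthly-mean samples by ω500 and averaging a response field within each bin. The −25 hPa day−1 ascent threshold follows the abstract; the other bin edges and the field values are purely illustrative.

```python
import numpy as np

def composite_by_regime(omega500, field, edges):
    """Average `field` within dynamical regimes defined by ω500 bin edges."""
    idx = np.digitize(omega500, edges)        # regime index per sample
    return np.array([field[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(len(edges) + 1)])

omega = np.array([-60., -30., -10., 5., 20., 40.])   # hPa/day, monthly means
lwp_resp = np.array([8., 6., 3., 2., 1., 0.5])       # illustrative LWP response
edges = [-25., 0., 25.]   # strong ascent | weak ascent | weak | strong subsidence
means = composite_by_regime(omega, lwp_resp, edges)
```

Compositing the same field across several GCMs, regime by regime, is what exposes the inter-model spread the abstract describes.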
New Models and Methods for the Electroweak Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Linda
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space.
Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac Gaugino models.
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2015-12-01
Runoff generated during heavy rainfall imposes quick, and often intense, changes in streamflow, which increase the chance of flash floods in the vicinity of streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers the meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically based, semi-distributed model capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT for sub-daily water flow modeling within large watersheds has not been studied much compared to long-term and daily time scales. In this study we investigate water flow in the Nueces River Basin (NRB), a large, semi-arid watershed in South Texas with a drainage area of 16,950 mi2, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall; (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed; and (3) examining means for model performance improvement during model calibration and verification based on the results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.
NASA Astrophysics Data System (ADS)
Lintner, B. R.; Loikith, P. C.; Pike, M.; Aragon, C.
2017-12-01
Climate change information is increasingly required at impact-relevant scales. However, most state-of-the-art climate models are not of sufficiently high spatial resolution to resolve features explicitly at such scales. This challenge is particularly acute in regions of complex topography, such as the Pacific Northwest of the United States. To address this scale mismatch, we consider large-scale meteorological patterns (LSMPs), which climate models can resolve and which can be associated with the occurrence of local-scale climate and climate extremes. In prior work, using self-organizing maps (SOMs), we computed LSMPs over the northwestern United States (NWUS) from daily reanalysis circulation fields and related these to the occurrence of observed extreme temperatures and precipitation: SOMs were used to group LSMPs into 12 nodes, or clusters, spanning the continuum of synoptic variability over the region. Here this observational foundation is utilized as an evaluation target for a suite of global climate models from the Fifth Phase of the Coupled Model Intercomparison Project (CMIP5). Evaluation is performed in two primary ways. First, daily model circulation fields are assigned to one of the 12 reanalysis nodes based on minimization of the mean square error. From this, a bulk model skill score is computed measuring the similarity between the model and reanalysis nodes. Next, SOMs are applied directly to the model output and compared to the nodes obtained from reanalysis. Results reveal that many of the models have LSMPs analogous to the reanalysis, suggesting that the models reasonably capture observed daily synoptic states.
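The first evaluation step, assigning each daily model field to the closest reanalysis node by minimum mean square error, might look like the following sketch. The arrays are tiny stand-ins for gridded 500 hPa circulation fields, and the node values are invented.

```python
import numpy as np

def assign_to_nodes(fields, nodes):
    """fields: (days, npts), nodes: (nnodes, npts).
    Return the index of the minimum-MSE node for each day."""
    # mean squared distance from every daily field to every node
    d2 = ((fields[:, None, :] - nodes[None, :, :]) ** 2).mean(axis=2)
    return d2.argmin(axis=1)

# three "nodes" and three "daily fields", each with two grid points
nodes = np.array([[0., 0.], [1., 1.], [2., 2.]])
days = np.array([[0.1, -0.1], [1.9, 2.2], [0.8, 1.1]])
best = assign_to_nodes(days, nodes)
```

A bulk skill score can then be built from how the resulting node-occupancy frequencies compare with the reanalysis frequencies.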
Cloud-enabled large-scale land surface model simulations with the NASA Land Information System
NASA Astrophysics Data System (ADS)
Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.
2017-12-01
Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA, LIS will enable meaningful simulations containing a large multi-model ensemble and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant to water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that have taken weeks and months can now be performed in minutes.
This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Hartmann, Andreas; Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Karst aquifers are an important source of drinking water in many regions of the world, but their resources are likely to be affected by changes in climate and land cover. Karst areas are highly permeable and produce large amounts of groundwater recharge, while surface runoff is typically negligible. As a result, recharge in karst systems may be particularly sensitive to environmental changes compared to other, less permeable systems. However, current large-scale hydrological models poorly represent karst specificities. They tend to provide an erroneous water balance and to underestimate groundwater recharge over karst areas. A better understanding of karst hydrology and estimation of karst groundwater resources at a large scale are therefore needed to guide water management in a changing world. The first objective of the present study is to introduce explicit vegetation processes into a previously developed karst recharge model (VarKarst) to better estimate evapotranspiration losses depending on the land cover characteristics. The novelty of the approach for large-scale modelling lies in the assessment of model output uncertainty and of parameter sensitivity to avoid over-parameterisation. We find that the modified model is able to produce simulations consistent with observations of evapotranspiration and soil moisture at Fluxnet sites located in carbonate rock areas. Secondly, we aim to determine the model's sensitivities to climate and land cover characteristics, and to assess the relative influence of changes in climate and land cover on aquifer recharge. We perform virtual experiments using synthetic climate inputs and varying the values of land cover parameters. In this way, we can control for variations in climate input characteristics (e.g., precipitation intensity, precipitation frequency) and vegetation characteristics (e.g., canopy water storage capacity, rooting depth), and we can isolate the effect that each of these quantities has on recharge.
Our results show that these factors interact strongly and generate non-linear responses in recharge.
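The logic of such a virtual experiment, varying one vegetation parameter under fixed synthetic rainfall and comparing the resulting recharge, can be sketched with a toy bucket model. This is not VarKarst: the interception rule, the storage thresholds, and all parameter values are invented for illustration.

```python
import numpy as np

def recharge_series(precip, pet, canopy_cap, soil_cap=50.0):
    """Toy bucket: canopy interception first, then ET from the soil
    store, then recharge of any water above a storage threshold."""
    soil, out = 0.0, []
    for p, e in zip(precip, pet):
        throughfall = max(p - canopy_cap, 0.0)   # interception loss
        soil = min(soil + throughfall, soil_cap)
        soil = max(soil - e, 0.0)                # evapotranspiration
        rech = max(soil - 0.8 * soil_cap, 0.0)   # drain above threshold
        soil -= rech
        out.append(rech)
    return np.array(out)

rain = np.array([20., 0., 30., 10., 0., 40.])    # synthetic daily rainfall (mm)
pet = np.full(6, 3.0)                            # fixed potential ET (mm/day)
r_low = recharge_series(rain, pet, canopy_cap=1.0)
r_high = recharge_series(rain, pet, canopy_cap=5.0)
```

Holding the climate input fixed while varying only the canopy storage capacity isolates the vegetation effect, mirroring the controlled design described above.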
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1994-01-01
We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For models with n ≤ -2, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from hierarchical scaling is, however, also found in models with n ≥ -1. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.
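The counts-in-cells estimator and the hierarchical check can be sketched as follows. This ignores the shot-noise corrections the study applies, and the toy counts are invented; it only shows the structure of the computation.

```python
import numpy as np

def reduced_moments(counts, qmax=5):
    """Central moments xi_bar_q of the density contrast estimated from
    counts in cells (no shot-noise correction in this sketch)."""
    nbar = counts.mean()
    delta = counts / nbar - 1.0            # density contrast per cell
    return {q: np.mean(delta**q) for q in range(2, qmax + 1)}

def hierarchical_amplitude(moments, q):
    """S_q = xi_bar_q / xi_bar_2^(q-1); constant in scale and time if
    the hierarchical ansatz xi_bar_q ∝ xi_bar_2^(q-1) holds."""
    return moments[q] / moments[2] ** (q - 1)

counts = np.array([1.0, 2.0, 3.0, 10.0])   # toy counts in four cells
m = reduced_moments(counts)
S3 = hierarchical_amplitude(m, 3)
```

In practice the hierarchy is tested by computing S_q at many cell sizes and epochs and checking whether it stays constant as ξ̄_2 evolves.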
Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA)
NASA Technical Reports Server (NTRS)
Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David
2012-01-01
With the very recent advent of NASA's Environmentally Responsible Aviation Project (ERA), which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in the time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project, which pursued goals similar to ERA's, was NASA's Subsonic Fixed Wing (SFW). SFW focused on conducting research to improve prediction methods and technologies that will produce lower noise, lower emissions, and higher performing subsonic aircraft for the Next Generation Air Transportation System. The work described in this investigation was performed under NASA Research Announcement (NRA) contract #NNL07AA55C, funded by Subsonic Fixed Wing. The project started in 2007 with a specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were incorporated to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concepts investigated were a Circulation Control (CC) wing in conjunction with over-the-wing-mounted engines, which entrain the exhaust to further increase the lift generated by CC technologies alone. The NRA was a five-year effort; during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model for the large-scale test.
During the second, third, and fourth years the large-scale wind tunnel model design was completed, and the model was manufactured and calibrated. During the fifth year the large-scale wind tunnel test was conducted. This technical memo describes all phases of the Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) project and provides a brief summary of the background and modeling efforts involved in the NRA. The conceptual designs considered for this project and the decision process for the selected configuration adapted for a wind tunnel model are briefly discussed, as are the internal configuration of AMELIA and the internal measurements chosen to satisfy the requirement of obtaining a database of experimental data for future computational model validation. The external experimental techniques employed during the test, along with the large-scale wind tunnel test facility, are covered in great detail. Experimental measurements in the database include forces and moments, surface pressure distributions, local skin friction measurements, boundary and shear layer velocity profiles, far-field acoustic data, and noise signatures from turbofan propulsion simulators. Results and discussion of the circulation control performance, the over-the-wing-mounted engines, and the combined performance are also presented in great detail.
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. The testing of Horiuti, however, was conducted using DNS data from a low-Reynolds-number channel flow simulation. Further testing at higher Reynolds numbers, and with flows other than wall-bounded shear flows, was a necessary step to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high-Reynolds-number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
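For reference, the standard Smagorinsky eddy viscosity that the above work seeks to improve is nu_t = (Cs Δ)^2 |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2D sketch follows; the Cs value, the grid, and the uniform-shear test field are illustrative, and no wall damping is included.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky SGS eddy viscosity on a 2D resolved velocity field,
    with the filter width taken equal to the grid spacing dx."""
    dudx, dudy = np.gradient(u, dx, dx)   # axis 0 -> x, axis 1 -> y
    dvdx, dvdy = np.gradient(v, dx, dx)
    S11, S22 = dudx, dvdy                 # strain-rate tensor components
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (Cs * dx) ** 2 * S_mag

dx = 0.1
xs = np.arange(0.0, 1.0, dx)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = 2.0 * Y                  # uniform shear du/dy = 2, so |S| = 2 everywhere
v = np.zeros_like(u)
nu_t = smagorinsky_nu_t(u, v, dx)
```

Horiuti's proposal replaces the velocity scale implicit in |S| with one built from the subgrid-scale normal stress; the structure of the model (a coefficient times a length scale squared times a rate) is unchanged.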
Large eddy simulations of compressible magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Grete, Philipp
2017-02-01
Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. 
In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number Ms ≈ 0.2) to the highly supersonic (Ms ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial Ms ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.