Sample records for scale modeling approach

  1. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  2. Three Approaches to Using Lengthy Ordinal Scales in Structural Equation Models: Parceling, Latent Scoring, and Shortening Scales

    ERIC Educational Resources Information Center

    Yang, Chongming; Nay, Sandra; Hoyle, Rick H.

    2010-01-01

    Lengthy scales or testlets pose certain challenges for structural equation modeling (SEM) if all the items are included as indicators of a latent construct. Three general approaches to modeling lengthy scales in SEM (parceling, latent scoring, and shortening) have been reviewed and evaluated. A hypothetical population model is simulated containing…

  3. Root Systems Biology: Integrative Modeling across Scales, from Gene Regulatory Networks to the Rhizosphere

    PubMed Central

    Hill, Kristine; Porco, Silvana; Lobet, Guillaume; Zappala, Susan; Mooney, Sacha; Draye, Xavier; Bennett, Malcolm J.

    2013-01-01

    Genetic and genomic approaches in model organisms have advanced our understanding of root biology over the last decade. Recently, however, systems biology and modeling have emerged as important approaches, as our understanding of root regulatory pathways has become more complex and interpreting pathway outputs has become less intuitive. To relate root genotype to phenotype, we must move beyond the examination of interactions at the genetic network scale and employ multiscale modeling approaches to predict emergent properties at the tissue, organ, organism, and rhizosphere scales. Understanding the underlying biological mechanisms and the complex interplay between systems at these different scales requires an integrative approach. Here, we describe examples of such approaches and discuss the merits of developing models to span multiple scales, from network to population levels, and to address dynamic interactions between plants and their environment. PMID:24143806

  4. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the at-scale simulators (pre-selected) and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at both the pore and Darcy scales), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is packed heterogeneously, so that the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to (1) adaptively define the locations of pore-scale subdomains, (2) provide a suite of physical boundary coupling schemes, and (3) account for the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations and computing cost.
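
    To make the concurrent coupling workflow concrete, the sketch below couples a coarse (Darcy-like) and a fine (pore-like) 1-D diffusion solver that exchange boundary values every step. It is a minimal stand-in for the MUI/C++ workflow described in the record; the solvers, grids and parameters are all invented for illustration.

    ```python
    import numpy as np

    # Toy concurrent coupling of a coarse (Darcy-like) and a fine (pore-like) 1-D
    # diffusion solver. Illustrative stand-in for the MUI-based workflow in the
    # record above: names and numerical choices are invented.

    def diffusion_step(c, dx, dt, D=1.0):
        """One explicit diffusion step; interior cells only."""
        c = c.copy()
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        return c

    nc, refine = 50, 10                      # coarse cells; fine cells per coarse cell
    dxc, dxf = 1.0 / nc, 1.0 / (nc * refine)
    dt = 0.2 * dxf**2                        # stable for both grids
    coarse = np.zeros(nc); coarse[0] = 1.0   # inlet boundary condition
    i0, i1 = 20, 30                          # coarse cells covered by the fine subdomain
    fine = np.zeros((i1 - i0) * refine)

    for step in range(2000):
        coarse = diffusion_step(coarse, dxc, dt)
        # boundary treatment: pass coarse boundary values to the fine model
        fine[0], fine[-1] = coarse[i0], coarse[i1 - 1]
        fine = diffusion_step(fine, dxf, dt)
        # push fine-scale block averages back into the overlapping coarse cells
        coarse[i0:i1] = fine.reshape(i1 - i0, refine).mean(axis=1)

    print("field near the coupling interface:", coarse[i0 - 1:i0 + 2])
    ```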

  5. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
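
    The Equilibrium Chemistry Approximation mentioned above resolves competition of several consumer pools for several substrate pools. The sketch below shows the general ECA-style flux form as I understand it; the pool sizes and parameters are invented, and the actual ESM implementation is far more elaborate.

    ```python
    import numpy as np

    # Illustrative sketch of Equilibrium Chemistry Approximation (ECA) style kinetics
    # for multiple substrates competing among multiple consumers. Pool sizes and
    # parameters are made up; only the general flux form is shown.

    def eca_fluxes(S, E, k, K):
        """Uptake fluxes F[i, j] of substrate i by consumer j.

        S : (ns,) substrate concentrations
        E : (nc,) consumer (microbe/enzyme) concentrations
        k : (ns, nc) maximum specific uptake rates
        K : (ns, nc) affinity (half-saturation) parameters
        """
        ns, nc = K.shape
        F = np.zeros((ns, nc))
        for i in range(ns):
            for j in range(nc):
                denom = K[i, j] * (1.0
                                   + np.sum(S / K[:, j])   # all substrates competing for consumer j
                                   + np.sum(E / K[i, :]))  # all consumers competing for substrate i
                F[i, j] = k[i, j] * S[i] * E[j] / denom
        return F

    S = np.array([10.0, 2.0])            # two substrate pools
    E = np.array([0.5, 0.1, 0.05])       # three consumer pools
    k = np.ones((2, 3))
    K = np.array([[5.0, 8.0, 20.0],
                  [1.0, 2.0, 4.0]])
    print(eca_fluxes(S, E, k, K))
    ```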

  6. Accuracy and Reliability of Marker-Based Approaches to Scale the Pelvis, Thigh, and Shank Segments in Musculoskeletal Models.

    PubMed

    Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P

    2017-10-01

    Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
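
    The joint-centre scaling recommended above amounts to scaling each segment by the ratio of the subject's joint-centre distance to the generic model's segment length. A minimal sketch follows; the joint-centre coordinates and generic segment lengths are placeholders, not values from the study or any particular musculoskeletal model.

    ```python
    import numpy as np

    # Sketch of joint-centre-based segment scaling: the scale factor for a segment is
    # the subject's joint-centre distance divided by the generic model's segment
    # length. All numbers below are illustrative placeholders.

    def segment_scale(prox_jc, dist_jc, generic_length):
        """Scale factor = subject segment length / generic model segment length."""
        subject_length = np.linalg.norm(np.asarray(prox_jc) - np.asarray(dist_jc))
        return subject_length / generic_length

    # Example: thigh and shank scaled with hip, knee and ankle joint centres (m, lab frame)
    hip_jc, knee_jc, ankle_jc = [0.05, 0.92, 0.10], [0.07, 0.52, 0.12], [0.08, 0.11, 0.13]
    generic = {"thigh": 0.40, "shank": 0.43}   # generic-model segment lengths, m

    scales = {
        "thigh": segment_scale(hip_jc, knee_jc, generic["thigh"]),
        "shank": segment_scale(knee_jc, ankle_jc, generic["shank"]),
    }
    print(scales)
    ```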

  7. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    NASA Astrophysics Data System (ADS)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open. In the external approach, all deformation in the system is driven by the externally imposed condition, while in the combined approach, part of the deformation is driven by buoyancy forces internal to the system. In the internal approach, all deformation is driven by buoyancy forces internal to the system and so the system is closed and no energy is added during an experimental run. In the combined approach, the externally imposed force or added energy is generally not quantified nor compared to the internal buoyancy force or potential energy of the system, and so it is not known if these experiments are properly scaled with respect to nature. The scaling theory requires that analogue models are geometrically, kinematically and dynamically similar to the natural prototype. Direct scaling of topography in laboratory models indicates that it is often significantly exaggerated. This can be ascribed to (1) The lack of isostatic compensation, which causes topography to be too high. (2) The lack of erosion, which causes topography to be too high. (3) The incorrect scaling of topography when density contrasts are scaled (rather than densities); In isostatically supported models, scaling of density contrasts requires an adjustment of the scaled topography by applying a topographic correction factor. (4) The incorrect scaling of externally imposed boundary conditions in isostatically supported experiments using the combined approach; When externally imposed forces are too high, this creates topography that is too high. 
Other processes that also affect surface topography in laboratory models but not in nature (or only in a negligible way) include surface tension (for models using fluids) and shear zone dilatation (for models using granular material), but these will generally only affect the model surface topography on relatively short horizontal length scales of the order of several mm across material boundaries and shear zones, respectively.
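
    To make the scaling-theory requirement concrete, the sketch below shows the standard scale-ratio bookkeeping in which the stress ratio is the product of the density, gravity and length ratios, and the model material strength and viscosity follow from it. The numerical values are generic illustrative choices, not taken from the review.

    ```python
    # Sketch of the dimensional scaling used in analogue modelling: once density,
    # gravity and length ratios are chosen, the stress ratio follows as their
    # product, and material strength / viscosity must be scaled accordingly.
    # Numbers are generic illustrative values, not from the review itself.

    L_ratio   = 1e-6          # 1 cm in the model represents 10 km in nature
    rho_ratio = 1500 / 2700   # model sand density / crustal rock density
    g_ratio   = 1.0           # normal-gravity experiment

    stress_ratio = rho_ratio * g_ratio * L_ratio     # sigma* = rho* g* L*

    cohesion_nature = 20e6    # ~20 MPa for upper-crustal rock
    cohesion_model = cohesion_nature * stress_ratio  # required model-material cohesion

    time_ratio = 3e-10        # e.g. 1 hour in the lab represents ~0.4 Myr in nature
    viscosity_nature = 1e21   # Pa s, upper-mantle-like
    viscosity_model = viscosity_nature * stress_ratio * time_ratio  # eta* = sigma* t*

    print(f"stress ratio    : {stress_ratio:.2e}")
    print(f"model cohesion  : {cohesion_model:.1f} Pa")
    print(f"model viscosity : {viscosity_model:.2e} Pa s")
    ```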

  8. Application Perspective of 2D+SCALE Dimension

    NASA Astrophysics Data System (ADS)

    Karim, H.; Rahman, A. Abdul

    2016-09-01

    Different applications or users need different abstractions of spatial models, dimensionalities and specifications of their datasets due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information Systems (GIS). One of the current trends in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit various scale-dependent application needs. In this paper, 2D spatial datasets that have been scaled up with scale as the third dimension are addressed as the 2D+scale (or 3D-scale) dimension. Nowadays, various data structures, data models, approaches, schemas and formats have been proposed as the best approaches to support a variety of applications and dimensionalities in 3D topology. However, only a few of them consider the element of scale as their targeted dimension. Where the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format) depending on application requirements (topology, semantics and function). This paper discusses current and potential new applications that could be integrated with the 3D-scale dimension approach. Previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues and potential future applications form the major discussion of this paper.

  9. An integrated modeling approach for estimating the water quality benefits of conservation practices at the river basin scale

    USDA-ARS?s Scientific Manuscript database

    The USDA initiated the Conservation Effects Assessment Project (CEAP) to quantify the environmental benefits of conservation practices at regional and national scales. For this assessment, a sampling and modeling approach is used. This paper provides a technical overview of the modeling approach use...

  10. Downscaling modelling system for multi-scale air quality forecasting

    NASA Astrophysics Data System (ADS)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models use parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux-correction approach; for the urban scale, a building-effects parameterisation is used. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (building) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ɛ linear eddy-viscosity model, a k-ɛ non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries, a Dirichlet-type condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure increases the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.
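
    The two ideas at the end of this record, mass-conserving interpolation from the coarse to the fine grid and the split of a field into a large-scale background plus a micro-scale perturbation, are sketched below on a toy 1-D field; the grids, concentrations and the synthetic street-canyon signal are invented.

    ```python
    import numpy as np

    # Sketch of (1) downscaling from a coarse to a nested fine grid while preserving
    # each coarse-cell mean ("interpolation conserving the mass") and (2) the
    # perturbation split: field = background (coarse) + perturbation (micro-scale).
    # All values are invented for illustration.

    def conservative_downscale(coarse, refine):
        """Piecewise-linear downscaling, corrected so each coarse-cell mean is preserved."""
        n = coarse.size
        x_c = (np.arange(n) + 0.5) / n
        x_f = (np.arange(n * refine) + 0.5) / (n * refine)
        fine = np.interp(x_f, x_c, coarse)
        means = fine.reshape(n, refine).mean(axis=1)
        fine -= np.repeat(means - coarse, refine)   # enforce coarse-cell means
        return fine

    coarse_conc = np.array([40.0, 55.0, 70.0, 50.0])      # e.g. NO2 on a city-scale grid
    background = conservative_downscale(coarse_conc, refine=5)

    # Micro-scale model output on the fine grid (synthetic street-canyon signal)
    perturbation = 8.0 * np.sin(np.linspace(0, 6 * np.pi, background.size))
    total = background + perturbation                     # perturbation approach

    print("coarse means preserved:", background.reshape(4, 5).mean(axis=1))
    print("fine-scale field      :", np.round(total, 1))
    ```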

  11. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
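
    As a rough illustration of why regularisation helps here, the sketch below contrasts a raw sample covariance with a simple shrinkage estimate that pools it toward a diagonal target. This is a generic stand-in for regularised covariance estimation, not the authors' hierarchical Bayesian model.

    ```python
    import numpy as np

    # Illustration of overfitting in large-scale covariance estimation and of a
    # simple shrinkage-style fix (sample covariance pooled toward a diagonal target).
    # Generic stand-in only, not the hierarchical model described in the record.

    rng = np.random.default_rng(0)
    p, n = 200, 50                                   # many features, few samples
    true_cov = np.eye(p)                             # truth: independent features
    X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

    S = np.cov(X, rowvar=False)                      # sample covariance: rank-deficient, noisy
    target = np.diag(np.diag(S))                     # structured target (diagonal)

    lam = 0.8                                        # shrinkage weight (could be chosen by
    shrunk = lam * target + (1 - lam) * S            #  cross-validation or estimated)

    off = ~np.eye(p, dtype=bool)                     # true off-diagonals are zero,
    print("mean |off-diagonal|, sample:", np.abs(S[off]).mean())       # so these are errors
    print("mean |off-diagonal|, shrunk:", np.abs(shrunk[off]).mean())
    ```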

  12. Macroscopic and mesoscopic approach to the alkali-silica reaction in concrete

    NASA Astrophysics Data System (ADS)

    Grymin, Witold; Koniorczyk, Marcin; Pesavento, Francesco; Gawin, Dariusz

    2018-01-01

    A model of the alkali-silica reaction, which takes into account couplings between thermal, hygral, mechanical and chemical phenomena in concrete, has been discussed. The ASR may be considered at the macroscopic or mesoscopic scale. The main features of each approach have been summarized and the development of the model for both scales has been briefly described. Application of the model to experimental results at both scales has been presented. Even though good agreement of the model has been obtained for both approaches, considering the model at the mesoscopic scale makes it possible to model different mortar mixes, prepared with the same aggregate but of different grain size, using the same set of parameters. It also enables prediction of reaction development assuming different alkali sources, such as de-icing salts or alkali leaching.

  13. Accounting for small scale heterogeneity in ecohydrologic watershed models

    NASA Astrophysics Data System (ADS)

    Bhaskar, A.; Fleming, B.; Hogan, D. M.

    2016-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogenous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach, including characterizing urban vegetation and storm water management features and their impact on watershed-scale hydrology and biogeochemical cycling.

  14. Accounting for small scale heterogeneity in ecohydrologic watershed models

    NASA Astrophysics Data System (ADS)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogenous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach, including characterizing urban vegetation and storm water management features and their impact on watershed-scale hydrology and biogeochemical cycling.

  15. Multi-scale modelling of rubber-like materials and soft tissues: an appraisal

    PubMed Central

    Puglisi, G.

    2016-01-01

    We survey, in a partial way, multi-scale approaches for the modelling of rubber-like and soft tissues and compare them with classical macroscopic phenomenological models. Our aim is to show how it is possible to obtain practical mathematical models for the mechanical behaviour of these materials incorporating mesoscopic (network scale) information. Multi-scale approaches are crucial for the theoretical comprehension and prediction of the complex mechanical response of these materials. Moreover, such models are fundamental in the perspective of the design, through manipulation at the micro- and nano-scales, of new polymeric and bioinspired materials with exceptional macroscopic properties. PMID:27118927

  16. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, of going beyond the two scales in conventional coarse-graining strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
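
    A simplified impression of the graph-based construction: build the path-graph Laplacian of a linear homopolymer, form a diffusion operator on it, and keep its slowest modes as coarse degrees of freedom. The spectral truncation below is only a stand-in for the full diffusion-wavelet hierarchy, and the chain size and coordinate model are invented.

    ```python
    import numpy as np

    # Simplified illustration of graph-Laplacian-based coarse-graining of a linear
    # homopolymer: path-graph Laplacian of the bonded chain, diffusion operator,
    # slowest modes retained as coarse degrees of freedom. A spectral-truncation
    # stand-in for the diffusion-wavelet construction described in the record.

    n_beads, n_coarse = 64, 8

    A = np.zeros((n_beads, n_beads))                  # adjacency of the bonded chain
    idx = np.arange(n_beads - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian

    T = np.eye(n_beads) - 0.5 * L                     # one diffusion step on the chain graph
    evals, evecs = np.linalg.eigh(T)
    coarse_basis = evecs[:, -n_coarse:]               # slowest diffusion modes

    # Map an atomistic configuration (noisy straight chain) to coarse coordinates
    rng = np.random.default_rng(1)
    positions = np.column_stack([np.linspace(0, 1, n_beads),
                                 0.05 * rng.standard_normal(n_beads),
                                 0.05 * rng.standard_normal(n_beads)])
    coarse_coords = coarse_basis.T @ positions        # (n_coarse, 3) coarse DOFs
    reconstructed = coarse_basis @ coarse_coords      # back-mapping onto all beads
    print("RMS reconstruction error:", np.sqrt(((positions - reconstructed) ** 2).mean()))
    ```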

  17. Bridging Scales: A Model-Based Assessment of the Technical Tidal-Stream Energy Resource off Massachusetts, USA

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.; Hakim, A.; Churchill, J. H.

    2016-02-01

    Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges to the design process stem from the wide range of problem scales extending from device to array. In the present work we apply a multi-model approach to bridge the scales of interest and select optimal device geometries in order to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (~10 m), a barotropic setup of the unstructured-grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP data as well as pressure data. For the device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites, including comparison with theoretical approaches, will be presented.
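
    The final resource-estimation step, querying a device power model with site flow speeds and integrating to an annual technical yield, is sketched below. The power-curve parameters and the synthetic two-constituent tidal velocity series are illustrative stand-ins for the surrogate model and the FVCOM output.

    ```python
    import numpy as np

    # Sketch: query a simple device power model with site flow speeds and integrate
    # to an annual technical yield. The power-curve parameters and the synthetic
    # tidal velocity series stand in for the surrogate model and FVCOM output.

    rho = 1025.0          # seawater density, kg/m^3
    area = 200.0          # turbine frontal area, m^2
    cp = 0.35             # effective power coefficient from the device-scale model
    rated_kw = 500.0      # rated (capped) electrical power
    cut_in = 0.7          # m/s below which the device does not generate

    dt = 600.0                                           # 10-minute resolution, one year
    t = np.arange(0, 365 * 24 * 3600, dt)
    u = np.abs(1.8 * np.sin(2 * np.pi * t / 44712.0)     # M2 constituent (~12.42 h)
               + 0.4 * np.sin(2 * np.pi * t / 43200.0))  # S2 constituent (12 h)

    power_kw = 0.5 * rho * area * cp * u**3 / 1e3
    power_kw = np.where(u < cut_in, 0.0, np.minimum(power_kw, rated_kw))

    annual_mwh = power_kw.sum() * dt / 3600.0 / 1e3
    print(f"annual technical yield: {annual_mwh:.0f} MWh")
    ```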

  18. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    NASA Astrophysics Data System (ADS)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached in soils is an important parameter for describing wildfire effects. However, methods for measuring the temperature reached in burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a group of changes, leaving a characteristic fingerprint in the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model offers the possibility of estimating the temperature reached in a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scales. Consequently, the proportion of these soil components can vary spatially across a site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, strategies for developing robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. These approaches were designed to provide insights into how to distribute the effort needed for the development of robust models, since this step is the bottleneck of the technique. In the first approach, a plot-scale model was used to predict the temperature reached in samples collected in other plots from the same site. In a plot-scale model, all the heated aliquots come from a single plot-scale sample. As expected, the results obtained with this approach were disappointing, because the approach assumes that a plot-scale model is enough to represent the whole variability of the site. The accuracy (measured as the root mean square error of prediction, hereinafter RMSEP) was 86°C, and the bias was also high (>30°C). In the second approach, the temperatures predicted by several plot-scale models were averaged. The accuracy was improved (RMSEP = 65°C) with respect to the first approach, because the variability from several plots was considered and biased predictions were partially counterbalanced. However, this approach requires more effort, since several plot-scale models are needed. In the third approach, the predictions were obtained with site-scale models. These models were constructed with aliquots from several plots. In this case, the results were accurate: the RMSEP was around 40°C, the bias was very small (<1°C) and the R2 was 0.92. As expected, this approach clearly outperformed the second approach, despite requiring the same effort. In a plot-scale model, only one interaction between temperature and soil components was modelled. However, several different interactions between temperature and soil components were present in the calibration matrix of a site-scale model. Consequently, the site-scale models were able to model the temperature reached while excluding the influence of differences in soil composition, resulting in models that are more robust to that variation. In summary, the results highlight the importance of an adequate strategy for developing robust and accurate models with moderate effort, and show how a wrong strategy can result in misleading predictions.
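
    The site-scale calibration strategy and the RMSEP/bias metrics can be sketched as follows: aliquots heated to known temperatures are pooled across plots into one calibration set, a PLS regression is fitted to the spectra, and predictions for an unseen plot are scored. The spectra below are synthetic and the model settings arbitrary; only the workflow mirrors the record.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Sketch of a site-scale calibration: pool heated aliquots from several plots,
    # fit a PLS model on the spectra, and report RMSEP and bias on an unseen plot.
    # Spectra are synthetic; real models are built from measured NIR spectra.

    rng = np.random.default_rng(0)
    temps = np.tile(np.array([25, 105, 200, 300, 400, 500], float), 5)  # 5 plots x 6 temps
    plot_offset = np.repeat(rng.normal(0, 0.3, 5), 6)                   # between-plot soil variation

    wl = np.linspace(1100, 2500, 150)
    band1 = np.exp(-((wl - 1920) / 80) ** 2)[None, :]                   # heat-sensitive band
    band2 = np.exp(-((wl - 2200) / 120) ** 2)[None, :]                  # composition band
    spectra = (band1 * (1 - temps[:, None] / 600.0)
               + plot_offset[:, None] * band2
               + 0.01 * rng.standard_normal((temps.size, wl.size)))

    model = PLSRegression(n_components=4).fit(spectra, temps)           # site-scale model

    # Validation on a plot not used in calibration (synthetic here)
    val_temps = np.array([25, 105, 200, 300, 400, 500], float)
    val_spec = (band1 * (1 - val_temps[:, None] / 600.0)
                + 0.25 * band2
                + 0.01 * rng.standard_normal((6, wl.size)))
    pred = model.predict(val_spec).ravel()

    rmsep = np.sqrt(np.mean((pred - val_temps) ** 2))
    bias = np.mean(pred - val_temps)
    print(f"RMSEP = {rmsep:.1f} degC, bias = {bias:.1f} degC")
    ```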

  19. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents based on coarse-scale observations. A likely application of this approach is the generation of fine- and intermediate-resolution soil moisture fields conditioned on radiometer-based, coarse-resolution products from remote sensing satellites.
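
    The nudging idea is a forcing term mu * (I_h(obs) - I_h(u)) added to the model's right-hand side, where I_h interpolates both the coarse observations and the fine model state onto the observation grid. The toy 1-D diffusion example below illustrates this; the model, grids and mu are stand-ins for the HYDRUS-based setup.

    ```python
    import numpy as np

    # Toy illustration of nudging-based Continuous Data Assimilation (CDA): a
    # fine-grid model is relaxed towards coarse-footprint observations through a
    # term mu * (I_h(obs) - I_h(u)) added to its right-hand side. The 1-D diffusion
    # "soil moisture" model, grids and mu are illustrative stand-ins only.

    nf, block = 120, 12                       # fine grid; one coarse cell = 12 fine cells
    dx, dt, D, mu = 1.0 / nf, 2e-5, 1.0, 200.0
    x = (np.arange(nf) + 0.5) * dx

    def rhs(u):
        d2 = np.zeros_like(u)
        d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return D * d2

    def coarsen(u):
        """Interpolant I_h: block averages, i.e. what a coarse footprint observes."""
        return np.repeat(u.reshape(-1, block).mean(axis=1), block)

    truth = 0.30 + 0.10 * np.sin(2.0 * np.pi * x)   # unknown fine-scale state
    nudged = np.full(nf, 0.30)                      # model started from a wrong state
    free = nudged.copy()                            # same model without assimilation

    for _ in range(1500):
        truth += dt * rhs(truth)
        obs = coarsen(truth)                        # coarse-footprint retrieval
        nudged += dt * (rhs(nudged) + mu * (obs - coarsen(nudged)))
        free += dt * rhs(free)

    rms = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print(f"RMS error, free run: {rms(free, truth):.4f}")
    print(f"RMS error, CDA run : {rms(nudged, truth):.4f}")
    ```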

  20. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation at the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied at the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also at the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
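
    The contrast between a uni-variable stage-damage function and a multi-variable loss model, both applied to land-use units with area-averaged attributes, is sketched below. The functional forms, coefficients and unit attributes are invented; the study itself uses empirically derived models such as BT-FLEMO.

    ```python
    import numpy as np

    # Sketch contrasting a uni-variable stage-damage function with a toy
    # multi-variable loss model, both applied to meso-scale land-use units.
    # Functional forms, coefficients and attributes are invented for illustration.

    def stage_damage(depth_m):
        """Uni-variable model: relative loss as a function of water depth only."""
        return np.clip(0.12 * np.sqrt(np.maximum(depth_m, 0.0)), 0.0, 1.0)

    def multi_variable_loss(depth_m, duration_h, contamination, precaution):
        """Toy multi-variable model: depth plus duration, contamination and precaution."""
        loss = 0.10 * np.sqrt(np.maximum(depth_m, 0.0))
        loss *= 1.0 + 0.002 * duration_h          # longer inundation -> higher loss
        loss *= 1.0 + 0.30 * contamination        # contamination indicator (0 or 1)
        loss *= 1.0 - 0.25 * precaution           # private precaution reduces loss
        return np.clip(loss, 0.0, 1.0)

    # Land-use units (e.g. residential polygons) with area-averaged attributes
    depth = np.array([0.4, 1.2, 2.5])             # m
    duration = np.array([12.0, 48.0, 96.0])       # h
    contamination = np.array([0, 1, 1])
    precaution = np.array([0.8, 0.2, 0.5])
    asset_value = np.array([12.0, 30.0, 8.0])     # exposed value per unit, million EUR

    for name, rel in [("stage-damage", stage_damage(depth)),
                      ("multi-variable", multi_variable_loss(depth, duration, contamination, precaution))]:
        print(f"{name:>14}: total loss = {np.sum(rel * asset_value):.1f} million EUR")
    ```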

  21. Bridging scales through multiscale modeling: a case study on protein kinase A.

    PubMed

    Boras, Britton W; Hirakis, Sophia P; Votapka, Lane W; Malmstrom, Robert D; Amaro, Rommie E; McCulloch, Andrew D

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSM), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.
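
    One of the MSM-level steps, turning an estimated transition matrix into stationary populations and mean first-passage times, can be sketched as follows. The three-state matrix and lag time are invented, not PKA results.

    ```python
    import numpy as np

    # Sketch of extracting kinetics from a Markov state model (MSM): given a
    # transition probability matrix, compute the stationary distribution and the
    # mean first-passage time (MFPT) into a target state. The 3-state matrix and
    # lag time below are invented placeholders, not PKA data.

    lag_ns = 10.0                                   # MSM lag time
    T = np.array([[0.90, 0.08, 0.02],               # e.g. apo, one cAMP bound, two bound
                  [0.10, 0.85, 0.05],
                  [0.02, 0.08, 0.90]])

    # Stationary distribution: left eigenvector of T with eigenvalue 1
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()

    def mfpt_to(T, target):
        """Mean first-passage times (in lag steps) from every state into `target`."""
        n = T.shape[0]
        keep = [i for i in range(n) if i != target]
        A = np.eye(len(keep)) - T[np.ix_(keep, keep)]
        tau = np.linalg.solve(A, np.ones(len(keep)))
        out = np.zeros(n)
        out[keep] = tau
        return out

    mfpt_ns = mfpt_to(T, target=2) * lag_ns
    print("stationary populations:", np.round(pi, 3))
    print("MFPT into fully-bound state (ns):", np.round(mfpt_ns, 1))
    ```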

  22. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
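
    The modelling idea can be sketched as follows: allometries y = a*x^b are linear in log-log space, and a hierarchical treatment lets each species' exponent vary around a group-level mean rather than forcing a single universal value. The partial pooling below is a crude empirical-Bayes stand-in for the paper's full hierarchical Bayesian fit, applied to synthetic data.

    ```python
    import numpy as np

    # Sketch: species-level allometric exponents b_s fitted in log-log space, then
    # partially pooled toward the group mean. A crude empirical-Bayes stand-in for
    # the hierarchical Bayesian framework in the paper; the data are synthetic.

    rng = np.random.default_rng(3)
    n_species, n_obs, sigma = 6, 40, 0.15
    true_b = rng.normal(0.75, 0.08, n_species)          # species-level exponents

    slopes, variances = [], []
    for s in range(n_species):
        log_x = rng.uniform(0, 4, n_obs)                 # e.g. log stem diameter
        log_y = np.log(2.0) + true_b[s] * log_x + rng.normal(0, sigma, n_obs)
        slopes.append(np.polyfit(log_x, log_y, 1)[0])    # per-species OLS slope
        variances.append(sigma**2 / np.sum((log_x - log_x.mean()) ** 2))

    b_hat, se2 = np.array(slopes), np.array(variances)
    tau2 = max(b_hat.var() - se2.mean(), 1e-6)           # between-species variance
    w = tau2 / (tau2 + se2)                              # per-species shrinkage weight
    b_pooled = w * b_hat + (1 - w) * b_hat.mean()        # partial pooling toward group mean

    print("per-species OLS exponents :", np.round(b_hat, 3))
    print("partially pooled exponents:", np.round(b_pooled, 3))
    ```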

  23. Simulating mesoscale coastal evolution for decadal coastal management: A new framework integrating multiple, complementary modelling approaches

    NASA Astrophysics Data System (ADS)

    van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.

    2016-03-01

    Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order 10^1 to 10^2 years and 10^1 to 10^2 km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models. Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve decadal- to centennial-scale behaviour of diverse coupled open coast, estuary and inner shelf settings. This vision is illustrated through an idealised composition of models for a ~ 70 km stretch of the Suffolk coast, eastern England. A key advantage of model linking is that it allows a wide range of real-world situations to be simulated from a small set of model components. However, this process involves more than just the development of software that allows for flexible model coupling. The compatibility of radically different modelling assumptions remains to be carefully assessed, and testing as well as evaluating uncertainties of models in composition are areas that require further attention.

  24. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum-level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the meso-scale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture (potentially using additive manufacturing techniques) composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying complex behaviors that are exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macro-scale testing and test-data interrogation techniques to support model calibration.

  25. FINE SCALE AIR QUALITY MODELING USING DISPERSION AND CMAQ MODELING APPROACHES: AN EXAMPLE APPLICATION IN WILMINGTON, DE

    EPA Science Inventory

    Characterization of spatial variability of air pollutants in an urban setting at fine scales is critical for improved air toxics exposure assessments, for model evaluation studies and also for air quality regulatory applications. For this study, we investigate an approach that su...

  26. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  27. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  28. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    PubMed

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.

  29. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  30. Do we have the right models for scaling up health services to achieve the Millennium Development Goals?

    PubMed

    Subramanian, Savitha; Naimoli, Joseph; Matsubayashi, Toru; Peters, David H

    2011-12-14

    There is widespread agreement on the need for scaling up in the health sector to achieve the Millennium Development Goals (MDGs). But many countries are not on track to reach the MDG targets. The dominant approach used by global health initiatives promotes uniform interventions and targets, assuming that specific technical interventions tested in one country can be replicated across countries to rapidly expand coverage. Yet countries scale up health services and progress against the MDGs at very different rates. Global health initiatives need to take advantage of what has been learned about scaling up. A systematic literature review was conducted to identify conceptual models for scaling up health in developing countries, with the articles assessed according to the practical concerns of how to scale up, including the planning, monitoring and implementation approaches. We identified six conceptual models for scaling up in health based on experience with expanding pilot projects and diffusion of innovations. They place importance on paying attention to enhancing organizational, functional, and political capabilities through experimentation and adaptation of strategies, in addition to increasing the coverage and range of health services. These scaling-up approaches focus on fostering sustainable institutions and constructive engagement between end users and the provider and financing organizations. The current approaches to scaling up health services to reach the MDGs are overly simplistic and not working adequately. Rather than relying on blueprint planning and raising funds, an approach characteristic of current global health efforts, experience with alternative models suggests that more promising pathways involve "learning by doing" in ways that engage key stakeholders, use data to address constraints, and incorporate results from pilot projects. Such approaches should be applied to current strategies to achieve the MDGs.

  31. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is transferring the process parameters developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combination of computational and experimental approaches to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients vary at different lyophilization scales; hence, we present an approach for using appropriate factors when scaling up from lab scale to commercial scale. As a result, one can predict the commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because of its simple and minimalistic nature, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
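
    The steady-state balance behind the primary drying prediction, vial heat input matching the heat consumed by sublimation through the product resistance, can be sketched as follows. Kv and Rp would be measured (e.g. by MTM or sublimation tests); the values below are plausible placeholders only, and the FEM software resolves far more detail.

    ```python
    import numpy as np

    # Sketch of the steady-state heat/mass balance behind primary drying prediction:
    # heat input A_v*K_v*(T_shelf - T_p) must match the sublimation heat demand
    # dH_sub * m_dot, with m_dot = A_p*(P_ice(T_p) - P_chamber)/R_p. All parameter
    # values are illustrative placeholders, not measured coefficients.

    A_v, A_p = 3.8e-4, 3.14e-4      # vial outer / product cross-section area, m^2
    K_v = 20.0                      # vial heat-transfer coefficient, W m-2 K-1
    R_p = 1.5e5                     # product resistance, Pa m^2 s kg-1
    T_shelf, P_ch = 268.15, 10.0    # shelf temperature (K) and chamber pressure (Pa)
    dH_sub = 2.84e6                 # heat of sublimation of ice, J kg-1
    ice_mass = 3.0e-3               # ice per vial, kg

    def p_ice(T):
        """Vapour pressure of ice (Pa), Clausius-Clapeyron around the triple point."""
        return 611.0 * np.exp(-6150.0 * (1.0 / T - 1.0 / 273.15))

    def imbalance(T_p):
        """Heat supplied minus heat consumed by sublimation at product temperature T_p."""
        m_dot = A_p * (p_ice(T_p) - P_ch) / R_p
        return A_v * K_v * (T_shelf - T_p) - dH_sub * m_dot

    # Bisection for the steady-state product temperature
    lo, hi = 230.0, 272.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if imbalance(mid) > 0 else (lo, mid)

    T_p = 0.5 * (lo + hi)
    m_dot = A_p * (p_ice(T_p) - P_ch) / R_p
    print(f"product temperature ~ {T_p - 273.15:.1f} degC")
    print(f"primary drying time ~ {ice_mass / m_dot / 3600.0:.1f} h")
    ```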

  12. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  13. RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA

    EPA Science Inventory

    Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...

  14. Technical note: Comparison of methane ebullition modelling approaches used in terrestrial wetland models

    NASA Astrophysics Data System (ADS)

    Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo

    2018-02-01

    Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging and in consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either on CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.

    Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily-scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (a single horizontally homogeneous peat column). This approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
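
    A minimal single-layer box model conveys the flavor of the three threshold formulations (concentration, pressure, and free-gas volume). The production rate, solubility, overpressure relation, and threshold values below are invented for illustration and are not the calibrated parameters of the process model described above.

```python
import numpy as np

MOL_VOL = 2.3e-5   # m^3 of gas per mmol at ~5 degC and ~1 atm (ideal gas, assumed)

def run_box_model(scheme, days=365, dt=1.0,
                  production=5.0,     # CH4 production, mmol m^-3 d^-1 (assumed)
                  c_sat=400.0,        # dissolved CH4 at saturation, mmol m^-3 (assumed)
                  c_thresh=360.0,     # ECT: dissolved-concentration threshold
                  p_thresh=1.10,      # EPT: total pressure threshold (x hydrostatic)
                  v_thresh=0.005):    # EBG: free gas volume fraction threshold
    """Single peat-layer CH4 budget; ebullition triggered by one of three thresholds."""
    c = 0.0            # dissolved CH4, mmol m^-3
    g = 0.0            # free-phase CH4, mmol per m^3 of peat
    flux = np.zeros(int(days / dt))
    for i in range(len(flux)):
        c += production * dt
        if c > c_sat:                      # excess dissolved gas exsolves to free phase
            g += c - c_sat
            c = c_sat
        pressure = 1.0 + 0.15 * c / c_sat  # crude CH4 overpressure relation (assumed)
        triggered = ((scheme == "ECT" and c >= c_thresh) or
                     (scheme == "EPT" and pressure >= p_thresh) or
                     (scheme == "EBG" and g * MOL_VOL >= v_thresh))
        if triggered:                      # bubble release: vent free gas + half the dissolved pool
            flux[i] = g + 0.5 * c
            g, c = 0.0, 0.5 * c
    return flux

for scheme in ("ECT", "EPT", "EBG"):
    f = run_box_model(scheme)
    print(f"{scheme}: {int((f > 0).sum())} events, total {f.sum():.0f} mmol m^-3 yr^-1")
```

    Even in this toy setting the annual totals are similar across schemes while the number and timing of ebullition events differ strongly, which is the behaviour reported above.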

  15. A Sub-filter Scale Noise Equation for Hybrid LES Simulations

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid LES/subscale modeling approaches have an important advantage over the current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in the current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the sub-filter scale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid techniques.

  16. Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Lee, Sangyup

    2015-06-01

    The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum. Also unlike previous work, the formulation employs a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock to detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.

  17. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  18. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
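
    The combination of a control chart with a simple time-series model can be sketched as follows. The AR(1) scale-score series, the injected drift, and the CUSUM settings are all invented for illustration and are not taken from the language assessment analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated mean scale scores for 71 administrations: AR(1) noise around a target,
# with an abrupt shift injected at administration 50 (all values are illustrative).
n, target, phi, sigma = 71, 500.0, 0.4, 2.0
scores = np.empty(n)
scores[0] = target
for t in range(1, n):
    scores[t] = target + phi * (scores[t - 1] - target) + rng.normal(0.0, sigma)
scores[50:] += 4.0                      # abrupt scale drift

# Shewhart-style limits widened for the AR(1) autocorrelation of the series.
sd_marginal = sigma / np.sqrt(1.0 - phi**2)
ucl, lcl = target + 3 * sd_marginal, target - 3 * sd_marginal
shewhart_flags = np.where((scores > ucl) | (scores < lcl))[0]

# One-sided CUSUM on standardized scores (k = allowance, h = decision interval).
z = (scores - target) / sd_marginal
k, h = 0.5, 4.0
cusum, flagged = 0.0, []
for t, zt in enumerate(z):
    cusum = max(0.0, cusum + zt - k)
    if cusum > h:
        flagged.append(t)
        cusum = 0.0                     # restart after signalling

print("Shewhart signals at administrations:", shewhart_flags.tolist())
print("CUSUM signals at administrations:", flagged)
```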

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models, and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configuration of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km – 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large scale circulation will be quantified both on the basis of the well tested preferred circulation regime approach, and very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large scale measures as indicators of the probability of occurrence of the finer scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  20. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  1. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2) where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2) where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP) and lasts four years (2012-2015). In terms of hydrological modelling the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flooding risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  2. Using Multiscale Modeling to Study Coupled Flow, Transport, Reaction and Biofilm Growth Processes in Porous Media

    NASA Astrophysics Data System (ADS)

    Valocchi, A. J.; Laleian, A.; Werth, C. J.

    2017-12-01

    Perturbation of natural subsurface systems by fluid inputs may induce geochemical or microbiological reactions that change porosity and permeability, leading to complex coupled feedbacks between reaction and transport processes. Some examples are precipitation/dissolution processes associated with carbon capture and storage and biofilm growth associated with contaminant transport and remediation. We study biofilm growth due to mixing-controlled reaction of multiple substrates. As biofilms grow, pore clogging occurs, which alters pore-scale flow paths and thus changes the mixing and reaction. These interactions are challenging to quantify using conventional continuum-scale porosity-permeability relations. Pore-scale models can accurately resolve coupled reaction, biofilm growth and transport processes, but modeling at this scale is not feasible for practical applications. There are two approaches to address this challenge. Results from pore-scale models in generic pore structures can be used to develop empirical relations between porosity and continuum-scale parameters, such as permeability and dispersion coefficients. The other approach is to develop a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled by a suitable method that ensures continuity of flux across the interface. Thus, regions of high reactivity where flow alteration occurs are resolved at the pore scale for accuracy, while regions of low reactivity are resolved at the continuum scale for efficiency. This approach thus avoids the need for empirical upscaling relations in regions with strong feedbacks between reaction and porosity change. We explore and compare these approaches for several two-dimensional cases.

  3. Modeling Alaska boreal forests with a controlled trend surface approach

    Treesearch

    Mo Zhou; Jingjing Liang

    2012-01-01

    A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  4. Development of a scaled-down aerobic fermentation model for scale-up in recombinant protein vaccine manufacturing.

    PubMed

    Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony

    2012-08-17

    A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) as the criterion of the scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among the 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
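
    The scale-matching logic can be illustrated with a generic power-law k(L)a correlation. The correlation form and all constants below are assumed for illustration; in practice the exponents and vessel geometry would be fitted to the equipment in question.

```python
def kla(N, D, vs, V, a=0.02, alpha=0.5, beta=0.3, Np=5.0, rho=1000.0):
    """Empirical correlation kLa = a*(P/V)^alpha * vs^beta (form and constants assumed).
    N: impeller speed [1/s], D: impeller diameter [m], vs: superficial gas velocity [m/s],
    V: liquid volume [m^3]."""
    power_per_volume = Np * rho * N**3 * D**5 / V        # ungassed stirrer power per volume, W/m^3
    return a * power_per_volume**alpha * vs**beta

def matching_speed(kla_target, D, vs, V, a=0.02, alpha=0.5, beta=0.3, Np=5.0, rho=1000.0):
    """Impeller speed of the scaled-down vessel that reproduces the target kLa."""
    pv = (kla_target / (a * vs**beta)) ** (1.0 / alpha)  # required power per volume
    return (pv * V / (Np * rho * D**5)) ** (1.0 / 3.0)

# Hypothetical 200-L production fermentor operating point ...
kla_200L = kla(N=5.0, D=0.15, vs=0.004, V=0.2)
# ... and the 2-L scale-down vessel speed that matches its kLa.
N_2L = matching_speed(kla_200L, D=0.06, vs=0.004, V=0.002)
print(f"target kLa = {kla_200L*3600:.0f} 1/h, matching 2-L impeller speed = {N_2L*60:.0f} rpm")
```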

  5. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to the single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It thereby captures strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum, as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  6. An approach to studying scale for students in higher education: a Rasch measurement model analysis.

    PubMed

    Waugh, R F; Hii, T K; Islam, A

    2000-01-01

    A questionnaire comprising 80 self-report items was designed to measure student Approaches to Studying in a higher education context. The items were conceptualized and designed from five learning orientations: a Deep Approach, a Surface Approach, a Strategic Approach, Clarity of Direction and Academic Self-Confidence, to include 40 attitude items and 40 corresponding behavior items. The study aimed to create a scale and investigate its psychometric properties using a Rasch measurement model. The convenience sample consisted of 350 students at an Australian university in 1998. The analysis supported the conceptual structure of the Scale as involving studying attitudes and behaviors towards five orientations to learning. Attitudes are mostly easier than behaviors, in line with the theory. Sixty-eight items fit the model and have good psychometric properties. The proportion of observed variance considered true is 92% and the Scale is well-targeted against the students. Some harder items are needed to improve the targeting and some further testing work needs to be done on the Surface Approach. In the Surface Approach and Clarity of Direction in Studying, attitudes make a lesser contribution than behaviors to the variable, Approaches to Studying.
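
    For readers unfamiliar with the underlying model, a minimal sketch of the dichotomous Rasch model and a person-measure estimate is given below; the item difficulties and the response pattern are invented, not taken from the 80-item instrument described above.

```python
import numpy as np

def rasch_prob(theta, delta):
    """Dichotomous Rasch model: P(endorse) = exp(theta - delta) / (1 + exp(theta - delta))."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

def estimate_theta(responses, deltas, iters=25):
    """Newton-Raphson maximum-likelihood estimate of a person's location, given item difficulties."""
    theta = 0.0
    for _ in range(iters):
        p = rasch_prob(theta, deltas)
        grad = np.sum(responses - p)            # score function
        info = np.sum(p * (1.0 - p))            # test information
        theta += grad / info
    return theta

# Illustrative item difficulties (logits): attitude items easier than behaviour items.
deltas = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
responses = np.array([1, 1, 1, 1, 0, 1, 0, 0])   # one student's 0/1 responses
print(f"estimated person measure: {estimate_theta(responses, deltas):.2f} logits")
```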

  7. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    NASA Astrophysics Data System (ADS)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha^-1 yr^-1, depending on the spatial location on the hillslope, and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment yield rates of 0.6 t ha^-1 yr^-1 predicted by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha^-1 yr^-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The study has shown that the use of multiple methods facilitates the calibration and validation of models and might provide a more accurate measure for soil erosion rates in ungauged catchments. Moreover, the approach could be used to identify the most appropriate working and operational scales for soil erosion modelling.

  8. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
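
    The traditional analytical baseline mentioned above, the Theis solution, is compact enough to sketch directly; the pumping rate, transmissivity, and storativity used here are hypothetical and only loosely Chalk-like.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) drawdown s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t).
    r [m], t [s], pumping rate Q [m^3/s], transmissivity T [m^2/s], storativity S [-]."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)    # exp1 is the well function W(u)

# Hypothetical aquifer parameters, purely for illustration.
Q, T, S = 0.02, 5e-3, 1e-3
times = np.array([600.0, 3600.0, 86400.0])    # 10 min, 1 h, 1 d
for r in (10.0, 100.0):
    s = theis_drawdown(r, times, Q, T, S)
    print(f"r = {r:5.0f} m ->", ", ".join(f"{x:.2f} m" for x in s))
```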

  9. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach, inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  10. Fine-Scale Exposure to Allergenic Pollen in the Urban Environment: Evaluation of Land Use Regression Approach.

    PubMed

    Hjort, Jan; Hugg, Timo T; Antikainen, Harri; Rusanen, Jarmo; Sofiev, Mikhail; Kukkonen, Jaakko; Jaakkola, Maritta S; Jaakkola, Jouni J K

    2016-05-01

    Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at urban scale. We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was the generalized linear model (GLM). GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to retrospectively estimate pollen concentrations for long-term exposure assessments. Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761.
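
    A bare-bones land use regression in the spirit of the study can be sketched with ordinary least squares on a log scale. The predictors (NDVI and a grass-cover fraction) and the synthetic site data are placeholders; the published model is a GLM with its own covariate set.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monitoring data: 16 sites with remotely sensed NDVI, fraction of grassy
# land use within 500 m, and observed peak-season grass pollen concentrations.
n_sites = 16
ndvi = rng.uniform(0.2, 0.8, n_sites)
grass_frac = rng.uniform(0.0, 0.6, n_sites)
log_pollen = 1.0 + 3.0 * ndvi + 1.5 * grass_frac + rng.normal(0.0, 0.3, n_sites)

# Land use regression: ordinary least squares on the log scale.
X = np.column_stack([np.ones(n_sites), ndvi, grass_frac])
coef, *_ = np.linalg.lstsq(X, log_pollen, rcond=None)
fitted = X @ coef
r2 = 1.0 - np.sum((log_pollen - fitted) ** 2) / np.sum((log_pollen - log_pollen.mean()) ** 2)
print("intercept, NDVI, grass-fraction coefficients:", np.round(coef, 2))
print(f"R^2 = {r2:.2f}")

# Predict at an unmonitored location from its geospatial predictors.
new_site = np.array([1.0, 0.55, 0.25])
print(f"predicted pollen concentration ~ {np.exp(new_site @ coef):.0f} grains/m^3")
```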

  11. COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES

    EPA Science Inventory

    River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...

  12. Numerical models for fluid-grains interactions: opportunities and limitations

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro- and meso-scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. The previous numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with a satisfactory scalability on up to a few thousands of cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to enhance further the capabilities of the numerical models.
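
    At the particle level, the Discrete Element/soft-sphere collision model reduces to a pairwise normal contact force. The linear spring-dashpot form below is one common choice, and the stiffness, damping, and particle properties are illustrative values only.

```python
import numpy as np

def soft_sphere_forces(pos, vel, radius, kn=1.0e4, gamma_n=5.0):
    """Pairwise normal contact forces for a linear spring-dashpot soft-sphere model.
    pos, vel: (N, 3) arrays; returns an (N, 3) array of contact forces."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            dist = np.linalg.norm(rij)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:                                  # particles in contact
                normal = rij / dist
                vn = np.dot(vel[i] - vel[j], normal)           # normal relative velocity
                fn = (kn * overlap - gamma_n * vn) * normal    # spring + dashpot
                forces[i] += fn
                forces[j] -= fn
    return forces

# Two particles approaching head-on (illustrative values), advanced by one explicit Euler step.
pos = np.array([[0.0, 0.0, 0.0], [0.0195, 0.0, 0.0]])
vel = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
radius = np.array([0.01, 0.01])
mass = np.array([4.0e-3, 4.0e-3])
dt = 1.0e-5
f = soft_sphere_forces(pos, vel, radius)
vel += f / mass[:, None] * dt
pos += vel * dt
print("contact forces:\n", f)
```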

  13. Scale and the representation of human agency in the modeling of agroecosystems

    DOE PAGES

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...

    2015-07-17

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration of how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.

  14. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  15. A quantum wave based compact modeling approach for the current in ultra-short DG MOSFETs suitable for rapid multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander

    2017-11-01

    Source-to-drain (SD) tunneling degrades device performance in MOSFETs with channel lengths below 10 nm. Modeling quantum mechanical effects, including SD tunneling, has therefore gained importance, especially for compact model developers. The non-equilibrium Green's function (NEGF) formalism has become a state-of-the-art method for nano-scaled device simulation in recent years. In the sense of a multi-scale simulation approach, it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models, which consider quantum effects in nano-scaled devices. In this work, an NEGF-based analytical model for nano-scaled double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution from a classical compact model and a 1D NEGF formalism for calculating the device current, taking into account quantum mechanical effects. The potential calculation omits the iterative coupling and allows a straightforward current calculation. The model is based on a ballistic NEGF approach, whereby backscattering effects are considered as a second-order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model are examined in comparison with numerical NanoMOS TCAD data for various channel lengths. With the help of this model, investigations of short-channel and temperature effects are performed.
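
    To make the NEGF ingredient concrete, the following is a minimal 1D effective-mass, ballistic NEGF transmission and Landauer current calculation for a square source-to-drain barrier. It is not the closed-form DG MOSFET model of the paper: the potential profile, effective mass, contact Fermi levels, and bias are all assumed, and no electrostatics are solved.

```python
import numpy as np

HBAR, Q, M0, KB, H_PLANCK = 1.0546e-34, 1.602e-19, 9.109e-31, 1.381e-23, 6.626e-34

def transmission(E, U, a, m_eff):
    """Ballistic transmission T(E) of a 1D effective-mass tight-binding chain with
    on-site potential profile U (all energies in eV)."""
    t = HBAR**2 / (2.0 * m_eff * a**2) / Q               # hopping energy, eV
    n = len(U)
    H = np.diag(2.0 * t + U) - t * (np.eye(n, k=1) + np.eye(n, k=-1))
    def sigma(u_contact):                                 # semi-infinite contact self-energy
        ka = np.arccos(np.clip(1.0 - (E - u_contact) / (2.0 * t), -1.0, 1.0))
        return -t * np.exp(1j * ka)
    s1, s2 = sigma(U[0]), sigma(U[-1])
    A = (E + 1e-9j) * np.eye(n, dtype=complex) - H
    A[0, 0] -= s1
    A[-1, -1] -= s2
    G = np.linalg.inv(A)                                  # retarded Green's function
    gam1, gam2 = -2.0 * s1.imag, -2.0 * s2.imag           # contact broadening functions
    return float(gam1 * gam2 * abs(G[0, -1]) ** 2)

# Hypothetical 8 nm channel with a 0.2 eV source-to-drain barrier and m* = 0.25 m0.
a, m_eff, temp = 2.0e-10, 0.25 * M0, 300.0
x = np.arange(0.0, 8.0e-9, a)
U = np.where((x > 2.0e-9) & (x < 6.0e-9), 0.2, 0.0)
mu_source, mu_drain = 0.25, 0.20                          # contact Fermi levels, eV (assumed)

energies = np.linspace(0.01, 0.6, 300)
dE = energies[1] - energies[0]
fermi = lambda E, mu: 1.0 / (1.0 + np.exp((E - mu) * Q / (KB * temp)))
T = np.array([transmission(E, U, a, m_eff) for E in energies])

# Landauer current per transverse mode: I = (2q/h) * integral of T(E) (f_s - f_d) dE
current = (2.0 * Q / H_PLANCK
           * np.sum(T * (fermi(energies, mu_source) - fermi(energies, mu_drain))) * dE * Q)
print(f"T(E = 0.10 eV, below the barrier) = {T[np.argmin(abs(energies - 0.10))]:.2e}")
print(f"ballistic current ~ {current * 1e6:.2f} uA per mode")
```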

  16. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
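
    For reference, the baseline Smagorinsky closure discussed above can be written in a few lines; the 2D synthetic velocity field and the Smagorinsky constant below are illustrative, and none of the buoyancy, rotation, or stratification extensions developed in the paper are included.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, cs=0.17):
    """Classical Smagorinsky SGS eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a 2D grid."""
    dudx, dudy = np.gradient(u, dx, dy, edge_order=2)
    dvdx, dvdy = np.gradient(v, dx, dy, edge_order=2)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 Sij Sij)
    delta = np.sqrt(dx * dy)                                       # filter width
    return (cs * delta) ** 2 * strain_mag

# Synthetic resolved field: a Taylor-Green-like vortex on a 64x64 grid (illustrative only).
n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)
nu_t = smagorinsky_viscosity(u, v, L / n, L / n)
print(f"max SGS viscosity: {nu_t.max():.3e}, mean: {nu_t.mean():.3e}")
```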

  17. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.

  18. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation, including heat transfer at the macroscale level and crystallization, cell volume change and mass transport across cell membranes at the microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
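
    The microscale ingredient of cell volume change during freezing is often represented by a single osmotic water-transport ODE. The sketch below uses that standard form with invented membrane and solution parameters; it is only meant to illustrate the kind of equation coupled into such a multi-scale framework.

```python
import numpy as np

def cell_dehydration(Lp=1.0e-14,      # hydraulic conductivity, m s^-1 Pa^-1 (assumed)
                     A=2.0e-9,        # membrane area, m^2
                     V0=1.0e-15,      # initial cell volume, m^3
                     Vb=0.3e-15,      # osmotically inactive volume, m^3
                     M_ext=2000.0,    # extracellular osmolality after ice forms, mol m^-3
                     M0=300.0,        # initial intracellular osmolality, mol m^-3
                     T=263.15, dt=0.01, t_end=60.0):
    """Osmotic water transport across the membrane: dV/dt = -Lp*A*R*T*(M_ext - M_int)."""
    R = 8.314
    n_osm = M0 * (V0 - Vb)            # intracellular osmoles are conserved
    V, out = V0, []
    for t in np.arange(0.0, t_end, dt):
        M_int = n_osm / (V - Vb)      # intracellular osmolality rises as water leaves
        V += -Lp * A * R * T * (M_ext - M_int) * dt
        out.append((t, V / V0))
    return np.array(out)

traj = cell_dehydration()
print(f"relative cell volume after {traj[-1, 0]:.0f} s: {traj[-1, 1]:.2f}")
```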

  19. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    ERIC Educational Resources Information Center

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach and to give a clear idea of what it can accomplish.

  20. Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…

  1. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE PAGES

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
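
    The second class of methods, the pore-network model, can be illustrated with a toy regular network: Hagen-Poiseuille throat conductances, a mass balance at every pore, and a Darcy back-calculation of permeability. The geometry, throat radii, and boundary pressures below are synthetic and bear no relation to the validated PNM used in the intercomparison.

```python
import numpy as np

rng = np.random.default_rng(1)

def pore_network_permeability(nx=10, ny=10, spacing=1.0e-4, mu=1.0e-3, dp=100.0):
    """Solve for pore pressures on a regular 2D network with Hagen-Poiseuille throats
    and return the Darcy permeability of the sample (all geometry is synthetic)."""
    n = nx * ny
    idx = lambda i, j: i * ny + j
    g = lambda r: np.pi * r**4 / (8.0 * mu * spacing)      # throat conductance, m^3 Pa^-1 s^-1
    # Build throats between grid neighbours with randomly varying radii.
    radii = {}
    for i in range(nx):
        for j in range(ny):
            for (i2, j2) in ((i + 1, j), (i, j + 1)):
                if i2 < nx and j2 < ny:
                    radii[(idx(i, j), idx(i2, j2))] = 5.0e-6 * np.exp(0.4 * rng.normal())
    # Mass balance at every pore; Dirichlet pressures on the inlet/outlet columns.
    A, b = np.zeros((n, n)), np.zeros(n)
    for (k1, k2), r in radii.items():
        gk = g(r)
        A[k1, k1] += gk; A[k2, k2] += gk
        A[k1, k2] -= gk; A[k2, k1] -= gk
    for i in range(nx):
        for j, p in ((0, dp), (ny - 1, 0.0)):
            k = idx(i, j)
            A[k, :] = 0.0; A[k, k] = 1.0; b[k] = p
    pressure = np.linalg.solve(A, b)
    # Total inflow through the inlet column gives the sample permeability via Darcy's law.
    q_in = sum(g(r) * (pressure[k1] - pressure[k2])
               for (k1, k2), r in radii.items() if k1 % ny == 0 and k2 % ny == 1)
    area = nx * spacing * spacing                          # cross-section (unit depth assumed)
    length = (ny - 1) * spacing
    return q_in * mu * length / (area * dp)

print(f"network permeability ~ {pore_network_permeability():.2e} m^2")
```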

  2. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.

  3. Airframe Noise Prediction of a Full Aircraft in Model and Full Scale Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Duda, Benjamin; Khorrami, Mehdi R.

    2016-01-01

    Unsteady flow computations are presented for a Gulfstream aircraft model in landing configuration, i.e., flap deflected 39 deg and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW™ to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. Two geometry representations of the same aircraft are analyzed: an 18% scale, high-fidelity, semi-span model at wind tunnel Reynolds number and a full-scale, full-span model at half-flight Reynolds number. Previously published and newly generated model-scale results are presented; all full-scale data are disclosed here for the first time. Reynolds number and geometrical fidelity effects are carefully examined to discern aerodynamic and aeroacoustic trends with a special focus on the scaling of surface pressure fluctuations and farfield noise. An additional study of the effects of geometrical detail on farfield noise is also documented. The present investigation reveals that, overall, the model-scale and full-scale aeroacoustic results compare rather well. Nevertheless, the study also highlights that finer geometrical details that are typically not captured at model scales can have a non-negligible contribution to the farfield noise signature.

  4. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach basically consisted in (1) decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting.
In terms of methodological approach, further investigations may concern a comprehensive sensitivity analysis of the modelling to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
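
    The scale-by-scale regression idea can be sketched without the full wavelet machinery by using a simple additive moving-average multiresolution decomposition (an assumption made here purely for brevity; the study uses a discrete wavelet transform). The predictor index and predictand series below are synthetic.

```python
import numpy as np

def multires_decompose(x, levels=4):
    """Additive multiresolution decomposition: details from successive moving-average
    smoothings plus a final smooth; the components sum exactly back to x."""
    comps, approx = [], x.astype(float)
    for j in range(1, levels + 1):
        w = 2 ** j + 1
        smooth = np.convolve(approx, np.ones(w) / w, mode="same")
        comps.append(approx - smooth)          # detail at scale ~2^j
        approx = smooth
    comps.append(approx)                       # residual smooth (largest scales)
    return comps

rng = np.random.default_rng(0)
n = 600                                        # e.g. 50 years of monthly values
t = np.arange(n)
# Synthetic large-scale predictor (an SLP-based index) and local predictand (streamflow):
predictor = (np.sin(2 * np.pi * t / 12) + 0.5 * np.sin(2 * np.pi * t / 128)
             + 0.3 * rng.normal(size=n))
predictand = (0.8 * np.sin(2 * np.pi * t / 12 + 0.3) + 0.9 * np.sin(2 * np.pi * t / 128)
              + 0.4 * rng.normal(size=n))

# One regression model per time-scale, then sum the scale-wise reconstructions.
px, py = multires_decompose(predictor), multires_decompose(predictand)
reconstruction = np.zeros(n)
for cx, cy in zip(px, py):
    slope = np.dot(cx, cy) / np.dot(cx, cx)    # least-squares fit at this scale
    reconstruction += slope * cx

naive = np.polyval(np.polyfit(predictor, predictand, 1), predictor)
for name, est in (("scale-wise ESD", reconstruction), ("single regression", naive)):
    r2 = 1 - np.sum((predictand - est) ** 2) / np.sum((predictand - predictand.mean()) ** 2)
    print(f"{name:>18s}: R^2 = {r2:.2f}")
```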

  5. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
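
    The heart of the cdf approach, propagating a within-day intensity distribution through a nonlinear (threshold) process, has a closed form if the intensity is assumed exponential. The sketch below compares that expectation with the same water applied at the constant daily-average intensity; the storm duration and infiltration capacity are illustrative.

```python
import numpy as np

def runoff_cdf_approach(daily_rain_mm, storm_hours=6.0, infil_capacity_mm_h=8.0):
    """Expected infiltration-excess runoff when within-day intensity is treated as an
    exponential distribution with mean i_mean = daily rain / storm duration.
    For an exponential pdf, E[max(i - Ic, 0)] * D = P * exp(-Ic / i_mean)."""
    i_mean = daily_rain_mm / storm_hours
    return daily_rain_mm * np.exp(-infil_capacity_mm_h / i_mean)

def runoff_daily_average(daily_rain_mm, storm_hours=6.0, infil_capacity_mm_h=8.0):
    """Same water, but applied at the constant average intensity (the usual daily model)."""
    i_mean = daily_rain_mm / storm_hours
    return max(i_mean - infil_capacity_mm_h, 0.0) * storm_hours

for p in (10.0, 30.0, 60.0):
    print(f"P = {p:4.0f} mm/d -> cdf model: {runoff_cdf_approach(p):5.1f} mm, "
          f"daily-average model: {runoff_daily_average(p):5.1f} mm")
```

    Even this toy comparison shows why a daily-average intensity badly under-predicts infiltration-excess runoff for moderate rainfall days, which is the motivation for carrying the intensity cdf through the model.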

  6. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

    Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.

  7. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scales in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging approaches to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
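
    A minimal sketch of the spectral-nudging idea referred to above (illustrative only; the wavenumber cutoff and relaxation strength are assumptions, and the paper's quasi-geostrophic implementation is not reproduced): only the low-wavenumber Fourier modes of the model field are relaxed toward the reference, leaving the small scales free to evolve.

    ```python
    # Spectral nudging sketch: relax the large-scale Fourier modes of a 2D field
    # toward a reference field while leaving smaller scales untouched.
    import numpy as np

    def spectral_nudge(model_field, reference_field, k_cut, relax=0.1):
        fm = np.fft.fft2(model_field)
        fr = np.fft.fft2(reference_field)
        ny, nx = model_field.shape
        ky = np.fft.fftfreq(ny) * ny
        kx = np.fft.fftfreq(nx) * nx
        kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
        large_scale = kk <= k_cut                      # modes treated as "large scale"
        fm[large_scale] += relax * (fr[large_scale] - fm[large_scale])
        return np.real(np.fft.ifft2(fm))

    rng = np.random.default_rng(0)
    ref = np.cos(np.linspace(0, 2 * np.pi, 64))[:, None] * np.ones((64, 64))
    noisy_model = ref + rng.normal(scale=0.5, size=(64, 64))
    nudged = spectral_nudge(noisy_model, ref, k_cut=4)
    ```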

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration of how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches, however, should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.

  9. LINKING BROAD-SCALE LANDSCAPE APPROACHES WITH FINE-SCALE PROCESS MODELS: THE SEQL PROJECT

    EPA Science Inventory

    Regional landscape models have been shown to be useful in targeting watersheds in need of further attention at a local scale. However, knowing the proximate causes of environmental degradation at a regional scale, such as impervious surface, is not enough to help local decision m...

  10. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on the upscaling of hydraulic conductivity. It is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches; the focus is on the essential aspects, in contrast to the occasionally very extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale (the scale of pores), meso-scale (the scale of a laboratory sample), macro-scale (the scale of typical blocks in numerical models of groundwater flow), local-scale (the scale of an aquifer/aquitard) and regional-scale (the scale of a series of aquifers and aquitards). Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and, likewise, for upscaling local-scale models to regional-scale models. The latest results on downscaling from block-scale to sample-scale are also briefly discussed.
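
    As a concrete illustration of sample-to-block upscaling, the classical bounds on block-scale hydraulic conductivity can be computed from sample-scale values; the sketch below uses textbook means (the paper's own upscaling procedures are not reproduced, and the values are hypothetical):

    ```python
    # Harmonic mean (flow across layers, lower bound), geometric mean (often used
    # for statistically isotropic media) and arithmetic mean (flow along layers,
    # upper bound) of sample-scale hydraulic conductivities within one model block.
    import numpy as np

    def upscale_conductivity(k_samples):
        k = np.asarray(k_samples, dtype=float)
        harmonic = len(k) / np.sum(1.0 / k)
        geometric = np.exp(np.mean(np.log(k)))
        arithmetic = np.mean(k)
        return harmonic, geometric, arithmetic

    # Hypothetical sample-scale measurements (m/day) inside one block:
    print(upscale_conductivity([0.5, 2.0, 10.0, 0.1, 5.0]))
    ```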

  11. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are therefore readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains at very small flow rates is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern.

  12. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    The Scaled SLW model for the prediction of radiative transfer in non-uniform gaseous media is presented. The paper considers a new approach for the construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method, special attention is paid to explicit non-iterative methods for calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, i.e. the Planck mean, and the first inverse moment, i.e. the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, including both discrete gray gases and the continuous formulation, is presented. The application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions of radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, as well as to predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.

  13. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by the available observations. A distinctive feature is that these realisations are binned conditionally on the previous model state during the minimization, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model, with both small and large time-scale separations between the slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.
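
    The multi-scale Lorenz '96 system used as the test bed is a standard benchmark; a minimal sketch of its equations follows (the parameter values are the commonly used ones, not necessarily those of the study):

    ```python
    # Two-scale Lorenz '96: K slow variables X, each coupled to J fast variables Y.
    import numpy as np

    def lorenz96_two_scale(state, K=8, J=32, F=20.0, h=1.0, b=10.0, c=10.0):
        X, Y = state[:K], state[K:].reshape(K, J)
        dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
              - X + F - (h * c / b) * Y.sum(axis=1))
        Yf = Y.reshape(-1)                      # fast variables as one cyclic vector
        dY = (-c * b * np.roll(Yf, -1) * (np.roll(Yf, -2) - np.roll(Yf, 1))
              - c * Yf + (h * c / b) * np.repeat(X, J))
        return np.concatenate([dX, dY])

    # Simple forward-Euler integration, for illustration only
    rng = np.random.default_rng(1)
    state = rng.normal(size=8 + 8 * 32)
    dt = 0.001
    for _ in range(1000):
        state = state + dt * lorenz96_two_scale(state)
    ```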

  14. Modelling strategies to predict the multi-scale effects of rural land management change

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.

    2011-12-01

    Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.

  15. Improved pattern scaling approaches for the use in climate impact studies

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Sanderson, Benjamin M.; Knutti, Reto

    2015-05-01

    Pattern scaling is a simple way to produce climate projections beyond the scenarios run with expensive global climate models (GCMs). The simplest technique has known limitations and assumes that a spatial climate anomaly pattern obtained from a GCM can be scaled by the global mean temperature (GMT) anomaly. We propose alternatives and assess their skills and limitations. One approach which avoids scaling is to consider a period in a different scenario with the same GMT change. It is attractive as it provides patterns of any temporal resolution that are consistent across variables, and it does not distort variability. Second, we extend the traditional approach with a land-sea contrast term, which provides the largest improvements over the traditional technique. When interpolating between known bounding scenarios, the proposed methods significantly improve the accuracy of the pattern scaled scenario with little computational cost. The remaining errors are much smaller than the Coupled Model Intercomparison Project Phase 5 model spread.
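
    A minimal sketch of the scaling step itself (field names and the form of the land-sea term are illustrative assumptions, not the relations fitted in the paper):

    ```python
    # Traditional pattern scaling plus a simple land-sea contrast extension.
    import numpy as np

    def pattern_scale(local_pattern, delta_gmt):
        """Local anomaly = pattern (K of local change per K of GMT) x GMT anomaly."""
        return local_pattern * delta_gmt

    def pattern_scale_landsea(pattern, land_pattern, land_mask, delta_gmt, delta_contrast):
        """Extended scaling with an additional land-sea warming-contrast term."""
        return pattern * delta_gmt + land_pattern * land_mask * delta_contrast

    # Illustrative 2x2 "map" of warming per degree of global-mean warming:
    pattern = np.array([[1.4, 1.1],
                        [0.8, 0.9]])
    print(pattern_scale(pattern, delta_gmt=2.0))
    ```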

  16. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  17. Mesoscale Models of Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.

    During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation, or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.

  18. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  19. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  20. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for a more realistic representation of cloud-radiation interactions in large-scale models.
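
    A minimal sketch of the statistic at the heart of the approach (not the parameterization itself): estimating the spatial autocorrelation function of a cloud field, i.e. the quantity that would be supplied to the radiation calculation alongside the subgrid probability distribution.

    ```python
    # Normalized spatial autocorrelation of a (synthetic) cloud water transect.
    import numpy as np

    def autocorrelation_1d(field):
        f = np.asarray(field, dtype=float)
        f = f - f.mean()
        acf = np.correlate(f, f, mode="full")[f.size - 1:]
        return acf / acf[0]

    rng = np.random.default_rng(2)
    noise = rng.normal(size=256)
    cloud = np.convolve(noise, np.ones(16) / 16, mode="same")   # correlated field
    print(autocorrelation_1d(cloud)[:5])
    ```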

  1. RESOLVING FINE SCALE IN AIR TOXICS MODELING AND THE IMPORTANCE OF ITS SUB-GRID VARIABILITY FOR EXPOSURE ESTIMATES

    EPA Science Inventory

    This presentation explains the importance of the fine-scale features for air toxics exposure modeling. The paper presents a new approach to combine local-scale and regional model results for the National Air Toxic Assessment. The technique has been evaluated with a chemical tra...

  2. Assessment of the scale effect on statistical downscaling quality at a station scale using a weather generator-based model

    USDA-ARS?s Scientific Manuscript database

    The resolution of General Circulation Models (GCMs) is too coarse to assess the fine scale or site-specific impacts of climate change. Downscaling approaches including dynamical and statistical downscaling have been developed to meet this requirement. As the resolution of climate model increases, it...

  3. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.

  4. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
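
    A minimal sketch of the shift-based separation (assumptions: the re-distancing step is approximated here with a Euclidean distance transform rather than a PDE-based reinitialization, and the shift h is an illustrative choice): the interface is shrunk by h, re-distanced, and grown back by h, so structures thinner than roughly 2h do not survive the round trip and are flagged as non-resolvable.

    ```python
    # Shift-based scale separation on a level-set field (illustrative sketch).
    import numpy as np
    from scipy import ndimage

    def signed_distance(mask, dx=1.0):
        """Signed distance, positive inside the mask."""
        inside = ndimage.distance_transform_edt(mask) * dx
        outside = ndimage.distance_transform_edt(~mask) * dx
        return inside - outside

    def separate_scales(phi, h, dx=1.0):
        shrunk = signed_distance(phi - h > 0.0, dx)       # negative shift + re-distance
        regrown = signed_distance(shrunk + h > 0.0, dx)   # positive shift + re-distance
        unresolved = (phi > 0.0) & (regrown <= 0.0)       # lost in the round trip
        return regrown, unresolved

    # Two droplets: one well resolved, one below the chosen threshold
    y, x = np.mgrid[0:64, 0:64]
    mask = ((x - 20) ** 2 + (y - 32) ** 2 < 12 ** 2) | ((x - 50) ** 2 + (y - 32) ** 2 < 2 ** 2)
    phi = signed_distance(mask)
    resolved, unresolved = separate_scales(phi, h=3.0)
    print(unresolved.sum(), "cells flagged as non-resolvable")
    ```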

  5. Estimating returns to scale and scale efficiency for energy consuming appliances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Helcio; Okwelum, Edson O.

    Energy-consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforwardly determined. The efficient allocation of capital and energy to provide an energy service has been previously approached, and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economically efficient models of an appliance available in the market when constant returns to scale does not hold. We expand on that approach to estimate returns to scale for energy-using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions of returns to scale. We then apply this approach to evaluate dishwashers available in the market in the U.S. Our results show that (a) for the case of dishwashers scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increasing consumers’ choice of appliances.
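
    A minimal sketch of the underlying DEA calculation (the appliance data are purely hypothetical; the sketch computes input-oriented efficiency under constant and variable returns to scale, with scale efficiency as their ratio):

    ```python
    # Input-oriented DEA via linear programming (CRS and VRS envelopes).
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(X, Y, o, vrs=False):
        """Efficiency of unit o. X: (n, m) inputs, Y: (n, s) outputs."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                 # minimize theta
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                         # X^T lambda <= theta * x_o
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                         # Y^T lambda >= y_o
        b_ub[m:] = -Y[o]
        A_eq = np.r_[0.0, np.ones(n)][None, :] if vrs else None
        b_eq = [1.0] if vrs else None
        bounds = [(None, None)] + [(0.0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[0]

    # Hypothetical dishwashers: inputs = (price in $, annual kWh), output = cycles/year
    X = np.array([[400.0, 270.0], [650.0, 230.0], [900.0, 310.0]])
    Y = np.array([[215.0], [215.0], [290.0]])
    for o in range(len(X)):
        crs, vrs = dea_efficiency(X, Y, o), dea_efficiency(X, Y, o, vrs=True)
        print(f"unit {o}: CRS={crs:.3f}  VRS={vrs:.3f}  scale efficiency={crs / vrs:.3f}")
    ```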

  6. Characterization of double continuum formulations of transport through pore-scale information

    NASA Astrophysics Data System (ADS)

    Porta, G.; Ceriotti, G.; Bijeljic, B.

    2016-12-01

    Information on pore-scale characteristics is becoming increasingly available at unprecedented levels of detail from modern visualization/data-acquisition techniques. These advancements are not yet matched by corresponding developments in operational procedures for translating theoretical findings into a reduced uncertainty in the outputs of the continuum-scale models employed at large scales. We present here a modeling approach which rests on pore-scale information to achieve a complete characterization of a double continuum model of transport and fluid-fluid reactive processes. Our model makes full use of pore-scale velocity distributions to identify mobile and immobile regions. We do so on the basis of a pointwise (in the pore space) evaluation of the relative strength of advection and diffusion time scales, as rendered by spatially variable values of local Péclet numbers. After mobile and immobile regions are demarcated, we build a simplified unit cell which is employed as a representative proxy of the real porous domain. This model geometry is then employed to simplify the computation of the effective parameters embedded in the double continuum transport model, while retaining relevant information from the pore-scale characterization of the geometry and velocity field. We document results which illustrate the applicability of the methodology to predict transport of a passive tracer within two- and three-dimensional media, upon comparison with direct pore-scale numerical simulation of transport in the same geometrical settings. We also show preliminary results on the extension of this model to fluid-fluid reactive transport processes. In this context, we focus on results obtained in two-dimensional porous systems. We discuss the critical quantities required as input to our modeling approach to obtain continuum-scale outputs. We identify the key limitations of the proposed methodology and discuss its capabilities in comparison with alternative approaches grounded, e.g., in nonlocal and particle-based approximations.
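
    A minimal sketch of the demarcation step described above (the threshold, pore length and diffusivity are illustrative assumptions): cells of the pore space are classified as mobile or immobile by comparing advective and diffusive time scales through a local Peclet number.

    ```python
    # Mobile/immobile demarcation from a local Peclet number.
    import numpy as np

    def demarcate_mobile(velocity_magnitude, pore_length, diffusivity, pe_crit=1.0):
        """Boolean mask of 'mobile' cells where the local Pe exceeds pe_crit."""
        peclet = velocity_magnitude * pore_length / diffusivity
        return peclet > pe_crit

    # Hypothetical pore-scale velocity magnitudes (m/s) on a small grid
    rng = np.random.default_rng(3)
    v = rng.lognormal(mean=-10.0, sigma=1.5, size=(50, 50))
    mobile = demarcate_mobile(v, pore_length=1e-4, diffusivity=1e-9)
    print("mobile fraction:", mobile.mean())
    ```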

  7. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    PubMed

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to address the main question in human cognitive development, i.e. whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address this question. We introduce a framework, built around the concept of time-scale, in which "fast" and "slow" are defined by the size of the time-scales involved. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.

  8. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  9. Multi-level molecular modelling for plasma medicine

    NASA Astrophysics Data System (ADS)

    Bogaerts, Annemie; Khosravian, Narjes; Van der Paal, Jonas; Verlackt, Christof C. W.; Yusupov, Maksudbek; Kamaraj, Balu; Neyts, Erik C.

    2016-02-01

    Modelling at the molecular or atomic scale can be very useful for obtaining a better insight in plasma medicine. This paper gives an overview of different atomic/molecular scale modelling approaches that can be used to study the direct interaction of plasma species with biomolecules or the consequences of these interactions for the biomolecules on a somewhat longer time-scale. These approaches include density functional theory (DFT), density functional based tight binding (DFTB), classical reactive and non-reactive molecular dynamics (MD) and united-atom or coarse-grained MD, as well as hybrid quantum mechanics/molecular mechanics (QM/MM) methods. Specific examples will be given for three important types of biomolecules, present in human cells, i.e. proteins, DNA and phospholipids found in the cell membrane. The results show that each of these modelling approaches has its specific strengths and limitations, and is particularly useful for certain applications. A multi-level approach is therefore most suitable for obtaining a global picture of the plasma-biomolecule interactions.

  10. Reverse engineering systems models of regulation: discovery, prediction and mechanisms.

    PubMed

    Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S

    2012-08-01

    Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Network rewiring dynamics with convergence towards a star network

    PubMed Central

    Dick, G.; Parry, M.

    2016-01-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440–442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach. PMID:27843396

  12. Network rewiring dynamics with convergence towards a star network.

    PubMed

    Whigham, P A; Dick, G; Parry, M

    2016-10-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz ( Nature 393 , 440-442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.
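
    The RtoS algorithm itself is not reproduced here; the following is a minimal sketch of a rewiring move with the same flavour: starting from a regular ring lattice, each step detaches one end of a randomly chosen edge and reattaches it to the current highest-degree node, so repeated application drives the network toward a star while keeping the numbers of nodes and edges fixed and avoiding self-loops and parallel edges.

    ```python
    # Hub-biased rewiring sketch (not the published RtoS algorithm).
    import random
    import networkx as nx

    def rewire_towards_star(G, steps, seed=0):
        rng = random.Random(seed)
        for _ in range(steps):
            hub = max(G.degree, key=lambda kv: kv[1])[0]
            u, v = rng.choice(list(G.edges()))
            keep = u if rng.random() < 0.5 else v
            if keep == hub or G.has_edge(keep, hub):
                continue                      # avoid self-loops and parallel edges
            G.remove_edge(u, v)
            G.add_edge(keep, hub)
        return G

    G = nx.watts_strogatz_graph(n=100, k=4, p=0.0)   # regular ring lattice
    G = rewire_towards_star(G, steps=2000)
    print(sorted(dict(G.degree()).values())[-3:])    # hub degrees emerge
    ```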

  13. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that 1) explicitly model the three-dimensional geometry of pore spaces and 2) conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).

  14. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
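
    A minimal sketch of the carbon-loss calculation for a single soil horizon (the pool fractions, base rates, Q10 and temperature series are illustrative placeholders, not the fitted incubation parameters): three pools decay at first-order rates scaled with soil temperature, and no decomposition occurs while the soil is frozen.

    ```python
    # Three-pool, temperature-dependent decomposition with a frozen cutoff.
    import numpy as np

    def carbon_remaining(c0, soil_temp_c, dt_days=1.0,
                         fractions=(0.02, 0.20, 0.78),
                         k_ref_per_day=(1e-2, 1e-3, 1e-5),
                         t_ref_c=5.0, q10=2.5):
        """Total C (kg m-2) remaining after a soil-temperature time series."""
        pools = c0 * np.asarray(fractions)
        k_ref = np.asarray(k_ref_per_day)
        for t in soil_temp_c:
            if t <= 0.0:                       # frozen: no decomposition
                continue
            k = k_ref * q10 ** ((t - t_ref_c) / 10.0)
            pools = pools * np.exp(-k * dt_days)
        return pools.sum()

    # A crude, slowly warming seasonal cycle over 90 years (illustration only)
    days = np.arange(90 * 365)
    temps = -5.0 + 10.0 * np.sin(2 * np.pi * days / 365.0) + 0.03 * days / 365.0
    print(carbon_remaining(c0=50.0, soil_temp_c=temps))
    ```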

  15. A Chimpanzee (Pan troglodytes) Model of Triarchic Psychopathy Constructs: Development and Initial Validation

    PubMed Central

    Latzman, Robert D.; Drislane, Laura E.; Hecht, Lisa K.; Brislin, Sarah J.; Patrick, Christopher J.; Lilienfeld, Scott O.; Freeman, Hani J.; Schapiro, Steven J.; Hopkins, William D.

    2015-01-01

    The current work sought to operationalize constructs of the triarchic model of psychopathy in chimpanzees (Pan troglodytes), a species well-suited for investigations of basic biobehavioral dispositions relevant to psychopathology. Across three studies, we generated validity evidence for scale measures of the triarchic model constructs in a large sample (N=238) of socially-housed chimpanzees. Using a consensus-based rating approach, we first identified candidate items for the chimpanzee triarchic (CHMP-Tri) scales from an existing primate personality instrument and refined these into scales. In Study 2, we collected data for these scales from human informants (N=301), and examined their convergent and divergent relations with scales from another triarchic inventory developed for human use. In Study 3, we undertook validation work examining associations between CHMP-Tri scales and task measures of approach-avoidance behavior (N=73) and ability to delay gratification (N=55). Current findings provide support for a chimpanzee model of core dispositions relevant to psychopathy and other forms of psychopathology. PMID:26779396

  16. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Treesearch

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture– fraction model relying on thermodynamic...

  17. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), while the computational time is halved. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
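
    For reference, the skill score quoted above is the Nash-Sutcliffe efficiency; a minimal sketch of its computation on hypothetical hydrographs:

    ```python
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical observed and simulated flows (m^3/s)
    obs = np.array([10.0, 40.0, 120.0, 300.0, 180.0, 90.0, 45.0])
    sim = np.array([12.0, 35.0, 110.0, 320.0, 170.0, 95.0, 50.0])
    print(round(nash_sutcliffe(obs, sim), 3))
    ```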

  18. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.

  19. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  20. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of the reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale two-component model has been established for simulating very long-term multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2, including Germany and parts of neighbouring countries. The model performance has been examined multi-objectively, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.

  1. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    PubMed Central

    Wind, Stefanie A.

    2016-01-01

    Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916

  2. Multiscale modeling and simulation of brain blood flow

    NASA Astrophysics Data System (ADS)

    Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em

    2016-02-01

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  3. Prospects for improving the representation of coastal and shelf seas in global ocean models

    NASA Astrophysics Data System (ADS)

    Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard

    2017-02-01

    Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape.We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art.We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones.To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
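
    A back-of-the-envelope sketch of the kind of resolution-cost argument made above (the cost exponent, annual compute growth and reference configuration are illustrative assumptions, not the authors' cost model): the cost of an explicit model scales roughly as the cube of the horizontal refinement factor (two space dimensions plus a CFL-limited time step), while the available compute grows year on year.

    ```python
    # Years until a refined configuration costs the same machine share as today's reference.
    import math

    def years_until_affordable(refinement_factor, annual_compute_growth=1.4):
        cost_ratio = refinement_factor ** 3          # 2 horizontal dims + time step
        return math.log(cost_ratio) / math.log(annual_compute_growth)

    # Going from a 1/4 degree to a 1/12 degree global model (3x refinement):
    print(round(years_until_affordable(3.0), 1), "years at ~40% compute growth per year")
    ```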

  4. Measuring self-reported studying and learning for university students: linking attitudes and behaviours on the same scale.

    PubMed

    Waugh, Russell F

    2002-12-01

    The relationships between self-reported Approaches to Studying and Self-concept, Self-capability and Studying and Learning Behaviour are usually studied by measuring the variables separately (using factor analysis and Cronbach Alphas) and then using various correlation techniques (such as multiple regression and path analysis). This procedure has measurement problems and is called into question. The aim was to create a single scale of Studying and Learning using a model with subsets of ordered stem-items based on a Deep Approach, a Surface Approach and a Strategic Approach, integrated with three self-reported aspects (an Ideal Self-view, a Capability Self-view and a Studying and Learning Behaviour Self-view). The stem-item sample was 33, each answered in three aspects, producing an effective item sample of 99. The person convenience sample was 431 students in education (1st to 4th year) at an Australian university during 2000. The latest Rasch Unidimensional Measurement Model computer program (Andrich, Lyne, Sheridan, & Luo, 2000) was used to analyse the data and create a single scale of Studying and Learning. Altogether 77 items fitted a Rasch Measurement Model and formed a scale in which the 'difficulties' of the items were ordered from 'easy' to 'hard' and the student measures of Studying and Learning were ordered from 'low' to 'high'. The proportion of observed student variance considered true was 0.96. The response categories were answered consistently and logically, and the results supported many, but not all, of the conceptualised orderings of the subscales. Students found it 'easy' to report a high Ideal Self-view, 'much harder' to report a high Capability Self-view, and 'harder still' to report a high Studying and Learning Behaviour for the stem-items, in accordance with the model, where items fit the measurement model. The Ideal Self-view Surface Approach items showed the poorest fit to the model. This method was highly successful in producing a single scale of Studying and Learning from self-reported Self-concepts, Self-capabilities, and Studying and Learning Behaviours, based on a Deep Approach, a Surface Approach and a Strategic Approach.

  5. Fine-Scale Mapping by Spatial Risk Distribution Modeling for Regional Malaria Endemicity and Its Implications under the Low-to-Moderate Transmission Setting in Western Cambodia

    PubMed Central

    Okami, Suguru; Kohtake, Naohiko

    2016-01-01

    The disease burden of malaria has decreased as malaria elimination efforts progress. The mapping approach that uses spatial risk distribution modeling needs some adjustment and reinvestigation in accordance with situational changes. Here we applied a mathematical modeling approach for standardized morbidity ratio (SMR) calculated by annual parasite incidence using routinely aggregated surveillance reports, environmental data such as remote sensing data, and non-environmental anthropogenic data to create fine-scale spatial risk distribution maps of western Cambodia. Furthermore, we incorporated a combination of containment status indicators into the model to demonstrate spatial heterogeneities of the relationship between containment status and risks. The explanatory model was fitted to estimate the SMR of each area (adjusted Pearson correlation coefficient R2 = 0.774; Akaike information criterion AIC = 149.423). A Bayesian modeling framework was applied to estimate the uncertainty of the model and cross-scale predictions. Fine-scale maps were created by the spatial interpolation of estimated SMRs at each village. Compared with geocoded case data, corresponding predicted values showed conformity [Spearman’s rank correlation r = 0.662 in the inverse distance weighted interpolation and 0.645 in ordinary kriging (95% confidence intervals of 0.414–0.827 and 0.368–0.813, respectively), Welch’s t-test: not significant]. The proposed approach successfully explained regional malaria risks and fine-scale risk maps were created under low-to-moderate malaria transmission settings where reinvestigations of existing risk modeling approaches were needed. Moreover, different representations of simulated outcomes of containment status indicators for respective areas provided useful insights for tailored interventional planning, considering regional malaria endemicity. PMID:27415623
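
    Two of the ingredients named above, a standardized morbidity ratio computed from routinely aggregated counts and inverse-distance-weighted interpolation of village-level estimates, can be sketched in a few lines. The village coordinates, case counts and reference incidence below are made up, and the actual study additionally fits an explanatory Bayesian model before interpolation.

```python
import numpy as np

def standardized_morbidity_ratio(observed, population, reference_incidence):
    """SMR = observed cases / expected cases, with the expectation taken
    from a reference (e.g. region-wide) annual parasite incidence."""
    expected = population * reference_incidence
    return observed / expected

def idw_interpolate(xy_known, values, xy_target, power=2.0):
    """Inverse-distance-weighted interpolation of village-level estimates
    onto an arbitrary target location."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):                    # target coincides with a village
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical villages: coordinates (km), observed cases, population
xy = np.array([[0.0, 0.0], [5.0, 2.0], [8.0, 9.0]])
cases = np.array([12, 3, 25])
pop = np.array([1500, 900, 2100])
ref_api = 0.008                           # reference annual parasite incidence

smr = standardized_morbidity_ratio(cases, pop, ref_api)
print("village SMRs:", np.round(smr, 2))
print("interpolated SMR at (4, 4):",
      round(idw_interpolate(xy, smr, np.array([4.0, 4.0])), 2))
```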

  6. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Liu, Yangang

    2014-12-18

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
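
    As a rough illustration of the statistic involved, the snippet below estimates a one-dimensional spatial autocorrelation function from a synthetic smoothed-noise field standing in for subgrid cloud structure. It is not the authors' parameterization, only the basic quantity that such an approach builds on; the field and smoothing kernel are invented.

```python
import numpy as np

def autocorrelation_1d(field, max_lag):
    """Sample autocorrelation of a 2-D field along its second axis, a simple
    stand-in for the spatial-structure statistic discussed above."""
    x = field - field.mean()
    var = np.mean(x * x)
    acf = [1.0]
    for lag in range(1, max_lag + 1):
        acf.append(np.mean(x[:, :-lag] * x[:, lag:]) / var)
    return np.array(acf)

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))
# Smooth white noise with a short moving average to mimic cloud-like structure.
kernel = np.ones(5) / 5.0
cloud = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"),
                            1, noise)

print(np.round(autocorrelation_1d(cloud, max_lag=6), 2))
```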

  7. Development of a hybrid modeling approach for predicting intensively managed Douglas-fir growth at multiple scales.

    Treesearch

    A. Weiskittel; D. Maguire; R. Monserud

    2007-01-01

    Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...

  8. EPA RESEARCH HIGHLIGHTS -- MODELS-3/CMAQ OFFERS COMPREHENSIVE APPROACH TO AIR QUALITY MODELING

    EPA Science Inventory

    Regional and global coordinated efforts are needed to address air quality problems that are growing in complexity and scope. Models-3 CMAQ contains a community multi-scale air quality modeling system for simulating urban to regional scale pollution problems relating to troposphe...

  9. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Molenaar extended Mokken's original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken's original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are…

  10. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results

    Treesearch

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey

    2016-01-01

    Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  11. Atomistic to continuum modeling of solidification microstructures

    DOE PAGES

    Karma, Alain; Tourret, Damien

    2015-09-26

    We summarize recent advances in modeling of solidification microstructures using computational methods that bridge atomistic to continuum scales. We first discuss progress in atomistic modeling of equilibrium and non-equilibrium solid–liquid interface properties influencing microstructure formation, as well as interface coalescence phenomena influencing the late stages of solidification. The latter is relevant in the context of hot tearing reviewed in the article by M. Rappaz in this issue. We then discuss progress to model microstructures on a continuum scale using phase-field methods. We focus on selected examples in which modeling of 3D cellular and dendritic microstructures has been directly linked to experimental observations. Finally, we discuss a recently introduced coarse-grained dendritic needle network approach to simulate the formation of well-developed dendritic microstructures. The approach reliably bridges the well-separated scales traditionally simulated by phase-field and grain structure models, hence opening new avenues for quantitative modeling of complex intra- and inter-grain dynamical interactions on a grain scale.

  12. DoD Acquisitions Reform: Embracing and Implementing Agile

    DTIC Science & Technology

    2015-12-01

    required in the traditional waterfall approach.   Specific models for enterprise level efforts include Scaled Agile Framework, Disciplined Agile...and Acquisition Concerns. Pittsburgh: Carnegie Mellon.  Leffingwell, D. (2007). Why The Waterfall Model Doesn’t Work. In D. Leffingwell, Scaling...serious issue might be the acquisitions process itself. For the last twenty plus years, the Air Force has used the waterfall approach for software

  13. Applying Rasch Model and Generalizability Theory to Study Modified-Angoff Cut Scores

    ERIC Educational Resources Information Center

    Arce, Alvaro J.; Wang, Ze

    2012-01-01

    The traditional approach to scale modified-Angoff cut scores transfers the raw cuts to an existing raw-to-scale score conversion table. Under the traditional approach, cut scores and conversion table raw scores are not only seen as interchangeable but also as originating from a common scaling process. In this article, we propose an alternative…

  14. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments and even more for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
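
    The probabilistic element comes from keeping the individual predictions of a bagged ensemble rather than only their mean. A minimal sketch with scikit-learn's BaggingRegressor follows; it is not BT-FLEMO, and the predictors, the loss relation and all numbers are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(42)

# Synthetic training data standing in for flood loss records:
# predictors = [water depth (m), building area (m2), precaution score 0-3]
X = np.column_stack([rng.uniform(0, 3, 500),
                     rng.uniform(50, 400, 500),
                     rng.integers(0, 4, 500)])
# Invented loss relation plus noise, purely for illustration.
y = 200 * X[:, 0] * np.sqrt(X[:, 1]) * (1 - 0.1 * X[:, 2]) + rng.normal(0, 500, 500)

# Bagged regression trees (the default base estimator is a decision tree).
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)

# For one land-use unit, keep the per-tree predictions to obtain a loss
# distribution rather than a single deterministic estimate.
unit = np.array([[1.8, 220.0, 1.0]])
per_tree = np.array([tree.predict(unit)[0] for tree in model.estimators_])
print(f"median loss: {np.median(per_tree):.0f}, "
      f"5-95% range: {np.percentile(per_tree, 5):.0f} to {np.percentile(per_tree, 95):.0f}")
```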

  15. Estimating the spatial scales of landscape effects on abundance

    Treesearch

    Richard Chandler; Jeffrey Hepinstall-Cymerman

    2016-01-01

    Spatial variation in abundance is influenced by local- and landscape-level environmental variables, but modeling landscape effects is challenging because the spatial scales of the relationships are unknown. Current approaches involve buffering survey locations with polygons of various sizes and using model selection to identify the best scale. The buffering...

  16. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends.

    PubMed

    Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J

    2012-11-07

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling: a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.

  17. Intercellular Genomics of Subsurface Microbial Colonies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter; Tuncay, Kagan; Gannon, Dennis

    2007-02-14

    This report summarizes progress in the second year of this project. The objective is to develop methods and software to predict the spatial configuration, properties and temporal evolution of microbial colonies in the subsurface. To accomplish this, we integrate models of intracellular processes, cell-host medium exchange and reaction-transport dynamics on the colony scale. At the conclusion of the project, we aim to have the foundations of a predictive mathematical model and software that captures the three scales of these systems – the intracellular, pore, and colony wide spatial scales. In the second year of the project, we refined our transcriptional regulatory network discovery (TRND) approach that utilizes gene expression data along with phylogenic similarity and gene ontology analyses and applied it successfully to E. coli, human B cells, and Geobacter sulfurreducens. We have developed a new Web interface, GeoGen, which is tailored to the reconstruction of microbial TRNs and solely focuses on Geobacter as one of DOE’s high priority microbes. Our developments are designed such that the frameworks for the TRND and GeoGen can readily be used for other microbes of interest to the DOE. In the context of modeling a single bacterium, we are actively pursuing both steady-state and kinetic approaches. The steady-state approach is based on a flux balance that uses maximizing biomass growth rate as its objective, subjected to various biochemical constraints, for the optimal values of reaction rates and uptake/release of metabolites. For the kinetic approach, we use Karyote, a rigorous cell model developed by us for an earlier DOE grant and the DARPA BioSPICE Project. We are also investigating the interplay between bacterial colonies and environment at both pore and macroscopic scales. The pore scale models use detailed representations for realistic porous media accounting for the distribution of grain size whereas the macroscopic models employ the Darcy-type flow equations and up-scaled advective-diffusive transport equations for chemical species. We are rigorously testing the relationship between these two scales by evaluating macroscopic parameters using the volume averaging methodology applied to pore scale model results.

  18. Improvement of distributed snowmelt energy balance modeling with MODIS-based NDSI-derived fractional snow-covered area data

    Treesearch

    Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn

    2011-01-01

    Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snowcovered area (SCA) against normalized decreases in snow water equivalent (SWE), are a common approach to scale-up snowmelt models....

  19. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  20. A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations

    DOE PAGES

    Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; ...

    2015-06-01

    Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation. A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad-hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater – river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.
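
    Stripped of the workflow machinery (Swift, file templates, external at-scale codes), the coupling cycle described above reduces to a downscale/upscale loop in the spirit of the Heterogeneous Multiscale Method. The sketch below uses trivial stand-in models rather than real simulators; the Monod-like closure and all values are invented.

```python
# Minimal, self-contained sketch of a macro/micro coupling cycle.
# The two "solvers" here are toy functions; the real framework drives
# external at-scale codes through templated input/output files.

def micro_model(macro_state):
    """Stand-in pore-scale model: returns an effective reaction rate that
    depends nonlinearly on the local macroscale concentration."""
    c = macro_state["concentration"]
    return 0.05 * c / (1.0 + c)          # invented Monod-like closure

def macro_step(state, effective_rate, dt=1.0):
    """Stand-in continuum-scale update using the upscaled rate."""
    state["concentration"] -= effective_rate * dt
    return state

state = {"concentration": 1.0}
for step in range(5):
    rate = micro_model(state)        # downscale: pass macro state to micro model
    state = macro_step(state, rate)  # upscale: feed effective parameter back
    print(f"step {step}: c = {state['concentration']:.4f}")
```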

  1. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO2 capture

    DOE PAGES

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...

    2018-03-25

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.
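
    The hierarchical calibration idea, fixing one closure parameter against hydrodynamic data before calibrating the next against capture data, can be illustrated with a toy example. The pressure-drop and efficiency relations and the "measurements" below are invented and bear no relation to the actual closures used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stand-in "device-scale model": pressure drop depends on a momentum closure
# parameter alpha; capture efficiency depends on alpha and a mass-transfer
# parameter beta. Both relations are invented for illustration.
def pressure_drop(alpha, gas_velocity):
    return alpha * gas_velocity ** 2

def capture_efficiency(alpha, beta, gas_velocity):
    return 1.0 - np.exp(-beta / (1.0 + 0.1 * alpha * gas_velocity))

meas_dp, meas_eff, u = 180.0, 0.85, 1.5     # single-experiment "measurements"

# Step 1: calibrate the momentum parameter against the hydrodynamic datum.
alpha = minimize_scalar(lambda a: (pressure_drop(a, u) - meas_dp) ** 2,
                        bounds=(1.0, 500.0), method="bounded").x

# Step 2: with alpha fixed, calibrate the mass-transfer parameter.
beta = minimize_scalar(lambda b: (capture_efficiency(alpha, b, u) - meas_eff) ** 2,
                       bounds=(0.1, 50.0), method="bounded").x

print(f"alpha = {alpha:.1f}, beta = {beta:.2f}")
```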

  2. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO2 capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.

  3. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    ERIC Educational Resources Information Center

    Supasorn, Saksri

    2015-01-01

    This study aimed to develop the small-scale experiments involving electrochemistry and the galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  4. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment-scale water management.

    PubMed

    Refsgaard, A; Jacobsen, T; Jacobsen, B; Ørum, J-E

    2007-01-01

    The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production and leakage of nitrate constitute a major pollution problem with respect groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient to analyse the effects of specific, localized basin water management plans. The paper also includes a land rent modelling approach which can be used to choose the most cost-effective measures and the location of these measures. As a forerunner to the use of basin-scale models in WFD basin water management plans this project demonstrates the potential and limitations of comprehensive, integrated modelling tools.

  5. Generalizability in Item Response Modeling

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Wilson, Mark

    2007-01-01

    An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random…

  6. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.

  7. A scale-based approach to interdisciplinary research and expertise in sports.

    PubMed

    Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles

    2017-02-01

    More than 20 years after the introduction of ecological and dynamical approaches in sports research, their promise for interdisciplinary research has yet to be fulfilled. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.

  8. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    PubMed

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  9. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

    Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development and finally discuss about the applications of the approach in real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found in www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.

  10. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: multiscale loose-coupling method, multiscale finite volume (MsFV) method and multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking studies among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.

  11. Determination of Scaled Wind Turbine Rotor Characteristics from Three Dimensional RANS Calculations

    NASA Astrophysics Data System (ADS)

    Burmester, S.; Gueydon, S.; Make, M.

    2016-09-01

    Previous studies have shown the importance of 3D effects when calculating the performance characteristics of a scaled down turbine rotor [1-4]. In this paper the results of 3D RANS (Reynolds-Averaged Navier-Stokes) computations by Make and Vaz [1] are taken to calculate 2D lift and drag coefficients. These coefficients are assigned to FAST (Blade Element Momentum Theory (BEMT) tool from NREL) as input parameters. Then, the rotor characteristics (power and thrust coefficients) are calculated using BEMT. This coupling of RANS and BEMT was previously applied by other parties and is termed here the RANS-BEMT coupled approach. Here the approach is compared to measurements carried out in a wave basin at MARIN applying Froude scaled wind, and the direct 3D RANS computation. The data of both a model and full scale wind turbine are used for the validation and verification. The flow around a turbine blade at full scale has a more 2D character than the flow properties around a turbine blade at model scale (Make and Vaz [1]). Since BEMT assumes 2D flow behaviour, the results of the RANS-BEMT coupled approach agree better with the results of the CFD (Computational Fluid Dynamics) simulation at full- than at model-scale.
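
    At the core of the RANS-BEMT coupled approach is a standard blade-element momentum iteration fed with sectional lift and drag coefficients. The sketch below runs that iteration for a single annulus with constant coefficients, no tip-loss or high-induction corrections, and invented geometry; it is not the FAST implementation and not the MARIN turbine.

```python
import numpy as np

def bem_element(V, omega, r, chord, n_blades, cl, cd, n_iter=100):
    """Classic blade-element momentum iteration for one annulus, using lift
    and drag coefficients (here constants; in the coupled approach they come
    from the 3-D RANS solution). No tip-loss or high-induction corrections."""
    sigma = n_blades * chord / (2.0 * np.pi * r)   # local solidity
    a, a_prime = 0.0, 0.0
    for _ in range(n_iter):
        phi = np.arctan2((1.0 - a) * V, (1.0 + a_prime) * omega * r)
        cn = cl * np.cos(phi) + cd * np.sin(phi)
        ct = cl * np.sin(phi) - cd * np.cos(phi)
        a = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
        a_prime = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
    return a, a_prime, phi

# Illustrative numbers only (not the turbine studied in the paper).
a, ap, phi = bem_element(V=8.0, omega=2.0, r=20.0, chord=1.5, n_blades=3,
                         cl=1.0, cd=0.01)
print(f"axial induction a = {a:.3f}, tangential induction a' = {ap:.4f}, "
      f"inflow angle = {np.degrees(phi):.1f} deg")
```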

  12. A Bayesian hierarchical latent trait model for estimating rater bias and reliability in large-scale performance assessment

    PubMed Central

    2018-01-01

    We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129

  13. Two-Scale Simulation of Drop-Induced Failure of Polysilicon MEMS Sensors

    PubMed Central

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Martini, Roberto; Simoni, Barbara

    2011-01-01

    In this paper, an industrially-oriented two-scale approach is provided to model the drop-induced brittle failure of polysilicon MEMS sensors. The two length-scales here investigated are the package (macroscopic) and the sensor (mesoscopic) ones. Issues related to the polysilicon morphology at the micro-scale are disregarded; an upscaled homogenized constitutive law, able to describe the brittle cracking of silicon, is instead adopted at the meso-scale. The two-scale approach is validated against full three-scale Monte-Carlo simulations, which allow for stochastic effects linked to the microstructural properties of polysilicon. Focusing on inertial MEMS sensors exposed to drops, it is shown that the offered approach matches well the experimentally observed failure mechanisms. PMID:22163885

  14. A conceptual cross-scale approach for linking empirical discharge measurements and regional groundwater models with application to legacy nitrogen transport and coastal nitrogen management

    NASA Astrophysics Data System (ADS)

    Barclay, J. R.; Helton, A. M.; Starn, J. J.; Briggs, M. A.

    2016-12-01

    Despite years of management, seasonal hypoxia from excess nitrogen (N) is a pervasive problem in many coastal waters. Current approaches to managing coastal eutrophication in the United States (USA) focus on surface runoff and river transport of nutrients, and often assume that groundwater N is at steady state. This is not necessarily the case, as terrestrial N inputs are affected by changing land use and nutrient management practices. Furthermore, approximately 70% of surface water in the USA is derived from groundwater and there is widespread N contamination in many of our nation's aquifers. Nitrogen export via groundwater discharge to streams during baseflow may be the reason many impaired coastal systems show little improvement. There is a critical need to develop approaches that consider the effects of groundwater transport on N loading to surface waters. Aquifer transport times, which can be decades or even centuries longer than surface water transport times, introduce lags between changes in terrestrial management and reductions in coastal loads. Ignoring these lags can lead to overly ambitious and unrealistic load reduction goals, or incorrect conclusions regarding the effectiveness of management strategies. Additionally, regional groundwater models typically have a coarse resolution that makes it difficult to incorporate fine-scale processes that drive N transformations, such as groundwater-surface water exchange across steep redox gradients at stream bed interfaces. Despite this challenge, representing these important fine-scale processes well is essential to modeling groundwater transport of N across regional scales and to making informed management decisions. We present 1) a conceptual approach to linking regional models and fine-scale empirical measurements, and 2) preliminary groundwater flow and transport model results for the Housatonic and Farmington Rivers in Connecticut, USA. Our cross-scale approach utilizes thermal infrared imaging and vertical temperature profiling to calculate groundwater discharge and to iteratively refine and downscale the groundwater flow model. Model results may improve management of N loading from groundwater to sensitive coastal systems, such as the Long Island Sound.

  15. Coastal Foredune Evolution, Part 2: Modeling Approaches for Meso-Scale Morphologic Evolution

    DTIC Science & Technology

    2017-03-01

    ERDC/CHL CHETN-II-57 March 2017 Approved for public release; distribution is unlimited. Coastal Foredune Evolution, Part 2: Modeling Approaches...for Meso-Scale Morphologic Evolution by Margaret L. Palmsten1, Katherine L. Brodie2, and Nicholas J. Spore2 PURPOSE: This Coastal and Hydraulics...Engineering Technical Note (CHETN) is the second of two CHETNs focused on improving technologies to forecast coastal foredune evolution. Part 1

  16. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228

  17. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
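
    The two approaches compared above can be written down compactly: an inverse-variance weighted mean with Birge-ratio inflation for the location-scale view, and a DerSimonian-Laird estimate as one common frequentist counterpart of the random-effects treatment (the paper itself uses a Bayesian formulation). The measurement values below are invented.

```python
import numpy as np

def weighted_mean_birge(x, u):
    """Location-scale approach: inverse-variance weighted mean, with the
    standard uncertainty inflated by the Birge ratio when chi^2/(n-1) > 1."""
    w = 1.0 / u ** 2
    xbar = np.sum(w * x) / np.sum(w)
    u_int = np.sqrt(1.0 / np.sum(w))                  # internal uncertainty
    birge = np.sqrt(np.sum(w * (x - xbar) ** 2) / (len(x) - 1))
    return xbar, u_int * max(1.0, birge), birge

def dersimonian_laird(x, u):
    """Random-effects approach: DerSimonian-Laird estimate of the
    between-measurement variance tau^2, then a re-weighted mean."""
    w = 1.0 / u ** 2
    xbar_fixed = np.sum(w * x) / np.sum(w)
    q = np.sum(w * (x - xbar_fixed) ** 2)
    tau2 = max(0.0, (q - (len(x) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (u ** 2 + tau2)
    return np.sum(w_star * x) / np.sum(w_star), np.sqrt(1.0 / np.sum(w_star)), tau2

# Illustrative (made-up) measurement results of one quantity
x = np.array([6.6740, 6.6743, 6.6752, 6.6737])
u = np.array([0.0003, 0.0004, 0.0002, 0.0005])

print(weighted_mean_birge(x, u))
print(dersimonian_laird(x, u))
```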

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perdikaris, Paris, E-mail: parisp@mit.edu; Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process takingmore » place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.« less

  19. Reconciling Basin-Scale Top-Down and Bottom-Up Methane Emission Measurements for Onshore Oil and Gas Development: Cooperative Research and Development Final Report, CRADA Number CRD-14-572

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heath, Garvin A.

    The overall objective of the Research Partnership to Secure Energy for America (RPSEA)-funded research project is to develop independent estimates of methane emissions using top-down and bottom-up measurement approaches and then to compare the estimates, including consideration of uncertainty. Such approaches will be applied at two scales: basin and facility. At facility scale, multiple methods will be used to measure methane emissions of the whole facility (controlled dual tracer and single tracer releases, aircraft-based mass balance and Gaussian back-trajectory), which are considered top-down approaches. The bottom-up approach will sum emissions from identified point sources measured using appropriate source-level measurement techniquesmore » (e.g., high-flow meters). At basin scale, the top-down estimate will come from boundary layer airborne measurements upwind and downwind of the basin, using a regional mass balance model plus approaches to separate atmospheric methane emissions attributed to the oil and gas sector. The bottom-up estimate will result from statistical modeling (also known as scaling up) of measurements made at selected facilities, with gaps filled through measurements and other estimates based on other studies. The relative comparison of the bottom-up and top-down estimates made at both scales will help improve understanding of the accuracy of the tested measurement and modeling approaches. The subject of this CRADA is NREL's contribution to the overall project. This project resulted from winning a competitive solicitation no. RPSEA RFP2012UN001, proposal no. 12122-95, which is the basis for the overall project. This Joint Work Statement (JWS) details the contributions of NREL and Colorado School of Mines (CSM) in performance of the CRADA effort.« less

  20. Progress in the Phase 0 Model Development of a STAR Concept for Dynamics and Control Testing

    NASA Technical Reports Server (NTRS)

    Woods-Vedeler, Jessica A.; Armand, Sasan C.

    2003-01-01

    The paper describes progress in the development of a lightweight, deployable passive Synthetic Thinned Aperture Radiometer (STAR). The spacecraft concept presented will enable the realization of 10 km resolution global soil moisture and ocean salinity measurements at 1.41 GHz. The focus of this work was on definition of an approximately 1/3-scaled, 5-meter Phase 0 test article for concept demonstration and dynamics and control testing. Design requirements, parameters and a multi-parameter, hybrid scaling approach for the dynamically scaled test model were established. The El Scaling Approach that was established allows designers freedom to define the cross section of scaled, lightweight structural components that is most convenient for manufacturing when the mass of the component is small compared to the overall system mass. Static and dynamic response analysis was conducted on analytical models to evaluate system level performance and to optimize panel geometry for optimal tension load distribution.

  1. Solar Activity Across the Scales: From Small-Scale Quiet-Sun Dynamics to Magnetic Activity Cycles

    NASA Technical Reports Server (NTRS)

    Kitiashvili, Irina N.; Collins, Nancy N.; Kosovichev, Alexander G.; Mansour, Nagi N.; Wray, Alan A.

    2017-01-01

    Observations as well as numerical and theoretical models show that solar dynamics is characterized by complicated interactions and energy exchanges among different temporal and spatial scales. It reveals magnetic self-organization processes from the smallest scale magnetized vortex tubes to the global activity variation known as the solar cycle. To understand these multiscale processes and their relationships, we use a two-fold approach: 1) realistic 3D radiative MHD simulations of local dynamics together with high resolution observations by IRIS, Hinode, and SDO; and 2) modeling of solar activity cycles by using simplified MHD dynamo models and mathematical data assimilation techniques. We present recent results of this approach, including the interpretation of observational results from NASA heliophysics missions and predictive capabilities. In particular, we discuss the links between small-scale dynamo processes in the convection zone and atmospheric dynamics, as well as an early prediction of Solar Cycle 25.

  2. Solar activity across the scales: from small-scale quiet-Sun dynamics to magnetic activity cycles

    NASA Astrophysics Data System (ADS)

    Kitiashvili, I.; Collins, N.; Kosovichev, A. G.; Mansour, N. N.; Wray, A. A.

    2017-12-01

    Observations as well as numerical and theoretical models show that solar dynamics is characterized by complicated interactions and energy exchanges among different temporal and spatial scales. It reveals magnetic self-organization processes from the smallest scale magnetized vortex tubes to the global activity variation known as the solar cycle. To understand these multiscale processes and their relationships, we use a two-fold approach: 1) realistic 3D radiative MHD simulations of local dynamics together with high-resolution observations by IRIS, Hinode, and SDO; and 2) modeling of solar activity cycles by using simplified MHD dynamo models and mathematical data assimilation techniques. We present recent results of this approach, including the interpretation of observational results from NASA heliophysics missions and predictive capabilities. In particular, we discuss the links between small-scale dynamo processes in the convection zone and atmospheric dynamics, as well as an early prediction of Solar Cycle 25.

  3. Combining remote sensing and watershed modeling for regional-scale carbon cycling studies in disturbance-prone systems

    NASA Astrophysics Data System (ADS)

    Hanan, E. J.; Tague, C.; Choate, J.; Liu, M.; Adam, J. C.

    2016-12-01

    Disturbance is a major force regulating C dynamics in terrestrial ecosystems. Evaluating future C balance in disturbance-prone systems requires understanding the underlying mechanisms that drive ecosystem processes over multiple scales of space and time. Simulation modeling is a powerful tool for bridging these scales, however, model projections are limited by large uncertainties in the initial state of vegetation C and N stores. Watershed models typically use one of two methods to initialize these stores. Spin up involves running a model until vegetation reaches steady state based on climate. This "potential" state however assumes the vegetation across the entire watershed has reached maturity and has a homogeneous age distribution. Yet to reliably represent C and N dynamics in disturbance-prone systems, models should be initialized to reflect their non-equilibrium conditions. Alternatively, remote sensing of a single vegetation parameter (typically leaf area index; LAI) can be combined with allometric relationships to allocate C and N to model stores and can reflect non-steady-state conditions. However, allometric relationships are species and region specific and do not account for environmental variation, thus resulting in C and N stores that may be unstable. To address this problem, we developed a new approach for initializing C and N pools using the watershed-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spinup with the spatial fidelity of remote sensing. Unlike traditional spin up, this approach supports non-homogeneous stand ages. We tested our approach in a pine-dominated watershed in central Idaho, which partially burned in July of 2000. We used LANDSAT and MODIS data to calculate LAI across the watershed following the 2000 fire. We then ran three sets of simulations using spin up, direct measurements, and the combined approach to initialize vegetation C and N stores, and compared our results to remotely sensed LAI following the simulation period. Model estimates of C, N, and water fluxes varied depending on which approach was used. The combined approach provided the best LAI estimates after 10 years of simulation. This method shows promise for improving projections of C, N, and water fluxes in disturbance-prone watersheds.

  4. From point-wise stress data to a continuous description of the 3D crustal in situ stress state

    NASA Astrophysics Data System (ADS)

    Heidbach, O.; Ziegler, M.; Reiter, K.; Hergert, T.

    2017-12-01

    The in situ stress is a key parameter for the safe and sustainable management of geo-reservoirs or storage of waste and energy in deep geological repositories. It is also an essential initial condition for thermo-hydro-mechanical (THM) models that investigate man-made induced processes e.g. seismicity due to fluid injection/extraction, reservoir depletion or storage of heat producing high-level radioactive waste. Without a reasonable assumption on the initial stress condition it is not possible to assess if a man-made process is pushing the system into a critical state or not. However, modelling the initial 3D stress state on reservoir scale is challenging since data are hardly available before drilling in the area of interest. This is in particular the case for the stress magnitude data which are a prerequisite for a reliable model calibration. Here, we present a multi-stage 3D geomechanical-numerical model approach to estimate for a reservoir-scale volume the 3D in situ stress state. First, we set up a large-scale model which is calibrated by stress data and use the modelled stress field subsequently to calibrate a small-scale model located within the large-scale model. The local model contains a significantly higher resolution representation of the subsurface geometry around boreholes of a projected geothermal power plant. This approach incorporates two models and is an alternative to the required trade-off between resolution, computational cost and calibration data which is inevitable for a single model; an extension to a three-stage approach would be straightforward. We exemplify the two-stage approach for the area around Munich in the German Molasse Basin. The results of the reservoir-scale model are presented in terms of values for slip tendency as a measure for the criticality of fault reactivation. The model results show that variations due to uncertainties in the input data are mainly introduced by the uncertain material properties and missing estimates for the magnitude of the maximum horizontal stress SHmax, needed for a more reliable model calibration. This leads to the conclusion that at this stage the model's reliability depends only on the amount and quality of input data records such as available stress information rather than on the modelling technique itself.
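
    Slip tendency, the ratio of resolved shear stress to normal stress on a fault plane, is the criticality measure reported from the reservoir-scale model. A minimal sketch of its computation from a stress tensor and a fault normal follows; the principal stresses, fault orientation and friction threshold are invented and are not the values of the Munich-area model.

```python
import numpy as np

def slip_tendency(stress_tensor, fault_normal):
    """Slip tendency Ts = tau / sigma_n on a plane with unit normal
    `fault_normal`, given an effective stress tensor (compression positive)."""
    n = fault_normal / np.linalg.norm(fault_normal)
    traction = stress_tensor @ n
    sigma_n = float(n @ traction)                        # normal stress
    tau = float(np.linalg.norm(traction - sigma_n * n))  # resolved shear stress
    return tau / sigma_n

# Illustrative effective principal stresses (MPa): SHmax = 70 along x,
# Shmin = 20 along y, Sv = 45 along z.
stress = np.diag([70.0, 20.0, 45.0])
# Vertical fault whose normal lies about 60 degrees from the SHmax direction.
normal = np.array([0.5, 0.866, 0.0])

ts = slip_tendency(stress, normal)
print(f"slip tendency = {ts:.2f} "
      f"({'critically stressed' if ts > 0.6 else 'stable'} "
      f"for an assumed friction coefficient of 0.6)")
```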

  5. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration.

  6. Characterizing the performance of ecosystem models across time scales: A spectral analysis of the North American Carbon Program site-level synthesis

    Treesearch

    Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng

    2011-01-01

    Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...

  7. Importance of a Global Approach to Using Regional Models in the Assessment of Source-Receptor Relationships of Mercury

    EPA Science Inventory

    Regional atmospheric models simulate their pertinent processes over a limited portion of the global atmosphere. This portion of the atmosphere can be a large fraction, as in the case of continental-scale modeling, or small fraction, as in the case of urban-scale modeling. Regio...

  8. Development of Gridded Fields of Urban Canopy Parameters for Advanced Urban Meteorological and Air Quality Models

    EPA Science Inventory

    Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...

  9. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    PubMed

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information into models of microbial ecosystems. Such models will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
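
    In the single-objective case, a constraint-based model of this type reduces to a linear program. The minimal flux balance analysis sketch below runs on an invented toy network; the stoichiometry, bounds and objective are illustrative assumptions, not part of the MO-FBA/MO-FVA implementations described above.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix S (metabolites x reactions); steady state requires S v = 0
        S = np.array([[1, -1,  0, -1],
                      [0,  1, -1,  0]])
        bounds = [(0, 10)] * 4            # flux bounds (illustrative)
        c = np.array([0, 0, -1, 0])       # maximize flux v3 (e.g. biomass) => minimize -v3

        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
        print("optimal fluxes:", res.x)

    The multi-objective extensions discussed in the abstract replace this single objective with several community-level objectives and trace out the trade-off surface between them.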

  10. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray-level rules. Experimental results show that the approach is robust and fast, and it has broad application prospects in areas such as human-computer interaction and video telephony.
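
    As a rough illustration of the first level (skin-like region segmentation by hue and saturation), a minimal Python/OpenCV sketch is given below; the threshold values are hypothetical placeholders rather than those used in the paper.

        import cv2
        import numpy as np

        def skin_mask(bgr_image):
            """Return a binary mask of skin-like pixels using simple HSV thresholds."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            # Illustrative hue/saturation/value bounds; real systems tune these on training data
            lower = np.array([0, 40, 60], dtype=np.uint8)
            upper = np.array([25, 180, 255], dtype=np.uint8)
            mask = cv2.inRange(hsv, lower, upper)
            # Remove small speckles before candidate-region extraction
            kernel = np.ones((5, 5), np.uint8)
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)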

  11. Measuring the Cognitions, Emotions, and Motivation Associated With Avoidance Behaviors in the Context of Pain: Preliminary Development of the Negative Responsivity to Pain Scales.

    PubMed

    Jensen, Mark P; Ward, L Charles; Thorn, Beverly E; Ehde, Dawn M; Day, Melissa A

    2017-04-01

    We recently proposed a Behavioral Inhibition System-Behavioral Activation System (BIS-BAS) model to help explain the effects of pain treatments. In this model, treatments are hypothesized to operate primarily through their effects on the domains within 2 distinct neurophysiological systems that underlie approach (BAS) and avoidance (BIS) behaviors. Measures of the model's domains are needed to evaluate and modify the model. An item pool of negative responses to pain (NRP; hypothesized to be BIS related) and positive responses (PR; hypothesized to be BAS related) was administered to 395 undergraduates, 325 of whom endorsed recurrent pain. The items were administered to 176 of these individuals again 1 week later. Analyses were conducted to develop and validate scales assessing NRP and PR domains. Three NRP scales (Despondent Response to Pain, Fear of Pain, and Avoidant Response to Pain) and 2 PR scales (Happy/Hopeful Responses and Approach Response) emerged. Consistent with the model, the scales formed 2 relatively independent overarching domains. The scales also demonstrated excellent internal consistency, and associations with criterion variables supported their validity. However, whereas the NRP scales evidenced adequate test-retest stability, the 2 PR scales were not adequately stable. The study yielded 3 brief scales assessing NRP, which may be used to further evaluate the BIS-BAS model and to advance research elucidating the mechanisms of psychosocial pain treatments. The findings also provide general support for the BIS-BAS model, while also suggesting that some minor modifications in the model are warranted.

  12. SWARM : a scientific workflow for supporting Bayesian approaches to improve metabolic models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, X.; Stevens, R.; Mathematics and Computer Science

    2008-01-01

    With the exponential growth of complete genome sequences, the analysis of these sequences is becoming a powerful approach to building genome-scale metabolic models. These models can be used to study individual molecular components and their relationships, and eventually to study cells as systems. However, constructing genome-scale metabolic models manually is time-consuming and labor-intensive, which is why far fewer genome-scale metabolic models are available compared to the hundreds of genome sequences. To tackle this problem, we designed SWARM, a scientific workflow that can be utilized to improve genome-scale metabolic models in a high-throughput fashion. SWARM deals with a range of issues including the integration of data across distributed resources, data format conversions, data updates, and data provenance. Put together, SWARM streamlines the whole modeling process: extracting data from various resources, deriving training datasets to train a set of predictors, applying Bayesian techniques to assemble the predictors, inferring on the ensemble of predictors to insert missing data, and eventually improving draft metabolic networks automatically. By enhancing metabolic model construction, SWARM enables scientists to generate many genome-scale metabolic models within a short period of time and with less effort.

  13. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assessing image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that predicts quality scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method combines the predictions from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to perform better than the original single-scale method. Because features are drawn from five different domains at multiple image scales, and the outputs (scores) from the selected score prediction models are used as features for multi-scale or cross-scale fusion (i.e., the ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of the proposed model.
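
    The scorer-ensemble idea can be illustrated with a small Python sketch in which separate regressors ("scorers") are trained on different feature groups and their predictions are averaged; the data and feature groups below are synthetic stand-ins, not the features or learners of the proposed system.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic feature groups standing in for brightness/contrast/color/distortion/texture features
        groups = {name: rng.normal(size=(n, 8)) for name in
                  ["brightness", "contrast", "color", "distortion", "texture"]}
        mos = rng.uniform(0, 100, size=n)   # synthetic mean opinion scores

        scorers = {name: RandomForestRegressor(n_estimators=50, random_state=0).fit(X, mos)
                   for name, X in groups.items()}

        # Ensemble prediction: average the per-domain scorer outputs
        preds = np.mean([scorers[name].predict(groups[name]) for name in groups], axis=0)
        print(preds[:5])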

  14. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    NASA Astrophysics Data System (ADS)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (of which the resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves and the wave-induced flows such as the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.

  15. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
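
    One way to picture the rank-space idea is sketched below: rather than applying a time-mean multiplicative anomaly at a fine-scale cell, an anomaly is selected by matching the rank of the current coarse value within the historical record. The arrays and the exact matching rule are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def rank_matched_anomaly(coarse_hist, fine_hist, coarse_now):
            """Pick a multiplicative anomaly whose rank matches the rank of the current coarse value."""
            anomalies = fine_hist / np.maximum(coarse_hist, 1e-6)   # fine / disaggregated ratio
            order = np.argsort(coarse_hist)
            # Rank of the current coarse value within the historical coarse record
            rank = np.searchsorted(np.sort(coarse_hist), coarse_now)
            rank = min(rank, len(order) - 1)
            return anomalies[order[rank]]

        coarse_hist = np.array([1.0, 3.0, 5.0, 8.0, 12.0])   # interpolated coarse precipitation (mm)
        fine_hist   = np.array([0.5, 3.5, 6.0, 7.0, 15.0])   # observed fine-scale precipitation (mm)
        print(rank_matched_anomaly(coarse_hist, fine_hist, coarse_now=10.0))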

  16. Representative Sinusoids for Hepatic Four-Scale Pharmacokinetics Simulations

    PubMed Central

    Schwen, Lars Ole; Schenk, Arne; Kreutz, Clemens; Timmer, Jens; Bartolomé Rodríguez, María Matilde; Kuepfer, Lars; Preusser, Tobias

    2015-01-01

    The mammalian liver plays a key role in the metabolism and detoxification of xenobiotics in the body. The corresponding biochemical processes are typically subject to spatial variations at different length scales. Zonal enzyme expression along sinusoids leads to zonated metabolization already in the healthy state. Pathological states of the liver may involve liver cells affected in a zonated manner or heterogeneously across the whole organ. This spatial heterogeneity, however, cannot be described by most computational models, which usually consider the liver as a homogeneous, well-stirred organ. The goal of this article is to present a methodology to extend whole-body pharmacokinetics models by a detailed liver model, combining different modeling approaches from the literature. This approach results in an integrated four-scale model, from single cells via sinusoids and the organ to the whole organism, capable of mechanistically representing metabolization inhomogeneity in livers at different spatial scales. Moreover, the model shows circulatory mixing effects due to a delayed recirculation through the surrounding organism. To show that this approach is generally applicable for different physiological processes, we show three applications as proofs of concept, covering a range of species, compounds, and diseased states: clearance of midazolam in steatotic human livers, clearance of caffeine in mouse livers regenerating from necrosis, and a parameter study on the impact of different cell entities on insulin uptake in mouse livers. The examples illustrate how variations only discernible at the local scale influence substance distribution in the plasma at the whole-body level. In particular, our results show that simultaneously considering variations at all relevant spatial scales may be necessary to understand their impact on observations at the organism scale.

  17. Bridging the scales in a eulerian air quality model to assess megacity export of pollution

    NASA Astrophysics Data System (ADS)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

    In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large and small scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontal stretched grid, in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for addressing the hot topic of megacities within their continental environment.

  18. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    NASA Astrophysics Data System (ADS)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distributions. The highest recharge rates are located mainly in the south-central and south-west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimates. Both models indicated large nitrate inventories in the south-central and south-west parts of the aquifer's outcrops, and the shortest travel times in the vadose zone are in the south-central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting locations with higher potential recharge rates and nitrate contamination risk. Global modelling simulations appear ideal as a first step in identifying locations that require investigation at the plot, field and local scales.
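
    For reference, the governing equations of the first, multiple 1-D column approach are the standard vertical Richards and advection-dispersion equations (textbook forms; the study's exact parameterizations may differ):

        \frac{\partial \theta}{\partial t}
          = \frac{\partial}{\partial z}\left[ K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right] - S(z,t),
        \qquad
        \frac{\partial (\theta c)}{\partial t}
          = \frac{\partial}{\partial z}\left( \theta D \frac{\partial c}{\partial z}\right)
          - \frac{\partial (q\,c)}{\partial z}

    where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, S a sink term, c the solute (nitrate) concentration, D the dispersion coefficient and q the Darcy flux.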

  19. Assessing global vegetation activity using spatio-temporal Bayesian modelling

    NASA Astrophysics Data System (ADS)

    Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.

    2016-04-01

    This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes of vegetation activity such as phenology are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because the local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behavior for the different regions and may provide insights into the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrate the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support our hypothesis that the change of vegetation in space and time may be better understood when vegetation change is modelled as both a dynamic and a multivariate process. Therefore, future research will focus on a multivariate dynamical spatio-temporal modelling approach. This ongoing research is performed within the context of the project "Global impacts of hydrological and climatic extremes on vegetation" (project acronym: SAT-EX) which is part of the Belgian research programme for Earth Observation Stereo III.

  20. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    PubMed

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travel. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the two approaches. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.

  1. Assessing sufficiency of thermal riverscapes for resilient ...

    EPA Pesticide Factsheets

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  2. Modeling a full-scale primary sedimentation tank using artificial neural networks.

    PubMed

    Gamal El-Din, A; Smith, D W

    2002-05-01

    Modeling the performance of full-scale primary sedimentation tanks has been commonly done using regression-based models, which are empirical relationships derived strictly from observed daily average influent and effluent data. Another approach to modeling a sedimentation tank is to use a hydraulic efficiency model that utilizes tracer studies to characterize the performance of model sedimentation tanks based on eddy diffusion. However, the use of hydraulic efficiency models to predict the dynamic behavior of a full-scale sedimentation tank is very difficult as the development of such models has been done using controlled studies of model tanks. In this paper, another type of model, namely an artificial neural network modeling approach, is used to predict the dynamic response of a full-scale primary sedimentation tank. The neural model consists of two separate networks: one uses flow and influent total suspended solids data to predict the effluent total suspended solids from the tank, and the other predicts the effluent chemical oxygen demand using the flow and influent chemical oxygen demand as inputs. An extensive sampling program was conducted in order to collect a data set to be used in training and validating the networks. A systematic approach was used in the building process of the model which allowed the identification of a parsimonious neural model that is able to learn (and not memorize) from past data and generalize very well to unseen data that were used to validate the model. The results seem very promising. The potential of using the model as part of a real-time process control system is also discussed.
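
    The structure of such a network can be sketched with a generic multilayer perceptron; the data below are synthetic and the layer size is a placeholder, so this is an illustration of the model form rather than the calibrated networks of the study (the chemical oxygen demand network would be built analogously).

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        n = 500
        flow = rng.uniform(100, 400, n)          # influent flow (synthetic)
        tss_in = rng.uniform(80, 300, n)         # influent total suspended solids (synthetic)
        tss_out = 0.4 * tss_in + 0.05 * flow + rng.normal(0, 5, n)   # synthetic effluent target

        X = np.column_stack([flow, tss_in])
        tss_model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
        )
        tss_model.fit(X, tss_out)
        print(tss_model.predict([[250.0, 150.0]]))   # effluent TSS for one new observation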

  3. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    NASA Astrophysics Data System (ADS)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and simultaneously used in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated scale emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated scale emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined from the measured rate of change of CO2 mixing ratios and estimated CO2 advection and entrainment fluxes. Daily city-scale emission totals predicted by the model are within 32% of a spatially scaled municipal greenhouse gas inventory. In summary, combining information from different approaches and scales is a promising way to establish long-term emission monitoring networks in cities.
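
    The box-model budget mentioned for the third approach is a simple mass balance over the urban boundary layer. The sketch below uses placeholder numbers, and the units and terms are assumptions made for illustration rather than the authors' exact formulation.

        def box_model_emissions(dc_dt, box_height, advection, entrainment):
            """Net surface CO2 flux (mol m-2 s-1) from a well-mixed boundary-layer box.

            dc_dt       : rate of change of mean CO2 concentration in the box (mol m-3 s-1)
            box_height  : boundary-layer depth (m)
            advection   : net horizontal advective flux out of the box (mol m-2 s-1)
            entrainment : flux through the box top due to entrainment (mol m-2 s-1)
            """
            return box_height * dc_dt + advection + entrainment

        # Illustrative numbers only
        print(box_model_emissions(dc_dt=2.0e-7, box_height=500.0,
                                  advection=1.5e-5, entrainment=5.0e-6))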

  4. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data have transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large-scale 2D hydraulic model to predict regional-scale flooding.
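
    For intuition only, the sketch below generates spatially correlated synthetic event "footprints" from gauge-to-gauge correlations using a Gaussian copula with empirical margins. This is a deliberate simplification with invented data, not the conditional multivariate extremes model used in the study.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)

        # Synthetic stand-in for gauged annual maxima at three sites (45 years each)
        obs = rng.gamma(shape=2.0, scale=50.0, size=(45, 3))
        corr = np.corrcoef(obs, rowvar=False)          # gauge-to-gauge dependence

        # Gaussian-copula event generator: correlated normals -> uniforms -> empirical quantiles
        L = np.linalg.cholesky(corr)
        z = rng.standard_normal((1000, 3)) @ L.T
        u = norm.cdf(z)
        footprints = np.column_stack(
            [np.quantile(obs[:, j], u[:, j]) for j in range(obs.shape[1])]
        )
        print(footprints[:3].round(1))   # each row is one synthetic event footprint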

  5. Modeling Individual Differences in Unfolding Preference Data: A Restricted Latent Class Approach.

    ERIC Educational Resources Information Center

    Bockenholt, Ulf; Bockenholt, Ingo

    1990-01-01

    A latent-class scaling approach is presented for modeling paired comparison and "pick any/t" data obtained in preference studies. The utility of this approach is demonstrated through analysis of data from studies involving consumer preference and preference for political candidates. (SLD)

  6. From Single-Cell Dynamics to Scaling Laws in Oncology

    NASA Astrophysics Data System (ADS)

    Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo

    We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.

  7. Compaction of North-sea chalk by pore-failure and pressure solution in a producing reservoir

    NASA Astrophysics Data System (ADS)

    Keszthelyi, Daniel; Dysthe, Dag; Jamtveit, Bjorn

    2016-02-01

    The Ekofisk field in the Norwegian North Sea is an example of a compacting chalk reservoir with considerable seafloor subsidence due to petroleum production. Previously, a number of models were created to predict the compaction using different phenomenological approaches. Here we present a different approach: a new creep model based on microscopic mechanisms, with no fitting parameters, used to predict strain rate at the core scale and at the reservoir scale. The model is able to reproduce creep experiments and the magnitude of the observed subsidence, making it the first microstructural model that can explain the Ekofisk compaction.

  8. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model using shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, taking into account the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  9. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    NASA Astrophysics Data System (ADS)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today, computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaveric specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate model of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.

  10. An integrated modeling approach for estimating the water quality benefits of conservation practices at the river basin scale.

    PubMed

    Santhi, C; Kannan, N; White, M; Di Luzio, M; Arnold, J G; Wang, X; Williams, J R

    2014-01-01

    The USDA initiated the Conservation Effects Assessment Project (CEAP) to quantify the environmental benefits of conservation practices at regional and national scales. For this assessment, a sampling and modeling approach is used. This paper provides a technical overview of the modeling approach used in the CEAP cropland assessment to estimate the off-site water quality benefits of conservation practices, using the Ohio River Basin (ORB) as an example. The modeling approach uses a farm-scale model, the Agricultural Policy Environmental Extender (APEX), a watershed-scale model (the Soil and Water Assessment Tool [SWAT]), and databases in the Hydrologic Unit Modeling for the United States system. Databases of land use, soils, land use management, topography, weather, point sources, and atmospheric depositions were developed to derive model inputs. APEX simulates the cultivated cropland, Conservation Reserve Program land, and the practices implemented on them, whereas SWAT simulates the noncultivated land (e.g., pasture, range, urban, and forest) and point sources. Simulation results from APEX are input into SWAT. SWAT routes all sources, including APEX's, to the basin outlet through each eight-digit watershed. Each basin is calibrated for stream flow, sediment, and nutrient loads at multiple gaging sites and then used to simulate the effects of conservation practice scenarios on water quality. Results indicate that sediment, nitrogen, and phosphorus loads delivered to the Mississippi River from the ORB could be reduced by 16, 15, and 23%, respectively, due to current conservation practices. Modeling tools are useful to provide science-based information for assessing existing conservation programs, developing future programs, and developing insights into the load reductions needed to address hypoxia in the Gulf of Mexico. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  11. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    NASA Astrophysics Data System (ADS)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.

  12. Cascade model for fluvial geomorphology

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Turcotte, D. L.

    1990-01-01

    Erosional landscapes are generally scale invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes, a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yields a fractal spectrum. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.
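
    For context, the spectral signature of scale-invariant topography referred to here is usually expressed as a power-law decay of the power spectrum; for a self-affine one-dimensional profile the spectral exponent and fractal dimension are commonly related by (standard relations, not specific to this paper):

        S(k) \propto k^{-\beta}, \qquad D = \frac{5 - \beta}{2}, \quad 1 < \beta < 3

    where k is the wavenumber, β the spectral decay exponent and D the fractal dimension of the profile.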

  13. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies across spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form of Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution, while at the same time preserving the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
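
    The parameterization being scaled can be written down directly. The Python sketch below evaluates a van Genuchten retention curve with a Miller-type similarity factor applied to the pressure head; the parameter values are arbitrary illustrations, and the exact relaxed-Warrick fitting procedure used to build the data set is not reproduced here.

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n, scale=1.0):
            """Water content from pressure head h (negative in unsaturated soil).

            'scale' is a Miller-type similarity factor applied to h; scale = 1 recovers
            the reference curve.
            """
            m = 1.0 - 1.0 / n
            h_eff = scale * np.abs(h)
            se = (1.0 + (alpha * h_eff) ** n) ** (-m)        # effective saturation
            return theta_r + (theta_s - theta_r) * se

        h = -np.logspace(0, 4, 5)          # pressure heads from -1 to -10000 cm
        print(van_genuchten_theta(h, theta_r=0.05, theta_s=0.40, alpha=0.02, n=1.5))

    The same scaling factor can then be carried into the Mualem-van Genuchten conductivity function, which is the assumption stated in the abstract.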

  14. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  15. Estimation of net ecosystem carbon exchange for the conterminous United States by combining MODIS and AmeriFlux data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.

    Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board the National Aeronautics and Space Administration's (NASA) Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a modified regression tree approach. The predictive model was trained and validated using eddy flux NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE well (r = 0.73, p < 0.001). We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day interval in 2005 using spatially explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE as determined from measurements and the literature. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets over large areas.
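
    The flavor of the tree-based upscaling can be illustrated with an ordinary tree-ensemble regressor trained on MODIS-like predictors. The data below are synthetic and the learner is a stand-in: the study used a modified regression tree algorithm with real MODIS and AmeriFlux records.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(3)
        n = 1000
        # Synthetic stand-ins for MODIS predictors: vegetation index, land surface temperature, land cover
        evi = rng.uniform(0.05, 0.8, n)
        lst = rng.uniform(260, 310, n)
        lc = rng.integers(0, 5, n)
        nee = -8.0 * evi + 0.05 * (lst - 285) + rng.normal(0, 0.5, n)   # synthetic NEE target

        X = np.column_stack([evi, lst, lc])
        model = GradientBoostingRegressor(random_state=0).fit(X, nee)

        # "Wall-to-wall" prediction: apply the fitted model to every grid cell's predictors
        grid = np.column_stack([rng.uniform(0.05, 0.8, 10),
                                rng.uniform(260, 310, 10),
                                rng.integers(0, 5, 10)])
        print(model.predict(grid).round(2))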

  16. Estimation of Net Ecosystem Carbon Exchange for the Conterminous United States by Combining MODIS and AmeriFlux Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.

    Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely-sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board NASA's Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a regression tree approach. The predictive model was trained and validated using NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE reasonably well at the site level. We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day period in 2005 using spatially-explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets for large areas.

  17. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments in which a single variable, i.e., flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity, was varied. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the learning experiments were used, for the PN models, to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.

  18. Accounting for microbial habitats in modeling soil organic matter dynamics

    NASA Astrophysics Data System (ADS)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes that are more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based on considering either aggregates or pores as surrogates for microbial habitats. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: the pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here that describe the heterotrophic activity of either bacteria or fungi and are based upon different strategies to represent the complex soil pore system (Mosaic, LBios and µFun). These models make it possible to rank the factors controlling microbial activity in the soil's heterogeneous architecture. The current limits of these approaches and the remaining challenges are discussed, regarding the extensive information required on soils at the microscale and the need to upscale microbial functioning from the pore to the core scale.

  19. Hybrid LES RANS technique based on a one-equation near-wall model

    NASA Astrophysics Data System (ADS)

    Breuer, M.; Jaffrézic, B.; Arora, K.

    2008-05-01

    In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES-RANS approach which splits up the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers requiring reasonable CPU-time and memory, where LES can also be applied but demands extremely large resources. In contrast, RANS often fails in flows with massive separation or large-scale vortical structures. Here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches yielding a prediction method, which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (according to the meaning the authors attach to the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose, the proposed hybrid approach relies on a unique modeling concept. In the LES mode a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES-RANS methods is not new, a variety of open questions still have to be answered. This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid-scale and RANS models. Here, in addition to the study on the behavior of the suggested hybrid LES-RANS approach, special emphasis is put on the investigation of suitable interface criteria and the adjustment of the RANS model. To investigate these issues, two different test cases are considered. Besides the standard plane channel flow test case, the flow over a periodic arrangement of hills is studied in detail. This test case includes a pressure-induced flow separation and subsequent reattachment. In comparison with a wall-resolved LES prediction, encouraging results are achieved.
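    The core of the unique modeling concept is a single eddy-viscosity expression whose length scale switches from an algebraic near-wall (RANS) value to the filter width (LES) across the interface. The sketch below is a strongly simplified stand-in for that idea; the distance-based interface criterion and the constants are assumptions, and the actual RANS branch of the paper (Rodi et al.) uses wall-normal velocity fluctuations and more elaborate algebraic length-scale relations.

```python
import math

def eddy_viscosity(k, wall_distance, filter_width,
                   c_mu=0.09, kappa=0.41, interface_distance=0.05):
    """Toy hybrid closure: nu_t = C * sqrt(k) * l, with l switched at an interface."""
    if wall_distance < interface_distance:       # near-wall region -> RANS mode
        length_scale = kappa * wall_distance     # simple algebraic wall scaling (assumed)
    else:                                        # outer region -> LES mode
        length_scale = filter_width              # length scale set by the filter width
    return c_mu * math.sqrt(k) * length_scale

print(eddy_viscosity(k=0.5, wall_distance=0.01, filter_width=0.02))
```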

  20. Simulation of nitrate reduction in groundwater - An upscaling approach from small catchments to the Baltic Sea basin

    NASA Astrophysics Data System (ADS)

    Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.

    2018-01-01

    This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, the use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that E-HYPE, used with this upscaling methodology, can simulate the impact of spatially targeted regulation on N-loads at the Baltic Sea basin scale to the correct order of magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.

  1. Multi-physics modelling approach for oscillatory microengines: application for a microStirling generator design

    NASA Astrophysics Data System (ADS)

    Formosa, F.; Fréchette, L. G.

    2015-12-01

    An electrical circuit equivalent (ECE) approach has been set up allowing elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale Free Piston membrane Stirling engine [1]. We propose here new developments taking into account scaling effects to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.
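    The scaling simplifications rest on comparing the hydraulic radius with the viscous and thermal penetration depths of the oscillating gas. A minimal sketch of those two depths is given below (default property values roughly for air); it only illustrates the quantities being compared, not the ECE component models themselves.

```python
import math

def penetration_depths(frequency_hz, nu=1.5e-5, alpha=2.2e-5):
    """Viscous and thermal penetration depths for oscillating flow (defaults ~ air)."""
    omega = 2.0 * math.pi * frequency_hz
    delta_nu = math.sqrt(2.0 * nu / omega)        # viscous penetration depth [m]
    delta_kappa = math.sqrt(2.0 * alpha / omega)  # thermal penetration depth [m]
    return delta_nu, delta_kappa

# A channel with hydraulic radius well below these depths is fully viscosity- and
# thermally dominated, which is what motivates the simplified component models.
print(penetration_depths(100.0))
```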

  2. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim: Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Methods: Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results: For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions: We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
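    As a toy illustration of the integration idea (a process-model prediction acting as a prior that is updated with occurrence data), the sketch below uses a simple beta-binomial update for the presence probability in one cell. This is not the hierarchical model of the paper; the prior weight and the data are invented for the example.

```python
import numpy as np

def integrated_presence(prior_prob, prior_weight, presences, trials, n_draws=10000):
    """Beta-binomial sketch: process-model prior updated with presence/absence data."""
    alpha = prior_prob * prior_weight + presences                    # prior + observed presences
    beta = (1.0 - prior_prob) * prior_weight + (trials - presences)  # prior + observed absences
    posterior_mean = alpha / (alpha + beta)
    ci = np.percentile(np.random.beta(alpha, beta, n_draws), [2.5, 97.5])
    return posterior_mean, ci

# e.g. a physiological model suggests p = 0.7 (weighted like 10 plots);
# the survey finds 3 presences in 20 plots.
print(integrated_presence(0.7, 10.0, 3, 20))
```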

  3. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. The study location is eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  4. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…

  5. HIGH-RESOLUTION DATASET OF URBAN CANOPY PARAMETERS FOR HOUSTON, TEXAS

    EPA Science Inventory

    Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...

  6. Scale-dependent approaches to modeling spatial epidemiology of chronic wasting disease.

    USGS Publications Warehouse

    Conner, Mary M.; Gross, John E.; Cross, Paul C.; Ebinger, Michael R.; Gillies, Robert; Samuel, Michael D.; Miller, Michael W.

    2007-01-01

    For each scale, we presented a focal approach that would be useful for understanding the spatial pattern and epidemiology of CWD, as well as being a useful tool for CWD management. The focal approaches include risk analysis and micromaps for the regional scale, cluster analysis for the landscape scale, and individual based modeling for the fine scale of within population. For each of these methods, we used simulated data and walked through the method step by step to fully illustrate the “how to”, with specifics about what is input and output, as well as what questions the method addresses. We also provided a summary table to, at a glance, describe the scale, questions that can be addressed, and general data required for each method described in this e-book. We hope that this review will be helpful to biologists and managers by increasing the utility of their surveillance data, and ultimately be useful for increasing our understanding of CWD and allowing wildlife biologists and managers to move beyond retroactive fire-fighting to proactive preventative action.

  7. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.

  8. Nutritional Systems Biology Modeling: From Molecular Mechanisms to Physiology

    PubMed Central

    de Graaf, Albert A.; Freidig, Andreas P.; De Roos, Baukje; Jamshidi, Neema; Heinemann, Matthias; Rullmann, Johan A.C.; Hall, Kevin D.; Adiels, Martin; van Ommen, Ben

    2009-01-01

    The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales. PMID:19956660

  9. Modeling canopy-level productivity: is the "big-leaf" simplification acceptable?

    NASA Astrophysics Data System (ADS)

    Sprintsin, M.; Chen, J. M.

    2009-05-01

    The "big-leaf" approach to calculating the carbon balance of plant canopies assumes that canopy carbon fluxes have the same relative responses to the environment as any single unshaded leaf in the upper canopy. Widely used light use efficiency models are essentially simplified versions of the big-leaf model. Despite its wide acceptance, subsequent developments in the modeling of leaf photosynthesis and measurements of canopy physiology have called the assumptions behind this approach into question, showing that the big-leaf approximation is inadequate for simulating canopy photosynthesis because of additional leaf-internal controls on carbon assimilation, the non-linear response of photosynthesis to leaf nitrogen and absorbed light, and changes in the leaf microenvironment with canopy depth. To avoid this problem, a sunlit/shaded leaf separation approach, in which the vegetation is treated as two big leaves under different illumination conditions, is gradually replacing the "big-leaf" strategy for applications at local and regional scales. Such separation is now widely accepted as a more accurate and physiologically based approach for modeling canopy photosynthesis. Here we compare both strategies for Gross Primary Production (GPP) modeling using the Boreal Ecosystem Productivity Simulator (BEPS) at the local (tower footprint) scale for different land cover types spread over North America: two broadleaf forests (Harvard, Massachusetts and Missouri Ozark, Missouri); two coniferous forests (Howland, Maine and Old Black Spruce, Saskatchewan); the Lost Creek shrubland site (Wisconsin); and the Mer Bleue peatland (Ontario). BEPS calculates carbon fixation by scaling Farquhar's leaf biochemical model up to the canopy level, with stomatal conductance estimated by a modified version of the Ball-Woodrow-Berry model. The "big-leaf" approach was parameterized using derived leaf-level parameters scaled up to the canopy level by means of the Leaf Area Index. The influence of sunlit/shaded leaf separation on GPP prediction was evaluated accounting for the degree of deviation of the 3-dimensional leaf spatial distribution from the random case. More specifically, we compared and evaluated the behavior of both models, showing the advantages of the sunlit/shaded leaf separation strategy over the simplified big-leaf approach. Keywords: canopy photosynthesis, leaf area index, clumping index, remote sensing.
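    The sunlit/shaded separation can be summarized by partitioning the canopy leaf area into a sunlit and a shaded big leaf via beam extinction. The sketch below uses the common extinction-based partition; the per-leaf assimilation rates and the spherical-leaf-angle extinction coefficient are illustrative assumptions, not BEPS internals.

```python
import math

def sunlit_shaded_lai(lai, solar_zenith_deg, clumping=1.0):
    """Partition total LAI into sunlit and shaded parts using beam extinction."""
    k_b = 0.5 * clumping / math.cos(math.radians(solar_zenith_deg))  # beam extinction coeff.
    lai_sun = (1.0 - math.exp(-k_b * lai)) / k_b                     # sunlit leaf area
    return lai_sun, lai - lai_sun

def two_leaf_gpp(lai, solar_zenith_deg, a_sun, a_shade):
    """Canopy GPP as the sum of a sunlit and a shaded big leaf (per-leaf rates assumed)."""
    lai_sun, lai_shade = sunlit_shaded_lai(lai, solar_zenith_deg)
    return a_sun * lai_sun + a_shade * lai_shade

print(two_leaf_gpp(lai=4.0, solar_zenith_deg=30.0, a_sun=10.0, a_shade=3.0))
```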

  10. Dense image matching of terrestrial imagery for deriving high-resolution topographic properties of vegetation locations in alpine terrain

    NASA Astrophysics Data System (ADS)

    Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.

    2018-04-01

    The investigation of changes in spatial patterns of vegetation and identification of potential micro-refugia requires detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations, which are difficult to access. We propose a novel approach for rapid capturing of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters, such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to create elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into interrelations between topographic parameters, such as potential solar irradiation, surface area, aspect and roughness.

  11. Predicting Future-Year Ozone Concentrations: Integrated Observational-Modeling Approach for Probabilistic Evaluation of the Efficacy of Emission Control strategies

    EPA Science Inventory

    Regional-scale air quality models are being used to demonstrate attainment of the ozone air quality standard. In current regulatory applications, a regional-scale air quality model is applied for a base year and a future year with reduced emissions using the same meteorological ...

  12. Landscape-based population viability models demonstrate importance of strategic conservation planning for birds

    Treesearch

    Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh; D. Todd Jones-Farland

    2013-01-01

    Efforts to conserve regional biodiversity in the face of global climate change, habitat loss and fragmentation will depend on approaches that consider population processes at multiple scales. By combining habitat and demographic modeling, landscape-based population viability models effectively relate small-scale habitat and landscape patterns to regional population...

  13. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  14. New Statistical Model for Variability of Aerosol Optical Thickness: Theory and Application to MODIS Data over Ocean

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian

    2016-01-01

    A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponential of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
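    For a lognormal field tau = exp(g), with g a Gaussian process of mean mu, variance sigma^2 and correlation rho(r), the second-order structure function has a closed form that saturates at large lags, consistent with the asymptotic behavior mentioned above. The sketch below evaluates it assuming an exponential autocorrelation; the paper's specific autocorrelation function and parameter values may differ.

```python
import numpy as np

def aot_structure_function(lags, mu, sigma, corr_length):
    """D(r) = 2*exp(2*mu + sigma^2) * (exp(sigma^2) - exp(sigma^2 * rho(r))),
    here with an assumed exponential autocorrelation rho(r) = exp(-r / corr_length)."""
    rho = np.exp(-np.asarray(lags, dtype=float) / corr_length)
    return 2.0 * np.exp(2.0 * mu + sigma**2) * (np.exp(sigma**2) - np.exp(sigma**2 * rho))

# Illustrative parameters: mean log-AOT -2, log-std 0.5, 100 km correlation length
print(aot_structure_function([10.0, 50.0, 200.0, 1000.0], mu=-2.0, sigma=0.5,
                             corr_length=100.0))
```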

  15. Apportioning Sources of Riverine Nitrogen at Multiple Watershed Scales

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Alexander, R. B.; Sebestyen, S. D.

    2005-05-01

    Loadings of reactive nitrogen (N) entering terrestrial landscapes have increased in recent decades due to anthropogenic activities associated with food and energy production. In the northeastern USA, this enhanced supply of N has been linked to many environmental concerns in both terrestrial and aquatic ecosystems, such as forest decline, lake and stream acidification, human respiratory problems, and coastal eutrophication. Thus N is a priority pollutant with regard to a whole host of air, land, and water quality issues, highlighting the need for methods to identify and quantify various N sources. Further, understanding precursor sources of N is critical to current and proposed public policies targeted at the reduction of N inputs to the terrestrial landscape and receiving waters. We present results from published and ongoing studies using multiple approaches to fingerprint sources of N in the northeastern USA, at watershed scales ranging from the headwaters to the coastal zone. The approaches include: 1) a mass balance model with a nitrogen-budgeting approach for analyses of large watersheds; 2) a spatially-referenced regression model with an empirical modeling approach for analyses of water quality at regional scales; and 3) a meta-analysis of monitoring data with a chemical tracer approach, utilizing concentrations of multiple elements and isotopic composition of N from water samples collected in the streams and rivers. We discuss the successes and limitations of these various approaches for apportioning contributions of N from multiple sources to receiving waters at regional scales.

  16. An integrated approach to reconstructing genome-scale transcriptional regulatory networks

    DOE PAGES

    Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...

    2015-02-27

    Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-Proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall were also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.

  17. From global circulation to flood loss: Coupling models across the scales

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Gomez-Navarro, Juan Jose; Bozhinova, Denica; Zischg, Andreas; Raible, Christoph C.; Ole, Roessler; Martius, Olivia; Weingartner, Rolf

    2017-04-01

    The prediction and prevention of flood losses require an extensive understanding of the underlying meteorological, hydrological, hydraulic and damage processes. Coupled models help to improve the understanding of such underlying processes and therefore contribute to the understanding of flood risk. Using such a modelling approach to determine potentially flood-affected areas and damages requires a complex coupling between several models operating at different spatial and temporal scales. Although the individual modelling components are well established and commonly used in the literature, a full coupling including a mesoscale meteorological model driven by a global circulation model, a hydrologic model, a hydrodynamic model and a flood impact and loss model has not been reported so far. In the present study, we tackle the application of such a coupled model chain in terms of computational resources, scale effects, and model performance. From a technical point of view, the results show the general applicability of such a coupled model, as well as good model performance. From a practical point of view, such an approach enables the prediction of flood-induced damages, although some future challenges have been identified.

  18. Pesticide fate at regional scale: Development of an integrated model approach and application

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at the regional scale, a process-based, integrated model approach was developed. The Richards’ equation based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, or parameters from the literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km2 test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Notable isoproturon concentrations in groundwater are predicted for locations with thin layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for quantifying the impact of agricultural practice on groundwater quality.

  19. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units, or 'subwatersheds', as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the intensive data requirements at this scale and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.

  20. Watershed Nitrogen Modeling: Benefits of Diverse Approaches Using a Case Study from New York State

    EPA Science Inventory

    Watershed-scale models have evolved as an important tool for estimating the sources, transformation, and transport of contaminants to surface water systems. A wide variety of modeling approaches exist for estimating inputs, fate, and transport of constituents but most are broadl...

  1. Global analysis of approaches for deriving total water storage changes from GRACE satellites and implications for groundwater storage change estimation

    NASA Astrophysics Data System (ADS)

    Long, D.; Scanlon, B. R.; Longuevergne, L.; Chen, X.

    2015-12-01

    Increasing interest in use of GRACE satellites and a variety of new products to monitor changes in total water storage (TWS) underscores the need to assess the reliability of output from different products. The objective of this study was to assess skills and uncertainties of different approaches for processing GRACE data to restore signal losses caused by spatial filtering based on analysis of 1°×1° grid scale data and basin scale data in 60 river basins globally. Results indicate that scaling factors from six land surface models (LSMs), including four models from GLDAS-1 (Noah 2.7, Mosaic, VIC, and CLM 2.0), CLM 4.0, and WGHM, are similar over most humid, sub-humid, and high-latitude regions but can differ by up to 100% over arid and semi-arid basins and areas with intensive irrigation. Large differences in TWS anomalies from three processing approaches (scaling factor, additive, and multiplicative corrections) were found in arid and semi-arid regions, areas with intensive irrigation, and relatively small basins (e.g., ≤ 200,000 km2). Furthermore, TWS anomaly products from gridded data with CLM4.0 scaling factors and the additive correction approach more closely agree with WGHM output than the multiplicative correction approach. Estimation of groundwater storage changes using GRACE satellites requires caution in selecting an appropriate approach for restoring TWS changes. A priori ground-based data used in forward modeling can provide a powerful tool for explaining the distribution of signal gains or losses caused by low-pass filtering in specific regions of interest and should be very useful for more reliable estimation of groundwater storage changes using GRACE satellites.
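    A compact way to see the difference between the three restoration approaches compared above is the sketch below: the scaling-factor, additive, and multiplicative corrections all combine the filtered GRACE anomaly with model-derived terms, but in different ways. The numerical values are purely illustrative; in practice the model terms come from an LSM or hydrology model passed through the same spatial filter as the GRACE data.

```python
def restore_tws(filtered_grace, model_true, model_filtered, scaling_factor):
    """Return (scaling-factor, additive, multiplicative) restored TWS anomalies."""
    scaled = scaling_factor * filtered_grace                          # scaling-factor approach
    additive = filtered_grace + (model_true - model_filtered)         # additive correction
    multiplicative = filtered_grace * (model_true / model_filtered)   # multiplicative correction
    return scaled, additive, multiplicative

# Illustrative anomalies in cm of equivalent water height
print(restore_tws(filtered_grace=3.2, model_true=5.0, model_filtered=3.5,
                  scaling_factor=1.4))
```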

  2. A review of numerical models to predict the atmospheric dispersion of radionuclides.

    PubMed

    Leelőssy, Ádám; Lagzi, István; Kovács, Attila; Mészáros, Róbert

    2018-02-01

    The field of atmospheric dispersion modeling has evolved together with nuclear risk assessment and emergency response systems. Atmospheric concentration and deposition of radionuclides originating from an unintended release provide the basis of dose estimations and countermeasure strategies. To predict the atmospheric dispersion and deposition of radionuclides, several numerical models are available coupled with numerical weather prediction (NWP) systems. This work provides a review of the main concepts and different approaches of atmospheric dispersion modeling. Key processes of the atmospheric transport of radionuclides are emission, advection, turbulent diffusion, dry and wet deposition, radioactive decay and other physical and chemical transformations. A wide range of modeling software is available to simulate these processes with different physical assumptions, numerical approaches and implementations. The most appropriate modeling tool for a specific purpose can be selected based on the spatial scale and the complexity of meteorology, land surface and physical and chemical transformations, also considering the available data and computational resources. For most regulatory and operational applications, offline coupled NWP-dispersion systems are used, either with a local-scale Gaussian, or a regional- to global-scale Eulerian or Lagrangian approach. The dispersion model results show large sensitivity to the accuracy of the coupled NWP model, especially through the description of planetary boundary layer turbulence, deep convection and wet deposition. Improvement of dispersion predictions can be achieved by online coupling of mesoscale meteorology and atmospheric transport models. The 2011 Fukushima event was the first large-scale nuclear accident where real-time prognostic dispersion modeling provided decision support. Dozens of dispersion models with different approaches were used for prognostic and retrospective simulations of the Fukushima release. An unknown release rate proved to be the largest factor of uncertainty, underlining the importance of inverse modeling and data assimilation in future developments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at the "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.

  4. Multiple scale dynamo

    PubMed Central

    Le Mouël, Jean-Louis; Allègre, Claude J.; Narteau, Clément

    1997-01-01

    A scaling law approach is used to simulate the dynamo process of the Earth’s core. The model is made of embedded turbulent domains of increasing dimensions, up to the largest, whose size is comparable with that of the core, pervaded by large-scale magnetic fields. Left-handed or right-handed cyclones appear at the lowest scale, the scale of the elementary domains of the hierarchical model, and disappear. These elementary domains then behave like electromotor generators with opposite polarities depending on whether they contain a left-handed or a right-handed cyclone. To transfer the behavior of the elementary domains to larger ones, a dynamic renormalization approach is used. A simple rule is adopted to determine whether a domain of scale l is a generator, and what its polarity is, as a function of the state of the (l − 1) domains it is made of. This mechanism is used as the main ingredient of a kinematic dynamo model, which displays polarity intervals, excursions, and reversals of the geomagnetic field. PMID:11038547
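    The transfer rule from one scale to the next can be illustrated with a simple majority-type aggregation of sub-domain polarities. The sketch below is a hypothetical rule for illustration only, not the specific rule adopted in the paper.

```python
import random

def renormalize(polarities, block=2):
    """One renormalization step: each block of scale-(l-1) domains sets a scale-l domain.
    Hypothetical rule: the sign of the summed polarities (+1/-1 generators, 0 if balanced)."""
    out = []
    for i in range(0, len(polarities), block):
        s = sum(polarities[i:i + block])
        out.append(0 if s == 0 else (1 if s > 0 else -1))
    return out

# Elementary domains: randomly appearing left- or right-handed cyclones
elementary = [random.choice([-1, 1]) for _ in range(16)]
print(elementary, renormalize(elementary), renormalize(renormalize(elementary)))
```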

  5. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  6. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method of modern computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to obtain the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
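    The hand-off between scales uses the Weibull statistics obtained at the finer scale to assign effective properties to the automata of the coarser scale. The sketch below shows one way to draw such properties; the Weibull moduli and mean values are invented for illustration.

```python
import numpy as np
from math import gamma

def sample_scale_properties(mean_modulus, mean_strength, m_modulus, m_strength, n):
    """Draw effective Young's moduli and strengths for n coarse-scale automata from
    Weibull distributions whose means match the finer-scale averages."""
    lam_e = mean_modulus / gamma(1.0 + 1.0 / m_modulus)    # Weibull scale parameter (modulus)
    lam_s = mean_strength / gamma(1.0 + 1.0 / m_strength)  # Weibull scale parameter (strength)
    moduli = lam_e * np.random.weibull(m_modulus, n)
    strengths = lam_s * np.random.weibull(m_strength, n)
    return moduli, strengths

print(sample_scale_properties(150e9, 400e6, 12.0, 8.0, 5))
```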

  7. On the physically based modeling of surface tension and moving contact lines with dynamic contact angles on the continuum scale

    NASA Astrophysics Data System (ADS)

    Huber, M.; Keller, F.; Säckel, W.; Hirschler, M.; Kunz, P.; Hassanizadeh, S. M.; Nieken, U.

    2016-04-01

    The description of wetting phenomena is a challenging problem at every relevant length scale. The behavior of interfaces and contact lines at the continuum scale is caused by intermolecular interactions such as van der Waals forces. Therefore, to describe surface tension and the resulting dynamics of interfaces and contact lines at the continuum scale, appropriate formulations must be developed. While the Continuum Surface Force (CSF) model is well-engineered for the description of interfaces, there is still a lack of treatment of contact lines, which are defined by the intersection of an ending fluid interface and a solid boundary surface. In our approach we use a balance equation for the contact line and extend the Navier-Stokes equations in analogy to the treatment of a two-phase interface in the CSF model. Since this model represents a physically motivated approach at the continuum scale, no fitting parameters are introduced and the deterministic description leads to a dynamical evolution of the system. As verification of our theory, we present a Smoothed Particle Hydrodynamics (SPH) model and simulate the evolution of droplet shapes and their corresponding contact angles.
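    For reference, the interface part that the contact-line treatment extends is the standard CSF volumetric surface-tension force; a minimal statement of that term is given below (the contact-line balance equation introduced by the authors is not reproduced here).

```latex
% Standard CSF surface-tension term added to the momentum (Navier-Stokes) equation:
%   \sigma        surface tension coefficient
%   \kappa        local interface curvature
%   \hat{n}       unit normal to the interface
%   \delta_\Sigma surface delta function localizing the force at the interface
\[
  \mathbf{f}_{\mathrm{st}} \;=\; \sigma \,\kappa\, \hat{\mathbf{n}}\, \delta_{\Sigma}
\]
```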

  8. Coupling biomechanics to a cellular level model: an approach to patient-specific image driven multi-scale and multi-physics tumor simulation.

    PubMed

    May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe

    2011-10-01

    Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
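    The shape metric quoted above, the ratio of the smallest to the largest principal moment of inertia of the tumor material, can be computed directly from the segmented voxels. The sketch below is one straightforward way to evaluate it (unit voxel masses assumed), not the authors' implementation.

```python
import numpy as np

def inertia_shape_ratio(voxel_coords):
    """Ratio of smallest to largest principal moment of inertia (1.0 = compact/spherical)."""
    r = np.asarray(voxel_coords, dtype=float)
    r -= r.mean(axis=0)                       # centre of mass at the origin
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    inertia = np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    moments = np.linalg.eigvalsh(inertia)     # principal moments, ascending order
    return moments[0] / moments[-1]

print(inertia_shape_ratio(np.random.rand(500, 3)))
```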

  9. redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models

    PubMed Central

    Ataman, Meric

    2017-01-01

    Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks as they encapsulate all known metabolic capabilities of the organisms, from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise the transferability of findings and the integration of experimental data from different groups. In this study, we have developed a systematic, semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner, focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate the integration of different experimental data to draw new understanding that is directly extendable to genome-scale models. PMID:28727725

  10. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
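    The least-squares objective can be made concrete with a toy one-dimensional example: choose the binary row whose perceived (low-pass filtered) rendering is closest to the perceived gray-scale row. The sketch below uses an ideal printer and a small assumed visual kernel, and replaces the Viterbi recursion with brute-force search over a short row; it illustrates the objective only, not the paper's printer or eye models.

```python
import itertools
import numpy as np

def best_halftone_row(gray_row, eye_kernel=(0.25, 0.5, 0.25)):
    """Brute-force 1-D least-squares halftoning of a short row (toy illustration)."""
    target = np.convolve(gray_row, eye_kernel, mode="same")     # eye response to original
    best_bits, best_err = None, np.inf
    for bits in itertools.product([0.0, 1.0], repeat=len(gray_row)):
        rendered = np.convolve(bits, eye_kernel, mode="same")   # eye response to halftone
        err = float(np.sum((rendered - target) ** 2))
        if err < best_err:
            best_bits, best_err = bits, err
    return np.array(best_bits), best_err

print(best_halftone_row(np.array([0.2, 0.5, 0.8, 0.5, 0.2])))
```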

  11. Preferential flow across scales: how important are plot scale processes for a catchment scale model?

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian

    2017-04-01

    Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. We therefore hypothesized that the discharge performance can be improved by utilizing a dual permeability approach to represent preferential flow. We used the information from bromide irrigation experiments performed on three 1 m2 plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m2 column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at the plot scale. The simulated discharge time series of the different parameterizations clustered into six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. Yet, in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at the plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results led us to reject our hypothesis and open a discussion on how important plot-scale processes and heterogeneities are at the catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model with different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of internal processes of the model (fitting profiles at plot scale, unchanged response at catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude with certainty that the quantitative discharge performance at the catchment scale cannot be improved by utilizing a dual permeability approach, which will be tested in a parameter optimization process.

  12. Accuracy of the actuator disc-RANS approach for predicting the performance and wake of tidal turbines.

    PubMed

    Batten, W M J; Harrison, M E; Bahaj, A S

    2013-02-28

    The actuator disc-RANS model has widely been used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts, or arrays of devices. The accuracy of the model for modelling the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, therefore demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach to predict performance and loads. It can therefore be applied to similar scenarios with confidence.
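
    In such simulations the turbine is typically represented as an axial momentum sink distributed over the disc cells. A minimal sketch of a uniform actuator disc source term derived from a thrust coefficient is given below; the values and the assumption of a uniform disc are illustrative, not the blade-element force distribution used in the paper.

    ```python
    import numpy as np

    def actuator_disc_source(u_inf, rho, c_t, disc_area, disc_thickness):
        """Axial momentum sink per unit volume for a uniform actuator disc.

        Total thrust T = 0.5 * rho * C_T * A * U_inf^2 is spread over the
        disc volume A * dx used to represent the turbine in the RANS mesh.
        """
        thrust = 0.5 * rho * c_t * disc_area * u_inf ** 2      # N
        return -thrust / (disc_area * disc_thickness)          # N m^-3

    # Illustrative values for a small scaled tidal rotor
    print(actuator_disc_source(u_inf=1.5, rho=1025.0, c_t=0.8,
                               disc_area=np.pi * 0.4 ** 2, disc_thickness=0.05))
    ```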

  13. BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)

    EPA Science Inventory

    We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...

  14. A Multi-Scale, Integrated Approach to Representing Watershed Systems

    NASA Astrophysics Data System (ADS)

    Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos

    2014-05-01

    Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model is designed to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, as well as to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-runoff partitioning process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and explore the impact of climate information uncertainty.

  15. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple zeroth-order estimate of each small-scale state with each large-scale measurement, using a straightforward method based on Kalman filtering.
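
    A minimal sketch of the kind of Kalman-filter correction alluded to above is given below: prior fine-scale (pixel) soil moisture estimates are updated with a single coarse-scale microwave observation taken as their footprint average. The observation operator, covariances, and numbers are illustrative assumptions.

    ```python
    import numpy as np

    def kalman_disaggregate(x_prior, P_prior, y_coarse, r_obs):
        """Update fine-scale states x_prior (n,) with one coarse-scale observation.

        Observation model: y = mean(x) + noise, i.e. H = [1/n, ..., 1/n].
        P_prior is the (n, n) prior covariance, r_obs the observation variance.
        """
        n = x_prior.size
        H = np.full((1, n), 1.0 / n)
        S = H @ P_prior @ H.T + r_obs            # innovation variance
        K = P_prior @ H.T / S                    # Kalman gain, shape (n, 1)
        innovation = y_coarse - H @ x_prior
        x_post = x_prior + (K * innovation).ravel()
        P_post = (np.eye(n) - K @ H) @ P_prior
        return x_post, P_post

    # Four fine-scale pixels inside one microwave footprint
    x0 = np.array([0.20, 0.25, 0.30, 0.35])      # prior volumetric soil moisture
    P0 = np.diag([0.01] * 4)
    x1, P1 = kalman_disaggregate(x0, P0, y_coarse=0.32, r_obs=0.002)
    print(x1)
    ```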

  16. Importance of ecohydrological modelling approaches in the prediction of plant behaviour and water balance at different scales

    NASA Astrophysics Data System (ADS)

    García-Arias, Alicia; Ruiz-Pérez, Guiomar; Francés, Félix

    2017-04-01

    Vegetation plays a major role in the water balance of most hydrological systems. However, the effects of interception and evapotranspiration have in the past rarely been considered for hydrological modelling purposes. In recent years many authors have recognised and supported ecohydrological approaches instead of traditional strategies. This contribution aims to demonstrate the pivotal role of vegetation in ecohydrological models and to show that a better understanding of hydrological systems can be achieved by considering the appropriate plant-related processes. The study is performed at two scales: the plot scale and the reach scale. At plot scale, only zonal vegetation was considered, while at reach scale both zonal and riparian vegetation were taken into account. To ensure that water is the main driver of vegetation development, semiarid environments were selected for the case studies. Results show improved capability to predict plant behaviour and water balance when interception and evapotranspiration are taken into account in the soil water balance.

  17. Modeling of Texture Evolution During Hot Forging of Alpha/Beta Titanium Alloys (Preprint)

    DTIC Science & Technology

    2007-06-01

    The approach was validated via an industrial-scale trial comprising hot pancake forging of Ti-6Al-4V. A brief review of pertinent previous efforts in the area of texture modeling is also presented. Keywords: Titanium, Texture, Modeling, Strain Partitioning, Variant Selection.

  18. Multimodel analysis of anisotropic diffusive tracer-gas transport in a deep arid unsaturated zone

    USGS Publications Warehouse

    Green, Christopher T.; Walvoord, Michelle Ann; Andraski, Brian J.; Striegl, Robert G.; Stonestrom, David A.

    2015-01-01

    Gas transport in the unsaturated zone affects contaminant flux and remediation, interpretation of groundwater travel times from atmospheric tracers, and mass budgets of environmentally important gases. Although unsaturated zone transport of gases is commonly treated as dominated by diffusion, the characteristics of transport in deep layered sediments remain uncertain. In this study, we use a multimodel approach to analyze results of a gas-tracer (SF6) test to clarify characteristics of gas transport in deep unsaturated alluvium. Thirty-five separate models with distinct diffusivity structures were calibrated to the tracer-test data and were compared on the basis of Akaike Information Criteria estimates of posterior model probability. Models included analytical and numerical solutions. Analytical models provided estimates of bulk-scale apparent diffusivities at the scale of tens of meters. Numerical models provided information on local-scale diffusivities and feasible lithological features producing the observed tracer breakthrough curves. The combined approaches indicate significant anisotropy of bulk-scale diffusivity, likely associated with high-diffusivity layers. Both approaches indicated that diffusivities in some intervals were greater than expected from standard models relating porosity to diffusivity. High apparent diffusivities and anisotropic diffusivity structures were consistent with previous observations at the study site of rapid lateral transport and limited vertical spreading of gas-phase contaminants. Additional processes such as advective oscillations may be involved. These results indicate that gases in deep, layered unsaturated zone sediments can spread laterally more quickly, and produce higher peak concentrations, than predicted by homogeneous, isotropic diffusion models.
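
    Posterior model probabilities based on AIC differences are commonly computed as Akaike weights. The sketch below illustrates that calculation; the AIC values are made up, and the particular criterion variant used in the study (e.g. a corrected AIC) is not reproduced here.

    ```python
    import numpy as np

    def akaike_weights(aic_values):
        """Relative model probabilities (Akaike weights) from a set of AIC scores."""
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()                  # AIC differences from the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Illustrative AIC scores for, say, five of the calibrated diffusivity models
    print(akaike_weights([412.3, 415.1, 410.8, 423.0, 411.5]).round(3))
    ```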

  19. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.

  20. A time for multi-scale modeling of anti-fibrotic therapies. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Wu, Min

    2016-07-01

    The development of anti-fibrotic therapies has recently become increasingly urgent across a range of diseases, such as pulmonary, renal and liver fibrosis [1,2], as well as malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have frequently been demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach to understanding fibrosis is proposed. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrotic therapies should be conducted more intensively.

  1. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed quantitatively and in greater detail. The developed modeling approach has been used to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
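
    The point-source (Gaussian) component of such an approach typically evaluates a plume formula for each source-receptor pair. A minimal sketch is shown below with the dispersion coefficients supplied directly as constants; in an actual model they would be functions of downwind distance and atmospheric stability, and the multi-box and GIS integration are not represented.

    ```python
    import numpy as np

    def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
        """Ground-reflected Gaussian plume concentration (g m^-3) for one point source.

        q: emission rate (g/s), u: wind speed (m/s), y/z: crosswind and vertical
        receptor coordinates (m), h: effective stack height (m).
        sigma_y, sigma_z: dispersion coefficients at the receptor's downwind distance
        (assumed given here; in practice they depend on distance and stability class).
        """
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
        return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Receptor roughly 1 km downwind at ground level from a 50 m stack
    print(gaussian_plume(q=100.0, u=4.0, y=0.0, z=0.0, h=50.0,
                         sigma_y=80.0, sigma_z=40.0))
    ```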

  2. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
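
    On annual time scales the basin water-balance estimate reduces to a simple residual calculation, as the review notes. A minimal sketch with illustrative numbers:

    ```python
    def basin_et_residual(precip_mm, discharge_mm, storage_change_mm=0.0):
        """Annual basin-scale ET (mm) as the water-balance residual ET = P - Q - dS.

        On annual or multi-year time scales the net storage change dS is often
        assumed negligible, as noted in the review above.
        """
        return precip_mm - discharge_mm - storage_change_mm

    # Illustrative annual totals expressed as depth over the basin area
    print(basin_et_residual(precip_mm=820.0, discharge_mm=230.0))  # -> 590.0 mm
    ```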

  3. Point-to-point migration functions and gravity model renormalization: approaches to aggregation in spatial interaction modeling.

    PubMed

    Slater, P B

    1985-08-01

    Two distinct approaches to assessing the effect of geographic scale on spatial interactions are modeled. In the first, the question of whether a distance deterrence function, which explains interactions for one system of zones, can also succeed on a more aggregate scale, is examined. Only the two-parameter function for which it is found that distances between macrozones are weighted averages of distances between component zones is satisfactory in this regard. Estimation of continuous (point-to-point) functions--in the form of quadrivariate cubic polynomials--for US interstate migration streams, is then undertaken. Upon numerical integration, these higher order surfaces yield predictions of interzonal and intrazonal movements at any scale of interest. Tests of spatial stationarity, isotropy, and symmetry of interstate migration are conducted in this framework.
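
    The aggregation question examined in the first approach can be illustrated with a small sketch: predict fine-zone flows with a gravity model, aggregate them to macrozones, and compute macrozone distances as weighted averages of component-zone distances. The deterrence function, weights, and synthetic geometry below are illustrative assumptions, not the specific two-parameter function or migration data analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def deterrence(d, beta, gamma):
        """Illustrative two-parameter distance-deterrence function f(d) = d^-beta * exp(-gamma*d)."""
        return d ** (-beta) * np.exp(-gamma * d)

    # Fine-scale system: 6 zones with masses and pairwise distances
    n = 6
    mass = rng.uniform(1.0, 5.0, n)
    coords = rng.uniform(0.0, 100.0, (n, 2))
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, 1.0)                      # avoid d = 0 on the diagonal

    flows = np.outer(mass, mass) * deterrence(dist, beta=1.0, gamma=0.02)

    # Aggregate zones {0,1,2} and {3,4,5} into two macrozones
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    agg_flow = np.array([[flows[np.ix_(a, b)].sum() for b in groups] for a in groups])

    # Macrozone distances as flow-weighted averages of component-zone distances
    agg_dist = np.array([[np.average(dist[np.ix_(a, b)], weights=flows[np.ix_(a, b)])
                          for b in groups] for a in groups])
    print(agg_flow.round(2))
    print(agg_dist.round(1))
    ```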

  4. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  5. Scaling wetland green infrastructure practices to watersheds using modeling approaches

    EPA Science Inventory

    Green infrastructure practices are typically implemented at the plot or local scale. Wetlands in the landscape can serve important functions at these scales and can mediate biogeochemical and hydrological processes, particularly when juxtaposed with low impact development (LID)....

  6. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  7. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  8. Simulation of all-scale atmospheric dynamics on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng

    2016-10-01

    The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.

  9. An Expanded Multi-scale Monte Carlo Simulation Method for Personalized Radiobiological Effect Estimation in Radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping

    2017-03-01

    A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from organ to cell levels. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within the voxels. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, by accounting for biology-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
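
    The expanded two-lesion kinetic repair model itself is not reproduced here. As a much simpler illustration of how survival fraction and oxygen level interact, the sketch below combines a standard linear-quadratic survival curve with an Alper-Howard-Flanders-style oxygen modification factor; all parameter values are assumptions and this is not the paper's model.

    ```python
    import numpy as np

    def lq_survival(dose_gy, alpha=0.3, beta=0.03):
        """Linear-quadratic cell survival fraction SF = exp(-(alpha*D + beta*D^2))."""
        return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

    def effective_dose(dose_gy, po2_percent, k=3.0, oer_max=3.0):
        """Scale dose by a simple oxygen-dependent modification factor.
        Alper-Howard-Flanders-style form; parameters are illustrative assumptions."""
        oer = (oer_max * po2_percent + k) / (po2_percent + k)
        return dose_gy * oer / oer_max

    dose = 2.0  # Gy per fraction
    for po2 in (21.0, 2.0, 0.001):
        sf = lq_survival(effective_dose(dose, po2))
        print(f"pO2 = {po2:>6}%  survival fraction = {sf:.3f}")
    ```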

  10. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  11. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models, however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  12. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

  13. Structures and Intermittency in a Passive Scalar Model

    NASA Astrophysics Data System (ADS)

    Vergassola, M.; Mazzino, A.

    1997-09-01

    Perturbative expansions for intermittency scaling exponents in the Kraichnan passive scalar model [Phys. Rev. Lett. 72, 1016 (1994)] are investigated. A one-dimensional compressible model is considered for this purpose. High resolution Monte Carlo simulations using an Ito approach adapted to an advecting velocity field with a very short correlation time are performed and lead to clean scaling behavior for passive scalar structure functions. Perturbative predictions for the scaling exponents around the Gaussian limit of the model are derived as in the Kraichnan model. Their comparison with the simulations indicates that the scale-invariant perturbative scheme correctly captures the inertial range intermittency corrections associated with the intense localized structures observed in the dynamics.

  14. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  15. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  16. The hydrologic implications of alternative prioritizations of landscape-scale geographically isolated wetlands conservation

    NASA Astrophysics Data System (ADS)

    Evenson, G. R.; Golden, H. E.; Lane, C.; Mclaughlin, D. L.; D'Amico, E.

    2016-12-01

    Geographically isolated wetlands (GIWs), defined as upland embedded wetlands, provide an array of ecosystem goods and services. Wetland conservation efforts aim to protect GIWs in the face of continued threats from anthropogenic activities. Given limited conservation resources, there is a critical need for methods capable of evaluating the watershed-scale hydrologic implications of alternative approaches to GIW conservation. Further, there is a need for methods that quantify the watershed-scale aggregate effects of GIWs to determine their regulatory status within the United States. We applied the Soil and Water Assessment Tool (SWAT), a popular watershed-scale hydrologic model, to represent the 1,700 km2 Pipestem Creek watershed in North Dakota, USA. We modified the model to incorporate an improved representation of GIW hydrologic processes via hydrologic response unit (HRU) redefinition and modifications to the model source code. We then used the model to evaluate the hydrologic effects of alternative approaches to GIW conservation prioritization by simulating the destruction/removal of GIWs by sub-classes defined by their relative position within the simulated fill-spill GIW network and their surface area characteristics. We evaluated the alternative conservation approaches as impacting (1) simulated streamflow at the Pipestem Creek watershed outlet; (2) simulated water-levels within the GIWs; and (3) simulated hydrologic connections between the GIWs. Our approach to modifying SWAT and evaluating alternative GIW conservation strategies may be replicated in different watersheds and physiographic regions to aid the development of GIW conservation priorities.

  17. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Treesearch

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...

  18. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    NASA Astrophysics Data System (ADS)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impaired water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrient management at the regional scale.

  19. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    This dissertation explores the efficacy of statistical post-processing methods applied downstream of dynamical model components, using a hierarchical multivariate Bayesian approach to ensemble model output statistics. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  20. Hierarchical algorithms for modeling the ocean on hierarchical architectures

    NASA Astrophysics Data System (ADS)

    Hill, C. N.

    2012-12-01

    This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
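
    The coupling pattern described above, where embedded high-resolution sub-models feed local corrections back to a coarse parent model without interacting with each other, can be sketched in plain Python as below. The class names, placeholder physics, and one-way coupling are illustrative assumptions standing in for the CUDA/MIC implementations mentioned in the abstract.

    ```python
    import numpy as np

    class EmbeddedSubModel:
        """Stand-in for a high-resolution limited-area sub-model that would run on an
        accelerator; here it simply refines the coarse state it is handed."""
        def __init__(self, refinement=4):
            self.refinement = refinement

        def step(self, coarse_cell_value, dt):
            # Fine-grained computation happens here (e.g. non-hydrostatic vertical motion);
            # return a correction to be fed back to the parent coarse cell.
            fine = np.full(self.refinement, coarse_cell_value)
            fine += 0.01 * np.sin(np.arange(self.refinement))   # placeholder fine-scale physics
            return fine.mean() - coarse_cell_value

    class CoarseOceanModel:
        """Stand-in for the basin-scale model running on the system CPU(s)."""
        def __init__(self, n_cells, sub_models):
            self.state = np.zeros(n_cells)
            self.sub_models = sub_models        # dict: cell index -> EmbeddedSubModel

        def step(self, dt):
            self.state += dt * 0.1              # placeholder coarse-scale dynamics
            # Local one-way coupling: each embedded sub-model feeds back a correction
            for cell, sub in self.sub_models.items():
                self.state[cell] += dt * sub.step(self.state[cell], dt)

    model = CoarseOceanModel(n_cells=10,
                             sub_models={3: EmbeddedSubModel(), 7: EmbeddedSubModel()})
    for _ in range(5):
        model.step(dt=1.0)
    print(model.state.round(3))
    ```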

  1. A regional-scale ecological risk framework for environmental flow evaluations

    NASA Astrophysics Data System (ADS)

    O'Brien, Gordon C.; Dickens, Chris; Hines, Eleanor; Wepener, Victor; Stassen, Retha; Quayle, Leo; Fouchy, Kelly; MacKenzie, James; Graham, P. Mark; Landis, Wayne G.

    2018-02-01

    Environmental flow (E-flow) frameworks advocate holistic, regional-scale, probabilistic E-flow assessments that consider flow and non-flow drivers of change in a socio-ecological context as best practice. Regional-scale ecological risk assessments of multiple stressors to social and ecological endpoints, which address ecosystem dynamism, have been undertaken internationally at different spatial scales using the relative-risk model since the mid-1990s. With the recent incorporation of Bayesian belief networks into the relative-risk model, a robust regional-scale ecological risk assessment approach is available that can contribute to achieving the best practice recommendations of E-flow frameworks. PROBFLO is a holistic E-flow assessment method that incorporates the relative-risk model and Bayesian belief networks (BN-RRM) into a transparent probabilistic modelling tool that addresses uncertainty explicitly. PROBFLO has been developed to evaluate the socio-ecological consequences of historical, current and future water resource use scenarios and generate E-flow requirements on regional spatial scales. The approach has been implemented in two regional-scale case studies in Africa where its flexibility and functionality have been demonstrated. In both case studies the evidence-based outcomes facilitated informed environmental management decision making, with trade-off considerations in the context of social and ecological aspirations. This paper presents the PROBFLO approach as applied to the Senqu River catchment in Lesotho and further developments and application in the Mara River catchment in Kenya and Tanzania. The 10 BN-RRM procedural steps incorporated in PROBFLO are demonstrated with examples from both case studies. PROBFLO can contribute to the adaptive management of water resources, support the allocation of resources for sustainable use, and address protection requirements.

  2. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly variable in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flow field, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and the variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  3. AIR QUALITY MODELING AT COARSE-TO-FINE SCALES IN URBAN AREAS

    EPA Science Inventory

    Urban air toxics control strategies are moving towards a community based modeling approach, with an emphasis on assessing those areas that experience high air toxic concentration levels, the so-called "hot spots". This approach will require information that accurately maps and...

  4. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    EPA Science Inventory

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  5. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    PubMed

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  6. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, interaction with virtual environments to analysis and visualizations. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
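
    The central idea of replacing the simulator's main integration loop can be illustrated with generic NEURON Python usage, in which an external loop drives the fixed-step integrator and arbitrary components can be interleaved between steps. This is a minimal sketch of standard NEURON calls, not the published framework's API; the monitoring hook is hypothetical.

    ```python
    from neuron import h

    h.load_file("stdrun.hoc")             # standard run system (provides t, dt, etc.)

    # A trivial single-compartment model defined through NEURON's native model language
    soma = h.Section(name="soma")
    soma.insert("hh")
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 1.0, 5.0, 0.1

    h.dt = 0.025                          # ms
    h.finitialize(-65.0)                  # mV

    # External main loop: a framework could interleave spike exchange, monitoring
    # or control components between calls to fadvance()
    while h.t < 20.0:
        h.fadvance()                      # advance the simulator by one time step
        # custom_monitoring_hook(h.t, soma(0.5).v)   # hypothetical plug-in point

    print("final t =", h.t, "ms, v =", soma(0.5).v, "mV")
    ```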

  7. Simultaneous estimation of local-scale and flow path-scale dual-domain mass transfer parameters using geoelectrical monitoring

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.

    2013-01-01

    Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of fluid conductivity and geophysical data, demonstrating multiple scales of mass transfer parameters can simultaneously be estimated. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating greater range than earlier work using temporal moments and a Lagrangian-based Damköhler number. The introduced Eulerian-based Damköhler is useful for estimating tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.

  8. Multi Length Scale Finite Element Design Framework for Advanced Woven Fabrics

    NASA Astrophysics Data System (ADS)

    Erol, Galip Ozan

    Woven fabrics are integral parts of many engineering applications spanning from personal protective garments to surgical scaffolds. They provide a wide range of opportunities in designing advanced structures because of their high tenacity, flexibility, high strength-to-weight ratios and versatility. These advantages result from their inherent multi scale nature where the filaments are bundled together to create yarns while the yarns are arranged into different weave architectures. Their highly versatile nature opens up potential for a wide range of mechanical properties which can be adjusted based on the application. While woven fabrics are viable options for design of various engineering systems, being able to understand the underlying mechanisms of the deformation and associated highly nonlinear mechanical response is important and necessary. However, the multiscale nature and relationships between these scales make the design process involving woven fabrics a challenging task. The objective of this work is to develop a multiscale numerical design framework using experimentally validated mesoscopic and macroscopic length scale approaches by identifying important deformation mechanisms and recognizing the nonlinear mechanical response of woven fabrics. This framework is exercised by developing mesoscopic length scale constitutive models to investigate plain weave fabric response under a wide range of loading conditions. A hyperelastic transversely isotropic yarn material model with transverse material nonlinearity is developed for woven yarns (commonly used in personal protection garments). The material properties/parameters are determined through an inverse method where unit cell finite element simulations are coupled with experiments. The developed yarn material model is validated by simulating full scale uniaxial tensile, bias extension and indentation experiments, and comparing to experimentally observed mechanical response and deformation mechanisms. Moreover, mesoscopic unit cell finite elements are coupled with a design-of-experiments method to systematically identify the important yarn material properties for the macroscale response of various weave architectures. To demonstrate the macroscopic length scale approach, two new material models for woven fabrics were developed. The Planar Material Model (PMM) utilizes two important deformation mechanisms in woven fabrics: (1) yarn elongation, and (2) relative yarn rotation due to shear loads. The yarns' uniaxial tensile response is modeled with a nonlinear spring using constitutive relations while a nonlinear rotational spring is implemented to define fabric's shear stiffness. The second material model, Sawtooth Material Model (SMM) adopts the sawtooth geometry while recognizing the biaxial nature of woven fabrics by implementing the interactions between the yarns. Material properties/parameters required by both PMM and SMM can be directly determined from standard experiments. Both macroscopic material models are implemented within an explicit finite element code and validated by comparing to the experiments. Then, the developed macroscopic material models are compared under various loading conditions to determine their accuracy. Finally, the numerical models developed in the mesoscopic and macroscopic length scales are linked thus demonstrating the new systematic design framework involving linked mesoscopic and macroscopic length scale modeling approaches. 
The approach is demonstrated with both Planar and Sawtooth Material Models and the simulation results are verified by comparing the results obtained from meso and macro models.

  9. Oil price and exchange rate co-movements in Asian countries: Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Hussain, Muntazir; Zebende, Gilney Figueira; Bashir, Usman; Donghong, Ding

    2017-01-01

    Most of the empirical literature investigates the relation between oil prices and exchange rates through different models. These models measure the relationship on two time scales (long and short term) and often fail to capture the co-movement of these variables at other time scales. We apply a detrended cross-correlation approach (DCCA) to investigate the co-movements of the oil price and exchange rate in 12 Asian countries. This method quantifies the co-movement of oil price and exchange rate at different time scales. The exchange rate and oil price time series exhibit unit root problems, which makes their correlation and cross-correlation difficult to measure; results can become spurious when periodic trends or unit roots are present. The DCCA approach measures the cross-correlation at different time scales while controlling for the unit root problem. Our empirical results support co-movement of oil prices and exchange rates, indicating a weak negative cross-correlation for most Asian countries included in our sample. The results have important monetary, fiscal, inflationary, and trade policy implications for these countries.
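
    A minimal sketch of the detrended cross-correlation coefficient computation is given below, using the usual construction: integrate the demeaned series, remove local linear trends in windows of a given length, and normalise the detrended covariance by the two detrended variances. Window handling (non-overlapping boxes) and the synthetic return series are simplifying assumptions.

    ```python
    import numpy as np

    def detrended_covariance(x, y, scale):
        """Average detrended covariance F2_xy(n) over non-overlapping windows of length `scale`.

        Both series are first integrated (cumulative sum of deviations from the mean);
        a linear trend is removed in each window before computing the covariance.
        """
        X = np.cumsum(x - x.mean())
        Y = np.cumsum(y - y.mean())
        n_windows = len(X) // scale
        t = np.arange(scale)
        cov = []
        for i in range(n_windows):
            xs = X[i * scale:(i + 1) * scale]
            ys = Y[i * scale:(i + 1) * scale]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # remove local linear trend
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            cov.append(np.mean(rx * ry))
        return np.mean(cov)

    def dcca_coefficient(x, y, scale):
        """DCCA cross-correlation coefficient rho(n) = F2_xy / (F_xx * F_yy)."""
        f2_xy = detrended_covariance(x, y, scale)
        f_xx = np.sqrt(detrended_covariance(x, x, scale))
        f_yy = np.sqrt(detrended_covariance(y, y, scale))
        return f2_xy / (f_xx * f_yy)

    # Synthetic series standing in for oil-price and exchange-rate returns
    rng = np.random.default_rng(1)
    common = rng.normal(size=2000)
    oil_returns = common + rng.normal(size=2000)
    fx_returns = -0.3 * common + rng.normal(size=2000)   # weak negative co-movement
    for n in (8, 32, 128):
        print(n, round(dcca_coefficient(oil_returns, fx_returns, n), 3))
    ```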

  10. Systems metabolic engineering: genome-scale models and beyond.

    PubMed

    Blazeck, John; Alper, Hal

    2010-07-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches--based on the data collected with high throughput technologies--to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems.
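
    Genome-scale metabolic reconstructions of the kind discussed are routinely queried with flux balance analysis, for example via the cobrapy package. The sketch below loads the small bundled E. coli "textbook" reconstruction as a stand-in for a genome-scale model and predicts the growth effect of a single gene knockout; the exact API calls assume a recent cobrapy version.

    ```python
    # pip install cobra   (cobrapy; API assumed from recent versions)
    from cobra.io import load_model

    # Small bundled E. coli "textbook" reconstruction stands in for a genome-scale model
    model = load_model("textbook")

    # Wild-type growth prediction by flux balance analysis
    wild_type = model.optimize().objective_value
    print(f"wild-type growth rate: {wild_type:.3f} 1/h")

    # Predict the effect of a single gene knockout; changes revert on exiting the block
    with model:
        model.genes.get_by_id("b1852").knock_out()       # zwf, illustrative target
        knockout = model.optimize().objective_value
    print(f"zwf knockout growth rate: {knockout:.3f} 1/h")
    ```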

  11. A multi-scale approach to designing therapeutics for tuberculosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  12. A multi-scale approach to designing therapeutics for tuberculosis

    DOE PAGES

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...

    2015-04-20

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impact our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  13. Patterns and multi-scale drivers of phytoplankton species richness in temperate peri-urban lakes.

    PubMed

    Catherine, Arnaud; Selma, Maloufi; Mouillot, David; Troussellier, Marc; Bernard, Cécile

    2016-07-15

    Local species richness (SR) is a key characteristic affecting ecosystem functioning. Yet, the mechanisms regulating phytoplankton diversity in freshwater ecosystems are not fully understood, especially in peri-urban environments where anthropogenic pressures strongly impact the quality of aquatic ecosystems. To address this issue, we sampled the phytoplankton communities of 50 lakes in the Paris area (France) characterized by a large gradient of physico-chemical and catchment-scale characteristics. We used large phytoplankton datasets to describe phytoplankton diversity patterns and applied a machine-learning algorithm to test the degree to which species richness patterns are potentially controlled by environmental factors. Selected environmental factors were studied at two scales: the lake-scale (e.g. nutrients concentrations, water temperature, lake depth) and the catchment-scale (e.g. catchment, landscape and climate variables). Then, we used a variance partitioning approach to evaluate the interaction between lake-scale and catchment-scale variables in explaining local species richness. Finally, we analysed the residuals of predictive models to identify potential vectors of improvement of phytoplankton species richness predictive models. Lake-scale and catchment-scale drivers provided similar predictive accuracy of local species richness (R(2)=0.458 and 0.424, respectively). Both models suggested that seasonal temperature variations and nutrient supply strongly modulate local species richness. Integrating lake- and catchment-scale predictors in a single predictive model did not provide increased predictive accuracy; therefore suggesting that the catchment-scale model probably explains observed species richness variations through the impact of catchment-scale variables on in-lake water quality characteristics. Models based on catchment characteristics, which include simple and easy to obtain variables, provide a meaningful way of predicting phytoplankton species richness in temperate lakes. This approach may prove useful and cost-effective for the management and conservation of aquatic ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
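
    The variance partitioning step can be sketched with the classical two-set decomposition of explained variance from three fitted models (lake-scale predictors only, catchment-scale predictors only, and both combined). The combined-model R2 used below is an assumed value consistent with the statement that integration did not improve accuracy; the two single-set values are those reported above.

    ```python
    def partition_variance(r2_lake, r2_catchment, r2_combined):
        """Classical two-set variance partitioning from model R^2 values.

        Returns the variance explained uniquely by lake-scale predictors, uniquely by
        catchment-scale predictors, jointly by both, and the unexplained residual.
        """
        unique_lake = r2_combined - r2_catchment
        unique_catchment = r2_combined - r2_lake
        shared = r2_lake + r2_catchment - r2_combined
        residual = 1.0 - r2_combined
        return unique_lake, unique_catchment, shared, residual

    # R^2 values for the single-scale models are from the abstract; the combined value is assumed
    print(partition_variance(r2_lake=0.458, r2_catchment=0.424, r2_combined=0.46))
    ```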

  14. Linking definitions, mechanisms, and modeling of drought-induced tree death.

    PubMed

    Anderegg, William R L; Berry, Joseph A; Field, Christopher B

    2012-12-01

    Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Atmospheric flow over two-dimensional bluff surface obstructions

    NASA Technical Reports Server (NTRS)

    Bitte, J.; Frost, W.

    1976-01-01

    The phenomenon of atmospheric flow over a two-dimensional surface obstruction, such as a building (modeled as a rectangular block, a fence or a forward-facing step), is analyzed by three methods: (1) an inviscid free streamline approach, (2) a turbulent boundary layer approach using an eddy viscosity turbulence model and a horizontal pressure gradient determined by the inviscid model, and (3) an approach using the full Navier-Stokes equations with three turbulence models; i.e., an eddy viscosity model, a turbulence kinetic-energy model and a two-equation model with an additional transport equation for the turbulence length scale. A comparison of the performance of the different turbulence models is given, indicating that only the two-equation model adequately accounts for the convective character of turbulence. Turbulent flow property predictions obtained from the turbulence kinetic-energy model with prescribed length scale are only insignificantly better than those obtained from the eddy viscosity model. A parametric study includes the effects of the variation of the characteristic parameters of the assumed logarithmic approach velocity profile. For the case of the forward-facing step, it is shown that in the downstream flow region an increase of the surface roughness gives rise to higher turbulence levels in the shear layer originating from the step corner.

  16. Multi-scale modelling of elastic moduli of trabecular bone

    PubMed Central

    Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz

    2012-01-01

    We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160
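
    As a minimal worked illustration of passing effective stiffness up such a hierarchy (the paper itself uses continuum micromechanics and laminate theory rather than these simple bounds), the Voigt and Reuss estimates for a two-phase level with mineral volume fraction f are

        E_{\mathrm{Voigt}} = f\,E_{\mathrm{mineral}} + (1 - f)\,E_{\mathrm{collagen}},
        \qquad
        \frac{1}{E_{\mathrm{Reuss}}} = \frac{f}{E_{\mathrm{mineral}}} + \frac{1 - f}{E_{\mathrm{collagen}}},

    and the effective modulus obtained at one level (e.g. the mineralized fibril) is then used as a phase property at the next level up.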

  17. Study of Varying Boundary Layer Height on Turret Flow Structures

    DTIC Science & Technology

    2011-06-01

    fluid dynamics. The difficulties of the problem arise in modeling several complex flow features including separation, reattachment, three-dimensional...impossible. In this case, the approach is to create a model to calculate the properties of interest. The main issue with resolving turbulent flows...operation and their effect is modeled through subgrid scale models. As a result, the most important turbulent scales are resolved and the

  18. Formulating a subgrid-scale breakup model for microbubble generation from interfacial collisions

    NASA Astrophysics Data System (ADS)

    Chan, Wai Hong Ronald; Mirjalili, Shahab; Urzay, Javier; Mani, Ali; Moin, Parviz

    2017-11-01

    Multiphase flows often involve impact events that engender important effects like the generation of a myriad of tiny bubbles that are subsequently transported in large liquid bodies. These impact events are created by large-scale phenomena like breaking waves on ocean surfaces, and often involve the relative approach of liquid surfaces. This relative motion generates continuously shrinking length scales as the entrapped gas layer thins and eventually breaks up into microbubbles. The treatment of this disparity in length scales is computationally challenging. In this presentation, a framework is presented that addresses a subgrid-scale (SGS) model aimed at capturing the process of microbubble generation. This work sets up the components in an overarching volume-of-fluid (VoF) toolset and investigates the analytical foundations of an SGS model for describing the breakup of a thin air film trapped between two approaching water bodies in a physical regime corresponding to Mesler entrainment. Constituents of the SGS model, such as the identification of impact events and the accurate computation of the local characteristic curvature in a VoF-based architecture, and the treatment of the air layer breakup, are discussed and illustrated in simplified scenarios. Supported by Office of Naval Research (ONR)/A*STAR (Singapore).

  19. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, which is a novel particle method in computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
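
    The step of turning per-sample strengths at one scale into a Weibull distribution that feeds the next scale can be sketched as below; the strength values and the median-rank/least-squares fitting choice are illustrative assumptions, not taken from the paper.

        # Sketch: fit Weibull parameters (modulus m, characteristic strength sigma0)
        # to strengths of representative small-scale samples; these parameters then
        # describe the material at the next, coarser scale level.
        import numpy as np

        strengths = np.array([312.0, 280.0, 341.0, 296.0, 325.0, 305.0, 288.0, 333.0])  # MPa, hypothetical
        s = np.sort(strengths)
        n = len(s)
        P = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank failure probabilities
        x = np.log(s)
        ylog = np.log(-np.log(1.0 - P))               # linearized Weibull CDF
        m, c = np.polyfit(x, ylog, 1)                 # slope = Weibull modulus m
        sigma0 = np.exp(-c / m)                       # characteristic strength
        print(f"Weibull modulus m = {m:.2f}, characteristic strength sigma0 = {sigma0:.1f} MPa")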

  20. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
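
    The consensus step can be illustrated with a small hedged sketch: scores from several QA methods are normalized and averaged to rank candidate models. The method count, model names, and scores below are placeholders, not the actual MULTICOM inputs or its exact combination rule.

        # Sketch: combine scores from multiple quality-assessment (QA) methods into a
        # consensus ranking of candidate structural models (higher score = better).
        import numpy as np

        qa_scores = {
            "model_A": [0.71, 0.65, 0.70],   # columns: three hypothetical QA methods
            "model_B": [0.58, 0.62, 0.55],
            "model_C": [0.69, 0.73, 0.66],
        }
        names = list(qa_scores)
        scores = np.array([qa_scores[n] for n in names])
        z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # normalize each QA method
        consensus = dict(zip(names, z.mean(axis=1)))              # average across methods
        for name in sorted(consensus, key=consensus.get, reverse=True):
            print(name, round(consensus[name], 3))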

  1. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions with various high-resolution data sets.

  2. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Goldgof, D B

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
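
    The abstract does not spell out the simplified regularization functional; a generic Tikhonov-type form, shown only to illustrate the trade-off between fitting the observed deformation and keeping the estimated modulus field smooth, is

        \min_{E}\;\; \bigl\lVert u_{\mathrm{FE}}(E) - u_{\mathrm{obs}} \bigr\rVert^{2}
        \;+\; \lambda \,\bigl\lVert L\,E \bigr\rVert^{2},

    where u_FE(E) is the finite element displacement prediction for a candidate modulus distribution E, u_obs the displacements tracked from the natural point features, L a smoothing operator, and lambda the regularization weight.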

  3. Dynamic-landscape metapopulation models predict complex response of wildlife populations to climate and landscape change

    Treesearch

    Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh

    2017-01-01

    The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...

  4. Comparison of statistical and theoretical habitat models for conservation planning: the benefit of ensemble prediction

    Treesearch

    D. Todd Jones-Farrand; Todd M. Fearer; Wayne E. Thogmartin; Frank R. Thompson; Mark D. Nelson; John M. Tirpak

    2011-01-01

    Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and...

  5. Least Squares Method for Equating Logistic Ability Scales: A General Approach and Evaluation. Iowa Testing Programs Occasional Papers, Number 30.

    ERIC Educational Resources Information Center

    Haebara, Tomokazu

    When several ability scales in item response models are separately derived from different test forms administered to different samples of examinees, these scales must be equated to a common scale because their units and origins are arbitrarily determined and generally different from scale to scale. A general method for equating logistic ability…

  6. Dimensionality of the 9-item Utrecht Work Engagement Scale revisited: A Bayesian structural equation modeling approach.

    PubMed

    Fong, Ted C T; Ho, Rainbow T H

    2015-01-01

    The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.
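
    The "small-variance informative priors" mentioned above can be written, for a cross-loading, roughly as

        \lambda_{jk}^{\mathrm{cross}} \sim \mathcal{N}\bigl(0,\ \sigma_{0}^{2}\bigr),
        \qquad \sigma_{0}^{2} \approx 0.01,

    which keeps cross-loadings near zero without fixing them exactly at zero; the variance value shown is a conventional choice in the BSEM literature and is not necessarily the one used in this study.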

  7. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
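
    A hedged sketch of the bit-flip emulation idea mentioned above: with some per-operation fault rate, one randomly chosen bit of a double-precision value is inverted. The fault rate, seeding, and helper name are illustrative, not the emulator actually used in the study.

        # Sketch: emulate a stochastic-processor fault by flipping one random bit of a
        # 64-bit float with probability `fault_rate` (illustrative implementation only).
        import random
        import struct

        def flip_random_bit(x: float, fault_rate: float) -> float:
            """Return x unchanged, or with one random bit of its IEEE-754 representation flipped."""
            if random.random() >= fault_rate:
                return x
            (bits,) = struct.unpack("<Q", struct.pack("<d", x))   # reinterpret float as 64-bit integer
            bits ^= 1 << random.randrange(64)                     # flip one randomly chosen bit
            (faulty,) = struct.unpack("<d", struct.pack("<Q", bits))
            return faulty

        random.seed(1)
        print(flip_random_bit(3.14159, fault_rate=1.0))   # fault_rate=1.0 forces a fault for the demo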

  8. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    NASA Technical Reports Server (NTRS)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  9. Module Degradation Mechanisms Studied by a Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter

    2016-11-21

    A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.

  10. Modelling approaches for evaluating multiscale tendon mechanics

    PubMed Central

    Fang, Fei; Lake, Spencer P.

    2016-01-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure–function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  11. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intra-granular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.

  12. Multi-scale occupancy estimation and modelling using multiple detection methods

    USGS Publications Warehouse

    Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.

    2008-01-01

    Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
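
    The two-scale structure described above can be written compactly; the notation is a sketch of the hierarchy rather than the authors' full likelihood. With

        \psi = \Pr(\text{unit used}), \qquad
        \theta = \Pr(\text{present at station} \mid \text{unit used}), \qquad
        p_{m} = \Pr(\text{detected by method } m \mid \text{present}),

    the probability of detecting the species at a station with at least one of M methods on a survey is

        \psi\,\theta\,\Bigl[\,1 - \prod_{m=1}^{M}\bigl(1 - p_{m}\bigr)\Bigr].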

  13. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems. There are already many theoretical studies elucidating this issue. However, scaling laws have been slow to find their way into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and try to answer this question. As a reference point, the presentation reviews the three major approaches for traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as their mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of the mode decomposition, adopting the segmentally-constant modes for the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling the subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations and the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing the scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in operational parameterization would be a different story. It is telling that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation. This problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities. We may even argue that the scaling law can provide almost complete subgrid-scale information in order to construct a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem, and it is known to be a tough one. We should also realize that studies of scaling behavior tend to be statistical in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly from a scaling law when only the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law in order to construct operational subgrid parameterizations in an effective manner.
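
    As a small, hedged illustration of the mode-decomposition idea discussed above, the sketch below computes a proper orthogonal decomposition (POD/EOF) of a synthetic snapshot matrix via the SVD and truncates it to a few leading modes; the data and truncation level are arbitrary.

        # Sketch: POD/EOF of a snapshot matrix (columns = states) via the SVD,
        # truncated to k leading modes, as a basis for a low-dimensional model.
        import numpy as np

        rng = np.random.default_rng(0)
        snapshots = rng.standard_normal((200, 50))        # synthetic data: 200 points, 50 snapshots

        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

        k = 3                                             # truncation level
        modes = U[:, :k]                                  # spatial POD modes
        coeffs = np.diag(s[:k]) @ Vt[:k, :]               # corresponding time coefficients
        captured = (s[:k] ** 2).sum() / (s ** 2).sum()
        print(f"variance captured by {k} modes: {captured:.1%}")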

  14. Constraining regional scale carbon budgets at the US West Coast using a high-resolution atmospheric inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Goeckede, M.; Michalak, A. M.; Vickers, D.; Turner, D.; Law, B.

    2009-04-01

    The study presented is embedded within the NACP (North American Carbon Program) West Coast project ORCA2, which aims at determining the regional carbon balance of the US states Oregon, California and Washington. Our work specifically focuses on the effect of disturbance history and climate variability, aiming at improving our understanding of the effects of e.g. drought stress and stand age on carbon sources and sinks in complex terrain with fine-scale variability in land cover types. The ORCA2 atmospheric inverse modeling approach has been set up to capture flux variability on the regional scale at high temporal and spatial resolution. Atmospheric transport is simulated by coupling the mesoscale model WRF (Weather Research and Forecast) with the STILT (Stochastic Time Inverted Lagrangian Transport) footprint model. This setup allows identifying sources and sinks that influence atmospheric observations with highly resolved mass transport fields and realistic turbulent mixing. Terrestrial biosphere carbon fluxes are simulated at spatial resolutions of up to 1 km and subdaily timesteps, considering effects of ecoregion, land cover type and disturbance regime on the carbon budgets. Our approach assimilates high-precision atmospheric CO2 concentration measurements and eddy-covariance data from several sites throughout the model domain, as well as high-resolution remote sensing products (e.g. Landsat, MODIS) and interpolated surface meteorology (DayMet, SOGS, PRISM). We present top-down modeling results that have been optimized using Bayesian inversion, reflecting the information on regional-scale carbon processes provided by the network of high-precision CO2 observations. We address the level of detail (e.g. spatial and temporal resolution) that can be resolved by top-down modeling on the regional scale, given the uncertainties introduced by various sources of model-data mismatch. Our results demonstrate the importance of accurate modeling of carbon-water coupling, with the representation of water availability and drought stress playing a dominant role in capturing spatially variable CO2 exchange rates in a region characterized by strong climatic gradients.
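
    The Bayesian inversion step referred to above is typically written as the minimization of a cost function of the form below; this is a generic statement of the method, not necessarily the exact formulation used in ORCA2:

        J(\mathbf{s}) \;=\; \tfrac{1}{2}\,(\mathbf{z} - \mathbf{H}\mathbf{s})^{\mathrm{T}} \mathbf{R}^{-1} (\mathbf{z} - \mathbf{H}\mathbf{s})
        \;+\; \tfrac{1}{2}\,(\mathbf{s} - \mathbf{s}_{p})^{\mathrm{T}} \mathbf{Q}^{-1} (\mathbf{s} - \mathbf{s}_{p}),

    where z holds the CO2 observations, s the surface fluxes to be estimated, s_p the prior (biosphere model) fluxes, H the transport operator built from the WRF-STILT footprints, and R and Q the observation and prior error covariances.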

  15. A systems biology approach to investigate the antimicrobial activity of oleuropein.

    PubMed

    Li, Xianhua; Liu, Yanhong; Jia, Qian; LaMacchia, Virginia; O'Donoghue, Kathryn; Huang, Zuyi

    2016-12-01

    Oleuropein and its hydrolysis products are olive phenolic compounds that have antimicrobial effects on a variety of pathogens, with the potential to be utilized in food and pharmaceutical products. While the existing research is mainly focused on individual genes or enzymes that are regulated by oleuropein for antimicrobial activities, little work has been done to integrate intracellular genes, enzymes and metabolic reactions for a systematic investigation of antimicrobial mechanism of oleuropein. In this study, the first genome-scale modeling method was developed to predict the system-level changes of intracellular metabolism triggered by oleuropein in Staphylococcus aureus, a common food-borne pathogen. To simulate the antimicrobial effect, an existing S. aureus genome-scale metabolic model was extended by adding the missing nitric oxide reactions, and exchange rates of potassium, phosphate and glutamate were adjusted in the model as suggested by previous research to mimic the stress imposed by oleuropein on S. aureus. The developed modeling approach was able to match S. aureus growth rates with experimental data for five oleuropein concentrations. The reactions with large flux change were identified and the enzymes of fifteen of these reactions were validated by existing research for their important roles in oleuropein metabolism. When compared with experimental data, the up/down gene regulations of 80% of these enzymes were correctly predicted by our modeling approach. This study indicates that the genome-scale modeling approach provides a promising avenue for revealing the intracellular metabolism of oleuropein antimicrobial properties.
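
    A hedged sketch with the cobrapy library of the kind of constraint change described above: load a genome-scale model, tighten an exchange flux to mimic stress, and compare predicted growth. The SBML file name, reaction identifier, and bound value are hypothetical placeholders, not the actual model edits made in the study.

        # Sketch: flux balance analysis before and after constraining an exchange
        # reaction in a genome-scale metabolic model (cobrapy).
        import cobra

        model = cobra.io.read_sbml_model("s_aureus_model.xml")      # hypothetical S. aureus model file
        baseline = model.optimize().objective_value                 # unstressed growth prediction

        model.reactions.get_by_id("EX_k_e").lower_bound = -0.1      # hypothetical potassium uptake limit
        stressed = model.optimize().objective_value                 # growth under the mimicked stress

        print(f"predicted growth: baseline {baseline:.3f} -> stressed {stressed:.3f}")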

  16. Generating clustered scale-free networks using Poisson based localization of edges

    NASA Astrophysics Data System (ADS)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which result in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity of scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.

  17. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.

  18. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    PubMed Central

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  19. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    PubMed

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.

  20. A Decade-long Continental-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-04-01

    The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. Using horizontal grid spacings of O(1km), they allow deep convection to be resolved explicitly, leading to an improved representation of the water cycle. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multicore CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation using the GPU-enabled COSMO version. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss the performance of the convection-resolving modeling approach on the European scale. Specifically, we focus on the annual cycle of convection in Europe, on the organization of convective clouds and on the verification of hourly rainfall with various high-resolution datasets.

  1. Fully Coupled Micro/Macro Deformation, Damage, and Failure Prediction for SiC/Ti-15-3 Laminates

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.; Lerch, Brad A.

    2001-01-01

    The deformation, failure, and low cycle fatigue life of SCS-6/Ti-15-3 composites are predicted using a coupled deformation and damage approach in the context of the analytical generalized method of cells (GMC) micromechanics model. The local effects of inelastic deformation, fiber breakage, fiber-matrix interfacial debonding, and fatigue damage are included as sub-models that operate on the micro scale for the individual composite phases. For the laminate analysis, lamination theory is employed as the global or structural scale model, while GMC is embedded to operate on the meso scale to simulate the behavior of the composite material within each laminate layer. While the analysis approach is quite complex and multifaceted, it is shown, through comparison with experimental data, to be quite accurate and realistic while remaining extremely efficient.

  2. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that produce large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
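
    A hedged sketch of the classical projection idea the method builds on (Löwdin-style folding of the high-energy block into the low-energy block); the paper's contribution is a numerical generalization of this, which the sketch does not reproduce, and the matrix, threshold, and reference energy below are synthetic.

        # Sketch: build an effective low-energy matrix by projecting out states with
        # large diagonal entries, then compare its lowest eigenvalue with the exact one.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 400
        A = rng.standard_normal((n, n)) * 0.1
        A = (A + A.T) / 2.0                                          # symmetric off-diagonal coupling
        A[np.diag_indices(n)] = np.sort(rng.uniform(0.0, 50.0, n))   # pronounced dominant diagonal

        exact = np.linalg.eigvalsh(A)[0]

        P = np.where(np.diag(A) < 10.0)[0]                   # retained low-energy states
        Q = np.where(np.diag(A) >= 10.0)[0]                  # projected-out high-energy states
        E0 = np.diag(A)[P].min()                             # fixed reference energy (iterated in practice)
        A_eff = A[np.ix_(P, P)] + A[np.ix_(P, Q)] @ np.linalg.solve(
            E0 * np.eye(len(Q)) - A[np.ix_(Q, Q)], A[np.ix_(Q, P)])
        approx = np.linalg.eigvalsh(A_eff)[0]
        print(f"exact lowest eigenvalue {exact:.4f} vs projected estimate {approx:.4f}")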

  3. A quality by design approach to scale-up of high-shear wet granulation process.

    PubMed

    Pandey, Preetanshu; Badawy, Sherif

    2016-01-01

    High-shear wet granulation is a complex process, which in turn makes scale-up a challenging task. Scale-up of the high-shear wet granulation process has been studied extensively, with various methodologies proposed in the literature. This review article discusses existing scale-up principles and categorizes the various approaches into two main scale-up strategies - parameter-based and attribute-based. With the advent of the quality by design (QbD) principle in the drug product development process, increased emphasis on the latter approach may be needed to ensure product robustness. In practice, a combination of both scale-up strategies is often utilized. In a QbD paradigm, there is also a need for an increased fundamental and mechanistic understanding of the process. This can be achieved either by increased experimentation, which comes at higher cost, or by using modeling techniques, which are also discussed as part of this review.

  4. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on a HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to the conventional simulation approach without significant loss of accuracy. The approach promises a comprehensive perspective considering short term dynamic behavior and long term degradation effects.

  5. Plant systems biology: network matters.

    PubMed

    Lucas, Mikaël; Laplaze, Laurent; Bennett, Malcolm J

    2011-04-01

    Systems biology is all about networks. A recent trend has been to associate systems biology exclusively with the study of gene regulatory or protein-interaction networks. However, systems biology approaches can be applied at many other scales, from the subatomic to the ecosystem scales. In this review, we describe studies at the sub-cellular, tissue, whole plant and crop scales and highlight how these studies can be related to systems biology. We discuss the properties of system approaches at each scale as well as their current limits, and pinpoint in each case advances unique to the considered scale but representing potential for the other scales. We conclude by examining plant models bridging different scales and considering the future prospects of plant systems biology. © 2011 Blackwell Publishing Ltd.

  6. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    PubMed

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  7. A Multilevel Bifactor Approach to Construct Validation of Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony

    2018-01-01

    Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…

  8. Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir

    2010-01-01

    A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…

  9. Plant leaf traits, canopy processes, and global atmospheric chemistry interactions.

    NASA Astrophysics Data System (ADS)

    Guenther, A. B.

    2017-12-01

    Plants produce and emit a diverse array of volatile metabolites into the atmosphere that participate in chemical reactions that influence distributions of air pollutants and short-lived climate forcers including organic aerosol, ozone and methane. It is now widely accepted that accurate estimates of these emissions are required as inputs for regional air quality and global climate models. Predicting these emissions is complicated by the large number of volatile organic compounds, driving variables (e.g., temperature, solar radiation, abiotic and biotic stresses) and processes operating across a range of scales. Modeling efforts to characterize emission magnitude and variations will be described along with an assessment of the observations available for parameterizing and evaluating these models including discussion of the limitations and challenges associated with existing model approaches. A new approach for simulating canopy scale organic emissions on regional to global scales will be described and compared with leaf, canopy and regional scale flux measurements. The importance of including additional compounds and processes as well as improving estimates of existing ones will also be discussed.

  10. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
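
    For the indirect strategy, the error propagation mentioned above follows the standard first-order rule; assuming the stock is computed from bulk density, depth, and carbon concentration (the usual components, though the abstract does not list them), one form is

        \mathrm{SOC\ stock} = \rho_{b}\, d\, c_{\mathrm{SOC}},
        \qquad
        \Bigl(\frac{\sigma_{\mathrm{stock}}}{\mathrm{stock}}\Bigr)^{2} \;\approx\;
        \Bigl(\frac{\sigma_{\rho_{b}}}{\rho_{b}}\Bigr)^{2} +
        \Bigl(\frac{\sigma_{d}}{d}\Bigr)^{2} +
        \Bigl(\frac{\sigma_{c}}{c_{\mathrm{SOC}}}\Bigr)^{2},

    with the relative errors of the component models combined in quadrature (assuming independence) to give the estimated-error map of the indirect approach.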

  11. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  12. Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.

    2017-12-01

    Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher resolution land surface process models. Calibration and validation of these products is restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic ray neutron method, typically referred to as a 'rover'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad scale survey was conducted at 36 x 36 km, covering an area of a standard SMAP pixel, and an intensive scale survey was conducted over a 10 x 10 km portion of the broad scale survey, which is at a scale equivalent to that used for national water balance modelling. We will describe the design of the rover, the methods used for converting neutron counts into soil moisture, and discuss factors controlling soil moisture variability. We found that the intensive scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad scale at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer scale soil moisture models.
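
    Converting neutron counts to soil moisture is commonly done with the calibration function of Desilets et al. (2010); whether this exact form was used for the rover surveys is not stated in the abstract, so it is shown only as an illustration:

        \theta(N) \;=\; \frac{a_{0}}{N / N_{0} - a_{1}} \;-\; a_{2},

    where N is the measured (corrected) neutron count rate, N_0 the count rate over dry soil obtained by calibration, and a_0, a_1, a_2 fixed shape parameters.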

  13. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    PubMed

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  14. Dynamic structural disorder in supported nanoscale catalysts

    NASA Astrophysics Data System (ADS)

    Rehr, J. J.; Vila, F. D.

    2014-04-01

    We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.

  15. A 100,000 Scale Factor Radar Range.

    PubMed

    Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser

    2017-12-19

    The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example of measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.

  16. A two-scale Weibull approach to the failure of porous ceramic structures made by robocasting: possibilities and limits

    PubMed Central

    Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.

    2012-01-01

    This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. The Weibull theory is used to deal with the statistical failure of the scaffolds’ constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. Although calibration based on rod-scale data leads to over-conservative predictions of the scaffold’s properties (as rods’ successive failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions for a wide range of rod spacing, i.e. of scaffold porosity, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing the mechanical properties of cellular ceramics. PMID:23439936
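
    To make the rod-scale statistics concrete, here is a minimal sketch of the size-scaled Weibull failure probability that such a calibration typically targets. The parameter values and function name are hypothetical, and the coupling with periodic homogenization used in the paper is not shown.

    ```python
    import numpy as np

    def weibull_failure_probability(stress, sigma0, m, volume, v0=1.0):
        """Failure probability of a rod under uniform stress (Weibull statistics).

        sigma0 : characteristic strength of the reference volume v0
        m      : Weibull modulus (scatter of rod strengths)
        volume : stressed volume of the rod, in the same units as v0
        """
        return 1.0 - np.exp(-(volume / v0) * (stress / sigma0) ** m)

    # Hypothetical numbers, for illustration only: the larger the stressed volume,
    # the higher the probability that a critical flaw is present.
    for vol in (1.0, 10.0):
        print(vol, weibull_failure_probability(stress=40.0, sigma0=60.0, m=5.0, volume=vol))
    ```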

  17. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)
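    The record does not restate the model being fitted; for reference, the weighted Euclidean (INDSCAL-type) distance underlying such scaling methods is sketched below, where each subject k applies its own weights to the dimensions of a common group space. The function name and example values are illustrative, not taken from the cited method.

    ```python
    import numpy as np

    def weighted_euclidean_distance(x_i, x_j, w_k):
        """Distance between stimuli i and j as perceived by subject k:
        d_ij(k) = sqrt( sum_a  w_ka * (x_ia - x_ja)**2 )."""
        diff = np.asarray(x_i, float) - np.asarray(x_j, float)
        return float(np.sqrt(np.sum(np.asarray(w_k, float) * diff ** 2)))

    # Two stimuli in a 2-D group space; this subject stretches dimension 1
    # and shrinks dimension 2.
    print(weighted_euclidean_distance([0.0, 0.0], [1.0, 1.0], w_k=[2.0, 0.5]))
    ```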

  18. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    PubMed

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
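
    The abstract notes that IPD allow sensitivity and specificity to be computed at every possible threshold before pooling. The sketch below shows only that per-study step with hypothetical data; the bivariate or multivariate meta-analytic pooling compared in the paper is not reproduced.

    ```python
    import numpy as np

    def sens_spec_by_threshold(scores, disease):
        """Sensitivity and specificity at every cut-off of an ordinal test score.

        scores  : array of test scores (e.g. questionnaire totals)
        disease : array of 0/1 reference-standard diagnoses
        A score >= threshold counts as screen-positive.
        """
        scores, disease = np.asarray(scores), np.asarray(disease)
        table = {}
        for t in np.unique(scores):
            positive = scores >= t
            sens = positive[disease == 1].mean()
            spec = (~positive)[disease == 0].mean()
            table[int(t)] = (sens, spec)
        return table

    # Toy single-study example with hypothetical data:
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 28, size=200)
    disease = (scores + rng.normal(0, 6, size=200) > 14).astype(int)
    print(sens_spec_by_threshold(scores, disease).get(10))
    ```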

  19. Hierarchical coarse-graining strategy for protein-membrane systems to access mesoscopic scales

    PubMed Central

    Ayton, Gary S.; Lyman, Edward

    2014-01-01

    An overall multiscale simulation strategy for large scale coarse-grain simulations of membrane protein systems is presented. The protein is modeled as a heterogeneous elastic network, while the lipids are modeled using the hybrid analytic-systematic (HAS) methodology, where in both cases atomistic level information obtained from molecular dynamics simulation is used to parameterize the model. A feature of this approach is that from the outset liposome length scales are employed in the simulation (i.e., on the order of ½ a million lipids plus protein). A route to develop highly coarse-grained models from molecular-scale information is proposed and results for N-BAR domain protein remodeling of a liposome are presented. PMID:20158037

  20. Overland flow connectivity on planar patchy hillslopes - modified percolation theory approaches and combinatorial model of urns

    NASA Astrophysics Data System (ADS)

    Nezlobin, David; Pariente, Sarah; Lavee, Hanoch; Sachs, Eyal

    2017-04-01

    Source-sink systems are very common in hydrology; in particular, some land cover types often generate runoff (e.g., embedded rocks, bare soil), while others obstruct it (e.g., vegetation, cracked soil). Surface runoff coefficients of patchy slopes/plots covered by runoff-generating and obstructing covers (e.g., bare soil and vegetation) depend critically on the percentage cover (i.e., the abundance of sources and sinks) and decrease strongly with observation scale. The classic mathematical percolation theory provides a powerful apparatus for describing runoff connectivity on patchy hillslopes, but it ignores the strong effect of overland flow directionality. To overcome this and other difficulties, modified percolation theory approaches can be considered, such as straight percolation (for planar slopes), quasi-straight percolation and models with limited obstruction. These approaches may explain both the observed critical dependence of runoff coefficients on percentage cover and their scale decrease in systems with strong flow directionality (e.g., planar slopes). The contributing area increases sharply when the runoff-generating percentage cover approaches the straight percolation threshold. This explains the strong increase of surface runoff and erosion for relatively low values (normally less than 35%) of the obstructing cover (e.g., vegetation). Combinatorial models of urns with restricted occupancy can be applied for the analytic evaluation of meaningful straight percolation quantities, such as the expected value of the NOGA (Non-Obstructed Generating Area) and the straight percolation probability. It is shown that the nature of the cover-related runoff scale decrease is combinatorial: the probability for the generated runoff to avoid obstruction in a unit area decreases with scale for non-trivial percentage cover values. The magnitude of the scale effect is found to be a skewed, non-monotonic function of the percentage cover. It is shown that the cover-related scale effect becomes less prominent if the obstructing capacity decreases, as generally occurs during heavy rainfalls. Plot width has a moderate positive statistical effect on runoff and erosion coefficients, since wider patchy plots have, on average, a greater normalized contributing area and a higher probability of producing runoff of a certain length. The effect of plot width depends in turn on the percentage cover, plot length, and the compared width scales. The contributing-area uncertainty brought about by the spatial arrangement of cover is examined, including its dependence on the percentage cover and scale. In general, modified percolation theory approaches and combinatorial models of urns with restricted occupancy may link the critical dependence of runoff on percentage cover, the cover-related scale effect, and the statistical uncertainty of the observed quantities.
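
    A minimal Monte Carlo sketch of the straight-percolation idea on a planar slope follows: flow runs straight downslope, and a generating cell contributes only if no obstructing cell lies between it and the slope base, which yields an estimate of the non-obstructed generating area (NOGA) fraction. The grid construction and names are illustrative and stand in for, rather than reproduce, the authors' analytic urn model.

    ```python
    import numpy as np

    def noga_fraction(p_generating, nrows, ncols, trials=200, seed=1):
        """Monte Carlo estimate of the Non-Obstructed Generating Area fraction.

        Each cell generates runoff with probability p_generating, otherwise it
        obstructs.  Flow runs straight down each column; a generating cell counts
        only if every cell downslope of it (same column) also generates.
        """
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(trials):
            generating = rng.random((nrows, ncols)) < p_generating
            # cumulative product from the downslope end upward: 1 only where the
            # whole straight path to the outlet is unobstructed
            unobstructed = np.cumprod(generating[::-1, :], axis=0)[::-1, :]
            total += unobstructed.mean()
        return total / trials

    # Scale effect: the same 70% generating cover yields a smaller connected
    # fraction on a longer slope.
    for nrows in (10, 50):
        print(nrows, round(noga_fraction(0.7, nrows, 50), 3))
    ```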

  1. Predicting monthly precipitation along coastal Ecuador: ENSO and transfer function models

    NASA Astrophysics Data System (ADS)

    de Guenni, Lelys B.; García, Mariangel; Muñoz, Ángel G.; Santos, José L.; Cedeño, Alexandra; Perugachi, Carlos; Castillo, José

    2017-08-01

    It is well known that El Niño-Southern Oscillation (ENSO) modifies precipitation patterns in several parts of the world. One of the most impacted areas is the western coast of South America, where Ecuador is located. El Niño events that occurred in 1982-1983, 1987-1988, 1991-1992, and 1997-1998 produced important positive rainfall anomalies in the coastal zone of Ecuador, bringing considerable damage to livelihoods, agriculture, and infrastructure. Operational climate forecasts in the region provide only seasonal scale (e.g., 3-month averages) information, but during ENSO events it is key for decision-makers to use reliable sub-seasonal scale forecasts, which at the present time are still non-existent in most parts of the world. This study analyzes the potential predictability of coastal Ecuador rainfall at monthly scale. Instead of the discrete approach that considers training models using only particular seasons, continuous (i.e., all available months are used) transfer function models are built using standard ENSO indices to explore rainfall forecast skill along the Ecuadorian coast and Galápagos Islands. The modeling approach considers a large-scale contribution, represented by the role of a sea-surface temperature index, and a local-scale contribution represented here via the use of previous precipitation observed in the same station. The study found that the Niño3 index is the best ENSO predictor of monthly coastal rainfall, with a lagged response varying from 0 months (simultaneous) for Galápagos up to 3 months for the continental locations considered. Model validation indicates that the skill is similar to the one obtained using principal component regression models for the same kind of experiments. It is suggested that the proposed approach could provide skillful rainfall forecasts at monthly scale for up to a few months in advance.
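
    As a rough illustration of the kind of transfer-function model described, the sketch below regresses monthly station rainfall on a lagged ENSO index (the large-scale term) plus the previous month's rainfall at the same station (the local term). This ordinary-least-squares stand-in is not the authors' exact formulation; the series, function name, and 3-month lag are illustrative.

    ```python
    import numpy as np

    def fit_transfer_function(rain, nino3, lag=3):
        """Least-squares fit of rain[t] = b0 + b1*nino3[t-lag] + b2*rain[t-1]."""
        rain, nino3 = np.asarray(rain, float), np.asarray(nino3, float)
        start = max(lag, 1)
        y = rain[start:]
        X = np.column_stack([
            np.ones_like(y),
            nino3[start - lag:len(rain) - lag],   # lagged large-scale predictor
            rain[start - 1:-1],                   # local persistence term
        ])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    # Synthetic illustration: "rainfall" responding to a lagged sinusoidal index.
    t = np.arange(240)
    nino3 = np.sin(2 * np.pi * t / 48)
    rain = 50 + 20 * np.roll(nino3, 3) + np.random.default_rng(2).normal(0, 5, t.size)
    print(fit_transfer_function(rain, nino3, lag=3))
    ```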

  2. Modeling responses of large-river fish populations to global climate change through downscaling and incorporation of predictive uncertainty

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia

    2012-01-01

    Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled through Regional Climate Models to watersheds, river hydrology, and ultimately population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community-level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.

  3. Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur

    2012-01-01

    One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on the CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1, and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems over their life cycles. The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, the regional-scale model with very high resolution, NASA Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall from the GCE model and NU-WRF.

  4. Scale Space for Camera Invariant Features.

    PubMed

    Puig, Luis; Guerrero, José J; Daniilidis, Kostas

    2014-09-01

    In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.

  5. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.

  6. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13’s approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.

  7. Kinetic roughening and porosity scaling in film growth with subsurface lateral aggregation.

    PubMed

    Reis, F D A Aarão

    2015-06-01

    We study surface and bulk properties of porous films produced by a model in which particles are incident perpendicularly on a substrate, interact with deposited neighbors along their trajectory, and aggregate laterally with probability of order a at each position. The model generalizes ballistic-like models by allowing attachment to particles below the outer surface. For small values of a, a crossover from uncorrelated deposition (UD) to correlated growth is observed. Simulations are performed in 1+1 and 2+1 dimensions. Extrapolation of effective exponents and comparison of roughness distributions confirm Kardar-Parisi-Zhang roughening of the outer surface for a>0. A scaling approach for small a predicts crossover times scaling as a^(-2/3) and local height fluctuations as a^(-1/3) at the crossover, independent of substrate dimension. These relations differ from those of all previously studied models with crossovers from UD to correlated growth, owing to the subsurface aggregation, which reduces the scaling exponents. The same approach predicts the porosity and average pore height to scale as a^(1/3) and a^(-1/3), respectively, in good agreement with simulation results in 1+1 and 2+1 dimensions. These results may be useful for modeling samples with desired porosity and long pores.

  8. Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

    PubMed Central

    Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen

    2017-01-01

    Many downscaling algorithms have been proposed to address the issue of the coarse resolution of land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled using a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and applicability in all seasons. The best (worst) applicability occurred in regions of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at medium and low spatial resolutions. PMID:28368301
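
    A minimal sketch of the adaptive-factor step described above follows: within one moving window, correlate coarse LST with each candidate scale factor, keep the factors whose |CC| exceeds a threshold, fit a multiple linear regression, and apply it to the fine-resolution factors. The arrays, threshold value, and function name are illustrative, and the residual handling and other details of the published scheme are omitted.

    ```python
    import numpy as np

    def downscale_window(coarse_lst, coarse_factors, fine_factors, cc_threshold=0.3):
        """One moving-window step of adaptive-factor LST downscaling.

        coarse_lst     : (n,) LST values of the coarse pixels in the window
        coarse_factors : (n, k) candidate scale factors aggregated to coarse pixels
        fine_factors   : (m, k) the same factors at the fine resolution
        """
        ccs = np.array([np.corrcoef(coarse_lst, coarse_factors[:, j])[0, 1]
                        for j in range(coarse_factors.shape[1])])
        selected = np.abs(ccs) >= cc_threshold            # adaptive factor selection
        X = np.column_stack([np.ones(len(coarse_lst)), coarse_factors[:, selected]])
        coef, *_ = np.linalg.lstsq(X, coarse_lst, rcond=None)
        Xf = np.column_stack([np.ones(len(fine_factors)), fine_factors[:, selected]])
        return Xf @ coef                                   # fine-resolution LST estimate

    # Toy window: 25 coarse pixels, 3 candidate factors, 400 fine pixels.
    rng = np.random.default_rng(3)
    cf = rng.normal(size=(25, 3)); ff = rng.normal(size=(400, 3))
    lst = 300 + 2.0 * cf[:, 0] - 1.0 * cf[:, 1] + rng.normal(0, 0.2, 25)
    print(downscale_window(lst, cf, ff).mean())
    ```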

  9. Predicted deep-sea coral habitat suitability for the U.S. West coast.

    PubMed

    Guinotte, John M; Davies, Andrew J

    2014-01-01

    Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts and to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Predicted habitat suitability over regional scales is not currently able to identify coral areas with pinpoint accuracy and probably overpredicts actual coral distribution due to model limitations and unincorporated variables (i.e. data on the distribution of hard substrate) that are known to limit coral distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled.

  10. Predicted Deep-Sea Coral Habitat Suitability for the U.S. West Coast

    PubMed Central

    Guinotte, John M.; Davies, Andrew J.

    2014-01-01

    Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts and to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Predicted habitat suitability over regional scales is not currently able to identify coral areas with pinpoint accuracy and probably overpredicts actual coral distribution due to model limitations and unincorporated variables (i.e. data on the distribution of hard substrate) that are known to limit coral distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled. PMID:24759613

  11. Multi-scale groundwater flow modeling during temperate climate conditions for the safety assessment of the proposed high-level nuclear waste repository site at Forsmark, Sweden

    NASA Astrophysics Data System (ADS)

    Joyce, Steven; Hartley, Lee; Applegate, David; Hoek, Jaap; Jackson, Peter

    2014-09-01

    Forsmark in Sweden has been proposed as the site of a geological repository for spent high-level nuclear fuel, to be located at a depth of approximately 470 m in fractured crystalline rock. The safety assessment for the repository has required a multi-disciplinary approach to evaluate the impact of hydrogeological and hydrogeochemical conditions close to the repository and in a wider regional context. Assessing the consequences of potential radionuclide releases requires quantitative site-specific information concerning the details of groundwater flow on the scale of individual waste canister locations (1-10 m) as well as details of groundwater flow and composition on the scale of groundwater pathways between the facility and the surface (500 m to 5 km). The purpose of this article is to provide an illustration of multi-scale modeling techniques and the results obtained when combining aspects of local-scale flows in fractures around a potential contaminant source with regional-scale groundwater flow and transport subject to natural evolution of the system. The approach set out is novel, as it incorporates both different scales of model and different levels of detail, combining discrete fracture network and equivalent continuous porous medium representations of fractured bedrock.

  12. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, the significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
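
    This is not the BT-FLEMO implementation, but a minimal sketch of how bagged regression trees yield a damage distribution rather than a point estimate: each bootstrap tree produces one prediction, and the spread across trees quantifies the uncertainty. The features and data are hypothetical, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor

    # Hypothetical training data: water depth (m), building area (m2), precaution
    # indicator -> relative loss of residential buildings (0-1).
    rng = np.random.default_rng(4)
    X = np.column_stack([rng.uniform(0, 3, 500),       # water depth
                         rng.uniform(80, 300, 500),     # building area
                         rng.integers(0, 2, 500)])      # precautionary measures
    y = np.clip(0.15 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0, 0.03, 500), 0, 1)

    # Bagged regression trees (scikit-learn's default base estimator is a tree).
    model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)

    # For one new land-use unit, the individual trees give a loss *distribution*,
    # not just a point estimate.
    x_new = np.array([[1.8, 150.0, 0.0]])
    tree_predictions = np.array([t.predict(x_new)[0] for t in model.estimators_])
    print(tree_predictions.mean(), np.percentile(tree_predictions, [5, 95]))
    ```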

  13. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth observing missions which employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed: the first is to use modeling tools to characterize the system, and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale-model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods, but each approach had its shortcomings, as will be detailed in this paper.

  14. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion.

    PubMed

    Gautestad, Arild O

    2012-09-07

    Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (the time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox, which arises from a composite Brownian motion consisting of a superposition of independent movement processes at different scales, may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated.

  15. Multi-Scale Approach for Predicting Fish Species Distributions across Coral Reef Seascapes

    PubMed Central

    Pittman, Simon J.; Brown, Kerry A.

    2011-01-01

    Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5–300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided ‘outstanding’ model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided ‘outstanding’ model predictions for two of five species, with the remaining three models considered ‘excellent’ (AUC = 0.8–0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management. PMID:21637787

  16. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    PubMed

    Pittman, Simon J; Brown, Kerry A

    2011-01-01

    Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management.

  17. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
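
    To make the modelling elements concrete, the block below sketches the GPD exceedance distribution together with a generic wide-sense scale-invariance relation of the kind used to transfer parameters across durations; the notation (threshold u_d, scale σ_d, shape ξ, exponent H) is illustrative, and the authors' quantile-based modification is not reproduced here.

    ```latex
    % GPD for exceedances x over the duration-dependent threshold u_d:
    \[
      F_d(x) \;=\; 1 - \Bigl(1 + \xi\,\tfrac{x}{\sigma_d}\Bigr)^{-1/\xi},
      \qquad x > 0,
    \]
    % with simple scale invariance assumed between durations d and \lambda d:
    \[
      u_{\lambda d} = \lambda^{H}\, u_d, \qquad
      \sigma_{\lambda d} = \lambda^{H}\, \sigma_d, \qquad
      \xi_{\lambda d} = \xi_d ,
    \]
    % where H is the scaling exponent whose (Bayesian) uncertainty is propagated
    % to the return levels of the shorter durations.
    ```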

  18. A Multiscale Survival Process for Modeling Human Activity Patterns.

    PubMed

    Zhang, Tianyang; Cui, Peng; Song, Chaoming; Zhu, Wenwu; Yang, Shiqiang

    2016-01-01

    Human activity plays a central role in understanding large-scale social dynamics. It is well documented that individual activity patterns follow bursty dynamics characterized by heavy-tailed interevent time distributions. Here we study a large-scale online chatting dataset consisting of 5,549,570 users, finding that individual activity patterns vary with timescale, whereas existing models only approximate empirical observations within a limited timescale. We propose a novel approach that models the intensity rate of an individual triggering an activity. We demonstrate that the model precisely captures the corresponding human dynamics across multiple timescales over five orders of magnitude. Our model also allows extracting the population heterogeneity of activity patterns, characterized by a set of individual-specific ingredients. Integrating our approach with social interactions leads to a wide range of implications.

  19. Combining agent-based modeling and life cycle assessment for the evaluation of mobility policies.

    PubMed

    Florent, Querini; Enrico, Benetto

    2015-02-03

    This article presents agent-based modeling (ABM) as a novel approach for consequential life cycle assessment (C-LCA) of large scale policies, more specifically mobility-related policies. The approach is validated at the Luxembourgish level (as a first case study). The agent-based model simulates the car market (sales, use, and dismantling) of the population of users in the period 2013-2020, following the implementation of different mobility policies and available electric vehicles. The resulting changes in the car fleet composition as well as the hourly uses of the vehicles are then used to derive consistent LCA results, representing the consequences of the policies. Policies will have significant environmental consequences: when using ReCiPe2008, we observe a decrease of global warming, fossil depletion, acidification, ozone depletion, and photochemical ozone formation and an increase of metal depletion, ionizing radiations, marine eutrophication, and particulate matter formation. The study clearly shows that the extrapolation of LCA results for the circulating fleet at national scale following the introduction of the policies from the LCAs of single vehicles by simple up-scaling (using hypothetical deployment scenarios) would be flawed. The inventory has to be directly conducted at full scale and to this aim, ABM is indeed a promising approach, as it allows identifying and quantifying emerging effects while modeling the Life Cycle Inventory of vehicles at microscale through the concept of agents.

  20. Multiscale Metabolic Modeling: Dynamic Flux Balance Analysis on a Whole-Plant Scale

    PubMed Central

    Grafahrend-Belau, Eva; Junker, Astrid; Eschenröder, André; Müller, Johannes; Schreiber, Falk; Junker, Björn H.

    2013-01-01

    Plant metabolism is characterized by a unique complexity on the cellular, tissue, and organ levels. On a whole-plant scale, changing source and sink relations accompanying plant development add another level of complexity to metabolism. With the aim of achieving a spatiotemporal resolution of source-sink interactions in crop plant metabolism, a multiscale metabolic modeling (MMM) approach was applied that integrates static organ-specific models with a whole-plant dynamic model. Allowing for a dynamic flux balance analysis on a whole-plant scale, the MMM approach was used to decipher the metabolic behavior of source and sink organs during the generative phase of the barley (Hordeum vulgare) plant. It reveals a sink-to-source shift of the barley stem caused by the senescence-related decrease in leaf source capacity, which is not sufficient to meet the nutrient requirements of sink organs such as the growing seed. The MMM platform represents a novel approach for the in silico analysis of metabolism on a whole-plant level, allowing for a systemic, spatiotemporally resolved understanding of metabolic processes involved in carbon partitioning, thus providing a novel tool for studying yield stability and crop improvement. PMID:23926077
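
    For readers unfamiliar with the flux balance analysis underlying each static organ model, here is a toy three-reaction sketch of the optimization involved: maximize a biomass flux subject to steady-state mass balance and flux bounds. The network, bounds, and values are invented for illustration and bear no relation to the barley models; scipy is assumed to be available, and the dynamic whole-plant coupling of the MMM platform is not shown.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy flux balance analysis: maximize the biomass flux v_bio subject to the
    # steady-state condition S v = 0 and flux bounds.  Metabolites: A, B.
    # Reactions (columns): v_uptake (-> A), v_AB (A -> B), v_bio (B -> biomass).
    S = np.array([[1, -1,  0],
                  [0,  1, -1]], dtype=float)
    bounds = [(0, 10), (0, None), (0, None)]   # substrate uptake capped at 10
    c = [0.0, 0.0, -1.0]                       # linprog minimizes, so use -v_bio

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)            # expected: [10. 10. 10.]
    ```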

  1. Challenges in Global Land Use/Land Cover Change Modeling

    NASA Astrophysics Data System (ADS)

    Clarke, K. C.

    2011-12-01

    For the purposes of projecting and anticipating human-induced land use change at the global scale, much work remains in the systematic mapping and modeling of world-wide land uses and their related dynamics. In particular, research has focused on tropical deforestation, loss of prime agricultural land, loss of wild land and open space, and the spread of urbanization. Fifteen years of experience in modeling land use and land cover change at the regional and city level with the cellular automata model SLEUTH, including cross city and regional comparisons, has led to an ability to comment on the challenges and constraints that apply to global level land use change modeling. Some issues are common to other modeling domains, such as scaling, earth geometry, and model coupling. Others relate to geographical scaling of human activity, while some are issues of data fusion and international interoperability. Grid computing now offers the prospect of global land use change simulation. This presentation summarizes what barriers face global scale land use modeling, but also highlights the benefits of such modeling activity on global change research. An approach to converting land use maps and forecasts into environmental impact measurements is proposed. Using such an approach means that multitemporal mapping, often using remotely sensed sources, and forecasting can also yield results showing the overall and disaggregated status of the environment.

  2. Mathematical and computational approaches can complement experimental studies of host-pathogen interactions.

    PubMed

    Kirschner, Denise E; Linderman, Jennifer J

    2009-04-01

    In addition to traditional and novel experimental approaches to study host-pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can or has already pushed our knowledge of host-pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host-pathogen interactions.

  3. Measures of Agreement Between Many Raters for Ordinal Classifications

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2015-01-01

    Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to the wide variability observed between physicians’ ratings, many large-scale studies have been conducted to quantify agreement between multiple experts’ ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients’ test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449

  4. Electromagnetic scaling functions within the Green's function Monte Carlo approach

    DOE PAGES

    Rocco, N.; Alvarez-Ruso, L.; Lovato, A.; ...

    2017-07-24

    We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. In conclusion, the scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.

  5. Electromagnetic scaling functions within the Green's function Monte Carlo approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, N.; Alvarez-Ruso, L.; Lovato, A.

    We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. In conclusion, the scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.

  6. Subgrid-scale parameterization and low-frequency variability: a response theory approach

    NASA Astrophysics Data System (ADS)

    Demaeyer, Jonathan; Vannitsem, Stéphane

    2016-04-01

    Weather and climate models are limited in the possible range of resolved spatial and temporal scales. However, due to the huge space- and time-scale ranges involved in the Earth System dynamics, the effects of many sub-grid processes should be parameterized. These parameterizations have an impact on the forecasts or projections. It could also affect the low-frequency variability present in the system (such as the one associated to ENSO or NAO). An important question is therefore to know what is the impact of stochastic parameterizations on the Low-Frequency Variability generated by the system and its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on the Ruelle's response theory and proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase-space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach of scale separation opens new avenues of subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03 003.

  7. A systems approach to assess farm-scale nutrient and trace element dynamics: a case study at the Ojebyn dairy farm.

    PubMed

    Oborn, Ingrid; Modin-Edman, Anna-Karin; Bengtsson, Helena; Gustafson, Gunnela M; Salomon, Eva; Nilsson, S Ingvar; Holmqvist, Johan; Jonsson, Simon; Sverdrup, Harald

    2005-06-01

    A systems analysis approach was used to assess farm-scale nutrient and trace element sustainability by combining full-scale field experiments with specific studies of nutrient release from mineral weathering and trace-element cycling. At the Ojebyn dairy farm in northern Sweden, a farm-scale case study including phosphorus (P), potassium (K), and zinc (Zn) was run to compare organic and conventional agricultural management practices. By combining different element-balance approaches (at farm-gate, barn, and field scales) and further adapting these to the FARMFLOW model, we were able to combine mass flows and pools within the subsystems and establish links between subsystems in order to make farm-scale predictions. It was found that internal element flows on the farm are large and that there are farm-internal sources (Zn) and loss terms (K). The approaches developed and tested at the Ojebyn farm are promising and considered generally adaptable to any farm.

  8. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that, compared to single models, model combinations improve performance in key statistics such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
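
    A minimal sketch of the microcanonical multiplicative random cascade used for the disaggregation step follows: each interval's depth is split between its two halves with a random weight, so mass is conserved exactly at every level. The Beta-distributed splitting weights and the parameter value are illustrative placeholders, not the calibrated generator (in particular, intermittency via zero weights is omitted).

    ```python
    import numpy as np

    def microcanonical_cascade(daily_depth, levels=7, beta_param=2.0, seed=5):
        """Disaggregate one daily depth into 2**levels sub-intervals.

        At each level every interval is split in two with weights (w, 1 - w),
        w ~ Beta(beta_param, beta_param), so total depth is exactly conserved.
        """
        rng = np.random.default_rng(seed)
        depths = np.array([daily_depth], dtype=float)
        for _ in range(levels):
            w = rng.beta(beta_param, beta_param, size=depths.size)
            depths = np.column_stack([depths * w, depths * (1 - w)]).ravel()
        return depths

    series = microcanonical_cascade(24.0, levels=7)      # 128 values, ~11-min steps
    print(series.size, round(series.sum(), 6))           # mass conserved: 24.0
    ```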

  9. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    NASA Astrophysics Data System (ADS)

    Vlah, Zvonimir; Seljak, Uroš; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias

    2012-11-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moments correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.

  10. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100.000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchments models was developed reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables to derive very long discharge series at many catchment locations serving as a basic for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained by 53-year observation data at 528 stations covering not only the complete Germany but also parts of France, Switzerland, Czech Republic and Australia with the aggregated spatial scale of 443,931 km2. 10.000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation this model chain is then used to estimate flood quantiles for the whole Germany including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several drawbacks reported in traditional approaches for the derived flood frequency analysis and therefore is recommended for large scale flood risk case studies.

  11. A test-bed modeling study for wave resource assessment

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.

    2016-02-01

    Hindcasts from phase-averaged wave models are commonly used to estimate standard statistics used in wave energy resource assessments. However, the research community and wave energy converter industry is lacking a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development, and at different spatial scales, e.g., from small-scale pilot study to large-scale commercial deployment. Therefore, it is necessary to evaluate current wave model codes, as well as limitations and knowledge gaps for predicting sea states, in order to establish best wave modeling practices, and to identify future research needs to improve wave prediction for resource assessment. This paper presents the first phase of an on-going modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the Central Oregon Coast using two of the most widely-used third-generation wave models - WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimension ranging from global to regional scales, was used to provide wave spectral boundary condition to a local scale model domain, which has a spatial dimension around 60km by 60km and a grid resolution of 250m - 300m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for the six wave parameters, including omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and the best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.

  12. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.

  13. Macro Scale Independently Homogenized Subcells for Modeling Braided Composites

    NASA Technical Reports Server (NTRS)

    Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.

    2012-01-01

    An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.

  14. An individual-based model of skipjack tuna (Katsuwonus pelamis) movement in the tropical Pacific ocean

    NASA Astrophysics Data System (ADS)

    Scutt Phillips, Joe; Sen Gupta, Alex; Senina, Inna; van Sebille, Erik; Lange, Michael; Lehodey, Patrick; Hampton, John; Nicol, Simon

    2018-05-01

    The distribution of marine species is often modeled using Eulerian approaches, in which changes to population density or abundance are calculated at fixed locations in space. Conversely, Lagrangian, or individual-based, models simulate the movement of individual particles moving in continuous space, with broader-scale patterns such as distribution being an emergent property of many, potentially adaptive, individuals. These models offer advantages in examining dynamics across spatiotemporal scales and making comparisons with observations from individual-scale data. Here, we introduce and describe such a model, the Individual-based Kinesis, Advection and Movement of Ocean ANimAls model (Ikamoana), which we use to replicate the movement processes of an existing Eulerian model for marine predators (the Spatial Ecosystem and Population Dynamics Model, SEAPODYM). Ikamoana simulates the movement of either individual or groups of animals by physical ocean currents, habitat-dependent stochastic movements (kinesis), and taxis movements representing active searching behaviours. Applying our model to Pacific skipjack tuna (Katsuwonus pelamis), we show that it accurately replicates the evolution of density distribution simulated by SEAPODYM with low time-mean error and a spatial correlation of density that exceeds 0.96 at all times. We demonstrate how the Lagrangian approach permits easy tracking of individuals' trajectories for examining connectivity between different regions, and show how the model can provide independent estimates of transfer rates between commonly used assessment regions. In particular, we find that retention rates in most assessment regions are considerably smaller (up to a factor of 2) than those estimated by this population of skipjack's primary assessment model. Moreover, these rates are sensitive to ocean state (e.g. El Nino vs La Nina) and so assuming fixed transfer rates between regions may lead to spurious stock estimates. A novel feature of the Lagrangian approach is that individual schools can be tracked through time, and we demonstrate that movement between two assessment regions at broad temporal scales includes extended transits through other regions at finer-scales. Finally, we discuss the utility of this modeling framework for the management of marine reserves, designing effective monitoring programmes, and exploring hypotheses regarding the behaviour of hard-to-observe oceanic animals.

  15. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    NASA Astrophysics Data System (ADS)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20-years and 1,500-years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why using innovative techniques customised for broad scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some appropriate uses of global scale hazard maps and explore how this new approach can be invaluable in areas of the world where flood hazard and risk have not previously been assessed.

  16. Predictions of Bedforms in Tidal Inlets and River Mouths

    DTIC Science & Technology

    2016-07-31

    that community modeling environment. APPROACH Bedforms are ubiquitous in unconsolidated sediments . They act as roughness elements, altering the...flow and creating feedback between the bed and the flow and, in doing so, they are intimately tied to erosion, transport and deposition of sediments ...With this approach, grain-scale sediment transport is parameterized with simple rules to drive bedform-scale dynamics. Gallagher (2011) developed a

  17. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-09-01

    The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Using horizontal grid spacings of O(1km), convection-resolving weather and climate models allows one to explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model.Here we present the convection-resolving COSMO model on continental scales using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season 2006. Its results corroborate the findings found on smaller domains such as more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intensive hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold frontal structures, convection embedded in fronts and small eddies, or the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware. With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.

  18. Multiphase modeling of geologic carbon sequestration in saline aquifers.

    PubMed

    Bandilla, Karl W; Celia, Michael A; Birkholzer, Jens T; Cihan, Abdullah; Leister, Evan C

    2015-01-01

    Geologic carbon sequestration (GCS) is being considered as a climate change mitigation option in many future energy scenarios. Mathematical modeling is routinely used to predict subsurface CO2 and resident brine migration for the design of injection operations, to demonstrate the permanence of CO2 storage, and to show that other subsurface resources will not be degraded. Many processes impact the migration of CO2 and brine, including multiphase flow dynamics, geochemistry, and geomechanics, along with the spatial distribution of parameters such as porosity and permeability. In this article, we review a set of multiphase modeling approaches with different levels of conceptual complexity that have been used to model GCS. Model complexity ranges from coupled multiprocess models to simplified vertical equilibrium (VE) models and macroscopic invasion percolation models. The goal of this article is to give a framework of conceptual model complexity, and to show the types of modeling approaches that have been used to address specific GCS questions. Application of the modeling approaches is shown using five ongoing or proposed CO2 injection sites. For the selected sites, the majority of GCS models follow a simplified multiphase approach, especially for questions related to injection and local-scale heterogeneity. Coupled multiprocess models are only applied in one case where geomechanics have a strong impact on the flow. Owing to their computational efficiency, VE models tend to be applied at large scales. A macroscopic invasion percolation approach was used to predict the CO2 migration at one site to examine details of CO2 migration under the caprock. © 2015, National Ground Water Association.

  19. Contribution of regional-scale fire events to ozone and PM2.5 air quality estimated by photochemical modeling approaches

    EPA Science Inventory

    Two specific fires from 2011 are tracked for local to regional scale contribution to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, Spare Matrix Operator Kernel Emissions (...

  20. Modeling the impact of the nitrate contamination on groundwater at the groundwater body scale : The Geer basin case study (Invited)

    NASA Astrophysics Data System (ADS)

    Brouyere, S.; Orban, P.; Hérivaux, C.

    2009-12-01

    In the next decades, groundwater managers will have to face regional degradation of the quantity and quality of groundwater under pressure of land-use and socio-economic changes. In this context, the objectives of the European Water Framework Directive require that groundwater be managed at the scale of the groundwater body, taking into account not only all components of the water cycle but also the socio-economic impact of these changes. One of the main challenges remains to develop robust and efficient numerical modeling applications at such a scale and to couple them with economic models, as a support for decision support in groundwater management. An integrated approach between hydrogeologists and economists has been developed by coupling the hydrogeological model SUFT3D and a cost-benefit economic analysis to study the impact of agricultural practices on groundwater quality and to design cost-effective mitigation measures to decrease nitrate pressure on groundwater so as to ensure the highest benefit to the society. A new modeling technique, the ‘Hybrid Finite Element Mixing Cell’ approach has been developed for large scale modeling purposes. The principle of this method is to fully couple different mathematical and numerical approaches to solve groundwater flow and solute transport problems. The mathematical and numerical approaches proposed allows an adaptation to the level of local hydrogeological knowledge and the amount of available data. In combination with long time series of nitrate concentrations and tritium data, the regional scale modelling approach has been used to develop a 3D spatially distributed groundwater flow and solute transport model for the Geer basin (Belgium) of about 480 km2. The model is able to reproduce the spatial patterns of nitrate concentrations together nitrate trends with time. The model has then been used to predict the future evolution of nitrate trends for two types of scenarios: (i) a “business as usual scenario” where current polluting pressures remain the same and (ii) two contrasted scenarios that simulate the implementation of programs of measures aiming at reaching good chemical status. The results of the hydrogeological model under the “business as usual scenario” have been used to assess the cost for the society of the continuous degradation of the groundwater quality. The results of the hydrogeological model under the two contrasted scenarios have been used to assess the economical benefit as avoided damage resulting from the decrease in the nitrate load. A cost-benefit analysis has been thus performed to assess the programme of mitigation measures which provides the largest benefits at the lowest cost.

  1. Phase boundaries of power-law Anderson and Kondo models: A poor man's scaling study

    NASA Astrophysics Data System (ADS)

    Cheng, Mengxing; Chowdhury, Tathagata; Mohammed, Aaron; Ingersent, Kevin

    2017-07-01

    We use the poor man's scaling approach to study the phase boundaries of a pair of quantum impurity models featuring a power-law density of states ρ (ɛ ) ∝|ɛ| r , either vanishing (for r >0 ) or diverging (for r <0 ) at the Fermi energy ɛ =0 , that gives rise to quantum phase transitions between local-moment and Kondo-screened phases. For the Anderson model with a pseudogap (i.e., r >0 ), we find the phase boundary for (a) 0 1 , where the phases are separated by first-order quantum phase transitions that are accessible only for broken p-h symmetry. For the p-h-symmetric Kondo model with easy-axis or easy-plane anisotropy of the impurity-band spin exchange, the phase boundary and scaling trajectories are obtained for both r >0 and r <0 . Throughout the regime of weak-to-moderate impurity-band coupling in which poor man's scaling is expected to be valid, the approach predicts phase boundaries in excellent qualitative and good quantitative agreement with the nonperturbative numerical renormalization group, while also establishing the functional relations between model parameters along these boundaries.

  2. Reconstruction of 24 Penicillium genome-scale metabolic models shows diversity based on their secondary metabolism.

    PubMed

    Prigent, Sylvain; Nielsen, Jens Christian; Frisvad, Jens Christian; Nielsen, Jens

    2018-06-05

    Modelling of metabolism at the genome-scale have proved to be an efficient method for explaining observed phenotypic traits in living organisms. Further, it can be used as a means of predicting the effect of genetic modifications e.g. for development of microbial cell factories. With the increasing amount of genome sequencing data available, a need exists to accurately and efficiently generate such genome-scale metabolic models (GEMs) of non-model organisms, for which data is sparse. In this study, we present an automatic reconstruction approach applied to 24 Penicillium species, which have potential for production of pharmaceutical secondary metabolites or used in the manufacturing of food products such as cheeses. The models were based on the MetaCyc database and a previously published Penicillium GEM, and gave rise to comprehensive genome-scale metabolic descriptions. The models proved that while central carbon metabolism is highly conserved, secondary metabolic pathways represent the main diversity among the species. The automatic reconstruction approach presented in this study can be applied to generate GEMs of other understudied organisms, and the developed GEMs are a useful resource for the study of Penicillium metabolism, for example with the scope of developing novel cell factories. This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.

  3. Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise

    PubMed Central

    Murakami, Hisashi

    2018-01-01

    Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small sub groups, and high alignment seems to preclude shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions. PMID:29689074

  4. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    PubMed

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R(2) values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R(2) were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.

  5. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  6. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  7. A New Approach to Satisfy Dynamic Similarity for Model Submarine Maneuvers

    DTIC Science & Technology

    2007-11-28

    part of the Scaling Task of the FY07 6.1 Turbulence and Stratified Wakes Program (Program Element 0601153N). Introduction The Radio-Controlled Model (RCM...a smaller force and moment than a full scale rudder. This Reynolds scale effect is associated with the boundary layer velocity deficit . 0.300 0250...layer velocity deficit term, namely q = 1. It is further noted from unsteady experimental data that the flow angles associated with flow separation

  8. ATMOSPHERIC AMMONIA EMISSIONS FROM THE LIVESTOCK SECTOR: DEVELOPMENT AND EVALUATION OF A PROCESS-BASED MODELING APPROACH

    EPA Science Inventory

    We propose multi-faceted research to enhance our understanding of NH3 emissions from livestock feeding operations. A process-based emissions modeling approach will be used, and we will investigate ammonia emissions from the scale of the individual farm out to impacts on region...

  9. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  10. Development of a hybrid 3-D hydrological model to simulate hillslopes and the regional unconfined aquifer system in Earth system models

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Brunke, M.; Gochis, D.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    The terrestrial hydrological system, including surface and subsurface water, is an essential component of the Earth's climate system. Over the past few decades, land surface modelers have built one-dimensional (1D) models resolving the vertical flow of water through the soil column for use in Earth system models (ESMs). These models generally have a relatively coarse model grid size (~25-100 km) and only account for sub-grid lateral hydrological variations using simple parameterization schemes. At the same time, hydrologists have developed detailed high-resolution (~0.1-10 km grid size) three dimensional (3D) models and showed the importance of accounting for the vertical and lateral redistribution of surface and subsurface water on soil moisture, the surface energy balance and ecosystem dynamics on these smaller scales. However, computational constraints have limited the implementation of the high-resolution models for continental and global scale applications. The current work presents a hybrid-3D hydrological approach is presented, where the 1D vertical soil column model (available in many ESMs) is coupled with a high-resolution lateral flow model (h2D) to simulate subsurface flow and overland flow. H2D accounts for both local-scale hillslope and regional-scale unconfined aquifer responses (i.e. riparian zone and wetlands). This approach was shown to give comparable results as those obtained by an explicit 3D Richards model for the subsurface, but improves runtime efficiency considerably. The h3D approach is implemented for the Delaware river basin, where Noah-MP land surface model (LSM) is used to calculated vertical energy and water exchanges with the atmosphere using a 10km grid resolution. Noah-MP was coupled within the WRF-Hydro infrastructure with the lateral 1km grid resolution h2D model, for which the average depth-to-bedrock, hillslope width function and soil parameters were estimated from digital datasets. The ability of this h3D approach to simulate the hydrological dynamics of the Delaware River basin will be assessed by comparing the model results (both hydrological performance and numerical efficiency) with the standard setup of the NOAH-MP model and a high-resolution (1km) version of NOAH-MP, which also explicitly accounts for lateral subsurface and overland flow.

  11. Towards multiscale modeling of influenza infection

    PubMed Central

    Murillo, Lisa N.; Murillo, Michael S.; Perelson, Alan S.

    2013-01-01

    Aided by recent advances in computational power, algorithms, and higher fidelity data, increasingly detailed theoretical models of infection with influenza A virus are being developed. We review single scale models as they describe influenza infection from intracellular to global scales, and, in particular, we consider those models that capture details specific to influenza and can be used to link different scales. We discuss the few multiscale models of influenza infection that have been developed in this emerging field. In addition to discussing modeling approaches, we also survey biological data on influenza infection and transmission that is relevant for constructing influenza infection models. We envision that, in the future, multiscale models that capitalize on technical advances in experimental biology and high performance computing could be used to describe the large spatial scale epidemiology of influenza infection, evolution of the virus, and transmission between hosts more accurately. PMID:23608630

  12. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE PAGES

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    2017-08-23

    Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a texturalmore » approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO 2) migration and resulting saturation distribution.« less

  13. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model s soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model s climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.

  14. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a texturalmore » approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO 2) migration and resulting saturation distribution.« less

  15. Estimating groundwater extraction in a data-sparse coal seam gas region, Australia

    NASA Astrophysics Data System (ADS)

    Keir, Greg; Bulovic, Nevenka; McIntyre, Neil

    2017-04-01

    The semi-arid Surat and Bowen Basins in central Queensland, Australia, are groundwater resources of both national and regional significance. Regional towns, agricultural industries and communities are heavily dependent on the 30 000+ groundwater supply bores for their existence; however groundwater extraction measurements are rare in this area and primarily limited to small irrigation regions. Accordingly, regional groundwater extraction is not well understood, and this may have implications for regional numerical groundwater modelling and impact assessments associated with recent coal seam gas developments. Here we present a novel statistical approach to model regional groundwater extraction that merges flow measurements / estimates with other more commonly available spatial datasets that may be of value, such as climate data, pasture data, surface water availability, etc. A three step modelling approach, combining a property scale magnitude model, a bore scale occurrence model, and a proportional distribution model within properties, is used to estimate bore extraction. We describe the process of model development and selection, and present extraction results on an aquifer-by-aquifer basis suitable for numerical groundwater modelling. Lastly, we conclude with recommendations for future research, particularly related to improvement of attribution of property-scale water demand, and temporal variability in water usage.

  16. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new unprecedentedly large 16 member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with a non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources taking also into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.

  17. Transport theory and the WKB approximation for interplanetary MHD fluctuations

    NASA Technical Reports Server (NTRS)

    Matthaeus, William H.; Zhou, YE; Zank, G. P.; Oughton, S.

    1994-01-01

    An alternative approach, based on a multiple scale analysis, is presented in order to reconcile the traditional Wentzel-Kramer-Brillouin (WKB) approach to the modeling of interplanetary fluctuations in a mildly inhomogeneous large-scale flow with a more recently developed transport theory. This enables us to compare directly, at a formal level, the inherent structure of the two models. In the case of noninteracting, incompressible (Alven) waves, the principle difference between the two models is the presence of leading-order couplings (called 'mixing effects') in the non-WKB turbulence model which are absent in a WKB development. Within the context of linearized MHD, two cases have been identified for which the leading order non-WJB 'mixing term' does not vanish at zero wavelength. For these cases the WKB expansion is divergent, whereas the multiple-scale theory is well behaved. We have thus established that the WKB results are contained within the multiple-scale theory, but leading order mixing effects, which are likely to have important observational consequences, can never be recovered in the WKB style expansion. Properties of the higher-order terms in each expansion are also discussed, leading to the conclusion that the non-WKB hierarchy may be applicable even when the scale separation parameter is not small.

  18. The effects of hillslope-scale variability in burn severity on post-fire sediment delivery

    NASA Astrophysics Data System (ADS)

    Quinn, Dylan; Brooks, Erin; Dobre, Mariana; Lew, Roger; Robichaud, Peter; Elliot, William

    2017-04-01

    With the increasing frequency of wildfire and the costs associated with managing the burned landscapes, there is an increasing need for decision support tools that can be used to assess the effectiveness of targeted post-fire management strategies. The susceptibility of landscapes to post-fire soil erosion and runoff have been closely linked with the severity of the wildfire. Wildfire severity maps are often spatial complex and largely dependent upon total vegetative biomass, fuel moisture patterns, direction of burn, wind patterns, and other factors. The decision to apply targeted treatment to a specific landscape and the amount of resources dedicated to treating a landscape should ideally be based on the potential for excessive sediment delivery from a particular hillslope. Recent work has suggested that the delivery of sediment to a downstream water body from a hillslope will be highly influenced by the distribution of wildfire severity across a hillslope and that models that do not capture this hillslope scale variability would not provide reliable sediment and runoff predictions. In this project we compare detailed (10 m) grid-based model predictions to lumped and semi-lumped hillslope approaches where hydrologic parameters are fixed based on hillslope scale averaging techniques. We use the watershed scale version of the process-based Watershed Erosion Prediction Projection (WEPP) model and its GIS interface, GeoWEPP, to simulate the fire impacts on runoff and sediment delivery using burn severity maps at a watershed scale. The flowpath option in WEPP allows for the most detail representation of wildfire severity patterns (10 m) but depending upon the size of the watershed, simulations are time consuming and computational demanding. The hillslope version is a simpler approach which assigns wildfire severity based on the severity level that is assigned to the majority of the hillslope area. In the third approach we divided hillslopes in overland flow elements (OFEs) and assigned representative input values on a finer scale within single hillslopes. Each of these approaches were compared for several large wildfires in the mountainous ranges of central Idaho, USA. Simulations indicated that predictions based on lumped hillslope modeling over-predict sediment transport by as much as 4.8x in areas of high to moderate burn severity. Annual sediment yield within the simulated watersheds ranged from 1.7 tonnes/ha to 6.8 tonnes/ha. The disparity between simulated sediment yield with these approaches was attributed to hydrologic connectivity of the burn patterns within the hillslope. High infiltration rates between high severity sites can greatly reduce the delivery of sediment. This research underlines the importance of accurately representing soil burn severity along individual hillslopes in hydrologic models and the need for modeling approaches to capture this variability to reliability simulate soil erosion.

  19. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    DOE PAGES

    Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...

    2008-01-01

    Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also addressmore » interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.« less

  20. Multiscale soil moisture estimates using static and roving cosmic-ray soil moisture sensors

    NASA Astrophysics Data System (ADS)

    McJannet, David; Hawdon, Aaron; Baker, Brett; Renzullo, Luigi; Searle, Ross

    2017-12-01

    Soil moisture plays a critical role in land surface processes and as such there has been a recent increase in the number and resolution of satellite soil moisture observations and the development of land surface process models with ever increasing resolution. Despite these developments, validation and calibration of these products has been limited because of a lack of observations on corresponding scales. A recently developed mobile soil moisture monitoring platform, known as the rover, offers opportunities to overcome this scale issue. This paper describes methods, results and testing of soil moisture estimates produced using rover surveys on a range of scales that are commensurate with model and satellite retrievals. Our investigation involved static cosmic-ray neutron sensors and rover surveys across both broad (36 × 36 km at 9 km resolution) and intensive (10 × 10 km at 1 km resolution) scales in a cropping district in the Mallee region of Victoria, Australia. We describe approaches for converting rover survey neutron counts to soil moisture and discuss the factors controlling soil moisture variability. We use independent gravimetric and modelled soil moisture estimates collected across both space and time to validate rover soil moisture products. Measurements revealed that temporal patterns in soil moisture were preserved through time and regression modelling approaches were utilised to produce time series of property-scale soil moisture which may also have applications in calibration and validation studies or local farm management. Intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution while broad-scale surveys produced soil moisture estimates at 9 km resolution. We conclude that the multiscale soil moisture products produced in this study are well suited to future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.

  1. Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA

    USGS Publications Warehouse

    Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.

    2007-01-01

    Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance, water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1000 m2. A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at a scale of 10–1000s of km2. The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local and basin-scale recharge estimates. RRR estimates at study locations were about 41% less on average than the unsaturated-zone water-balance estimates, ranged from 44% greater to 12% less than estimates that were based on the three WTF approaches, were about 4% less than the age dating of ground-water estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates using the RRR model could be a good source of input for regional ground-water flow models; RRR model results currently are being applied for this purpose in USGS studies elsewhere.

  2. Multiscale modeling of lithium ion batteries: thermal aspects

    PubMed Central

    Zausch, Jochen

    2015-01-01

    Summary The thermal behavior of lithium ion batteries has a huge impact on their lifetime and the initiation of degradation processes. The development of hot spots or large local overpotentials leading, e.g., to lithium metal deposition depends on material properties as well as on the nano- und microstructure of the electrodes. In recent years a theoretical structure emerges, which opens the possibility to establish a systematic modeling strategy from atomistic to continuum scale to capture and couple the relevant phenomena on each scale. We outline the building blocks for such a systematic approach and discuss in detail a rigorous approach for the continuum scale based on rational thermodynamics and homogenization theories. Our focus is on the development of a systematic thermodynamically consistent theory for thermal phenomena in batteries at the microstructure scale and at the cell scale. We discuss the importance of carefully defining the continuum fields for being able to compare seemingly different phenomenological theories and for obtaining rules to determine unknown parameters of the theory by experiments or lower-scale theories. The resulting continuum models for the microscopic and the cell scale are numerically solved in full 3D resolution. The complex very localized distributions of heat sources in a microstructure of a battery and the problems of mapping these localized sources on an averaged porous electrode model are discussed by comparing the detailed 3D microstructure-resolved simulations of the heat distribution with the result of the upscaled porous electrode model. It is shown, that not all heat sources that exist on the microstructure scale are represented in the averaged theory due to subtle cancellation effects of interface and bulk heat sources. Nevertheless, we find that in special cases the averaged thermal behavior can be captured very well by porous electrode theory. PMID:25977870

  3. From mountains to the ocean: quantifying connectivity along the river corridor

    NASA Astrophysics Data System (ADS)

    Gomez-Velez, J. D.; Harvey, J. W.

    2015-12-01

    Rivers are the landscape's arteries; they convey water, solutes, energy, and living organisms from the hillslopes, floodplains, aquifers, and atmosphere to the oceans. As water moves along this complex circulatory system, it is continuously exchanged with the surrounding alluvial aquifer, termed hyporheic exchange, which strongly conditions and constrains the biogeochemical evolution of water at the local scale with basin-scale consequences. Over the last two decades, considerable efforts have focused on the use of detailed mathematical models to explore the hydrodynamics and biogeochemical effect of hyporheic exchange at the scale of individual channel morphologies. While these efforts are essential to gain mechanistic understanding, their computational demand makes them impractical for basin applications. In this talk, a parsimonious but physically based model of hyporheic flow for application in large river basins is presented: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS are the up-scaling of detailed mathematical models and a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, width, grain size, sinuosity, channel slope, and regional groundwater gradients. As a proof-of-concept, we use NEXSS to characterize the spatial and temporal variability of hyporheic exchange and denitrification potential along the Mississippi River basin. This modeling approach allows us to map the location of critical hot spots for biogeochemical transformation, their geomorphic drivers, and cumulative effect. Finally, we discuss new avenues to incorporate exchange with floodplains and ponded waters, which also play a key role in water quality along the river corridor. This new modeling approach is critical to transition from purely empirical continental models of water quality to hybrid approaches that incorporate fundamental physics and take advantage of available hydrogeomorphic data. In particular, hybrid models will be instrumental for predicting outcomes of river basin management practices under present and future socio-economic and climatic conditions.

  4. A Multi-Scale Integrated Approach to Representing Watershed Systems: Significance and Challenges

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Katopodes, N.

    2013-12-01

    A range of processes associated with supplying services and goods to human society originate at the watershed level. Predicting watershed response to forcing conditions has been of high interest for many practical societal problems; however, it remains challenging due to two significant properties of watershed systems, i.e., connectivity and non-linearity. Connectivity implies that disturbances arising at any larger scale will necessarily propagate and affect local-scale processes; their local effects consequently influence other processes, and often convey nonlinear relationships. Physically-based, process-scale modeling is needed to approach the understanding and proper assessment of non-linear effects among the watershed processes. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion and sediment transport, tRIBS-OFM-HRM (Triangulated irregular network - based Real time Integrated Basin Simulator-Overland Flow Model-Hairsine and Rose Model). This coupled model offers the advantage of exploring the hydrological effects of watershed physical factors such as topography, vegetation, and soil, as well as their feedback mechanisms. Several examples investigating the effects of vegetation on flow movement, the role of the soil's substrate on sediment dynamics, and the driving role of topography on morphological processes are illustrated. We show how this comprehensive modeling tool can help understand interconnections and nonlinearities of the physical system, e.g., how vegetation affects hydraulic resistance depending on slope, vegetation cover fraction, discharge, and bed roughness condition; how the soil's substrate condition impacts erosion processes with a non-unique characteristic at the scale of a zero-order catchment; and how topographic changes affect spatial variations of morphologic variables. Due to the feedback and compensatory nature of mechanisms operating in different watershed compartments, our conclusion is that a key to representing watershed systems lies in an integrated, interdisciplinary approach, whereby a physically-based model is used for assessments/evaluations associated with future changes in landuse, climate, and ecosystems.

  5. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies over the past decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focus on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons, and that construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  6. The origins of modern biodiversity on land

    PubMed Central

    Benton, Michael J.

    2010-01-01

    Comparative studies of large phylogenies of living and extinct groups have shown that most biodiversity arises from a small number of highly species-rich clades. To understand biodiversity, it is important to examine the history of these clades on geological time scales. This is part of a distinct ‘phylogenetic expansion’ view of macroevolution, and contrasts with the alternative, non-phylogenetic ‘equilibrium’ approach to the history of biodiversity. The latter viewpoint focuses on density-dependent models in which all life is described by a single global-scale model, and a case is made here that this approach may be less successful at representing the shape of the evolution of life than the phylogenetic expansion approach. The terrestrial fossil record is patchy, but is adequate for coarse-scale studies of groups such as vertebrates that possess fossilizable hard parts. New methods in phylogenetic analysis, morphometrics and the study of exceptional biotas allow new approaches. Models for diversity regulation through time range from the entirely biotic to the entirely physical, with many intermediates. Tetrapod diversity has risen as a result of the expansion of ecospace, rather than niche subdivision or regional-scale endemicity resulting from continental break-up. Tetrapod communities on land have been remarkably stable and have changed only when there was a revolution in floras (such as the demise of the Carboniferous coal forests, or the Cretaceous radiation of angiosperms) or following particularly severe mass extinction events, such as that at the end of the Permian. PMID:20980315

  7. Addressing spatial scales and new mechanisms in climate impact ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Poulter, B.; Joetzjer, E.; Renwick, K.; Ogunkoya, G.; Emmett, K.

    2015-12-01

    Climate change impacts on vegetation distributions are typically addressed using either an empirical approach, such as a species distribution model (SDM), or with process-based methods, for example, dynamic global vegetation models (DGVMs). Each approach has its own benefits and disadvantages. For example, an SDM is constrained by data and few parameters, but does not include adaptation or acclimation processes or other ecosystem feedbacks that may act to mitigate or enhance climate effects. Alternatively, a DGVM includes many mechanisms relating plant growth and disturbance to climate, but simulations are costly to perform at high spatial resolution and there remains large uncertainty in a variety of fundamental physical processes. To address these issues, here, we present two DGVM-based case studies in which i) high-resolution (1 km) simulations are being performed for vegetation in the Greater Yellowstone Ecosystem using a biogeochemical, forest gap model, LPJ-GUESS, and ii) new mechanisms for simulating tropical tree mortality are being introduced. High-resolution DGVM simulations require not only additional computing and code reorganization but also a consideration of scaling issues in vegetation dynamics and stochasticity, as well as in disturbance and migration. New mechanisms for simulating forest mortality must consider hydraulic limitations and carbon reserves and their interactions in source-sink dynamics and in controlling water potentials. Improving DGVM approaches by addressing spatial scale challenges and integrating new approaches for estimating forest mortality will provide new insights more relevant for land management and possibly reduce uncertainty by representing physical processes in a way that is more directly comparable to experimental and observational evidence.

  8. Critical dynamic approach to stationary states in complex systems

    NASA Astrophysics Data System (ADS)

    Rozenfeld, A. F.; Laneri, K.; Albano, E. V.

    2007-04-01

    A dynamic scaling Ansatz for the approach to stationary states in complex systems is proposed and tested by means of extensive simulations applied to both the Bak-Sneppen (BS) model, which exhibits robust Self-Organised Critical (SOC) behaviour, and the Game of Life (GOL) of J. Conway, whose critical behaviour is under debate. Considering the dynamic scaling behaviour of the density of sites (ρ(t)), it is shown that i) by starting the dynamic measurements with configurations such that ρ(t=0) →0, one observes an initial increase of the density with exponents θ= 0.12(2) and θ= 0.11(2) for the BS and GOL models, respectively; ii) by using initial configurations with ρ(t=0) →1, the density decays with exponents δ= 0.47(2) and δ= 0.28(2) for the BS and GOL models, respectively. It is also shown that the temporal autocorrelation decays with exponents Ca = 0.35(2) (Ca = 0.35(5)) for the BS (GOL) model. By using these dynamically determined critical exponents and suitable scaling relationships, we also obtain the dynamic exponents z = 2.10(5) (z = 2.10(5)) for the BS (GOL) model. Based on this evidence we conclude that the dynamic approach to stationary states of the investigated models can be described by suitable power-law functions of time with well-defined exponents.
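    The exponents reported above come from fitting power laws to density time series. A minimal sketch of that fitting step is given below, with synthetic series standing in for actual BS or GOL output (the prefactors, noise level, and exponent values are placeholders, not the published data):

```python
# Hypothetical illustration: estimate dynamic exponents theta and delta by fitting
# power laws rho(t) ~ t**theta (growth from rho(0) -> 0) and rho(t) ~ t**(-delta)
# (decay from rho(0) -> 1) to density time series. The series here are synthetic.
import numpy as np

def power_law_exponent(t, rho):
    """Least-squares slope of log(rho) versus log(t)."""
    slope, _intercept = np.polyfit(np.log(t), np.log(rho), 1)
    return slope

rng = np.random.default_rng(0)
t = np.logspace(1, 4, 50)
rho_growth = 0.05 * t**0.12 * (1 + 0.02 * rng.standard_normal(t.size))
rho_decay = 0.90 * t**-0.47 * (1 + 0.02 * rng.standard_normal(t.size))

theta = power_law_exponent(t, rho_growth)     # should recover roughly +0.12
delta = -power_law_exponent(t, rho_decay)     # should recover roughly +0.47
print(f"theta = {theta:.3f}, delta = {delta:.3f}")
```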

  9. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  10. Conceptual design and analysis of a dynamic scale model of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.

    1994-01-01

    This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were estimated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended based on the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.

  11. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate the radio-maps that would be observed if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability across all the multi-socket, multi-core systems used.

  12. Testing new approaches to carbonate system simulation at the reef scale: the ReefSam model first results, application to a question in reef morphology and future challenges.

    NASA Astrophysics Data System (ADS)

    Barrett, Samuel; Webster, Jody

    2016-04-01

    Numerical simulation of the stratigraphy and sedimentology of carbonate systems (carbonate forward stratigraphic modelling - CFSM) provides significant insight into the understanding of both the physical nature of these systems and the processes which control their development. It also provides the opportunity to quantitatively test conceptual models concerning stratigraphy, sedimentology or geomorphology, and allows us to extend our knowledge either spatially (e.g. between bore holes) or temporally (forwards or backwards in time). The latter is especially important in determining the likely future development of carbonate systems, particularly regarding the effects of climate change. This application, by its nature, requires successful simulation of carbonate systems on short time scales and at high spatial resolutions. Previous modelling attempts have typically focused on the scales of kilometers and kilo-years or greater (the scale of entire carbonate platforms), rather than at the scale of centuries or decades, and tens to hundreds of meters (the scale of individual reefs). Previous work has identified limitations in common approaches to simulating important reef processes. We present a new CFSM, Reef Sedimentary Accretion Model (ReefSAM), which is designed to test new approaches to simulating reef-scale processes, with the aim of being able to better simulate the past and future development of coral reefs. Four major features have been tested: 1. A simulation of wave-based hydrodynamic energy with multiple simultaneous directions and intensities including wave refraction, interaction, and lateral sheltering. 2. Sediment transport simulated as sediment being moved from cell to cell in an iterative fashion until complete deposition. 3. A coral growth model including consideration of local wave energy and composition of the basement substrate (as well as depth). 4. A highly quantitative model testing approach where dozens of output parameters describing the reef morphology and development are compared with observational data. Despite being a test-bed and work in progress, ReefSAM was able to simulate the Holocene development of One Tree Reef in the Southern Great Barrier Reef (Australia) and was able to improve upon previous modelling attempts in terms of both quantitative measures and qualitative outputs, such as the presence of previously un-simulated reef features. Given the success of the model in simulating the Holocene development of OTR, we used it to quantitatively explore the effect of basement substrate depth and morphology on reef maturity/lagoonal filling (as discussed by Purdy and Gischer 2005). Initial results show a number of non-linear relationships between basement substrate depth, lagoonal filling and volume of sand produced on the reef rims and deposited in the lagoon. Lastly, further testing of the model has revealed new challenges which are likely to manifest in any attempt at reef-scale simulation. Subtly different sets of energy direction and magnitude input parameters (different in each time step but with identical probability distributions across the entire model run) resulted in a wide range of quantitative model outputs. Time step length is a likely contributing factor and the results of further testing to address this challenge will be presented.

  13. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis; Negri, Jacquelyn; Kean, Jason

    2016-04-01

    Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations treat rainfall as a separate independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, direct calculation of the rainfall rates that will result in a given likelihood, and calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially-explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flow.
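    As a hedged sketch of how rainfall intensity can enter every predictor of a logistic model and then be inverted into an intensity threshold, the code below uses invented coefficients, predictor names, and basin values purely for illustration; it is not the published USGS model:

```python
# Sketch (hypothetical coefficients): logistic debris-flow likelihood in which the
# rainfall intensity R multiplies each basin predictor, and inversion of the model
# to obtain the rainfall intensity that yields a chosen likelihood.
import numpy as np

beta0 = -3.6                           # intercept (assumed, for illustration)
betas = np.array([0.41, 0.67, 0.70])   # terrain, burn-severity, and soil terms (assumed)

def likelihood(R, x):
    """Debris-flow likelihood at rainfall intensity R (mm/h) for basin predictors x."""
    z = beta0 + R * np.dot(betas, x)
    return 1.0 / (1.0 + np.exp(-z))

def intensity_threshold(p_target, x):
    """Rainfall intensity at which the modeled likelihood equals p_target."""
    logit = np.log(p_target / (1.0 - p_target))
    return (logit - beta0) / np.dot(betas, x)

x_basin = np.array([0.35, 0.60, 0.22])    # hypothetical normalized basin attributes
print(likelihood(20.0, x_basin))          # likelihood at 20 mm/h
print(intensity_threshold(0.5, x_basin))  # intensity giving 50% likelihood
```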

  14. An integrated approach coupling physically based models and probabilistic method to assess quantitatively landslide susceptibility at different scale: application to different geomorphological environments

    NASA Astrophysics Data System (ADS)

    Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine

    2016-04-01

    Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists of assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (via the safety factor) under specific environmental conditions. For some models it is possible to integrate land-use and climatic change. Conversely, major drawbacks are the large amounts of reliable and detailed data required (especially material types, their thicknesses and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) spatial variation of physical parameters, and (iii) different landslide types, the French Geological Survey (i.e. BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. The probability of obtaining a safety factor below 1 represents the probability of occurrence of a landslide for a given triggering event. The dispersion of the distribution gives the uncertainty of the result. Finally, a map is created, displaying a probability of occurrence for each computing cell of the studied area. In order to take into account land-use change, a complementary module integrating vegetation effects on soil properties has recently been developed. In recent years, the model has been applied at different scales in different geomorphological environments: (i) at regional scale (1:50,000-1:25,000) in the French West Indies and French Polynesian islands; (ii) at local scale (i.e. 1:10,000) for two complex mountainous areas; and (iii) at the site-specific scale (1:2,000) for one landslide. For each study the 3D geotechnical model has been adapted. The different studies have made it possible: (i) to discuss the different factors included in the model, especially the initial 3D geotechnical models; (ii) to pinpoint the location of probable failures under different hydrological scenarios; and (iii) to test the effects of climatic change and land use on slopes for two cases. In that way, future changes in temperature, precipitation and vegetation cover can be analyzed, making it possible to address the impacts of global change on landslides. Finally, the results show that it is possible to obtain reliable information about future slope failures at different scales of work for different scenarios with an integrated approach.
The final information about landslide susceptibility (i.e. probability of failure) can be integrated into landslide hazard assessment and could be an essential information source for future land planning. As performed in the ANR project SAMCO (Society Adaptation for coping with Mountain risks in a global change COntext), this analysis constitutes a first step in the chain of risk assessment for different climate and economic development scenarios, to evaluate the resilience of mountainous areas.
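    A minimal sketch of the Monte Carlo step is shown below. It uses an infinite-slope factor of safety rather than the Morgenstern-Price method implemented in ALICE®, and all parameter distributions are assumed values chosen for illustration only:

```python
# Minimal sketch (not the ALICE(R) implementation): Monte Carlo estimate of the
# probability of failure for a single cell using an infinite-slope factor of safety,
# with normally distributed cohesion and friction angle.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
gamma_w = 9_810.0                # unit weight of water (N/m3)
gamma_s = 19_000.0               # soil unit weight (N/m3), assumed
slope = np.radians(30.0)         # slope angle, assumed
z, m = 2.0, 0.8                  # soil thickness (m) and water table ratio, assumed

c = rng.normal(8_000.0, 2_000.0, n)            # cohesion (Pa)
phi = np.radians(rng.normal(32.0, 3.0, n))     # friction angle

fs = (c + (gamma_s * z - gamma_w * m * z) * np.cos(slope)**2 * np.tan(phi)) / (
    gamma_s * z * np.sin(slope) * np.cos(slope))

p_failure = np.mean(fs < 1.0)                  # fraction of realizations with FS < 1
print(f"Probability of failure: {p_failure:.3f}")
```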

  15. A simple microviscometric approach based on Brownian motion tracking.

    PubMed

    Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan

    2015-02-01

    Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists of a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
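    The essence of such particle-tracking microviscometry can be sketched as follows, assuming two-dimensional trajectories, a Newtonian fluid (MSD = 4Dτ), and the Stokes-Einstein relation; the particle radius, frame interval, and synthetic trajectory below are placeholders, and this is not the authors' script:

```python
# Sketch: estimate the diffusion coefficient from the mean squared displacement of a
# tracked particle and convert it to viscosity via Stokes-Einstein. Data are synthetic.
import numpy as np

k_B, T = 1.380649e-23, 298.15     # Boltzmann constant (J/K), temperature (K)
radius = 0.5e-6                   # tracer particle radius (m), assumed
dt = 0.05                         # frame interval (s), assumed

def viscosity_from_trajectory(xy, max_lag=20):
    """xy: (n_frames, 2) array of particle positions in metres."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1)) for lag in lags])
    # MSD = 4*D*tau in 2-D for a Newtonian fluid; least-squares fit through the origin
    D = np.sum(msd * lags * dt) / (4.0 * np.sum((lags * dt)**2))
    return k_B * T / (6.0 * np.pi * D * radius)     # Stokes-Einstein

# Synthetic trajectory for a water-like viscosity (~1 mPa s), for illustration only
eta_true = 1.0e-3
D_true = k_B * T / (6 * np.pi * eta_true * radius)
steps = np.random.normal(0.0, np.sqrt(2 * D_true * dt), size=(2000, 2))
print(viscosity_from_trajectory(np.cumsum(steps, axis=0)))
```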

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely-populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower fidelity mass-consistent diagnostic, intermediate fidelity Navier-Stokes RANS models, and higher fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose. Both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and the same dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models as experimental measurements were not available for comparison. Stronger conclusions about the relative performances of the models involved and evaluation of the tradeoffs involved in model simplification could be made with a systematic benchmarking of urban-scale modeling. 
This could be the purpose of a future US/French collaborative exercise.

  17. From mitochondrial ion channels to arrhythmias in the heart: computational techniques to bridge the spatio-temporal scales

    PubMed Central

    Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A

    2008-01-01

    Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526

  18. Polarizable molecular interactions in condensed phase and their equivalent nonpolarizable models.

    PubMed

    Leontyev, Igor V; Stuchebrukhov, Alexei A

    2014-07-07

    Earlier, using a phenomenological approach, we showed that in some cases polarizable models of condensed phase systems can be reduced to nonpolarizable equivalent models with scaled charges. Examples of such systems include ionic liquids, TIPnP-type models of water, protein force fields, and others, where interactions and dynamics of inherently polarizable species can be accurately described by nonpolarizable models. To describe electrostatic interactions, the effective charges of simple ionic liquids are obtained by scaling the actual charges of ions by a factor of 1/√(ε(el)), which is due to the electronic polarization screening effect; the scaling factor of neutral species is more complicated. Here, using several theoretical models, we examine how exactly the scaling factors appear in theory, and how, and under what conditions, polarizable Hamiltonians are reduced to nonpolarizable ones. These models allow one to trace the origin of the scaling factors, determine their values, and obtain important insights on the nature of polarizable interactions in condensed matter systems.
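    A toy sketch of the charge-scaling prescription for simple ions is given below; the electronic dielectric constant of roughly 1.78 (approximately the square of water's refractive index) and the ion list are assumed values for illustration only:

```python
# Toy sketch of the charge-scaling idea: effective charges for a nonpolarizable model
# are obtained by dividing the formal charges by sqrt(eps_el), where eps_el is the
# electronic (high-frequency) dielectric constant of the medium.
import math

eps_el = 1.78                               # assumed value for water (~n**2)
formal_charges = {"Na+": +1.0, "Cl-": -1.0}

scaled = {ion: q / math.sqrt(eps_el) for ion, q in formal_charges.items()}
print(scaled)                               # roughly +/-0.75 e
```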

  19. Validation of a plant-wide phosphorus modelling approach with minerals precipitation in a full-scale WWTP.

    PubMed

    Kazadi Mbamba, Christian; Flores-Alsina, Xavier; John Batstone, Damien; Tait, Stephan

    2016-09-01

    The focus of modelling in wastewater treatment is shifting from single unit to plant-wide scale. Plant-wide modelling approaches provide opportunities to study the dynamics and interactions of different transformations in water and sludge streams. Towards developing more general and robust simulation tools applicable to a broad range of wastewater engineering problems, this paper evaluates a plant-wide model built with sub-models from the Benchmark Simulation Model No. 2-P (BSM2-P) with an improved/expanded physico-chemical framework (PCF). The PCF includes a simple and validated equilibrium approach describing ion speciation and ion pairing with kinetic multiple minerals precipitation. Model performance is evaluated against data sets from a full-scale wastewater treatment plant, assessing capability to describe water and sludge lines across the treatment process under steady-state operation. With default rate kinetic and stoichiometric parameters, a good general agreement is observed between the full-scale datasets and the simulated results under steady-state conditions. Simulation results show differences between measured and modelled phosphorus as little as 4-15% (relative) throughout the entire plant. Dynamic influent profiles were generated using a calibrated influent generator and were used to study the effect of long-term influent dynamics on plant performance. Model-based analysis shows that minerals precipitation strongly influences composition in the anaerobic digesters, but also impacts on nutrient loading across the entire plant. A forecasted implementation of nutrient recovery by struvite crystallization (model scenario only), reduced the phosphorus content in the treatment plant influent (via centrate recycling) considerably and thus decreased phosphorus in the treated outflow by up to 43%. Overall, the evaluated plant-wide model is able to jointly describe the physico-chemical and biological processes, and is advocated for future use as a tool for design, performance evaluation and optimization of whole wastewater treatment plants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds

    NASA Astrophysics Data System (ADS)

    Samuels, A.; Babbar-Sebens, M.

    2012-12-01

    Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices has relied on watershed-scale hydrologic models for identifying locations and types of practices at scales much coarser than the actual field scale, where landowners have to plan, design and implement the practices. Field-scale hydrologic modeling provides a means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at a finer farm/field scale that is usually more relevant to the decision maker (i.e. the landowner). This study focuses on development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental Extender (APEX) tool is used as the field-scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed scale are used to test the long-term performance of these alternatives under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions is included within the simulation-optimization approach as a criterion/goal to assist landowners in identifying sustainable designs of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.

  1. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way-as opposed to the common simulation approach-for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable, if for the reconstructed phylogenetic trees, no time scale is available. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is, in particular, useful for the model with extinction, since simulations of birth-death processes which are conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for big n under both models.
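    For contrast with the paper's analytic results, the "common simulation approach" it mentions can be sketched as below for the pure-birth Yule case; conditioning on the age of the tree is deliberately ignored here, and the speciation rate and tree size are arbitrary illustrative choices:

```python
# Sketch of the simulation approach for Yule speciation times: with k extant lineages
# the waiting time to the next speciation event is exponential with rate k*lam.
import numpy as np

def yule_branching_times(n, lam, rng):
    """Return the cumulative times of the n-1 speciation events of a Yule tree."""
    times, t = [], 0.0
    for k in range(1, n):                      # k = current number of lineages
        t += rng.exponential(1.0 / (k * lam))  # waiting time ~ Exp(k*lam)
        times.append(t)
    return np.array(times)

rng = np.random.default_rng(0)
samples = np.array([yule_branching_times(10, 1.0, rng) for _ in range(5000)])
print(samples.mean(axis=0))    # empirical mean of each speciation time for n=10, lam=1
```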

  2. ACME-III and ACME-IV Final Campaign Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biraud, S. C.

    2016-01-01

    The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentration of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility; 3) to develop and test bottom-up measurement and modeling approaches to estimate regional scale carbon balances, and 4) to develop and test inverse modeling approaches to estimate regional scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.

  3. Modeling Framework for Fracture in Multiscale Cement-Based Material Structures

    PubMed Central

    Qian, Zhiwei; Schlangen, Erik; Ye, Guang; van Breugel, Klaas

    2017-01-01

    Multiscale modeling for cement-based materials, such as concrete, is a relatively young subject, but there are already a number of different approaches to study different aspects of these classical materials. In this paper, the parameter-passing multiscale modeling scheme is established and applied to address the multiscale modeling problem for the integrated system of cement paste, mortar, and concrete. The block-by-block technique is employed to solve the length scale overlap challenge between the mortar level (0.1–10 mm) and the concrete level (1–40 mm). The microstructures of cement paste are simulated by the HYMOSTRUC3D model, and the material structures of mortar and concrete are simulated by the Anm material model. Afterwards the 3D lattice fracture model is used to evaluate their mechanical performance by simulating a uniaxial tensile test. The simulated output properties at a lower scale are passed to the next higher scale to serve as input local properties. A three-level multiscale lattice fracture analysis is demonstrated, including cement paste at the micrometer scale, mortar at the millimeter scale, and concrete at centimeter scale. PMID:28772948

  4. Modeling and Simulation of Nanoindentation

    NASA Astrophysics Data System (ADS)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  5. Large-Scale Traffic Microsimulation From An MPO Perspective

    DOT National Transportation Integrated Search

    1997-01-01

    One potential advancement of the four-step travel model process is the forecasting and simulation of individual activities and travel. A common concern with such an approach is that the data and computational requirements for a large-scale, regional ...

  6. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... to begin planning experiments for statistical inference applications. APPROACH: In the ocean acoustics community over the past two decades... solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal

  7. Quantifying restoration effectiveness using multi-scale habitat models: Implications for sage-grouse in the Great Basin

    Treesearch

    Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth

    2014-01-01

    A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...

  8. Understanding scale dependency of climatic processes with diarrheal disease

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, F.; Jutla, A.; Akanda, A. S. S.; Colwell, R. R.

    2015-12-01

    The issue of scales in linking climatic processes with diarrheal diseases is perhaps one of the most challenging aspects of developing any predictive algorithm for outbreaks and of understanding the impacts of a changing climate. The majority of diarrheal diseases have been shown to be strongly associated with climate-modulated environmental processes where pathogens survive. Using cholera as an example of a characteristic diarrheal disease, this study will provide methodological insights on the dominant scales of variability in climatic processes that are linked with the trigger and transmission of the disease. Cholera-based epidemiological models use human-to-human interaction as the main transmission mechanism; however, the environmental conditions creating seasonality in outbreaks are not explicitly modeled. For example, existing models cannot create seasonality unless some of the model parameters are chosen a priori to vary seasonally. A systems-based feedback approach will be presented to understand the role of climatic processes in the trigger and transmission of the disease. In order to investigate the effect of changing climate on cholera, a downscaling approach using support vector machines will be used. Our preliminary results using three climate models, ECHAM5, GFDL, and HADCM, show varying modalities in future cholera outbreaks.
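    A hedged sketch of the statistical downscaling step described above is given below, using support vector regression to map coarse climate-model predictors to a local environmental driver; all data, predictor choices, and hyperparameters are synthetic placeholders rather than the study's configuration:

```python
# Sketch of support-vector-machine downscaling: coarse GCM predictors (e.g. regional
# SST and precipitation anomalies) are mapped to a local driver used by a disease model.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X_gcm = rng.normal(size=(240, 3))                 # 20 years of monthly coarse predictors
y_local = 1.5 * X_gcm[:, 0] - 0.8 * X_gcm[:, 2] + rng.normal(0, 0.3, 240)

downscaler = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
downscaler.fit(X_gcm[:180], y_local[:180])        # train on the first 15 years

print(downscaler.score(X_gcm[180:], y_local[180:]))  # R^2 skill on held-out years
```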

  9. The secondary drying and the fate of organic solvents for spray dried dispersion drug product.

    PubMed

    Hsieh, Daniel S; Yue, Hongfei; Nicholson, Sarah J; Roberts, Daniel; Schild, Richard; Gamble, John F; Lindrud, Mark

    2015-05-01

    To understand the mechanisms of secondary drying of spray-dried dispersion (SDD) drug product and establish a model to describe the fate of organic solvents in such a product. The experimental approach includes characterization of the SDD particles, drying studies of SDD using an integrated weighing balance and mass spectrometer, and the subsequent generation of the drying curve. The theoretical approach includes the establishment of a Fickian diffusion model. The kinetics of solvent removal during secondary drying from the lab scale to a bench scale follows a Fickian diffusion model. Excellent agreement is obtained between the experimental data and the prediction from the modeling. The diffusion process is dependent upon temperature. The key to a successful scale-up of the secondary drying is to control the drying temperature. The fate of the primary solvents, including methanol and acetone, and of a potential impurity such as benzene can be described by the Fickian diffusion model. A mathematical relationship based upon the ratio of diffusion coefficients was established to predict the benzene concentration from the fate of the primary solvent during the secondary drying process.
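    One way to illustrate the Fickian description of secondary drying is the classical series solution for desorption from a sphere, sketched below under the assumptions of spherical SDD particles and a constant diffusion coefficient; the radius and diffusivity values are assumed, not the paper's fitted parameters:

```python
# Sketch: fraction of residual solvent during secondary drying from the series
# solution of Fick's second law for desorption from a sphere of radius R.
import numpy as np

def residual_solvent_fraction(t, D, R, n_terms=50):
    """M(t)/M(0) for desorption from a sphere with constant diffusivity D."""
    n = np.arange(1, n_terms + 1)[:, None]
    series = np.sum(np.exp(-n**2 * np.pi**2 * D * t / R**2) / n**2, axis=0)
    return (6.0 / np.pi**2) * series

D = 1.0e-15                                   # diffusion coefficient (m2/s), assumed
R = 10e-6                                     # particle radius (m), assumed
t = np.array([3600.0, 4 * 3600.0])            # 1 h and 4 h of drying
print(residual_solvent_fraction(t, D, R))     # residual solvent fraction at each time
```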

  10. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    PubMed Central

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose of detecting recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected modules approach, and then, it was used as a reference network to perform a gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information, and to assess its impact on gap-filling results. The analysis performed on the metamodel led to the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
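    The notion of a blocked reaction can be made concrete with a small flux variability check: a reaction is blocked if its flux is zero in every steady state. The sketch below applies this to a toy four-reaction network (not a published model) using scipy's linear programming routine:

```python
# Toy sketch of blocked-reaction detection by flux variability analysis:
# minimize and maximize each flux subject to S v = 0 and the flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix (rows = metabolites A, B, C; columns = reactions)
#   R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A -> C   (C is a dead-end metabolite)
S = np.array([[ 1, -1,  0, -1],
              [ 0,  1, -1,  0],
              [ 0,  0,  0,  1]])
lb, ub = np.zeros(4), np.full(4, 10.0)       # irreversible reactions, arbitrary cap

def flux_range(j):
    extrema = []
    for sign in (+1, -1):                    # minimize v_j, then maximize v_j
        c = np.zeros(4)
        c[j] = sign
        sol = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=list(zip(lb, ub)))
        extrema.append(sign * sol.fun)
    return min(extrema), max(extrema)

for j in range(4):
    vmin, vmax = flux_range(j)
    blocked = abs(vmin) < 1e-9 and abs(vmax) < 1e-9
    print(f"R{j+1}: flux range [{vmin:.2f}, {vmax:.2f}], blocked={blocked}")
```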

  11. Progress and limitations on quantifying nutrient and carbon loading to coastal waters

    NASA Astrophysics Data System (ADS)

    Stets, E.; Oelsner, G. P.; Stackpoole, S. M.

    2017-12-01

    Riverine export of nutrients and carbon to estuarine and coastal waters are important determinants of coastal ecosystem health and provide necessary insight into global biogeochemical cycles. Quantification of coastal solute loads typically relies upon modeling based on observations of concentration and discharge from selected rivers draining to the coast. Most large-scale river export models require unidirectional flow and thus are referenced to monitoring locations at the head of tide, which can be located far inland. As a result, the contributions of the coastal plain, tidal wetlands, and concentrated coastal development are often poorly represented in regional and continental-scale estimates of solute delivery to coastal waters. However, site-specific studies have found that these areas are disproportionately active in terms of nutrient and carbon export. Modeling efforts to upscale fluxes from these areas, while not common, also suggest an outsized importance to coastal flux estimates. This presentation will focus on illustrating how the problem of under-representation of near-shore environments impacts large-scale coastal flux estimates in the context of recent regional and continental-scale assessments. Alternate approaches to capturing the influence of the near-coastal terrestrial inputs including recent data aggregation efforts and modeling approaches will be discussed.

  12. Incorporating environmental attitudes in discrete choice models: an exploration of the utility of the awareness of consequences scale.

    PubMed

    Hoyos, David; Mariel, Petr; Hess, Stephane

    2015-02-01

    Environmental economists are increasingly interested in better understanding how people cognitively organise their beliefs and attitudes towards environmental change in order to identify key motives and barriers that stimulate or prevent action. In this paper, we explore the utility of a commonly used psychometric scale, the awareness of consequences (AC) scale, in order to better understand stated choices. The main contribution of the paper is that it provides a novel approach to incorporate attitudinal information into discrete choice models for environmental valuation: firstly, environmental attitudes are incorporated using a reinterpretation of the classical AC scale recently proposed by Ryan and Spash (2012); and, secondly, attitudinal data is incorporated as latent variables under a hybrid choice modelling framework. This novel approach is applied to data from a survey conducted in the Basque Country (Spain) in 2008 aimed at valuing land-use policies in a Natura 2000 Network site. The results are relevant to policy-making because choice models that are able to accommodate underlying environmental attitudes may help in designing more effective environmental policies. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Simulating faults and plate boundaries with a transversely isotropic plasticity model

    NASA Astrophysics Data System (ADS)

    Sharples, W.; Moresi, L. N.; Velic, M.; Jadamec, M. A.; May, D. A.

    2016-03-01

    In mantle convection simulations, dynamically evolving plate boundaries have, for the most part, been represented using a visco-plastic flow law. These systems develop fine-scale, localized, weak shear band structures which are reminiscent of faults, but it is a significant challenge to resolve both the large-scale and the emergent small-scale behavior. We address this issue of resolution by taking into account the observation that a rock element with embedded, planar failure surfaces responds as a non-linear, transversely isotropic material with a weak orientation defined by the plane of the failure surface. This approach partly accounts for the large-scale behavior of fine-scale systems of shear bands which we are not in a position to resolve explicitly. We evaluate the capacity of this continuum approach to model plate boundaries, specifically in the context of subduction models where the plate boundary interface has often been represented as a planar discontinuity. We show that the inclusion of the transversely isotropic plasticity model for the plate boundary promotes asymmetric subduction from initiation. A realistic evolution of the plate boundary interface and associated stresses is crucial to understanding inter-plate coupling, convergent margin driven topography, and earthquakes.

  14. Multi-scale genetic dynamic modelling I : an algorithm to compute generators.

    PubMed

    Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca

    2011-09-01

    We present a new approach or framework to model dynamic regulatory genetic activity. The framework uses a multi-scale analysis based upon generic assumptions on the relative time scales attached to the different transitions of molecular states defining the genetic system. At the micro-level, such systems are regulated by the interaction of two kinds of molecular players: macro-molecules like DNA or polymerases, and smaller molecules acting as transcription factors. The proposed genetic model then represents the larger, less abundant molecules with a finite discrete state space, for example describing different conformations of these molecules. This is in contrast to the representations of the transcription factors, which are, as in classical reaction kinetics, represented by their particle number only. We illustrate the method by considering the genetic activity associated with certain configurations of interacting genes that are fundamental to modelling (synthetic) genetic clocks. A largely unknown question is how different molecular details incorporated via this more realistic modelling approach lead to different macroscopic regulatory genetic models, whose dynamical behaviour may, in general, differ between model choices. The theory will be applied to a real synthetic clock in a second accompanying article (Kirkilionis et al., Theory Biosci, 2011).

  15. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges in scale most likely should not be artificially split apart into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes acting on the resolved flow. This is a pragmatic approach to the problem and not the complete solution to it. These models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields due to implicit and explicit numerical diffusion as well as physically based subgrid scale turbulent mixing, resulting in smoother and less intermittent fields as compared to observations. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches equipped with a possible realization of subgrid processes and potentially coupled to the resolved scales over the range of significant scale interactions provide one alternative to address the problem. Stochastic multifractal models based on the cascade phenomenology of the atmosphere and its governing equations in particular are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior in the resolved scales. This result is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust behavior found is the transition of the multiscaling parameters between stable and unstable cases, which has a clear physical correspondence to the transition from stratiform to organized (banded) convective regimes. Thus multifractal diagnostics of moist processes are fundamentally transient and should provide a physically robust basis for the downscaling and sub-grid scale parameterizations of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at resolutions finer than the ones resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at spatial resolutions necessary for effective flash flood and earth-flow forecasting. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction which can be attributed to aliasing.
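    One simple member of the family of cascade-based downscalers mentioned above is a discrete multiplicative cascade; the sketch below refines a coarse rain field with mean-one lognormal weights (the grid, weight variance, and refinement depth are arbitrary choices, not the authors' scheme):

```python
# Minimal sketch of a multiplicative-cascade downscaler: each refinement step splits
# every cell into 2x2 children and multiplies them by mean-one lognormal weights whose
# variance controls the small-scale intermittency of the downscaled field.
import numpy as np

def cascade_downscale(coarse, n_levels=3, sigma=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    field = np.asarray(coarse, dtype=float)
    for _ in range(n_levels):
        field = np.kron(field, np.ones((2, 2)))            # refine grid by 2 in each direction
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=field.shape)
        field *= w                                         # mean-one multiplicative weights
    return field

coarse_rain = np.array([[2.0, 0.5],
                        [0.0, 4.0]])                       # mm/h on the resolved grid
fine_rain = cascade_downscale(coarse_rain, n_levels=4)
print(coarse_rain.mean(), fine_rain.mean())                # means are approximately preserved
```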

  16. Surveying the SO(10) model landscape: The left-right symmetric case

    NASA Astrophysics Data System (ADS)

    Deppisch, Frank F.; Gonzalo, Tomás E.; Graf, Lukas

    2017-09-01

    Grand unified theories (GUTs) are very well motivated extensions of the Standard Model (SM), but the landscape of models and possibilities is overwhelming, and different patterns can lead to rather distinct phenomenologies. In this work we present a way to automate the model-building process by considering a top-to-bottom approach that constructs viable and sensible theories from a small and controllable set of inputs at the high scale. By providing a GUT-scale symmetry group and the field content, possible symmetry breaking paths are generated and checked for consistency, ensuring anomaly cancellation, SM embedding and gauge coupling unification. We emphasize the usefulness of this approach for the particular case of a nonsupersymmetric SO(10) model with an intermediate left-right symmetry, and we analyze how low-energy observables such as proton decay and lepton flavor violation might affect the generated model landscape.
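
    One of the consistency checks listed above, gauge coupling unification, can be illustrated by a short one-loop renormalization-group run of the three SM gauge couplings; the sketch below uses the standard one-loop SM beta coefficients with GUT-normalized hypercharge and approximate electroweak-scale inputs, and is purely illustrative rather than part of the automated machinery described in the paper.

```python
import math

# One-loop running of the inverse SM gauge couplings:
#   d(alpha_i^-1)/d(ln mu) = -b_i / (2*pi)
# with GUT-normalized hypercharge (b1 = 41/10) and no intermediate thresholds.
B = (41.0 / 10.0, -19.0 / 6.0, -7.0)

# Approximate inverse couplings at the Z mass (illustrative input values).
MZ = 91.19                          # GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.5)    # U(1)_Y (GUT norm.), SU(2)_L, SU(3)_c

def run(alpha_inv_mz, mu):
    """Return the three inverse couplings at scale mu (GeV), one loop."""
    t = math.log(mu / MZ)
    return [a - b * t / (2.0 * math.pi) for a, b in zip(alpha_inv_mz, B)]

if __name__ == "__main__":
    for exponent in range(4, 19, 2):
        mu = 10.0 ** exponent
        a1, a2, a3 = run(ALPHA_INV_MZ, mu)
        print(f"mu = 1e{exponent:2d} GeV  alpha1^-1 = {a1:6.1f}  "
              f"alpha2^-1 = {a2:6.1f}  alpha3^-1 = {a3:6.1f}")
```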

  17. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimation across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
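
    The light- and temperature-dependent part of such an isoprene flux calculation is commonly written as an emission factor times foliar biomass times dimensionless light and temperature corrections. The sketch below uses a Guenther-type (1993) formulation with the commonly quoted constants; the emission factor and foliar density in the example are hypothetical and may differ from those used in the study.

```python
import math

R = 8.314        # J mol^-1 K^-1
T_S = 303.0      # K, standard leaf temperature
C_T1, C_T2, T_M = 95000.0, 230000.0, 314.0   # J mol^-1, J mol^-1, K
ALPHA, C_L1 = 0.0027, 1.066                  # light-response constants

def light_factor(ppfd):
    """Dimensionless light correction C_L for PPFD in umol m^-2 s^-1."""
    return ALPHA * C_L1 * ppfd / math.sqrt(1.0 + (ALPHA * ppfd) ** 2)

def temperature_factor(t_leaf):
    """Dimensionless temperature correction C_T for leaf temperature in K."""
    num = math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf))
    den = 1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf))
    return num / den

def isoprene_flux(emission_factor, foliar_density, ppfd, t_leaf):
    """Isoprene flux = emission factor x foliar biomass x C_L x C_T.

    emission_factor: ug C g^-1 h^-1 at standard conditions (hypothetical value)
    foliar_density:  g (dry foliage) m^-2
    """
    return emission_factor * foliar_density * light_factor(ppfd) * temperature_factor(t_leaf)

if __name__ == "__main__":
    # hypothetical broadleaf-forest values
    print(isoprene_flux(emission_factor=24.0, foliar_density=400.0,
                        ppfd=1000.0, t_leaf=303.0), "ug C m^-2 h^-1")
```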

  18. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    NASA Astrophysics Data System (ADS)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model. Hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and modeled excess nitrogen show a correspondence of 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
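
    A minimal sketch of the regionalization step described above: a regression tree is trained on predictors and excess-nitrogen output from a detailed model run, then applied to state-wide predictor fields. The study used GUIDE; here a CART-style tree from scikit-learn is used as a stand-in, and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training data from a physically based meso-scale model run:
# predictors (e.g. precipitation, evapotranspiration, fertilizer input) and the
# simulated excess nitrogen per response unit (kg N ha^-1 a^-1).
rng = np.random.default_rng(42)
n = 500
X_train = np.column_stack([
    rng.uniform(500, 1100, n),    # annual precipitation (mm)
    rng.uniform(350, 650, n),     # annual evapotranspiration (mm)
    rng.uniform(0, 200, n),       # fertilizer input (kg N ha^-1)
])
# synthetic target standing in for J2000-S output
y_train = 0.2 * (X_train[:, 0] - X_train[:, 1]) * (X_train[:, 2] / 200.0) + rng.normal(0, 5, n)

# Fit one regression tree per landscape type (only one shown here).
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20)
tree.fit(X_train, y_train)

# "Regionalization": apply the tree to coarser, state-wide predictor fields
# (here three hypothetical macro-scale grid cells).
X_state = np.array([[800.0, 500.0, 120.0],
                    [650.0, 550.0, 60.0],
                    [1000.0, 450.0, 180.0]])
print(tree.predict(X_state))   # predicted excess nitrogen per grid cell
```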

  19. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that which scale is appropriate depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth examining the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, and the question therefore arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales. Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principles, partly of pde type, are available for several processes (but not for all), because measurement and modelling scale are compatible; (-) the spatial model domain is hardly representative of larger spatial entities, including regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements. Meso scale (e.g. extent of a small to large catchment or region): (+) the spatial extent of the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions; (+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) data on some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of pde-type equations, are available for most hydrological processes, which enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions/landscapes; (-) process scale, measurement scale, and modelling scale differ from each other for a number of processes, such as runoff generation; (-) the process formulation (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain, and upscaling procedures for this purpose are not readily and generally available. Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth, which enables an attractive global display of model results; (+) model results might be technically interchangeable or at least comparable with results from other global models, such as global climate models; (-) process scale, measurement scale, and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken; (-) both state condition and boundary flux data are hardly available for the whole model domain, and water management data and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability; (-) since process formulation and the resulting modelling reliability at this scale are very limited, such models can hardly show any explanatory skill or prognostic power; (-) since both the entire model domain and the spatial sub-units cover large areas, model results represent values averaged over at least the spatial sub-unit's extent, and in many cases the applied time scale implies a long-term averaging in time, too. We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. Many of the results of current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate. Many hydrological processes are of a non-linear nature, including threshold-type behaviour; such features cannot be captured by such large-scale entities. The model results can therefore be of little or no use for water resources decisions and may even be misleading in public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" within which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major problems regarding sustainability on Earth, e.g. water scarcity, manifest not on a global but on a regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. Therefore, the challenge is to derive models and observation programmes for regional scales. In case a global display is desired, future efforts should be directed towards the development of a global picture based on a mosaic of regionally sound assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed, namely for which purposes models at this (global) scale can be used.

  20. Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.

    PubMed

    Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M

    2014-06-01

    Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As the magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
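
    A minimal sketch of the downscaling step described above: the solar wind model output is emulated by smoothing an observed series with an 8 h boxcar, and an ensemble is generated by adding back random small-scale structure. The study parameterizes the noise from observed probability distribution functions and spectral characteristics; the Gaussian white noise matched to the residual variance below is a simplified stand-in.

```python
import numpy as np

def downscale_ensemble(model_series, dt_hours=1.0, smooth_hours=8.0,
                       n_members=10, rng=None):
    """Illustrative ensemble downscaling of a solar wind time series.

    The "model" output is emulated by smoothing with a boxcar of width
    smooth_hours; small-scale structure is restored by adding random noise
    whose standard deviation matches the residual removed by smoothing.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(model_series, dtype=float)
    width = max(1, int(round(smooth_hours / dt_hours)))
    kernel = np.ones(width) / width
    smooth = np.convolve(x, kernel, mode="same")      # stands in for model output
    residual_std = np.std(x - smooth)                 # observed small-scale amplitude
    members = [smooth + rng.normal(0.0, residual_std, size=x.shape)
               for _ in range(n_members)]
    return smooth, np.array(members)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, 240.0, 1.0)                      # 10 days, hourly
    obs = 400 + 50 * np.sin(2 * np.pi * t / 72.0) + rng.normal(0, 20, t.size)
    smooth, ens = downscale_ensemble(obs, rng=rng)
    print(ens.shape, smooth.std(), ens.std())
```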

  1. Water Quality Assessment in the Vouga Catchment (Portugal) through the Integration of Hydrologic Modeling, Economic Valuation, and Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hawtree, Daniel; Julich, Stefan; Rocha, João; Roebeling, Peter; Feger, Karl-Heinz

    2016-04-01

    Hydrologic model assessments of the impacts of land-cover / use change (LCLUC) are fundamental for the development of catchment management plans, which are increasingly needed for meeting water quality standards (i.e. the Water Framework Directive). These assessments can be difficult to conduct at the spatial scale required for such plans, due to data limitations and the challenge of up-scaling from field / small-scale studies to larger regions. Furthermore, such hydrologic assessments are of limited practical use unless the financial impacts of any potential land-cover / management changes on local stakeholders are adequately quantified and taken into planning consideration. To address these challenges, this study presents an approach that integrates hydrologic modeling, economic valuation, and landscape optimization methods. This approach is applied to the Vouga catchment, a large (2,298 km^2) mixed land-use catchment in north-central Portugal. The Vouga has high nutrient (nitrogen and phosphorus) loads in a number of reaches, which have negative impacts on downstream wetlands and groundwater supplies. To examine potential improvements to water quality, the Soil and Water Assessment Tool (SWAT) was calibrated over a five-year period (2002 - 2007) to establish the baseline hydrologic and nutrient fluxes. This calibration relies upon the up-scaling of findings from previous field studies (on vegetation and soils), hydrologic assessments, and modeling studies. The agricultural income of local stakeholders was estimated from the existing land cover and management approaches to establish the baseline financial conditions. An optimization algorithm is then applied to the baseline scenario using both the biophysical and financial information, seeking to determine various (most) optimal states. The preliminary results from this work are presented, and the advantages and challenges of using such an approach for scenario analysis in catchment management are discussed.

  2. Construct validity evidence for the Male Role Norms Inventory-Short Form: A structural equation modeling approach using the bifactor model.

    PubMed

    Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R

    2016-10-01

    The construct validity of the Male Role Norms Inventory-Short Form (MRNI-SF) was assessed using a latent variable approach implemented with structural equation modeling (SEM). The MRNI-SF was specified as having a bifactor structure, and validation scales were also specified as latent variables. The latent variable approach had the advantages of separating effects of general and specific factors and controlling for some sources of measurement error. Data (N = 484) were from a diverse sample (38.8% men of color, 22.3% men of diverse sexualities) of community-dwelling and college men who responded to an online survey. The construct validity of the MRNI-SF General Traditional Masculinity Ideology factor was supported for all 4 of the proposed latent correlations with: (a) the Male Role Attitudes Scale; (b) the general factor of the Conformity to Masculine Norms Inventory-46; (c) the higher-order factor of the Gender Role Conflict Scale; and (d) the Personal Attributes Questionnaire-Masculinity Scale. Significant correlations with relevant other latent factors provided concurrent validity evidence for the MRNI-SF specific factors of Negativity toward Sexual Minorities, Importance of Sex, Restrictive Emotionality, and Toughness, with all 8 of the hypothesized relationships supported. However, 3 relationships concerning Dominance were not supported. (The construct validity of the remaining 2 MRNI-SF specific factors, Avoidance of Femininity and Self-Reliance through Mechanical Skills, was not assessed.) Comparisons were made, and meaningful differences noted, between the latent correlations emphasized in this study and their raw variable counterparts. Results are discussed in terms of the advantages of an SEM approach and the unique characteristics of the bifactor model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Multi-scale modeling of multi-component reactive transport in geothermal aquifers

    NASA Astrophysics Data System (ADS)

    Nick, Hamidreza M.; Raoof, Amir; Wolf, Karl-Heinz; Bruhn, David

    2014-05-01

    In deep geothermal systems, heat and chemical stresses can cause physical alterations that may have a significant effect on flow and reaction rates. As a consequence, mineral precipitation and dissolution lead to changes in the permeability and porosity of the formations. Large-scale modeling of reactive transport in such systems is still challenging. A large area of uncertainty is the way in which the pore-scale information controlling flow and reaction behaves at larger scales. A possible choice is to use constitutive relationships relating, for example, the permeability and porosity evolution to changes in the pore geometry. While determining such relationships through laboratory experiments may be limited, pore-network modeling provides an alternative solution. In this work, we introduce a new workflow in which a hybrid Finite-Element Finite-Volume method [1,2] and a pore-network modeling approach [3] are employed. Using the pore-scale model, relevant constitutive relations are developed. These relations are then embedded in the continuum-scale model. This approach enables us to study non-isothermal reactive transport in porous media while accounting for micro-scale features under realistic conditions. The performance and applicability of the proposed model is explored for different flow and reaction regimes. References: 1. Matthäi, S.K., et al.: Simulation of solute transport through fractured rock: a higher-order accurate finite-element finite-volume method permitting large time steps. Transport in Porous Media 83.2 (2010): 289-318. 2. Nick, H.M., et al.: Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive Henry problem. Journal of Contaminant Hydrology 145 (2012), 90-104. 3. Raoof, A., et al.: PoreFlow: a complex pore-network model for simulation of reactive transport in variably saturated porous media. Computers & Geosciences 61 (2013), 160-174.
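
    A common way to embed such pore-scale information in the continuum-scale model is through a porosity-permeability constitutive relation. The sketch below uses a generic Kozeny-Carman-type law as a stand-in for the relations that would actually be derived from the pore-network simulations; the exponent and the reaction-driven porosity update are hypothetical.

```python
def kozeny_carman_update(k0, phi0, phi, exponent=3.0):
    """Update permeability from a porosity change using a Kozeny-Carman-type law.

    k / k0 = (phi / phi0)^n * ((1 - phi0) / (1 - phi))^2
    """
    return k0 * (phi / phi0) ** exponent * ((1.0 - phi0) / (1.0 - phi)) ** 2

def porosity_after_reaction(phi, precip_rate, dt):
    """Reduce porosity by the volume of mineral precipitated over one time step."""
    return max(phi - precip_rate * dt, 1.0e-4)

if __name__ == "__main__":
    k0, phi0 = 1.0e-12, 0.20        # initial permeability (m^2) and porosity
    k, phi = k0, phi0
    for step in range(10):          # ten continuum-scale time steps
        phi = porosity_after_reaction(phi, precip_rate=1.0e-3, dt=1.0)
        k = kozeny_carman_update(k0, phi0, phi)
        print(f"step {step:2d}: phi = {phi:.4f}, k = {k:.3e} m^2")
```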

  5. Dynamic Behavior of Sand: Annual Report FY 11

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoun, T; Herbold, E; Johnson, S

    2012-03-15

    Currently, design of earth-penetrating munitions relies heavily on empirical relationships to estimate behavior, making it difficult to design novel munitions or address novel target situations without expensive and time-consuming full-scale testing with relevant system and target characteristics. Enhancing design through numerical studies and modeling could help reduce the extent and duration of full-scale testing if the models have enough fidelity to capture all of the relevant parameters. This can be separated into three distinct problems: that of the penetrator structural and component response, that of the target response, and that of the coupling between the two. This project focuses on enhancing understanding of the target response, specifically granular geomaterials, where the temporal and spatial multi-scale nature of the material controls its response. As part of the overarching goal of developing computational capabilities to predict the performance of conventional earth-penetrating weapons, this project focuses specifically on developing new models and numerical capabilities for modeling sand response in ALE3D. There is general recognition that granular materials behave in a manner that defies conventional continuum approaches which rely on response locality and which degrade in the presence of strong response nonlinearities, localization, and phase gradients. There are many numerical tools available to address parts of the problem. However, to enhance modeling capability, this project is pursuing a bottom-up approach of building constitutive models from higher fidelity, smaller spatial scale simulations (rather than from macro-scale observations of physical behavior as is traditionally employed) that are being augmented to address the unique challenges of mesoscale modeling of dynamically loaded granular materials. Through understanding response and sensitivity at the grain-scale, it is expected that better reduced order representations of response can be formulated at the continuum scale as illustrated in Figure 1 and Figure 2. The final result of this project is to implement such reduced order models in the ALE3D material library for general use.

  6. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    PubMed

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C(-1) on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming. © 2015 The Authors.
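
    The thaw-and-decompose bookkeeping at the core of the approach can be illustrated with a small three-pool decomposition routine; the pool fractions, turnover times and Q10 temperature scaling below are hypothetical placeholders, not the incubation-fitted parameters of PInc-PanTher.

```python
import math

def decompose(c_stock, soil_temp_c, years, fractions=(0.02, 0.20, 0.78),
              turnover_years=(0.5, 10.0, 500.0), q10=2.5, t_ref_c=5.0):
    """Carbon remaining after `years` of decomposition, three-pool model.

    Each pool decays exponentially once thawed; rates scale with soil
    temperature via a Q10 factor. Frozen soil (<= 0 C) does not decompose.
    """
    if soil_temp_c <= 0.0:
        return c_stock
    q = q10 ** ((soil_temp_c - t_ref_c) / 10.0)
    remaining = 0.0
    for frac, tau in zip(fractions, turnover_years):
        k = q / tau                       # temperature-adjusted decay rate (1/yr)
        remaining += c_stock * frac * math.exp(-k * years)
    return remaining

if __name__ == "__main__":
    c0 = 1000.0                           # Pg C of newly thawed carbon (hypothetical)
    for temp in (0.0, 2.0, 5.0, 8.0):
        lost = c0 - decompose(c0, temp, years=90)
        print(f"T = {temp:3.1f} C: {lost:6.1f} Pg C lost by 2100")
```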

  7. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    PubMed Central

    Koven, C. D.; Schuur, E. A. G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A. H.; Marchenko, S. S.; McGuire, A. D.; Natali, S. M.; Nicolsky, D. J.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming. PMID:26438276

  8. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    DOE PAGES

    Koven, C. D.; Schuur, E. A. G.; Schadel, C.; ...

    2015-10-05

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of –14 to –19 Pg C °C–1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. In conclusion, the simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming.

  9. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    USGS Publications Warehouse

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming.

  10. The drivers of wildfire enlargement do not exhibit scale thresholds in southeastern Australian forests.

    PubMed

    Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick

    2016-10-01

    Wildfires are complex adaptive systems, and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at the ignition difficult. We test this hypothesis by conducting multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, to assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions), and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales, respectively. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
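
    A minimal sketch of the multi-step idea (with synthetic data and scikit-learn logistic regressions standing in for the statistical models of the study): separate classifiers are fitted for whether a fire exceeds each successive size threshold, conditional on having reached the previous one, and the step probabilities are multiplied to estimate the chance of a large fire.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# synthetic predictors at ignition: fire weather index, fuel load, distance to road (km)
X = np.column_stack([rng.gamma(2.0, 10.0, n), rng.uniform(5, 25, n), rng.exponential(2.0, n)])
# synthetic final fire sizes (ha), loosely driven by weather and fuel
log_size = 0.06 * X[:, 0] + 0.10 * X[:, 1] + rng.normal(0, 2.0, n)
size = np.exp(log_size)

thresholds = [10.0, 100.0, 1000.0]
models, reached = [], np.ones(n, dtype=bool)
for thr in thresholds:
    # fit each step only on fires that reached the previous threshold
    y = (size[reached] >= thr).astype(int)
    m = LogisticRegression(max_iter=1000).fit(X[reached], y)
    models.append(m)
    reached = reached & (size >= thr)

# probability a new ignition reaches 1000 ha = product of the step probabilities
x_new = np.array([[60.0, 20.0, 5.0]])
p = 1.0
for m in models:
    p *= m.predict_proba(x_new)[0, 1]
print(f"P(final size >= 1000 ha) ~ {p:.2f}")
```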

  11. An Analytical Thermal Model for Autonomous Soaring Research

    NASA Technical Reports Server (NTRS)

    Allen, Michael

    2006-01-01

    A viewgraph presentation describing an analytical thermal model used to enable research on autonomous soaring for a small UAV aircraft is given. The topics include: 1) Purpose; 2) Approach; 3) SURFRAD Data; 4) Convective Layer Thickness; 5) Surface Heat Budget; 6) Surface Virtual Potential Temperature Flux; 7) Convective Scaling Velocity; 8) Other Calculations; 9) Yearly trends; 10) Scale Factors; 11) Scale Factor Test Matrix; 12) Statistical Model; 13) Updraft Strength Calculation; 14) Updraft Diameter; 15) Updraft Shape; 16) Smoothed Updraft Shape; 17) Updraft Spacing; 18) Environment Sink; 19) Updraft Lifespan; 20) Autonomous Soaring Research; 21) Planned Flight Test; and 22) Mixing Ratio.

  12. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.

  13. Improving catchment discharge predictions by inferring flow route contributions from a nested-scale monitoring and model setup

    NASA Astrophysics Data System (ADS)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.

    2011-03-01

    Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve, which relates spatial variation in groundwater depths to the average groundwater depth. The GDD curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods compared to a model that was conditioned on catchment discharge only.

  14. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet basis is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.

  15. Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling

    NASA Astrophysics Data System (ADS)

    Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.

    The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.

  16. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  17. Recent (1999-2003) Canadian research on contemporary processes of river erosion and sedimentation, and river mechanics

    NASA Astrophysics Data System (ADS)

    de Boer, D. H.; Hassan, M. A.; MacVicar, B.; Stone, M.

    2005-01-01

    Contributions by Canadian fluvial geomorphologists between 1999 and 2003 are discussed under four major themes: sediment yield and sediment dynamics of large rivers; cohesive sediment transport; turbulent flow structure and sediment transport; and bed material transport and channel morphology. The paper concludes with a section on recent technical advances. During the review period, substantial progress has been made in investigating the details of fluvial processes at relatively small scales. Examples of this emphasis are the studies of flow structure, turbulence characteristics and bedload transport, which continue to form central themes in fluvial research in Canada. Translating the knowledge of small-scale, process-related research to an understanding of the behaviour of large-scale fluvial systems, however, continues to be a formidable challenge. Models play a prominent role in elucidating the link between small-scale processes and large-scale fluvial geomorphology, and, as a result, a number of papers describing models and modelling results have been published during the review period. In addition, a number of investigators are now approaching the problem by directly investigating changes in the system of interest at larger scales, e.g. a channel reach over tens of years, and attempting to infer what processes may have led to the result. It is to be expected that these complementary approaches will contribute to an increased understanding of fluvial systems at a variety of spatial and temporal scales.

  18. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex.

    PubMed

    Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing

    2016-11-01

    Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.

  19. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex

    PubMed Central

    Mejias, Jorge F.; Murray, John D.; Kennedy, Henry; Wang, Xiao-Jing

    2016-01-01

    Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions. PMID:28138530

  20. COSP - A computer model of cyclic oxidation

    NASA Technical Reports Server (NTRS)

    Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.

    1991-01-01

    A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
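
    The heating/cooling bookkeeping described above can be illustrated with a small uniform-spalling routine: the scale grows parabolically during each hot dwell and a fixed fraction spalls on cooling, from which weight change and metal consumption follow. The rate constant, spall fraction and stoichiometric factors below are hypothetical placeholders, not the values used in COSP.

```python
import math

def cyclic_oxidation(n_cycles=200, dt=1.0, kp=0.01, spall_frac=0.02,
                     oxide_to_oxygen=0.47, oxide_to_metal=0.53):
    """Toy uniform-spalling cyclic oxidation model.

    Each cycle: the retained scale grows parabolically during the hot dwell,
    then a fixed fraction of the scale spalls on cooling. Specimen weight
    change = oxygen retained in adherent scale - metal lost in spalled scale.
    kp: parabolic rate constant (mg^2 cm^-4 h^-1); dt: hot time per cycle (h).
    """
    retained = 0.0          # retained oxide, mg/cm^2
    metal_consumed = 0.0    # cumulative metal converted to oxide, mg/cm^2
    spalled_metal = 0.0     # metal lost in spalled oxide, mg/cm^2
    history = []
    for cycle in range(1, n_cycles + 1):
        # parabolic growth of the existing scale during the heating cycle
        grown = math.sqrt(retained ** 2 + kp * dt)
        metal_consumed += (grown - retained) * oxide_to_metal
        # uniform spalling of a fixed fraction of the scale on cooling
        spall = spall_frac * grown
        spalled_metal += spall * oxide_to_metal
        retained = grown - spall
        weight_change = retained * oxide_to_oxygen - spalled_metal
        history.append((cycle, weight_change, metal_consumed))
    return history

if __name__ == "__main__":
    for cycle, dw, loss in cyclic_oxidation()[::50]:
        print(f"cycle {cycle:3d}: weight change = {dw:+.3f} mg/cm^2, "
              f"metal consumed = {loss:.3f} mg/cm^2")
```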

  1. Operational evapotranspiration mapping using remote sensing and weather datasets: a new parameterization for the SSEB approach

    USGS Publications Warehouse

    Senay, Gabriel B.; Bohms, Stefanie; Singh, Ramesh K.; Gowda, Prasanna H.; Velpuri, Naga Manohar; Alemu, Henok; Verdin, James P.

    2013-01-01

    The increasing availability of multi-scale remotely sensed data and global weather datasets is allowing the estimation of evapotranspiration (ET) at multiple scales. We present a simple but robust method that uses remotely sensed thermal data and model-assimilated weather fields to produce ET for the contiguous United States (CONUS) at monthly and seasonal time scales. The method is based on the Simplified Surface Energy Balance (SSEB) model, which is now parameterized for operational applications and renamed SSEBop. The innovative aspect of SSEBop is that it uses predefined boundary conditions that are unique to each pixel for the "hot" and "cold" reference conditions. The SSEBop model was used for computing ET for 12 years (2000-2011) using the MODIS and Global Data Assimilation System (GDAS) data streams. SSEBop ET results compared reasonably well with monthly eddy covariance ET data, explaining 64% of the observed variability across diverse ecosystems in the CONUS during 2005. Twelve annual ET anomalies (2000-2011) depicted the spatial extent and severity of the commonly known drought years in the CONUS. More research is required to improve the representation of the predefined boundary conditions in complex terrain at small spatial scales. The SSEBop model was found to be a promising approach for conducting water use studies in the CONUS, with a similar opportunity in other parts of the world. The approach can also be applied with other thermal sensors such as Landsat.
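
    The per-pixel boundary-condition idea can be illustrated with a simplified SSEB-style calculation: the ET fraction scales linearly between a predefined hot and cold reference temperature for each pixel and is multiplied by a reference ET from the weather fields. The exact SSEBop parameterization differs in detail; the values below are hypothetical.

```python
import numpy as np

def sseb_like_et(ts, t_cold, dt, eto, k=1.0):
    """Simplified SSEB-style ET estimate for one scene.

    ts:     land surface temperature from the thermal sensor (K)
    t_cold: per-pixel cold/wet reference temperature (K)
    dt:     predefined hot-cold temperature difference per pixel (K)
    eto:    reference evapotranspiration from weather fields (mm)
    k:      scaling coefficient relating the cold reference to eto

    The ET fraction is 1 at the cold boundary, 0 at the hot boundary,
    clipped to [0, 1] in between; actual ET = ETf * k * ETo.
    """
    t_hot = t_cold + dt
    etf = np.clip((t_hot - ts) / dt, 0.0, 1.0)
    return etf * k * eto

if __name__ == "__main__":
    # hypothetical 2x2 pixel example
    ts = np.array([[300.0, 310.0], [318.0, 305.0]])
    t_cold = np.full((2, 2), 298.0)
    dt = np.full((2, 2), 20.0)
    eto = np.full((2, 2), 6.0)     # mm/day
    print(sseb_like_et(ts, t_cold, dt, eto))
```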

  2. Evaluating the effects of terrestrial ecosystems, climate and carbon dioxide on weathering over geological time: a global-scale process-based approach.

    PubMed

    Taylor, Lyla L; Banwart, Steve A; Valdes, Paul J; Leake, Jonathan R; Beerling, David J

    2012-02-19

    Global weathering of calcium and magnesium silicate rocks provides the long-term sink for atmospheric carbon dioxide (CO(2)) on a timescale of millions of years by causing precipitation of calcium carbonates on the seafloor. Catchment-scale field studies consistently indicate that vegetation increases silicate rock weathering, but incorporating the effects of trees and fungal symbionts into geochemical carbon cycle models has relied upon simple empirical scaling functions. Here, we describe the development and application of a process-based approach to deriving quantitative estimates of weathering by plant roots, associated symbiotic mycorrhizal fungi and climate. Our approach accounts for the influence of terrestrial primary productivity via nutrient uptake on soil chemistry and mineral weathering, driven by simulations using a dynamic global vegetation model coupled to an ocean-atmosphere general circulation model of the Earth's climate. The strategy is successfully validated against observations of weathering in watersheds around the world, indicating that it may have some utility when extrapolated into the past. When applied to a suite of six global simulations from 215 to 50 Ma, we find significantly larger effects over the past 220 Myr relative to the present day. Vegetation and mycorrhizal fungi enhanced climate-driven weathering by a factor of up to 2. Overall, we demonstrate a more realistic process-based treatment of plant fungal-geosphere interactions at the global scale, which constitutes a first step towards developing 'next-generation' geochemical models.

  3. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings.

    PubMed

    Nelson, Kerrie P; Mitani, Aya A; Edwards, Don

    2017-09-10

    Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening test results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Evaluating the effects of terrestrial ecosystems, climate and carbon dioxide on weathering over geological time: a global-scale process-based approach

    PubMed Central

    Taylor, Lyla L.; Banwart, Steve A.; Valdes, Paul J.; Leake, Jonathan R.; Beerling, David J.

    2012-01-01

    Global weathering of calcium and magnesium silicate rocks provides the long-term sink for atmospheric carbon dioxide (CO2) on a timescale of millions of years by causing precipitation of calcium carbonates on the seafloor. Catchment-scale field studies consistently indicate that vegetation increases silicate rock weathering, but incorporating the effects of trees and fungal symbionts into geochemical carbon cycle models has relied upon simple empirical scaling functions. Here, we describe the development and application of a process-based approach to deriving quantitative estimates of weathering by plant roots, associated symbiotic mycorrhizal fungi and climate. Our approach accounts for the influence of terrestrial primary productivity via nutrient uptake on soil chemistry and mineral weathering, driven by simulations using a dynamic global vegetation model coupled to an ocean–atmosphere general circulation model of the Earth's climate. The strategy is successfully validated against observations of weathering in watersheds around the world, indicating that it may have some utility when extrapolated into the past. When applied to a suite of six global simulations from 215 to 50 Ma, we find significantly larger effects over the past 220 Myr relative to the present day. Vegetation and mycorrhizal fungi enhanced climate-driven weathering by a factor of up to 2. Overall, we demonstrate a more realistic process-based treatment of plant fungal–geosphere interactions at the global scale, which constitutes a first step towards developing ‘next-generation’ geochemical models. PMID:22232768

  5. Incorporating abundance information and guiding variable selection for climate-based ensemble forecasting of species' distributional shifts.

    PubMed

    Tanner, Evan P; Papeş, Monica; Elmore, R Dwayne; Fuhlendorf, Samuel D; Davis, Craig A

    2017-01-01

    Ecological niche models (ENMs) have increasingly been used to estimate the potential effects of climate change on species' distributions worldwide. Recently, predictions of species abundance have also been obtained with such models, though knowledge about the climatic variables affecting species abundance is often lacking. To address this, we used a well-studied guild (temperate North American quail) and the Maxent modeling algorithm to compare model performance of three variable selection approaches: correlation/variable contribution (CVC), biological (i.e., variables known to affect species abundance), and random. We then applied the best approach to forecast potential distributions, under future climatic conditions, and analyze future potential distributions in light of available abundance data and presence-only occurrence data. To estimate species' distributional shifts we generated ensemble forecasts using four global circulation models, four representative concentration pathways, and two time periods (2050 and 2070). Furthermore, we present distributional shifts where 75%, 90%, and 100% of our ensemble models agreed. The CVC variable selection approach outperformed our biological approach for four of the six species. Model projections indicated species-specific effects of climate change on future distributions of temperate North American quail. The Gambel's quail (Callipepla gambelii) was the only species predicted to gain area in climatic suitability across all three scenarios of ensemble model agreement. Conversely, the scaled quail (Callipepla squamata) was the only species predicted to lose area in climatic suitability across all three scenarios of ensemble model agreement. Our models projected future loss of areas for the northern bobwhite (Colinus virginianus) and scaled quail in portions of their distributions which are currently areas of high abundance. Climatic variables that influence local abundance may not always scale up to influence species' distributions. Special attention should be given to selecting variables for ENMs, and tests of model performance should be used to validate the choice of variables.
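
    A minimal sketch of a correlation/variable-contribution (CVC) style screen like the one compared above: highly correlated predictor pairs are pruned, keeping the member with the larger contribution score. The threshold and the contribution scores are illustrative assumptions; the study's exact procedure may differ.

      import numpy as np
      import pandas as pd

      def cvc_select(X, contribution, r_max=0.7):
          """X            : DataFrame of candidate climate predictors
             contribution : dict column name -> contribution score
                            (e.g. percent contribution from a preliminary Maxent run)
             r_max        : pairwise |correlation| threshold (0.7 is a common, assumed choice)
          """
          keep = list(X.columns)
          corr = X.corr().abs()
          for a in list(X.columns):
              for b in list(X.columns):
                  if a in keep and b in keep and a != b and corr.loc[a, b] > r_max:
                      # drop the member of the correlated pair with the lower contribution
                      keep.remove(a if contribution[a] < contribution[b] else b)
          return keep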

  6. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
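
    A minimal sketch of the downward (parent to child) head transfer described above: the coarse parent head field is interpolated in space onto the child-model boundary nodes and then in time onto the finer child time steps. Array names and shapes are illustrative assumptions, not taken from the paper.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def child_boundary_heads(parent_x, parent_y, parent_t, parent_h, bnd_xy, child_t):
          """parent_h : parent heads, shape (nt, ny, nx)
             bnd_xy   : (nb, 2) (x, y) coordinates of child boundary nodes
             child_t  : child-model time steps (finer than parent_t)
          """
          heads_at_parent_times = []
          for h in parent_h:  # one snapshot per parent time step
              interp = RegularGridInterpolator((parent_y, parent_x), h)
              heads_at_parent_times.append(interp(bnd_xy[:, ::-1]))   # query in (y, x) order
          heads_at_parent_times = np.array(heads_at_parent_times)     # (nt, nb)
          # temporal interpolation, one boundary node at a time
          return np.array([np.interp(child_t, parent_t, heads_at_parent_times[:, i])
                           for i in range(bnd_xy.shape[0])]).T        # (n_child_t, nb)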

  7. Modeling vehicle fuel consumption and emissions at signalized intersection approaches : integrating field-collected data into microscopic simulation.

    DOT National Transportation Integrated Search

    2012-07-01

    Microscopic models produce emissions and fuel consumption estimates with higher temporal resolution than other scales of models. Most emissions and fuel consumption models were developed with data from dynamometer testing which are sufficiently a...

  8. Development of a Patient-Specific Multi-Scale Model to Understand Atherosclerosis and Calcification Locations: Comparison with In vivo Data in an Aortic Dissection

    PubMed Central

    Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa

    2016-01-01

    Vascular calcification results in stiffening of the aorta and is associated with hypertension and atherosclerosis. Atherogenesis is a complex, multifactorial, and systemic process; the result of a number of factors, each operating simultaneously at several spatial and temporal scales. The ability to predict sites of atherogenesis would be of great use to clinicians in order to improve diagnostic and treatment planning. In this paper, we present a mathematical model as a tool to understand why atherosclerotic plaque and calcifications occur in specific locations. This model is then used to analyze vascular calcification and atherosclerotic areas in an aortic dissection patient using a mechanistic, multi-scale modeling approach, coupling patient-specific, fluid-structure interaction simulations with a model of endothelial mechanotransduction. A number of hemodynamic factors based on state-of-the-art literature are used as inputs to the endothelial permeability model, in order to investigate plaque and calcification distributions, which are compared with clinical imaging data. A significantly improved correlation between elevated hydraulic conductivity or volume flux and the presence of calcification and plaques was achieved by using a shear index comprising both mean and oscillatory shear components (HOLMES) and a non-Newtonian viscosity model as inputs, as compared to widely used hemodynamic indicators. The proposed approach shows promise as a predictive tool. The improvements obtained using the combined biomechanical/biochemical modeling approach highlight the benefits of mechanistic modeling as a powerful tool to understand complex phenomena and provides insight into the relative importance of key hemodynamic parameters. PMID:27445834
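
    A minimal sketch of the wall-shear-stress indices referenced above (time-averaged WSS, OSI, and a HOLMES-like combination). The abstract does not give the exact HOLMES definition, so the combined form below (TAWSS weighted by 0.5 minus OSI) is an assumption based on common usage, and all names are illustrative.

      import numpy as np

      def shear_indices(wss):
          """wss : array of shape (nt, 3), instantaneous WSS vectors at one wall point,
                   sampled uniformly over one cardiac cycle."""
          mag = np.linalg.norm(wss, axis=1)
          tawss = mag.mean()                              # time-averaged |WSS|
          mean_vec = np.linalg.norm(wss.mean(axis=0))     # |time-averaged WSS vector|
          osi = 0.5 * (1.0 - mean_vec / tawss)            # oscillatory shear index
          holmes = tawss * (0.5 - osi)                    # assumed HOLMES-style index
          return tawss, osi, holmes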

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan

    This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success for the first time, although preliminary, in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials. This is the first attempt to actually characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In both cases much more remains to be accomplished.

  10. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  11. Moving from a project to programmatic response: scaling up harm reduction in Asia.

    PubMed

    Chatterjee, Anindya; Sharma, Mukta

    2010-03-01

    The response to the HIV epidemics among people who inject drugs in Asia began to emerge in the early to mid 1990s, with the rather hesitant implementation of small-scale needle syringe programmes and community care initiatives aiming to support those who were already living with the virus. Since then Asia has seen a significant scaling up of harm reduction, despite very limited resources and difficult policy and legislative environments. One of the major reasons this has happened, is the utilisation of programme based approaches and the firm entrenchment of harm reduction thinking within national HIV/AIDS programmes and strategic plans--in most cases aided by multilateral and bilateral donors. Several models of scale up have been noted in Asia. The transition away from project based approaches, while on the whole positive, can also have a negative impact if the involvement of civil society and a client focussed approach is not protected. Also there are implications for which models of capacity building can be systematised for ongoing scale up. Most crucially, the tensions between drug policy, human rights and public health policies need to be resolved if harm reduction services are to be made available to the millions in Asia who are still unable to access these services. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  12. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
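
    A minimal single-machine sketch of the partition-then-mixture idea described above: a Gaussian mixture is fitted to each roughly homogeneous section of the data stream to label particle populations. The record's implementation is a distributed Java/Hadoop system; this sketch only illustrates the model, and the number of populations is an assumption.

      from sklearn.mixture import GaussianMixture

      def classify_partitions(partitions, n_populations=4):
          """partitions : list of (n_i, n_channels) arrays of cytometry measurements,
             each corresponding to one homogeneous section of the data stream."""
          labels = []
          for block in partitions:
              gmm = GaussianMixture(n_components=n_populations,
                                    covariance_type="full", random_state=0)
              labels.append(gmm.fit_predict(block))   # per-particle population labels
          return labels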

  13. The saturated zone at Yucca Mountain: An overview of the characterization and assessment of the saturated zone as a barrier to potential radionuclide migration

    USGS Publications Warehouse

    Eddebbarh, A.-A.; Zyvoloski, G.A.; Robinson, B.A.; Kwicklis, E.M.; Reimus, P.W.; Arnold, B.W.; Corbet, T.; Kuzio, S.P.; Faunt, C.

    2003-01-01

    The US Department of Energy is pursuing Yucca Mountain, Nevada, for the development of a geologic repository for the disposal of spent nuclear fuel and high-level radioactive waste, if the repository is able to meet applicable radiation protection standards established by the US Nuclear Regulatory Commission and the US Environmental Protection Agency (EPA). Effective performance of such a repository would rely on a number of natural and engineered barriers to isolate radioactive waste from the accessible environment. Groundwater beneath Yucca Mountain is the primary medium through which most radionuclides might move away from the potential repository. The saturated zone (SZ) system is expected to act as a natural barrier to this possible movement of radionuclides both by delaying their transport and by reducing their concentration before they reach the accessible environment. Information obtained from Yucca Mountain Site Characterization Project activities is used to estimate groundwater flow rates through the site-scale SZ flow and transport model area and to constrain general conceptual models of groundwater flow in the site-scale area. The site-scale conceptual model is a synthesis of what is known about flow and transport processes at the scale required for total system performance assessment of the site. This knowledge builds on and is consistent with knowledge that has accumulated at the regional scale but is more detailed because more data are available at the site-scale level. The mathematical basis of the site-scale model and the associated numerical approaches are designed to assist in quantifying the uncertainty in the permeability of rocks in the geologic framework model and to represent accurately the flow and transport processes included in the site-scale conceptual model. Confidence in the results of the mathematical model was obtained by comparing calculated to observed hydraulic heads, estimated to measured permeabilities, and lateral flow rates calculated by the site-scale model to those calculated by the regional-scale flow model. In addition, it was confirmed that the flow paths leaving the region of the potential repository are consistent with those inferred from gradients of measured head and those independently inferred from water-chemistry data. The general approach of the site-scale SZ flow and transport model analysis is to calculate unit breakthrough curves for radionuclides at the interface between the SZ and the biosphere using the three-dimensional site-scale SZ flow and transport model. Uncertainties are explicitly incorporated into the site-scale SZ flow and transport abstractions through key parameters and conceptual models. © 2002 Elsevier Science B.V. All rights reserved.

  14. Multiscale turbulence models based on convected fluid microstructure

    NASA Astrophysics Data System (ADS)

    Holm, Darryl D.; Tronci, Cesare

    2012-11-01

    The Euler-Poincaré approach to complex fluids is used to derive multiscale equations for computationally modeling Euler flows as a basis for modeling turbulence. The model is based on a kinematic sweeping ansatz (KSA) which assumes that the mean fluid flow serves as a Lagrangian frame of motion for the fluctuation dynamics. Thus, we regard the motion of a fluid parcel on the computationally resolvable length scales as a moving Lagrange coordinate for the fluctuating (zero-mean) motion of fluid parcels at the unresolved scales. Even in the simplest two-scale version on which we concentrate here, the contributions of the fluctuating motion under the KSA to the mean motion yields a system of equations that extends known results and appears to be suitable for modeling nonlinear backscatter (energy transfer from smaller to larger scales) in turbulence using multiscale methods.

  15. Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines

    PubMed Central

    Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram

    2014-01-01

    When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002

  16. Multiple scales modelling approaches to social interaction in crowd dynamics and crisis management. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Trucu, Dumitru

    2016-09-01

    In this comprehensive review concerning the modelling of human behaviours in crowd dynamics [3], the authors explore a wide range of mathematical approaches, spanning multiple scales, that are suitable for describing emerging crowd behaviours in extreme situations. Focused on deciphering the key aspects leading to the evolution of emergent crowd patterns in challenging circumstances, such as an evacuation from a complex venue, the authors address these complex dynamics at the microscale (individual level), the mesoscale (probability distributions of interacting individuals), and the macroscale (population level), ultimately aiming to gain understanding and knowledge that would inform decision making in crisis management.

  17. Development of an Image-based Multi-Scale Finite Element Approach to Predict Fatigue Damage in Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Arshadi, Amir

    Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for successful design of composite materials. In this thesis, an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses which seriously affects the pavement performance is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in the asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of the asphalt mixtures at all scales. In the present study, the viscoelastic continuum damage model is implemented into the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on the experimentally derived measurements for the binder properties. For the mastic and mortar scales, artificial two-dimensional images were generated and used to characterize the properties of those scales. Finally, the 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading. In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and the results from simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on mechanical behavior of asphalt concrete under loading.

  18. Developing Higher-Order Materials Knowledge Systems

    NASA Astrophysics Data System (ADS)

    Fast, Anthony Nathan

    2011-12-01

    Advances in computational materials science and novel characterization techniques have allowed scientists to probe deeply into a diverse range of materials phenomena. These activities are producing enormous amounts of information regarding the roles of various hierarchical material features in the overall performance characteristics displayed by the material. Connecting the hierarchical information over disparate domains is at the crux of multiscale modeling. The inherent challenge of performing multiscale simulations is developing scale-bridging relationships to couple material information between well separated length scales. Much progress has been made in the development of homogenization relationships which replace heterogeneous material features with effective homogeneous descriptions. These relationships facilitate the flow of information from lower length scales to higher length scales. Meanwhile, most localization relationships that link the information from a higher length scale to a lower length scale are plagued by computationally intensive techniques which are not readily integrated into multiscale simulations. The challenge of executing fully coupled multiscale simulations is augmented by the need to incorporate the evolution of the material structure that may occur under conditions such as material processing. To address these challenges with multiscale simulation, a novel framework called the Materials Knowledge System (MKS) has been developed. This methodology efficiently extracts, stores, and recalls microstructure-property-processing localization relationships. This approach is built on the statistical continuum theories developed by Kroner that express the localization of the response field at the microscale using a series of highly complex convolution integrals, which have historically been evaluated analytically. The MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels in these expressions to results from previously validated physics-based models. These novel tools have been validated for the elastic strain localization in moderate contrast dual-phase composites by direct comparisons with predictions from a finite element model. The versatility of the approach is further demonstrated by its successful application to capturing the structure evolution during spinodal decomposition of a binary alloy. Lastly, some key features in the future application of the MKS approach are developed using the Portevin-Le Chatelier effect. It has been shown with these case studies that the MKS approach is capable of accurately reproducing the results from physics-based models with a drastic reduction in computational requirements.

  19. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-02-29

    This report describes a neuronal model in the form of difference equations that generates neuronal states at discrete moments of time. In this approach, the time step can be made... The authors propose to use modern DSP ideas to develop new, efficient approaches to the design of such discrete-time models for studies of large-scale neuronal networks.
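
    The report's own equations are not given in the record; as an illustration of a neuron written as difference equations, the sketch below implements the chaotic Rulkov map, a standard map-based neuron model, which is not necessarily the model developed in the report.

      import numpy as np

      def rulkov_map(n_steps, alpha=4.5, mu=0.001, sigma=-0.5, x0=-1.0, y0=-3.0):
          """Illustrative discrete-time (map-based) neuron: the chaotic Rulkov map.
          Parameter values are typical choices that produce spiking/bursting-like
          dynamics; they are not taken from the report."""
          x = np.empty(n_steps); y = np.empty(n_steps)
          x[0], y[0] = x0, y0
          for n in range(n_steps - 1):
              x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]   # fast (membrane-like) variable
              y[n + 1] = y[n] - mu * (x[n] - sigma)         # slow recovery variable
          return x, y

      x, _ = rulkov_map(5000)   # x then contains the discrete-time membrane trace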

  20. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235

  1. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 1: The participatory modeling approach

    NASA Astrophysics Data System (ADS)

    Scheer, Dirk; Konrad, Wilfried; Class, Holger; Kissinger, Alexander; Knopf, Stefan; Noack, Vera

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the potential hazards associated with the geological storage of CO2. Thus, in a site selection process, models for predicting the fate of the displaced brine are required, for example, for a risk assessment or the optimization of pressure management concepts. From the very beginning, this research on brine migration aimed at involving expert and stakeholder knowledge and assessment in simulating the impacts of injecting CO2 into deep saline aquifers by means of a participatory modeling process. The involvement exercise made use of two approaches. First, guideline-based interviews were carried out, aiming at eliciting expert and stakeholder knowledge and assessments of geological structures and mechanisms affecting CO2-induced brine migration. Second, a stakeholder workshop including the World Café format yielded evaluations and judgments of the numerical modeling approach, scenario selection, and preliminary simulation results. The participatory modeling approach gained several results covering brine migration in general, the geological model sketch, scenario development, and the review of the preliminary simulation results. These results were included in revised versions of both the geological model and the numerical model, helping to improve the analysis of regional-scale brine migration along vertical pathways due to CO2 injection.

  2. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighting coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment is designed based on a Bently-RK4 rotor testbed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
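
    A minimal sketch of the state-division plus Markov idea described above: a degradation index is divided into states, an empirical transition matrix is estimated, and the expected number of steps to the failure state is computed. KMeans is used here only as a simple stand-in for the paper's Fuzzy C-Means division, and the mean-first-passage calculation is a standard textbook form, not necessarily the paper's weighting scheme.

      import numpy as np
      from sklearn.cluster import KMeans

      def markov_remaining_life(index_series, n_states=5):
          """index_series : 1-D degradation index over time (larger = more degraded);
             the most-degraded state is treated as failure."""
          x = np.asarray(index_series, float).reshape(-1, 1)
          km = KMeans(n_clusters=n_states, n_init=10, random_state=0)
          labels = km.fit_predict(x)
          order = np.argsort(km.cluster_centers_.ravel())         # sort states by severity
          relabel = np.empty(n_states, int); relabel[order] = np.arange(n_states)
          s = relabel[labels]
          P = np.zeros((n_states, n_states))                      # empirical transition counts
          for a, b in zip(s[:-1], s[1:]):
              P[a, b] += 1
          P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)     # row-normalize
          Q = P[:-1, :-1]                                         # transient (non-failure) states
          t = np.linalg.solve(np.eye(n_states - 1) - Q, np.ones(n_states - 1))
          return t[0]   # expected steps to failure starting from the healthiest state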

  3. Dynamic model-based N management reduces surplus nitrogen and improves the environmental performance of corn production

    NASA Astrophysics Data System (ADS)

    Sela, S.; Woodbury, P. B.; van Es, H. M.

    2018-05-01

    The US Midwest is the largest and most intensive corn (Zea mays, L.) production region in the world. However, N losses from corn systems cause serious environmental impacts including dead zones in coastal waters, groundwater pollution, particulate air pollution, and global warming. New approaches to reducing N losses are urgently needed. N surplus is gaining attention as such an approach for multiple cropping systems. We combined experimental data from 127 on-farm field trials conducted in seven US states during the 2011–2016 growing seasons with biochemical simulations using the PNM model to quantify the benefits of a dynamic location-adapted management approach to reduce N surplus. We found that this approach allowed large reductions in N rate (32%) and N surplus (36%) compared to existing static approaches, without reducing yield and substantially reducing yield-scaled N losses (11%). Across all sites, yield-scaled N losses increased linearly with N surplus values above ~48 kg ha‑1. Using the dynamic model-based N management approach enabled growers to get much closer to this target than using existing static methods, while maintaining yield. Therefore, this approach can substantially reduce N surplus and N pollution potential compared to static N management.

  4. Prediction of Continental-Scale Net Ecosystem Carbon Exchange by Combining MODIS and AmeriFlux Data

    NASA Astrophysics Data System (ADS)

    Xiao, J.; Zhuang, Q.

    2007-12-01

    There is growing interest in scaling up net ecosystem exchange (NEE) measured at eddy covariance flux towers to regional scales. Here we used remote sensing data from the MODIS instrument on board NASA's Terra satellite to extrapolate NEE measured at AmeriFlux sites to the continental scale. We combined MODIS data and NEE measurements from a number of AmeriFlux sites with a variety of vegetation types (e.g., forests, grasslands, shrublands, savannas, and croplands) to develop a predictive NEE model using a regression tree approach. The model was trained using 2000-2003 NEE measurements, and the performance of the model was evaluated using independent data over the period 2004-2006. We found that the model predicted NEE with reasonable accuracy at the continental scale. The R-squared values are 0.50 for all vegetation types combined and 0.72 for deciduous forests. We then applied the model to the conterminous U.S. and predicted NEE for each 500 m by 500 m cell over the period 2001-2006. Based on the wall-to-wall NEE estimates, we examined the spatial and temporal distributions of annual NEE and interannual variability of annual NEE across the conterminous U.S. over the study period (2001-2006). Our scaling-up approach implicitly considered the effects of climate variability, land use/land cover change, disturbances, extreme climate events, and management practices, and thus our annual NEE estimates represent the net carbon fluxes between the terrestrial biosphere and the atmosphere in the conterminous U.S.
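
    A minimal sketch of the regression-tree upscaling step: tower NEE is regressed on satellite-derived predictors, and the fitted tree is then evaluated on an independent period. The predictor names, values, and tree depth below are illustrative assumptions, not the study's actual inputs.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.metrics import r2_score

      # assumed MODIS-style predictors per tower-month: EVI, daytime LST, nighttime LST, land-cover code
      X_train = np.array([[0.45, 302.1, 288.0, 4],
                          [0.20, 310.5, 295.2, 7],
                          [0.60, 296.3, 284.1, 1],
                          [0.35, 305.0, 290.7, 10]])
      nee_train = np.array([-1.8, 0.4, -2.6, -0.3])     # synthetic NEE values

      tree = DecisionTreeRegressor(max_depth=3).fit(X_train, nee_train)

      # evaluation on an independent period (placeholder arrays stand in for 2004-2006 data)
      X_test, nee_test = X_train, nee_train
      print("R^2:", r2_score(nee_test, tree.predict(X_test)))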

  5. Integrating Cellular Metabolism into a Multiscale Whole-Body Model

    PubMed Central

    Krauss, Markus; Schaller, Stephan; Borchers, Steffen; Findeisen, Rolf; Lippert, Jörg; Kuepfer, Lars

    2012-01-01

    Cellular metabolism continuously processes an enormous range of external compounds into endogenous metabolites and is as such a key element in human physiology. The multifaceted physiological role of the metabolic network fulfilling the catalytic conversions can only be fully understood from a whole-body perspective where the causal interplay of the metabolic states of individual cells, the surrounding tissue and the whole organism are simultaneously considered. We here present an approach relying on dynamic flux balance analysis that allows the integration of metabolic networks at the cellular scale into standardized physiologically-based pharmacokinetic models at the whole-body level. To evaluate our approach we integrated a genome-scale network reconstruction of a human hepatocyte into the liver tissue of a physiologically-based pharmacokinetic model of a human adult. The resulting multiscale model was used to investigate hyperuricemia therapy, ammonia detoxification and paracetamol-induced toxication at a systems level. The specific models simultaneously integrate multiple layers of biological organization and offer mechanistic insights into pathology and medication. The approach presented may in future support a mechanistic understanding in diagnostics and drug development. PMID:23133351
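
    A minimal sketch of the flux-balance step that dynamic flux balance analysis repeats at each whole-body time step: maximize one flux subject to steady-state mass balance and flux bounds. The toy three-reaction network below is an illustration, not the genome-scale hepatocyte reconstruction used in the paper.

      import numpy as np
      from scipy.optimize import linprog

      def fba_step(S, lb, ub, objective_index):
          """S : stoichiometric matrix (metabolites x reactions); solve max v[objective]
             subject to S v = 0 and lb <= v <= ub."""
          n = S.shape[1]
          c = np.zeros(n); c[objective_index] = -1.0      # linprog minimizes, so negate
          res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                        bounds=list(zip(lb, ub)), method="highs")
          return res.x

      # toy network: uptake -> internal conversion -> output flux
      S = np.array([[1.0, -1.0, 0.0],
                    [0.0, 1.0, -1.0]])
      v = fba_step(S, lb=[0, 0, 0], ub=[10, 10, 10], objective_index=2)
      print(v)   # the output flux is limited by the uptake bound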

  6. Synchronicity in predictive modelling: a new view of data assimilation

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Tribbia, J. J.; Weiss, J. B.

    2006-11-01

    The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing "truth" and the other representing "model", with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.
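
    A minimal illustration of the synchronization view described above: a "truth" and a "model" copy of the Lorenz-63 system are coupled one way through the observed x-component only (nudging), and the model state is drawn toward the truth. Parameter values and the coupling strength are illustrative.

      import numpy as np

      def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = state
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      dt, k = 0.01, 10.0                       # time step and coupling (nudging) strength
      truth = np.array([1.0, 1.0, 1.0])
      model = np.array([8.0, -5.0, 25.0])      # model starts far from the truth
      for _ in range(5000):
          truth = truth + dt * lorenz(truth)
          # unidirectional flow of information: only truth's x-component nudges the model
          nudge = np.array([k * (truth[0] - model[0]), 0.0, 0.0])
          model = model + dt * (lorenz(model) + nudge)
      print(np.abs(truth - model))             # differences typically shrink as the systems synchronize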

  7. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions

    PubMed Central

    Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.

    2017-01-01

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5΄ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800


  8. A Cellular Automaton / Finite Element model for predicting grain texture development in galvanized coatings

    NASA Astrophysics Data System (ADS)

    Guillemot, G.; Avettand-Fènoël, M.-N.; Iosta, A.; Foct, J.

    2011-01-01

    The hot-dip galvanizing process is a widely used and efficient way to protect steel from corrosion. We propose to control the microstructure of the zinc grains by investigating the relevant process parameters. In order to improve the texture of this coating, we model grain nucleation and growth processes and simulate the zinc solid phase development. A coupling scheme model has been applied with this aim. This model improves a previous two-dimensional model of the solidification process. It couples a cellular automaton (CA) approach and a finite element (FE) method. The CA grid and FE mesh are superimposed on the same domain. The grain development is simulated at the micro-scale based on the CA grid. A nucleation law is defined using a Gaussian probability and a random set of nucleating cells. A crystallographic orientation is defined for each one with a choice of Euler's angles (Ψ,θ,φ). A small growing shape is then associated with each cell in the mushy domain, and dendrite tip kinetics are defined using the model of Kurz [2]. The six directions of the basal plane and the two perpendicular directions develop in each mushy cell. During each time step, cell temperature and solid fraction are then determined at the micro-scale using the enthalpy conservation relation, and the variations are reassigned at the macro-scale. This coupling scheme model enables the simulation of the three-dimensional growth kinetics of the zinc grains within a two-dimensional approach. Grain structure evolutions for various cooling times have been simulated. The final grain structure has been compared to EBSD measurements. We show that the preferential growth of dendrite arms in the basal plane of zinc grains is correctly predicted. The described coupling scheme model could be applied to simulate other products or manufacturing processes. It constitutes an approach gathering both micro- and macro-scale models.

  9. Voluntary EMG-to-force estimation with a multi-scale physiological muscle model

    PubMed Central

    2013-01-01

    Background EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported Hill modeling errors were large for different firing frequencies, levels of activation and speeds of contraction. This may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model, which is linked to the underlying cross-bridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods The ankle torque was measured for plantar flexion, along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile parts have been compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale relative to maximum voluntary contraction. Results The estimation results obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors were improved with the nonlinear Hill model compared to the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error in the multi-scale model was 6.1%, while maintaining uniform estimation performance in both fast and slow contraction schemes. Conclusions We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through the integration of the neural activation frequency property and the force-velocity relationship via cross-bridge dynamics. PMID:24007560
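
    The multi-scale model itself is too involved for a short listing; the sketch below instead shows the Hill-type baseline the study compares against, where force is the product of activation, force-length and force-velocity factors. Curve shapes and constants are illustrative assumptions, not the parameters identified in the study.

      import numpy as np

      def hill_force(emg_env, l_norm, v_norm, f_max=1000.0):
          """emg_env : normalized EMG envelope in [0, 1], used directly as activation
             l_norm  : muscle length / optimal length
             v_norm  : shortening velocity / maximum shortening velocity (positive = shortening)
             f_max   : maximum isometric force (N)"""
          a = emg_env                                      # activation dynamics omitted in this sketch
          f_l = np.exp(-((l_norm - 1.0) / 0.45) ** 2)      # Gaussian force-length curve (assumed width)
          f_v = (1.0 - v_norm) / (1.0 + 4.0 * v_norm)      # hyperbolic force-velocity curve (assumed shape)
          return a * f_l * np.clip(f_v, 0.0, 1.8) * f_max

      print(hill_force(0.6, 1.05, 0.1))   # force estimate for one sample of EMG, length and velocity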

  10. Broad and Narrow CHC Abilities Measured and Not Measured by the Wechsler Scales: Moving beyond Within-Battery Factor Analysis

    ERIC Educational Resources Information Center

    Flanagan, Dawn P.; Alfonso, Vincent C.; Reynolds, Matthew R.

    2013-01-01

    In this commentary, we reviewed two clinical validation studies on the Wechsler Scales conducted by Weiss and colleagues. These researchers used a rigorous within-battery model-fitting approach that demonstrated the factorial invariance of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) and Wechsler Adult Intelligence…

  11. PEEX Modelling Platform for Seamless Environmental Prediction

    NASA Astrophysics Data System (ADS)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and in China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  12. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites are composed of a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. The degradation has been examined using tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure of broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure all over the integration points in the meshes of woven composites. A stochastic approach was applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations performed have shown variations in the stress and strain at failure as well as in the failure modes of the yarns.
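
    A minimal sketch of the Monte Carlo step described above: Weibull-distributed failure stresses are sampled and assigned to integration points, and elements whose yarn-direction stress exceeds their sampled strength are flagged for erosion. The scale and modulus values are illustrative assumptions, not the parameters identified for the yarns in the study.

      import numpy as np

      def sample_failure_stresses(n_points, sigma_0=2000.0, m=10.0, seed=0):
          """Draw one Weibull failure stress per integration point
          (sigma_0 = scale in MPa, m = Weibull modulus)."""
          rng = np.random.default_rng(seed)
          return sigma_0 * rng.weibull(m, size=n_points)

      def eroded(stress, strength):
          """Elements whose yarn-direction stress exceeds the sampled strength
          are flagged for erosion (deletion) in the next increment."""
          return stress >= strength

      strength = sample_failure_stresses(8)            # one value per integration point
      print(eroded(np.full(8, 1900.0), strength))      # which points would fail at 1900 MPa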

  13. An empirical perspective for understanding climate change impacts in Switzerland

    USGS Publications Warehouse

    Henne, Paul; Bigalke, Moritz; Büntgen, Ulf; Colombaroli, Daniele; Conedera, Marco; Feller, Urs; Frank, David; Fuhrer, Jürg; Grosjean, Martin; Heiri, Oliver; Luterbacher, Jürg; Mestrot, Adrien; Rigling, Andreas; Rössler, Ole; Rohr, Christian; Rutishauser, This; Schwikowski, Margit; Stampfli, Andreas; Szidat, Sönke; Theurillat, Jean-Paul; Weingartner, Rolf; Wilcke, Wolfgan; Tinner, Willy

    2018-01-01

    Planning for the future requires a detailed understanding of how climate change affects a wide range of systems at spatial scales that are relevant to humans. Understanding of climate change impacts can be gained from observational and reconstruction approaches and from numerical models that apply existing knowledge to climate change scenarios. Although modeling approaches are prominent in climate change assessments, observations and reconstructions provide insights that cannot be derived from simulations alone, especially at local to regional scales where climate adaptation policies are implemented. Here, we review the wealth of understanding that emerged from observations and reconstructions of ongoing and past climate change impacts in Switzerland, with wider applicability in Europe. We draw examples from hydrological, alpine, forest, and agricultural systems, which are of paramount societal importance, and are projected to undergo important changes by the end of this century. For each system, we review existing model-based projections, present what is known from observations, and discuss how empirical evidence may help improve future projections. A particular focus is given to better understanding thresholds, tipping points and feedbacks that may operate on different time scales. Observational approaches provide the grounding in evidence that is needed to develop local to regional climate adaptation strategies. Our review demonstrates that observational approaches should ideally have a synergistic relationship with modeling in identifying inconsistencies in projections as well as avenues for improvement. They are critical for uncovering unexpected relationships between climate and agricultural, natural, and hydrological systems that will be important to society in the future.

  14. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
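
    A minimal sketch of the data-reduction idea described above: a random "sketching" matrix compresses n observations to k rows, and the data misfit is evaluated in the reduced space. The RGA embeds this inside the PCGA machinery; only the reduction step is shown here, and the sketch dimension is an illustrative choice.

      import numpy as np

      def sketch_misfit(d_obs, d_sim, k, seed=0):
          """Compress the residual with a k x n Gaussian sketching matrix and
          return the reduced least-squares misfit."""
          n = d_obs.size
          rng = np.random.default_rng(seed)
          S = rng.standard_normal((k, n)) / np.sqrt(k)    # Johnson-Lindenstrauss-style sketch
          r = S @ (d_sim - d_obs)                         # reduced residual
          return 0.5 * r @ r

      rng = np.random.default_rng(1)
      d_obs = rng.standard_normal(100_000)
      print(sketch_misfit(d_obs, d_obs + 0.01, k=200))    # misfit evaluated with 200 rows instead of 100,000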

  15. Numerical investigation of the flow in axial water turbines and marine propellers with scale-resolving simulations

    NASA Astrophysics Data System (ADS)

    Morgut, Mitja; Jošt, Dragica; Nobile, Enrico; Škerlavaj, Aljaž

    2015-11-01

    The accurate prediction of the performance of axial water turbines and naval propellers is a challenging task of great practical relevance. In this paper a numerical prediction strategy, based on the combination of a trusted CFD solver and a calibrated mass transfer model, is applied to the turbulent flow in axial turbines and around a model-scale naval propeller, under non-cavitating and cavitating conditions. Some selected results for axial water turbines and a marine propeller are presented, highlighting in particular the advantages, in terms of accuracy and fidelity, of Scale-Resolving Simulations (SRS), such as SAS (Scale-Adaptive Simulation) and Zonal LES (ZLES), compared to standard RANS approaches. Efficiency prediction for a Kaplan and a bulb turbine was significantly improved by use of the SAS SST model in combination with ZLES in the draft tube. The size of the cavitation cavity and the sigma break curve for the Kaplan turbine were successfully predicted with the SAS model in combination with a robust high-resolution scheme, while for mass transfer the Zwart model with calibrated constants was used. The results obtained for a marine propeller in non-uniform inflow, under cavitating conditions, compare well with available experimental measurements and prove that a mass transfer model previously calibrated for RANS (Reynolds-Averaged Navier-Stokes) can also be successfully applied within SRS approaches.

  16. Relativistic space-charge-limited current for massive Dirac fermions

    NASA Astrophysics Data System (ADS)

    Ang, Y. S.; Zubair, M.; Ang, L. K.

    2017-04-01

    A theory of relativistic space-charge-limited current (SCLC) is formulated to determine the SCLC scaling, J ∝ V^α/L^β, for a finite band-gap Dirac material of length L biased under a voltage V. In one-dimensional (1D) bulk geometry, our model allows (α, β) to vary from (2, 3) for the nonrelativistic model in traditional solids to (3/2, 2) for the ultrarelativistic model of massless Dirac fermions. For 2D thin-film geometry we obtain α = β, which varies between 2 and 3/2, respectively, at the nonrelativistic and ultrarelativistic limits. We further provide rigorous proof based on a Green's-function approach that for a uniform SCLC model described by carrier-density-dependent mobility, the scaling relations of the 1D bulk model can be directly mapped into the case of 2D thin film for any contact geometries. Our simplified approach provides a convenient tool to obtain the 2D thin-film SCLC scaling relations without the need of explicitly solving the complicated 2D problems. Finally, this work clarifies the inconsistency in using the traditional SCLC models to explain the experimental measurement of a 2D Dirac semiconductor. We conclude that the voltage scaling 3/2 < α < 2 is a distinct signature of massive Dirac fermions in a Dirac semiconductor and is in agreement with experimental SCLC measurements in MoS2.
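
    As a simple illustration of how the voltage-scaling exponent α would be extracted from measured or simulated J-V data, the sketch below fits a power law in log-log space; the data and exponent are synthetic, not taken from the work above.

    import numpy as np

    # Illustrative only: recover the voltage-scaling exponent alpha from J-V data
    # by a log-log fit, as one would when testing whether 3/2 < alpha < 2
    # (the massive-Dirac signature discussed above). Data here are synthetic.

    rng = np.random.default_rng(1)
    alpha_true = 1.7
    V = np.linspace(1.0, 10.0, 40)
    J = 2.3e-4 * V**alpha_true * (1.0 + 0.02 * rng.standard_normal(V.size))

    slope, intercept = np.polyfit(np.log(V), np.log(J), 1)
    print(f"fitted alpha = {slope:.3f}")   # close to 1.7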

  17. Field Validation of Habitat Suitability Models for Vulnerable Marine Ecosystems in the South Pacific Ocean: Implications for the use of Broad-scale Models in Fisheries Management

    NASA Astrophysics Data System (ADS)

    Anderson, O. F.; Guinotte, J. M.; Clark, M. R.; Rowden, A. A.; Mormede, S.; Davies, A. J.; Bowden, D.

    2016-02-01

    Spatial management of vulnerable marine ecosystems requires accurate knowledge of their distribution. Predictive habitat suitability modelling, using species presence data and a suite of environmental predictor variables, has emerged as a useful tool for inferring distributions outside of known areas. However, validation of model predictions is typically performed with non-independent data. In this study, we describe the results of habitat suitability models constructed for four deep-sea reef-forming coral species across a large region of the South Pacific Ocean using MaxEnt and Boosted Regression Tree modelling approaches. In order to validate model predictions we conducted a photographic survey on a set of seamounts in an unsampled area east of New Zealand. The likelihood of habitat suitable for reef-forming corals on these seamounts was predicted to be variable, but very high in some regions, particularly where levels of aragonite saturation, dissolved oxygen, and particulate organic carbon were optimal. However, the observed frequency of coral occurrence in analyses of survey photographic data was much lower than expected, and patterns of observed versus predicted coral distribution were not highly correlated. The poor performance of these broad-scale models is attributed to the lack of recorded species absences to inform the models, the low precision of global bathymetry models, and the lack of data on the geomorphology and substrate of the seamounts at scales appropriate to the modelled taxa. This demonstrates the need for caution when interpreting and applying broad-scale, presence-only model results for fisheries management and conservation planning in data-poor areas of the deep sea. Future improvements in the predictive performance of broad-scale models will rely on continued advancement in the modelling of environmental predictor variables, refinements in modelling approaches to deal with missing or biased inputs, and the incorporation of true absence data.
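
    A hedged sketch of the kind of independent validation described above: predicted habitat suitability scores are compared against survey presence/absence using a discrimination metric and a simple observed-versus-expected ratio; all numbers, and the choice of AUC as the metric, are illustrative assumptions.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical validation of a presence-only habitat suitability model against
    # an independent photographic survey. All values are synthetic.

    predicted_suitability = np.array([0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1])
    observed_presence     = np.array([1,   0,   0,    1,   0,   0,   0,   0  ])

    auc = roc_auc_score(observed_presence, predicted_suitability)
    obs_vs_exp = observed_presence.mean() / predicted_suitability.mean()
    print(f"AUC = {auc:.2f}, observed/expected occurrence = {obs_vs_exp:.2f}")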

  18. A Modelling Framework to Assess the Effect of Pressures on River Abiotic Habitat Conditions and Biota

    PubMed Central

    Kail, Jochem; Guse, Björn; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Kleinhans, Maarten; Schuurman, Filip; Fohrer, Nicola; Hering, Daniel; Wolter, Christian

    2015-01-01

    River biota are affected by pressures acting from the global to the reach scale, but most approaches for predicting river biota focus on reach- or segment-scale processes and habitats. Moreover, these approaches do not consider long-term morphological changes that affect habitat conditions. In this study, a modelling framework was further developed and tested to assess the effect of pressures at different spatial scales on reach-scale habitat conditions and biota. Ecohydrological and 1D hydrodynamic models were used to predict discharge and water quality at the catchment scale and the resulting water level at the downstream end of a study reach. Long-term reach morphology was modelled using empirical regime equations, meander migration and 2D morphodynamic models. The respective flow and substrate conditions in the study reach were predicted using a 2D hydrodynamic model, and the suitability of these habitats was assessed with novel habitat models. In addition, dispersal models for fish and macroinvertebrates were developed to assess the re-colonization potential and to finally compare habitat suitability with the availability/ability of species to colonize these habitats. Applicability was tested and model performance was assessed by comparing observed and predicted conditions in the lowland Treene River in northern Germany. Technically, it was possible to link the different models, but future applications would benefit from the development of open source software for all modelling steps to enable fully automated model runs. Future research needs concern the physical modelling of long-term morphodynamics, feedback of biota (e.g., macrophytes) on abiotic habitat conditions, species interactions, and empirical data on the hydraulic habitat suitability and dispersal abilities of macroinvertebrates. The modelling framework is flexible and allows for including additional models and investigating different research and management questions, e.g., in climate impact research as well as river restoration and management. PMID:26114430

  19. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Treesearch

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  20. Gravitational lensing and the Lyman-alpha forest

    NASA Technical Reports Server (NTRS)

    Ikeuchi, Satoru; Turner, Edwin L.

    1991-01-01

    Possible connections between the inhomogeneities responsible for the Lyman-alpha forest in quasar spectra and gravitational lensing effects are investigated. For most models of the Lyman-alpha forest, no significant lensing is expected. For some versions of the CDM model-based minihalo hypothesis, gravitational lensing on scales less than about 0.1 arcsec would occur with a frequency approaching that with which ordinary galaxies cause arcsecond-scale lensing.

  1. Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Xie, S.; Zhang, Y.

    2011-12-01

    The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign conducted during the period 22 April 2011 to 06 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy, so that the final analysis data are dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems at various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
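
    The core of the constrained variational idea can be illustrated with a minimal least-adjustment problem: nudge a state vector by the smallest amount so that linearized budget constraints, closed by surface and TOA observations, are satisfied. The matrices and values below are toy placeholders, not the ARM variational analysis code.

    import numpy as np

    def smallest_adjustment(x0, A, b):
        """Return x minimizing ||x - x0||^2 subject to A @ x = b (Lagrange multipliers)."""
        residual = b - A @ x0
        correction = A.T @ np.linalg.solve(A @ A.T, residual)
        return x0 + correction

    # Toy numbers: 5 sounding-derived state variables, 2 column-budget constraints.
    x0 = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
    A = np.array([[1.0, 1.0, 1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0, 1.0, 0.0]])
    b = np.array([8.5, 6.0])

    x = smallest_adjustment(x0, A, b)
    print(x, A @ x)   # A @ x reproduces the constrained budgets b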

  2. Unstructured-grid coastal ocean modelling in Southern Adriatic and Northern Ionian Seas

    NASA Astrophysics Data System (ADS)

    Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo

    2016-04-01

    The Southern Adriatic Northern Ionian coastal Forecasting System (SANIFS) is a short-term forecasting system based on an unstructured-grid approach. The model component is built on the SHYFEM finite-element three-dimensional hydrodynamic model. The operational chain exploits a downscaling approach starting from the Mediterranean oceanographic-scale model MFS (Mediterranean Forecasting System, operated by INGV). The implementation set-up has been designed to provide accurate hydrodynamics and active tracer processes in the coastal waters of southeastern Italy (Apulia, Basilicata and Calabria regions), where the model is characterized by a variable resolution in the range of 50-500 m. The horizontal resolution is also high in open-sea areas, where the element size is approximately 3 km. The model is forced: (i) at the lateral open boundaries through a full nesting strategy directly with the MFS (temperature, salinity, non-tidal sea surface height and currents) and OTPS (tidal forcing) fields; (ii) at the surface through two alternative atmospheric forcing datasets (ECMWF and COSMO-ME) via MFS bulk formulae. Given that the coastal fields are driven by a combination of both local/coastal and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified first (i) at the large and shelf-coastal scales by comparison with a large-scale CTD survey and then (ii) at the coastal-harbour scale by comparison with CTD, ADCP and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolutions (12.5 and 7 km). The present work highlights how downscaling can improve the simulation of the flow field, going from typical open-ocean scales of the order of several kilometres to coastal (and harbour) scales of tens to hundreds of metres.

  3. The economics of leaf-gas exchange in a fluctuating environment and their upscaling to the canopy-level using turbulent transport theories

    NASA Astrophysics Data System (ADS)

    Katul, G. G.; Palmroth, S.; Manzoni, S.; Oren, R.

    2012-12-01

    Global climate models predict decreases in leaf stomatal conductance (gs) and transpiration due to increases in atmospheric CO2. The consequences of these reductions are increases in soil moisture availability and continental-scale run-off at decadal time-scales. Thus, a theory explaining the differential sensitivity of stomata to changing atmospheric CO2 and other environmental conditions, such as soil moisture, at the ecosystem scale must be identified. Here, these responses are investigated using an optimality theory applied to stomatal conductance. An analytical model for gs is first proposed based on (a) Fickian mass transfer of CO2 and H2O through stomata; (b) a biochemical photosynthesis model that relates intercellular CO2 to net photosynthesis; and (c) a stomatal model based on optimization for maximizing carbon gains when water losses represent a cost. The optimization theory produced three gas exchange responses that are consistent with observations across a wide range of species: (1) the sensitivity of gs to vapour pressure deficit (D) is similar to that obtained from a previous synthesis of more than 40 species, (2) the theory is consistent with the onset of an apparent 'feed-forward' mechanism in gs, and (3) the emergent non-linear relationship between the ratio of intercellular to atmospheric CO2 (ci/ca) and D agrees with the results available on this response. A simplified version of this leaf-scale approach recovers the linear relationship between stomatal conductance and leaf photosynthesis employed in numerous climate models that currently use a variant on the 'Ball-Berry' or the 'Leuning' approaches, provided the marginal water use efficiency increases linearly with atmospheric CO2. The model is then up-scaled to the canopy level using novel theories about the structure of turbulence inside vegetation. This up-scaling proved to be effective in resolving the complex (and two-way) interactions between leaves and their immediate micro-climate. Extensions of this optimality approach to drought and salt-stressed cases are briefly presented.
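
    A toy numerical version of the optimization argument is sketched below, assuming a linearized demand function and illustrative parameter values rather than the paper's full biochemical model.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Choose the stomatal conductance gs that maximizes carbon gain A minus the
    # water cost lambda*E. The linear demand A = k*ci, the Fickian supply
    # A = gs*(ca - ci), and every parameter value are illustrative simplifications.

    ca = 400.0    # atmospheric CO2 (umol mol-1)
    D = 1.5       # vapour pressure deficit (kPa)
    lam = 3.0     # marginal water use efficiency (cost of water, illustrative units)
    k = 0.05      # slope of the linearized demand function

    def net_gain(gs):
        A = gs * ca * k / (gs + k)    # combine supply and linear demand, eliminate ci
        E = 1.6 * gs * D              # transpiration (illustrative units)
        return A - lam * E

    res = minimize_scalar(lambda gs: -net_gain(gs), bounds=(1e-4, 1.0), method="bounded")
    print(f"optimal gs ~ {res.x:.3f} (illustrative units)")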

  4. Biointerface dynamics--Multi scale modeling considerations.

    PubMed

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e., a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, on a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of resistance stress in the polymer matrix within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature makes TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
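
    As a much simpler stand-in for the UKF machinery described above, the sketch below fits the feedback parameter of a zero-dimensional energy balance model to a noisy synthetic temperature record by ordinary least squares; the forcing history, heat capacity and noise level are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    # Globally averaged energy balance model C dT/dt = F(t) - lambda * T, with the
    # feedback parameter lambda estimated from a synthetic "observed" record.

    C = 8.0                                   # effective heat capacity, W yr m-2 K-1
    years = np.arange(150)
    F = 0.04 * years                          # linearly increasing forcing, W m-2

    def ebm_temperature(t, lam):
        T = np.zeros_like(t, dtype=float)
        for i in range(1, len(t)):
            T[i] = T[i-1] + (F[i-1] - lam * T[i-1]) / C
        return T

    rng = np.random.default_rng(2)
    lam_true = 1.2
    T_obs = ebm_temperature(years, lam_true) + 0.1 * rng.standard_normal(years.size)

    lam_fit, lam_cov = curve_fit(ebm_temperature, years, T_obs, p0=[1.0])
    print(f"lambda ~ {lam_fit[0]:.2f} +/- {np.sqrt(lam_cov[0, 0]):.2f} W m-2 K-1")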

  6. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning

    NASA Astrophysics Data System (ADS)

    Zuberi, M. AH; Pratt, R. G.

    2018-04-01

    The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known, it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known 'banana-donut' and 'rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However, such model-domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
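
    A one-dimensional analogue of Sobolev-type gradient pre-conditioning is sketched below: the raw gradient is smoothed by inverting (I - eps^2 d^2/dx^2), which damps high model wavenumbers without altering the objective function. The FFT implementation and the scaling parameter are illustrative choices, not the SSP operator defined in the paper.

    import numpy as np

    def sobolev_precondition(g, dx, eps):
        """Solve (I - eps^2 d^2/dx^2) g_s = g in Fourier space (periodic, 1-D)."""
        k = 2.0 * np.pi * np.fft.fftfreq(g.size, d=dx)   # spatial wavenumbers
        g_hat = np.fft.fft(g)
        return np.real(np.fft.ifft(g_hat / (1.0 + (eps * k) ** 2)))

    x = np.linspace(0.0, 1.0, 256, endpoint=False)
    raw_gradient = np.sin(2 * np.pi * x) + 0.5 * np.sin(2 * np.pi * 40 * x)  # smooth + sharp
    smoothed = sobolev_precondition(raw_gradient, dx=x[1] - x[0], eps=0.05)
    # The short-scale (high-wavenumber) part is strongly damped; the long-scale part is kept.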

  7. Biogeochemical metabolic modeling of methanogenesis by Methanosarcina barkeri

    NASA Astrophysics Data System (ADS)

    Jensvold, Z. D.; Jin, Q.

    2015-12-01

    Methanogenesis, the biological process of methane production, is the final step of natural organic matter degradation. In studying natural methanogenesis, important questions include how fast methanogenesis proceeds and how methanogens adapt to the environment. To address these questions, we propose a new approach - biogeochemical reaction modeling - by simulating the metabolic networks of methanogens. Biogeochemical reaction modeling combines geochemical reaction modeling and genome-scale metabolic modeling. Geochemical reaction modeling focuses on the speciation of electron donors and acceptors in the environment, and therefore the energy available to methanogens. Genome-scale metabolic modeling predicts microbial rates and metabolic strategies. Specifically, this approach describes methanogenesis using an enzyme network model and computes enzyme rates by accounting for both kinetics and thermodynamics. The network model is simulated numerically to predict enzyme abundances and rates of methanogen metabolism. We applied this new approach to Methanosarcina barkeri strain fusaro, a model methanogen that makes methane by reducing carbon dioxide and oxidizing dihydrogen. The simulation results match well with the results of previous laboratory experiments, including the magnitude of the proton motive force and the kinetic parameters of Methanosarcina barkeri. The results also predict that in natural environments, the configuration of the methanogenesis network, including the concentrations of enzymes and metabolites, differs significantly from that under laboratory settings.
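
    One common way to couple kinetics and thermodynamics in an enzyme rate law is a Michaelis-Menten kinetic factor multiplied by a thermodynamic driving-force factor; the hedged sketch below uses that generic form with invented parameters and is not the specific rate law of the model above.

    import numpy as np

    R = 8.314e-3   # kJ mol-1 K-1

    def enzyme_rate(k_cat, enzyme, substrate, K_m, dG, T=298.15, chi=1.0):
        """Rate = kinetic factor x thermodynamic factor F_T = 1 - exp(dG/(chi R T))."""
        kinetic_factor = substrate / (K_m + substrate)
        thermo_factor = max(0.0, 1.0 - np.exp(dG / (chi * R * T)))  # zero at equilibrium
        return k_cat * enzyme * kinetic_factor * thermo_factor

    # e.g. a hydrogenotrophic step assumed to run at dG = -30 kJ/mol in situ
    print(enzyme_rate(k_cat=10.0, enzyme=1e-3, substrate=5e-6, K_m=1e-5, dG=-30.0))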

  8. Implication of Tsallis entropy in the Thomas–Fermi model for self-gravitating fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ourabah, Kamel; Tribeche, Mouloud, E-mail: mouloudtribeche@yahoo.fr

    The Thomas–Fermi approach for self-gravitating fermions is revisited within the theoretical framework of the q-statistics. Starting from the q-deformation of the Fermi–Dirac distribution function, a generalized Thomas–Fermi equation is derived. It is shown that the Tsallis entropy preserves a scaling property of this equation. The q-statistical approach to Jeans’ instability in a system of self-gravitating fermions is also addressed. The dependence of the Jeans wavenumber (or the Jeans length) on the parameter q is traced. It is found that the q-statistics makes the Fermionic system unstable at scales shorter than the standard Jeans length. Highlights: • Thomas–Fermi approach for self-gravitating fermions. • A generalized Thomas–Fermi equation is derived. • Nonextensivity preserves a scaling property of this equation. • Nonextensive approach to Jeans’ instability of self-gravitating fermions. • It is found that nonextensivity makes the Fermionic system unstable at shorter scales.

  9. Simulations of ecosystem hydrological processes using a unified multi-scale model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Liu, Chongxuan; Fang, Yilin

    2015-01-01

    This paper presents a unified multi-scale model (UMSM) that we developed to simulate hydrological processes in an ecosystem containing both surface water and groundwater. The UMSM approach modifies the Navier–Stokes equation by adding a Darcy force term to formulate a single set of equations to describe fluid momentum and uses a generalized equation to describe fluid mass balance. The advantage of the approach is that the single set of equations can describe hydrological processes in both surface water and groundwater, where different models are traditionally required to simulate fluid flow. This feature of the UMSM significantly facilitates modelling of hydrological processes in ecosystems, especially at locations where soil/sediment may be frequently inundated and drained in response to precipitation and regional hydrological and climate changes. In this paper, the UMSM was benchmarked using WASH123D, a model commonly used for simulating coupled surface water and groundwater flow. The Disney Wilderness Preserve (DWP) site at Kissimmee, Florida, where active field monitoring and measurements are ongoing to understand hydrological and biogeochemical processes, was then used as an example to illustrate the UMSM modelling approach. The simulation results demonstrated that the DWP site is subject to frequent changes in soil saturation, the geometry and volume of surface water bodies, and groundwater and surface water exchange. All the hydrological phenomena in the surface water and groundwater components, including inundation and draining, river bank flow, groundwater table change, soil saturation, hydrological interactions between groundwater and surface water, and the migration of surface water and groundwater interfaces, can be simultaneously simulated using the UMSM. Overall, the UMSM offers a cross-scale approach that is particularly suitable for simulating coupled surface water and groundwater flow in ecosystems with strong surface water and groundwater interactions.
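
    A schematic way to write the single momentum equation behind this idea is the Darcy–Brinkman form below (a sketch for illustration; the exact formulation and coefficients used in the UMSM may differ):

        \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
          = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u}
            - \frac{\nu}{\kappa}\,\mathbf{u} + \mathbf{g},

    where the Darcy resistance term \(-(\nu/\kappa)\mathbf{u}\) is negligible in open water (permeability \(\kappa \to \infty\)) and dominates in the porous subsurface, so the same equation can cover both regimes.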

  10. A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales

    PubMed Central

    Ayton, Gary S.; Voth, Gregory A.

    2009-01-01

    A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic because one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center-of-mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well-known Gay-Berne ellipsoid-of-revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an "aggressive" CG methodology designed to model multi-component biological membranes at very large length and time scales. PMID:19281167

  11. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. Here we present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
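
    For readers unfamiliar with the underlying workflow engine, the sketch below shows two dependent tasks in plain Luigi, the system that SciLuigi extends with its flow-based layer; the task names, file paths and placeholder logic are hypothetical, not part of SciLuigi itself.

    import luigi

    # Two dependent tasks: preprocessing produces a file that "training" consumes.
    class PreprocessData(luigi.Task):
        dataset = luigi.Parameter()

        def output(self):
            return luigi.LocalTarget(f"{self.dataset}.clean.csv")

        def run(self):
            with self.output().open("w") as f:
                f.write("feature,target\n1,0\n2,1\n")   # stand-in for real preprocessing

    class TrainModel(luigi.Task):
        dataset = luigi.Parameter()

        def requires(self):
            return PreprocessData(dataset=self.dataset)

        def output(self):
            return luigi.LocalTarget(f"{self.dataset}.model.txt")

        def run(self):
            rows = sum(1 for _ in self.input().open())   # stand-in for model fitting
            with self.output().open("w") as f:
                f.write(f"trained on {rows - 1} examples\n")

    if __name__ == "__main__":
        luigi.build([TrainModel(dataset="interactions")], local_scheduler=True)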

  12. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals have hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
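
    A deliberately minimal agent-based sketch of the household-level adaptation mechanism is given below; the decision rule and all parameter values are invented for illustration and are far simpler than the European model described above.

    import random

    # Each year, households compare the expected discounted damage with the cost of
    # a protective measure and invest if it pays off; floods raise risk awareness.

    random.seed(42)

    N_HOUSEHOLDS, YEARS = 1000, 30
    FLOOD_PROB, DAMAGE, MEASURE_COST, DAMAGE_REDUCTION = 0.02, 100_000, 20_000, 0.7
    HORIZON = 10   # years over which households evaluate expected damage

    households = [{"protected": False, "risk_perception": random.uniform(0.5, 2.0)}
                  for _ in range(N_HOUSEHOLDS)]

    for year in range(YEARS):
        flood = random.random() < FLOOD_PROB
        for h in households:
            perceived_prob = min(1.0, FLOOD_PROB * h["risk_perception"])
            expected_damage = perceived_prob * DAMAGE * HORIZON
            if not h["protected"] and expected_damage * DAMAGE_REDUCTION > MEASURE_COST:
                h["protected"] = True
            if flood:
                h["risk_perception"] *= 1.5   # floods raise risk awareness
            else:
                h["risk_perception"] *= 0.98  # awareness slowly fades

    print("protected after", YEARS, "years:",
          sum(h["protected"] for h in households), "of", N_HOUSEHOLDS)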

  13. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how, and to what extent, trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector). This approach consisted of three steps: (1) decomposing the large-scale climate and hydrological signals (SLP field, precipitation or streamflow) using discrete wavelet multiresolution analysis, (2) generating a statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which was characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation and streamflow on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
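
    The three-step multiresolution idea can be sketched on synthetic series as follows: decompose predictor and predictand into additive wavelet components, fit one regression per time scale, and sum the scale-wise predictions. The wavelet, decomposition level and signals are illustrative assumptions, not the SLP-based model of the study.

    import numpy as np
    import pywt

    def wavelet_components(x, wavelet="db4", level=4):
        """Split x into additive components, one per wavelet decomposition level."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            comps.append(pywt.waverec(kept, wavelet)[: len(x)])
        return comps                     # sum(comps) reproduces x (up to round-off)

    rng = np.random.default_rng(3)
    t = np.arange(512)
    predictor = (np.sin(2 * np.pi * t / 128) + 0.5 * np.sin(2 * np.pi * t / 16)
                 + 0.2 * rng.standard_normal(t.size))
    predictand = (2.0 * np.sin(2 * np.pi * t / 128) - 1.0 * np.sin(2 * np.pi * t / 16)
                  + 0.2 * rng.standard_normal(t.size))

    prediction = np.zeros_like(predictand)
    for xp, yp in zip(wavelet_components(predictor), wavelet_components(predictand)):
        slope, intercept = np.polyfit(xp, yp, 1)     # one regression per time scale
        prediction += slope * xp + intercept

    corr = np.corrcoef(prediction, predictand)[0, 1]
    print(f"correlation of scale-wise reconstruction: {corr:.2f}")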

  14. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
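
    A toy version of congruency-based refinement is sketched below: bounded parameters of one surrogate model are tuned with an evolutionary optimizer so that its shared output matches a reference model. The two "models", their parameters and bounds are invented placeholders, not the continuum and agent-based vascular models of the study.

    import numpy as np
    from scipy.optimize import differential_evolution

    # Both surrogates predict wall thickening over a pressure step; the second is
    # refined so the common output agrees with the first (the "congruency" check).

    pressure_step = np.linspace(1.0, 1.5, 11)

    def reference_model(p):
        return 0.8 * (p - 1.0) + 0.05               # stand-in for the continuum model

    def refined_model(p, rate, baseline):
        return rate * (p - 1.0) ** 1.1 + baseline   # stand-in for the agent-based model

    def incongruence(params):
        rate, baseline = params
        return np.sum((refined_model(pressure_step, rate, baseline)
                       - reference_model(pressure_step)) ** 2)

    result = differential_evolution(incongruence, bounds=[(0.1, 2.0), (0.0, 0.2)], seed=0)
    print(result.x, result.fun)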

  15. Assessment of current atomic scale modelling methods for the investigation of nuclear fuels under irradiation: Example of uranium dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolus, Marjorie; Krack, Matthias; Freyss, Michel

    Multiscale approaches are developed to build more physically based kinetic and mechanical mesoscale models to enhance the predictive capability of fuel performance codes and increase the efficiency of the development of the safer and more innovative nuclear materials needed in the future. Atomic scale methods, and in particular electronic structure and empirical potential methods, form the basis of this multiscale approach. It is therefore essential to know the accuracy of the results computed at this scale if we want to feed them into higher scale models. We focus here on the assessment of the description of interatomic interactions in uranium dioxide using, on the one hand, electronic structure methods, in particular in the density functional theory (DFT) framework, and, on the other hand, empirical potential methods. These two types of methods are complementary, the former making it possible to obtain results from a minimal amount of input data and to gain further insight into the electronic and magnetic properties, while the latter are irreplaceable for studies where a large number of atoms needs to be considered. We consider basic properties as well as specific ones that are important for the description of nuclear fuel under irradiation. These are, in particular, energies, which are the main data passed to higher scale models. We limit ourselves to uranium dioxide.

  16. Multiscale Aspects of Modeling Gas-Phase Nanoparticle Synthesis

    PubMed Central

    Buesser, B.; Gröhn, A.J.

    2013-01-01

    Aerosol reactors are utilized to manufacture nanoparticles in industrially relevant quantities. The development, understanding and scale-up of aerosol reactors can be facilitated with models and computer simulations. This review aims to provide an overview of recent developments of models and simulations and discuss their interconnection in a multiscale approach. A short introduction of the various aerosol reactor types and gas-phase particle dynamics is presented as a background for the later discussion of the models and simulations. Models are presented with decreasing time and length scales in sections on continuum, mesoscale, molecular dynamics and quantum mechanics models. PMID:23729992

  17. Estimation of Solar Radiation on Building Roofs in Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Stevanato, G.; De Filippi, R.; Furlanello, C.

    2011-04-01

    The aim of this study is to estimate solar radiation on building roofs in complex mountain landscape areas. A multi-scale solar radiation estimation methodology is proposed that combines 3D data ranging from the regional scale to the architectural scale. Shadowing effects of both the terrain and nearby buildings are considered. The approach is modular, and several alternative roof models, obtained by surveying and modelling techniques at varying levels of detail, can be embedded in a DTM, e.g. that of an Alpine valley surrounded by mountains. The solar radiation maps obtained from raster models at different resolutions are compared and evaluated in order to obtain information regarding the benefits and disadvantages tied to each roof modelling approach. The solar radiation estimation is performed within the open-source GRASS GIS environment using r.sun and its ancillary modules.

  18. PERFORMANCE AND ANALYSIS OF AQUIFER TESTS WITH IMPLICATIONS FOR CONTAMINANT TRANSPORT MODELING

    EPA Science Inventory

    The scale-dependence of dispersivity values used in contaminant transport models to estimate the spreading of contaminant plumes by hydrodynamic dispersion processes was investigated and found to be an artifact of conventional modeling approaches (especially, vertically averaged ...

  19. Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken

    2011-01-01

    Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches are performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center. LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.
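
    As an example of the physically based ingredient, the sketch below evaluates an infinite-slope factor of safety as pore pressure rises with soil wetness; the soil parameters are invented and this is not the specific slope-stability model used in the work above.

    import numpy as np

    def factor_of_safety(slope_deg, soil_depth, wetness,
                         cohesion=5.0e3, friction_deg=30.0,
                         gamma_soil=18.0e3, gamma_water=9.81e3):
        """Infinite-slope factor of safety; wetness is the saturated fraction of the soil column."""
        beta = np.radians(slope_deg)
        phi = np.radians(friction_deg)
        normal_stress = gamma_soil * soil_depth * np.cos(beta) ** 2
        pore_pressure = gamma_water * wetness * soil_depth * np.cos(beta) ** 2
        shear_stress = gamma_soil * soil_depth * np.sin(beta) * np.cos(beta)
        return (cohesion + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

    # Factor of safety drops below 1 (failure) as rainfall saturates the column.
    for wetness in (0.2, 0.6, 1.0):
        print(wetness, round(float(factor_of_safety(35.0, 1.5, wetness)), 2))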

  20. A hybrid modeling with data assimilation to evaluate human exposure level

    NASA Astrophysics Data System (ADS)

    Koo, Y. S.; Cheong, H. K.; Choi, D.; Kim, A. L.; Yun, H. Y.

    2015-12-01

    Exposure models are designed to better represent human contact with PM (Particulate Matter) and other air pollutants such as CO, SO2, O3, and NO2. The exposure concentrations of air pollutants are determined by long-range transport at global and regional scales from Europe and China, as well as by local emissions from urban and road vehicle sources. To assess the exposure level in detail, the multiple-scale influence from background to local sources should be considered. A hybrid air quality modeling methodology combining a grid-based chemical transport model with a local plume dispersion model was used to provide spatially and temporally resolved air quality concentrations for human exposure levels in Korea. In the hybrid modeling approach, concentrations from a grid-based chemical transport model and a local plume dispersion model are added to provide contributions from photochemical interactions, long-range (regional) transport and local-scale dispersion. The CAMx (Comprehensive Air quality Model with eXtensions) was used for the background concentrations from anthropogenic and natural emissions in East Asia including Korea, while the road-level dispersion of vehicle emissions was calculated by the CALPUFF model. The total exposure level of the pollutants was finally assessed by summing the background and road contributions. In the hybrid modeling, a data assimilation method based on optimal interpolation was applied to overcome discrepancies between the model-predicted concentrations and observations, using air quality data from the monitoring stations in Korea. The spatial resolution of the hybrid model was 50 m for the Seoul Metropolitan Area. This example clearly demonstrates that the exposure level can be estimated at a fine scale for exposure assessment by using the hybrid modeling approach with data assimilation.
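
    The optimal interpolation update at the heart of the data assimilation step can be sketched as follows; the covariances, grid size and monitor locations are toy values rather than those of the CAMx/CALPUFF hybrid system.

    import numpy as np

    # Optimal interpolation: x_a = x_b + K (y - H x_b), with gain
    # K = B H^T (H B H^T + R)^(-1).

    def oi_update(x_b, y, H, B, R):
        S = H @ B @ H.T + R
        K = B @ H.T @ np.linalg.inv(S)
        return x_b + K @ (y - H @ x_b)

    n_grid, n_obs = 6, 2
    x_b = np.full(n_grid, 30.0)                              # background PM concentration
    H = np.zeros((n_obs, n_grid))
    H[0, 1] = H[1, 4] = 1.0                                  # monitors at grid cells 1 and 4
    B = 16.0 * np.exp(-np.abs(np.subtract.outer(range(n_grid), range(n_grid))) / 2.0)
    R = 4.0 * np.eye(n_obs)
    y = np.array([42.0, 25.0])                               # observed concentrations

    print(oi_update(x_b, y, H, B, R))   # analysis pulled toward the observations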

  1. Signal-transfer Modeling for Regional Assessment of Forest Responses to Environmental Changes in the Southeastern United States

    Treesearch

    Robert J. Luxmoore; William W. Hargrove; M. Lynn Tharp; Wilfred M. Post; Michael W. Berry; Karen S. Minser; Wendell P. Cropper; Dale W. Johnson; Boris Zeide; Ralph L. Amateis; Harold E. Burkhart; V. Clark Baldwin; Kelly D. Peterson

    2000-01-01

    Stochastic transfer of information in a hierarchy of simulators is offered as a conceptual approach for assessing forest responses to changing climate and air quality across 13 southeastern states of the USA. This assessment approach combines geographic information system and Monte Carlo capabilities with several scales of computer modeling for southern pine species...

  2. Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques

    PubMed Central

    Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695

  4. Microscopic approach based on a multiscale algebraic version of the resonating group model for radiative capture reactions

    NASA Astrophysics Data System (ADS)

    Solovyev, Alexander S.; Igashov, Sergey Yu.

    2017-12-01

    A microscopic approach to the description of radiative capture reactions, based on a multiscale algebraic version of the resonating group model, is developed. The main idea of the approach is to expand the discrete-spectrum and continuum wave functions of a nuclear system over different bases of the algebraic version of the resonating group model. These bases differ from each other in the value of the oscillator radius, which plays the role of a scale parameter. This allows us, in a unified way, to calculate total and partial cross sections (astrophysical S factors) as well as the branching ratio for the radiative capture reaction, to describe phase shifts for the colliding nuclei in the initial channel of the reaction, and at the same time to reproduce the breakup thresholds of the final nucleus. The approach is applied to the theoretical study of the mirror 3H(α,γ)7Li and 3He(α,γ)7Be reactions, which are of great interest to nuclear astrophysics. The calculated results are compared with existing experimental data and with our previous calculations in the framework of the single-scale algebraic version of the resonating group model.

  5. Streamwise Versus Spanwise Spacing of Obstacle Arrays: Parametrization of the Effects on Drag and Turbulence

    NASA Astrophysics Data System (ADS)

    Simón-Moral, Andres; Santiago, Jose Luis; Krayenhoff, E. Scott; Martilli, Alberto

    2014-06-01

    A Reynolds-averaged Navier-Stokes model is used to investigate the evolution of the sectional drag coefficient and turbulent length scales with the layout of aligned arrays of cubes. Results show that the sectional drag coefficient is determined by the non-dimensional streamwise distance (sheltering parameter) and the non-dimensional spanwise distance (channelling parameter) between obstacles. This differs from previous approaches that consider only the plan area density. On the other hand, turbulent length scales behave similarly to the staggered case (e.g., they are functions of the plan area density only). Analytical formulae are proposed for the length scales and for the sectional drag coefficient as functions of the sheltering and channelling parameters, and implemented in a column model. This approach demonstrates good skill in the prediction of vertical profiles of the spatially averaged horizontal wind speed.

  6. Various Numerical Applications on Tropical Convective Systems Using a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Shie, C.-L.; Tao, W.-K.; Simpson, J.

    2003-01-01

    In recent years, increasing attention has been given to cloud resolving models (CRMs, or cloud ensemble models, CEMs) for their ability to simulate the radiative-convective system, which plays a significant role in determining the regional heat and moisture budgets in the Tropics. The growing popularity of CRM usage can be credited to its inclusion of crucial and physically relatively realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit cloud-radiation interaction. On the other hand, impacts of the environmental conditions (for example, the large-scale wind fields, heat and moisture advections, as well as sea surface temperature) on the convective system can also be plausibly investigated using CRMs with imposed explicit forcing. In this paper, by basically using the Goddard Cumulus Ensemble (GCE) model, three different studies on tropical convective systems are briefly presented. Each of these studies serves a different goal as well as uses a different approach. In the first study, which uses more of an idealized approach, the respective impacts of the large-scale horizontal wind shear and surface fluxes on the modeled tropical quasi-equilibrium states of temperature and water vapor are examined. In this 2-D study, the imposed large-scale horizontal wind shear is ideally either nudged (wind shear maintained strong) or mixed (wind shear weakened), while the minimum surface wind speed used for computing surface fluxes varies among the various numerical experiments. For the second study, a handful of real tropical episodes (TRMM Kwajalein Experiment - KWAJEX, 1999; TRMM South China Sea Monsoon Experiment - SCSMEX, 1998) have been simulated such that several major atmospheric characteristics, such as the rainfall amount and its associated stratiform contribution and the Q1/heat and Q2/moisture budgets, are investigated. In this study, the observed large-scale heat and moisture advections are continuously applied to the 2-D model. The modeled cloud generated from such an approach is termed continuously forced convection or continuous large-scale forced convection. A third study, which focuses on the respective impacts of atmospheric components on upper-ocean heat and salt budgets, is presented at the end. Unlike the two previous 2-D studies, this study employs the 3-D GCE-simulated diabatic source terms (using TOGA COARE observations) - radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation - as input for the ocean mixed-layer (OML) model.

  7. Alcohol expectancy multiaxial assessment: a memory network-based approach.

    PubMed

    Goldman, Mark S; Darkes, Jack

    2004-03-01

    Despite several decades of activity, alcohol expectancy research has yet to merge measurement approaches with developing memory theory. This article offers an expectancy assessment approach built on a conceptualization of expectancy as an information processing network. The authors began with multidimensional scaling models of expectancy space, which served as heuristics to suggest confirmatory factor analytic dimensional models for entry into covariance structure predictive models. It is argued that this approach permits a relatively thorough assessment of the broad range of potential expectancy dimensions in a format that is very flexible in terms of instrument length and specificity versus breadth of focus. (© 2004 APA, all rights reserved)

  8. Use of modeled and satellite soil moisture to estimate soil erosion in central and southern Italy.

    NASA Astrophysics Data System (ADS)

    Termite, Loris Francesco; Massari, Christian; Todisco, Francesca; Brocca, Luca; Ferro, Vito; Bagarello, Vincenzo; Pampalone, Vincenzo; Wagner, Wolfgang

    2016-04-01

    This study presents a detailed comparison of two approaches aimed at enhancing the accuracy of the Universal Soil Loss Equation (USLE) in estimating soil loss at the single-event time scale. It is well known that including the observed event runoff in the USLE improves its soil loss estimation ability at the event scale. In particular, the USLE-M and USLE-MM models use the observed runoff coefficient to correct the rainfall erosivity factor: in the first case the soil loss depends linearly on rainfall erosivity, while in the second case soil loss and erosivity are related by a power law. However, measuring the event runoff is not always straightforward or even possible. For this reason, the first approach used in this study is the Soil Moisture For Erosion (SM4E) model, a recent USLE-derived model in which the event runoff is replaced by the antecedent soil moisture. Three soil moisture datasets have been used separately: ERA-Interim/Land reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF); satellite retrievals from the European Space Agency - Climate Change Initiative (ESA-CCI); and modeled data from a Soil Water Balance Model (SWBM). The second approach uses an estimated rather than the observed runoff. Specifically, the Simplified Continuous Rainfall-Runoff Model (SCRRM) is used to derive the runoff estimates. SCRRM requires soil moisture data as input, and to this end the same three soil moisture datasets used for SM4E have been applied separately. All the examined models have been calibrated and tested at the plot scale, using data from the "Masse" (Central Italy) and "Sparacia" (Southern Italy) experimental stations for monitoring erosive processes. Climatic data and event-scale runoff and soil loss measurements are available for the period 2008-2013 at Masse and 2002-2013 at Sparacia. The results show that both approaches can provide better results than the USLE. Specifically, the SM4E model has proven particularly effective at Masse, providing the best soil loss estimates, especially when the modeled soil moisture is used; in this case the RSR index (the ratio between the root mean square error and the observed standard deviation) is equal to 0.94. The SCRRM estimates the event runoff better at Sparacia than at Masse, resulting in good performance of the USLE-derived models using the estimated runoff; however, even at Sparacia the SM4E with modeled soil moisture gives the best soil loss estimates, with RSR = 0.54. These results open an interesting scenario for the use of empirical models to determine soil loss at large scales, since soil moisture is not only a simple in situ measurement, but also information widely available on a global scale from remote sensing.
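    As a small illustration of the skill metric quoted above, the RSR index can be computed from paired observed and simulated event soil losses. The sketch below is a minimal restatement of that definition using NumPy, with hypothetical values; it is not taken from the study itself.

      import numpy as np

      def rsr(observed, simulated):
          """RSR: root mean square error divided by the standard deviation
          of the observations. Lower is better; 0 would be a perfect fit."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          rmse = np.sqrt(np.mean((observed - simulated) ** 2))
          return rmse / np.std(observed)

      # Hypothetical event soil losses (t/ha), for illustration only
      obs = [0.8, 2.5, 0.3, 5.1, 1.2]
      sim = [1.0, 2.0, 0.4, 4.3, 1.5]
      print(f"RSR = {rsr(obs, sim):.2f}")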

  9. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
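    For readers unfamiliar with the distinction drawn above, the two diffusion operators can be written schematically as follows (a standard statement of the two forms, not quoted from the paper), with $u$ the population density and $\mu(x)$ the habitat-dependent motility:

      $$ \text{Fickian:}\quad \frac{\partial u}{\partial t} = \nabla \cdot \big(\mu(x)\, \nabla u\big), \qquad \text{Ecological:}\quad \frac{\partial u}{\partial t} = \nabla^{2} \big(\mu(x)\, u\big). $$

      In the ecological form individuals respond to local habitat quality rather than to gradients, which is why homogenizing over fine-scale habitat variability requires a different procedure than in the Fickian case.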

  10. Identifying western yellow-billed cuckoo breeding habitat with a dual modelling approach

    USGS Publications Warehouse

    Johnson, Matthew J.; Hatten, James R.; Holmes, Jennifer A.; Shafroth, Patrick B.

    2017-01-01

    The western population of the yellow-billed cuckoo (Coccyzus americanus) was recently listed as threatened under the federal Endangered Species Act. Yellow-billed cuckoo conservation efforts require the identification of features and area requirements associated with high-quality riparian forest habitat at spatial scales that range from nest microhabitat to landscape, as well as lower-suitability areas that can be enhanced or restored. Spatially explicit models inform conservation efforts by increasing ecological understanding of a target species, especially at landscape scales. Previous yellow-billed cuckoo modelling efforts derived plant-community maps from aerial photography, an expensive and often inconsistent approach. Satellite models can remotely map vegetation features (e.g., vegetation density, heterogeneity in vegetation density or structure) across large areas with near-perfect repeatability, but they usually cannot identify plant communities. We used aerial photos and satellite imagery, and a hierarchical spatial scale approach, to identify yellow-billed cuckoo breeding habitat along the Lower Colorado River and its tributaries. Aerial-photo and satellite models identified several key features associated with yellow-billed cuckoo breeding locations: (1) a 4.5 ha core area of dense cottonwood-willow vegetation, (2) a large, heterogeneously dense native forest (72 ha) around the core area, and (3) moderately rough topography. The odds of yellow-billed cuckoo occurrence decreased rapidly as the amount of tamarisk cover increased or when cottonwood-willow vegetation was limited. After updating the imagery and location data, we achieved model accuracies of 75-80% in the project area in the following year. The two model types produced very similar probability maps, largely predicting the same areas as high-quality habitat. While each model provided unique information, the dual-modelling approach provided a more complete picture of yellow-billed cuckoo habitat requirements and will be useful for management and conservation activities.

  11. Pairing top-down and bottom-up approaches to analyze catchment scale management of water quality and quantity

    NASA Astrophysics Data System (ADS)

    Lovette, J. P.; Duncan, J. M.; Band, L. E.

    2016-12-01

    Watershed management requires information on the hydrologic impacts of local to regional land use, land cover, and infrastructure conditions. Management of runoff volumes, storm flows, and water quality can benefit from large-scale, "top-down" screening tools using readily available information, as well as from more detailed, "bottom-up" process-based models that explicitly track local runoff production and routing from sources to receiving water bodies. Regional-scale data, available nationwide through the NHD+, and top-down models based on aggregated catchment information provide useful tools for estimating regional patterns of peak flows, volumes, and nutrient loads at the catchment level. Management impacts can be estimated with these models, but such models have limited ability to resolve impacts beyond simple changes to land cover proportions. Alternatively, distributed process-based models provide more flexibility in modeling management impacts by resolving spatial patterns of nutrient sources, runoff generation, and uptake. This bottom-up approach can incorporate explicit patterns of land cover, drainage connectivity, and vegetation extent, but is typically applied over smaller areas. Here, we first model peak flood flows and nitrogen loads across North Carolina's 70,000 NHD+ catchments using USGS regional streamflow regression equations and the SPARROW model. We also estimate management impact by altering aggregated sources in each of these models. To address the missing spatial implications of the top-down approach, we further explore the demand for riparian buffers as a management strategy, simulating the accumulation of nutrient sources along flow paths and the potential mitigation of these sources through forested buffers. We use the Regional Hydro-Ecological Simulation System (RHESSys) to model changes across several basins in North Carolina's Piedmont and Blue Ridge regions, ranging in size from 15 to 1,130 km2. The two approaches provide a complementary set of tools: large-area screening, followed by smaller-scale, more process-based assessment and design tools.

  12. Multiple-length-scale deformation analysis in a thermoplastic polyurethane

    PubMed Central

    Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.

    2015-01-01

    Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945

  13. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulations spanning from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort, on modeling the economic impact of new wind development as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  14. Integration of climatic water deficit and fine-scale physiography in process-based modeling of forest landscape resilience to large-scale tree mortality

    NASA Astrophysics Data System (ADS)

    Yang, J.; Weisberg, P.; Dilts, T.

    2016-12-01

    Climate warming can lead to large-scale drought-induced tree mortality events and greatly affect forest landscape resilience. Climatic water deficit (CWD) and its physiographic variations provide a key mechanism driving landscape dynamics in response to climate change. Although CWD has been successfully applied in niche-based species distribution models, its application in process-based forest landscape models is still scarce. Here we present a framework incorporating the fine-scale influence of terrain on ecohydrology in modeling forest landscape dynamics. We integrated CWD with a forest landscape succession and disturbance model (LANDIS-II) to evaluate how tree species distributions might shift in response to different climate-fire scenarios across an elevation-aspect gradient in a semi-arid montane landscape of northeastern Nevada, USA. Our simulations indicated that drought-intolerant tree species such as quaking aspen could experience greatly reduced distributions in the more arid portions of their existing ranges due to water stress limitations under future climate warming scenarios. However, even in the most xeric portions of its range, aspen is likely to persist in certain environmental settings due to unique and often fine-scale combinations of resource availability, species interactions, and disturbance regime. The modeling approach presented here allowed identification of these refugia. In addition, this approach helped quantify how the direction and magnitude of fire influence on species distribution vary across topoclimatic gradients, and furthers our understanding of the roles of environmental conditions, fire, and inter-specific competition in shaping potential responses of landscape resilience to climate change.
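    For context, climatic water deficit is conventionally defined as the annual evaporative demand not met by available water; the abstract gives no formula, so the definition below is the standard one rather than the authors' specific implementation:

      $$ \mathrm{CWD} = \sum_{\text{months}} \big(\mathrm{PET} - \mathrm{AET}\big), $$

      where PET is potential and AET actual evapotranspiration. Larger CWD indicates greater drought stress, which is the quantity linking the fine-scale physiography to the simulated mortality and distribution responses.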

  15. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat tails, long-range dependence, and scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns indeed have notable risk management implications. The MMAR approach is a parsimonious model that produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.
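    To make the GARCH(1,1) location-scale benchmark concrete, the sketch below produces a one-step-ahead VaR forecast from the conditional variance recursion under a Gaussian assumption. The parameter values and the simulated return series are placeholders, not estimates from the paper's EUR/USD data.

      import numpy as np
      from scipy.stats import norm

      def garch11_var(returns, mu=0.0, omega=1e-8, alpha=0.05, beta=0.90, level=0.01):
          """One-step-ahead value-at-risk from a Gaussian GARCH(1,1) model.
          Parameters are illustrative; in practice they are estimated by MLE."""
          r = np.asarray(returns, dtype=float)
          sigma2 = np.var(r)                      # start from the sample variance
          for rt in r:                            # conditional variance recursion
              sigma2 = omega + alpha * (rt - mu) ** 2 + beta * sigma2
          # VaR is the (negated) lower quantile of the forecast return distribution
          return -(mu + np.sqrt(sigma2) * norm.ppf(level))

      rng = np.random.default_rng(0)
      fake_returns = rng.normal(0.0, 0.0005, size=500)   # stand-in for 5-min log returns
      print(f"1% one-step-ahead VaR: {garch11_var(fake_returns):.5f}")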

  16. Combined biofouling and scaling in membrane feed channels: a new modeling approach.

    PubMed

    Radu, A I; Bergwerff, L; van Loosdrecht, M C M; Picioreanu, C

    2015-01-01

    A mathematical model was developed for combined fouling due to biofilms and mineral precipitates in membrane feed channels with spacers. Finite element simulation of flow and solute transport in two-dimensional geometries was coupled with a particle-based approach for the development of a composite (cells and crystals) foulant layer. Three fouling scenarios were compared: biofouling only, scaling only, and combined fouling. Combined fouling causes a quicker flux decline than the summed flux deterioration when scaling and biofouling act independently. The model results indicate that the presence of biofilms leads to more mineral formation due to: (1) an enhanced degree of saturation for salts next to the membrane and within the biofilm; and (2) more available surface for nucleation to occur. The impact of biofilm in accelerating gypsum precipitation depends on the composition of the feed water (e.g. the presence of NaCl) and on the kinetics of crystal nucleation and growth. Interactions between flow, solute transport, and biofilm-induced mineralization are discussed.
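    The enhanced "degree of saturation" invoked above is usually expressed through a saturation index; the expression below is the standard geochemical definition, written for gypsum as an example, and is not quoted from the model description:

      $$ \mathrm{SI} = \log_{10}\!\left(\frac{\mathrm{IAP}}{K_{sp}}\right) = \log_{10}\!\left(\frac{a_{\mathrm{Ca}^{2+}}\, a_{\mathrm{SO}_4^{2-}}\, a_{\mathrm{H_2O}}^{2}}{K_{sp,\ \mathrm{gypsum}}}\right), $$

      with precipitation thermodynamically favoured where SI > 0. A biofilm that hinders back-diffusion of rejected ions raises the local ion activity product next to the membrane and hence the local SI, consistent with the mechanism described above.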

  17. Laser-plasma interactions for fast ignition

    NASA Astrophysics Data System (ADS)

    Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.

    2014-05-01

    In the electron-driven fast-ignition (FI) approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to FI. Increases in computational and modelling capabilities, as well as algorithmic developments, have enhanced our ability to perform multi-dimensional particle-in-cell simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of the laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity is considered, as well as the dependence on plasma parameters. Different numerical modelling approaches and configurations are addressed, providing an overview of the modelling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of the surrogacy of today's experiments for the full-scale FI problem.

  18. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms.

    PubMed

    Alsmadi, Othman M K; Abo-Hammour, Zaer S

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is based on genetic algorithms (GA), with the advantages of obtaining a reduced order model, retaining the exact dominant dynamics in the reduced-order model, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
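    The fitness criterion described above, namely the deviation between the responses of the full and reduced order models, can be sketched as follows. The discrete state-space matrices, the step-response comparison, and the fitness form are illustrative assumptions for a SISO case, not the authors' implementation; a GA would vary the reduced-model parameters to maximize this fitness.

      import numpy as np

      def step_response(A, B, C, D, n_steps=50):
          """Unit-step response of a discrete-time state-space model (SISO)."""
          x = np.zeros((A.shape[0], 1))
          y = []
          for _ in range(n_steps):
              y.append((C @ x + D).item())
              x = A @ x + B                     # unit input u[k] = 1
          return np.array(y)

      def fitness(full_sys, reduced_sys, n_steps=50):
          """Higher fitness when the reduced model tracks the full model's response."""
          y_full = step_response(*full_sys, n_steps=n_steps)
          y_red = step_response(*reduced_sys, n_steps=n_steps)
          return 1.0 / (1.0 + np.sum((y_full - y_red) ** 2))

      # Illustrative 2nd-order "full" model and a 1st-order candidate (roughly gain-matched)
      full = (np.array([[0.9, 0.1], [0.0, 0.2]]), np.array([[0.0], [1.0]]),
              np.array([[1.0, 0.5]]), np.array([[0.0]]))
      reduced = (np.array([[0.9]]), np.array([[0.19]]), np.array([[1.0]]), np.array([[0.0]]))
      print(f"fitness = {fitness(full, reduced):.4f}")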

  19. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    NASA Astrophysics Data System (ADS)

    Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan

    2014-09-01

    A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  20. Modeling CMB lensing cross correlations with CLEFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu

    2017-08-01

    A new generation of surveys will soon map large fractions of the sky to ever greater depths, and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ_8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
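    Schematically, the cross correlation discussed above is modelled through the Limber approximation, with the difference between the two approaches entering through the matter-galaxy power spectrum; the relation below is a standard statement of this and does not reproduce the paper's notation:

      $$ C_\ell^{\kappa g} \simeq \int d\chi\, \frac{W^{\kappa}(\chi)\, W^{g}(\chi)}{\chi^{2}}\, P_{mg}\!\left(k = \frac{\ell + 1/2}{\chi},\, z(\chi)\right), $$

      where a scale-independent bias model takes $P_{mg}(k,z) = b(z)\, P_{mm}(k,z)$, while CLEFT expands $P_{mg}$ in Lagrangian bias parameters ($b_1$, $b_2$, shear bias, ...) with scale-dependent coefficients computed perturbatively.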
