Sample records for scaling model based

  1. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Treesearch

    Thomas W. Bonnot; Frank R. Thompson III; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  2. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models; one of them, a new eddy-viscosity model based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
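
    A minimal sketch of the eddy-viscosity idea described above, assuming a closure of the form nu_t = (c*delta)^2 * q with q a vortex-stretching-based invariant; the coefficient value and the exact invariant are illustrative assumptions, not the authors' calibrated model.

    ```python
    import numpy as np

    def eddy_viscosity_vs(grad_u, delta, c=0.5):
        """Sketch: nu_t = (c*delta)^2 * q, with q [1/s] built from the
        vortex-stretching vector S @ omega. c is an illustrative constant."""
        grad_u = np.asarray(grad_u, float)         # du_i/dx_j, shape (3, 3)
        S = 0.5 * (grad_u + grad_u.T)              # strain-rate tensor
        omega = np.array([grad_u[2, 1] - grad_u[1, 2],
                          grad_u[0, 2] - grad_u[2, 0],
                          grad_u[1, 0] - grad_u[0, 1]])   # vorticity vector
        w = np.linalg.norm(omega)
        q = np.linalg.norm(S @ omega) / w if w > 0.0 else 0.0
        return (c * delta) ** 2 * q                # eddy viscosity [m^2/s]
    ```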

  3. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
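
    The linking idea can be illustrated with a toy operator-splitting step that alternates a continuum diffusion solve with discrete agent updates on a shared grid; all names and parameters here are hypothetical, and the paper's tuberculosis model is far richer.

    ```python
    import numpy as np

    def hybrid_step(agents, field, dx, dt, D, uptake):
        """One operator-splitting step of a toy hybrid model: an explicit
        diffusion solve for a nutrient field, then local uptake by agents.
        Stability of the explicit sub-step requires dt <= dx**2 / (4*D)."""
        lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field) / dx**2
        field = field + dt * D * lap          # continuum sub-step (periodic BC)
        for i, j in agents:                   # discrete ABM sub-step
            field[i, j] -= min(uptake * dt, field[i, j])
        return field
    ```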

  4. S-2 stage 1/25 scale model base region thermal environment test. Volume 1: Test results, comparison with theory and flight data

    NASA Technical Reports Server (NTRS)

    Sadunas, J. A.; French, E. P.; Sexton, H.

    1973-01-01

    A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effects of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full-scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented, and a tabulation of the test data is provided.

  5. A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles

    NASA Technical Reports Server (NTRS)

    Kinzie, Kevin W.; Schein, David B.

    2004-01-01

    A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling is typically applied using a scale factor related to the nozzle exit area, shifting model-scale frequency in proportion to the geometric scale factor. However, model scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size does not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full size. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
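
    A minimal sketch of the split-spectrum scaling described above; the crossover frequency f_split separating coalesced-plume noise from mini-jet noise is a hypothetical input, and amplitude corrections are omitted.

    ```python
    import numpy as np

    def scale_den_spectrum(freq, spl, scale_factor, f_split):
        """Shift low-frequency (coalesced-plume) content down by the geometric
        scale factor per Strouhal scaling; leave high-frequency (mini-jet)
        content unshifted; recombine on a monotone frequency axis."""
        freq, spl = np.asarray(freq, float), np.asarray(spl, float)
        scaled = np.where(freq < f_split, freq / scale_factor, freq)
        order = np.argsort(scaled)
        return scaled[order], spl[order]
    ```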

  6. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
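
    The Monod-type formulation that the genome-scale model is compared against can be summarized in one line; a dual-substrate form (electron donor and acceptor) is assumed here, and all parameters are illustrative.

    ```python
    def dual_monod_rate(mu_max, biomass, s_don, k_don, s_acc, k_acc):
        """Dual-Monod growth rate: limitation by both the electron donor
        (acetate) and the acceptor (Fe(III)); all parameters illustrative."""
        return mu_max * biomass * (s_don / (k_don + s_don)) * (s_acc / (k_acc + s_acc))
    ```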

  7. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2012-12-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).

  8. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2013-09-07

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).

  9. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness: validation is carried out at a single scale and depends on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it; complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
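
    A minimal sketch of qualitative propagation over a signed directed graph, the mechanism behind generating testing scenarios by positive inference; the node names, single-fault start, and first-visit semantics are simplifying assumptions.

    ```python
    def propagate_deviation(sdg, start, sign):
        """sdg maps node -> list of (successor, edge_sign), edge_sign in {+1, -1}.
        Returns the predicted qualitative deviation (+1/-1) per reached node;
        first-visit semantics only, ignoring conflicting paths."""
        signs = {start: sign}
        stack = [start]
        while stack:
            node = stack.pop()
            for nxt, edge_sign in sdg.get(node, []):
                if nxt not in signs:
                    signs[nxt] = signs[node] * edge_sign
                    stack.append(nxt)
        return signs

    # e.g. a fault raising reactor temperature propagates to pressure and level
    sdg = {"T": [("P", +1)], "P": [("L", -1)]}
    print(propagate_deviation(sdg, "T", +1))   # {'T': 1, 'P': 1, 'L': -1}
    ```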

  10. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  11. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
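
    The fractal tool most commonly used for such scale-dependence analysis is box counting; a minimal sketch on a binary land-cover mask follows, with the grid sizes and single regression range as assumptions (the paper identifies three scaling ranges rather than one).

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
        """Slope of log(occupied boxes) vs log(1/box size) for a non-empty
        binary mask; sizes should divide the (cropped) mask evenly."""
        mask = np.asarray(mask, bool)
        counts = []
        for s in sizes:
            h, w = mask.shape[0] // s, mask.shape[1] // s
            blocks = mask[:h * s, :w * s].reshape(h, s, w, s)
            counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope
    ```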

  12. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis are conducted, and a sub-scale flexible wind tunnel model based on the full-scale model's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span (pre-twist or wash-out) is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  13. Scaling in situ cosmogenic nuclide production rates using analytical approximations to atmospheric cosmic-ray fluxes

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.

    2014-01-01

    Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
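
    The nuclide-specific scaling factors described above amount to folding a flux spectrum with an excitation function at two locations and taking the ratio; a minimal sketch, with array shapes and units as assumptions:

    ```python
    import numpy as np

    def _integrate(y, x):
        # trapezoidal rule, version-proof (np.trapz was renamed in NumPy 2.0)
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def nuclide_scaling_factor(energy, flux_site, flux_ref, excitation):
        """Ratio of production rates from folding the atmospheric particle
        flux spectrum with the reaction excitation function at a site and
        at a reference location; all inputs share one energy grid."""
        p_site = _integrate(flux_site * excitation, energy)
        p_ref = _integrate(flux_ref * excitation, energy)
        return p_site / p_ref
    ```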

  14. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    USDA-ARS?s Scientific Manuscript database

    Physically based regional scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional scale model is necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  15. Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds

    NASA Astrophysics Data System (ADS)

    Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.

    2015-12-01

    An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
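
    The bias argument can be made concrete with a toy diagnosis: if the saturation excess varies within a grid box, the cloud fraction diagnosed from an assumed subgrid PDF differs from the all-or-nothing grid-mean answer. A Gaussian PDF is assumed here purely for illustration.

    ```python
    import numpy as np

    def cloud_fraction(s_mean, s_std, n=64):
        """Cloud fraction P(s > 0) from an assumed Gaussian PDF of the
        saturation excess s, via Gauss-Hermite quadrature."""
        nodes, weights = np.polynomial.hermite_e.hermegauss(n)  # weight e^{-x^2/2}
        s = s_mean + s_std * nodes
        return float(np.sum(weights * (s > 0.0)) / np.sqrt(2.0 * np.pi))

    print(cloud_fraction(-0.1, 0.2))   # partly cloudy despite a subsaturated mean
    ```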

  16. Modelling strategies to predict the multi-scale effects of rural land management change

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.

    2011-12-01

    Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.

  17. Downscaling modelling system for multi-scale air quality forecasting

    NASA Astrophysics Data System (ADS)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside a modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher resolution models into larger scale lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and they consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (the HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered. They are chosen depending on selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds averaged Navier-Stokes approach and several turbulent closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with a corresponding mass-conserving interpolation. For the boundaries, a kind of Dirichlet condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the effects of the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.

  18. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
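
    The underlying optimization admits a closed form: the scale factor λ minimizing Σ(λω_i − θ_i)² over a database of calculated frequencies ω_i and reference values θ_i. A minimal sketch (whether the paper uses exactly this objective is an assumption):

    ```python
    import numpy as np

    def optimal_scale_factor(calc, ref):
        """Closed-form least-squares scale factor and its RMSE over a
        database of calculated (calc) and reference (ref) frequencies."""
        calc, ref = np.asarray(calc, float), np.asarray(ref, float)
        lam = calc @ ref / (calc @ calc)          # argmin of sum((lam*calc - ref)^2)
        rmse = np.sqrt(np.mean((lam * calc - ref) ** 2))
        return lam, rmse
    ```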

  19. Experimental and analytical studies of advanced air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.

    1981-01-01

    Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter), and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier is updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements. The simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically-scaled model (one third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced and full-scale models. The analytical model developed earlier is applied to simulate both the one third scale and the full scale response.

  20. Upscaling Cement Paste Microstructure to Obtain the Fracture, Shear, and Elastic Concrete Mechanical LDPM Parameters.

    PubMed

    Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez

    2017-02-28

    Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10⁻¹⁰ m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale.

  1. Upscaling Cement Paste Microstructure to Obtain the Fracture, Shear, and Elastic Concrete Mechanical LDPM Parameters

    PubMed Central

    Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez

    2017-01-01

    Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10⁻¹⁰ m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale. PMID:28772605

  2. Scaling and modeling of turbulent suspension flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1989-01-01

    Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid continuum formulation. The modes of particle-fluid interaction are discussed based on the length and time scale ratios, which depend on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable to the Kolmogorov length scale and concentrations low enough to neglect direct particle-particle interactions, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for polydisperse effects and the extension to dense suspension flows.
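
    The central time-scale ratio in such scaling arguments is a Stokes number; a minimal sketch using the Stokes-drag particle response time, with the fluid time scale (e.g., Kolmogorov) supplied by the caller:

    ```python
    def stokes_number(rho_p, d_p, mu, tau_fluid):
        """Particle-fluid time-scale ratio: tau_p from Stokes drag over a
        characteristic fluid time scale. rho_p [kg/m^3], d_p [m], mu [Pa s]."""
        tau_p = rho_p * d_p ** 2 / (18.0 * mu)   # particle response time [s]
        return tau_p / tau_fluid
    ```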

  3. A watershed scale spatially-distributed model for streambank erosion rate driven by channel curvature

    NASA Astrophysics Data System (ADS)

    McMillan, Mitchell; Hu, Zhiyong

    2017-10-01

    Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely-used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
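
    A minimal sketch of the excess-velocity erosion law at the core of the model; the erodibility coefficient, which the paper parameterizes from tree cover and bank height, is a plain input here:

    ```python
    import numpy as np

    def bank_erosion_rate(u_near_bank, u_avg, k_erod):
        """Retreat rate proportional to the near-bank velocity excess over the
        reach-average velocity; zero where there is no excess.
        k_erod has units of [m/yr per m/s] in this illustrative form."""
        return k_erod * np.maximum(u_near_bank - u_avg, 0.0)
    ```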

  4. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    NASA Astrophysics Data System (ADS)

    Luo, L.; Wang, Z.

    2010-12-01

    Soil Conservation Service Curve Number (SCS-CN) based hydrologic models have been widely used for agricultural watersheds in recent years. However, relative errors arise when the method is applied across differing geographical and climatological conditions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for research regions at two different scales. Based on the typical conditions of the Zhanghe irrigation district in the southern part of China, such as hydrometeorological and surface conditions, SCS-CN based models were established. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the study areas at basin scale and field scale in the Zhanghe irrigation district. Applications were thereby extended from an ordinary meso-scale watershed to the field scale in the paddy-field-dominated Zhanghe irrigation district. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluations and modifications of two coefficients, i.e., the preceding loss and the runoff curve number, were proposed with corresponding models, and a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitted to the research cases were proposed. Simulation precision was improved by introducing a 12 h unit hydrograph of the field area, and the 12 h unit hydrograph was simplified. Comparison between the scales shows that it is more effective to use the SCS-CN model at field scale after its parameters have been calibrated at basin scale. These results can help reveal the rainfall-runoff behaviour of the district. Differences between the established SCS-CN model parameters of the two study regions are also considered. Varied forms of land use and the impacts of human activities were important factors affecting the rainfall-runoff relations in the Zhanghe irrigation district.
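
    For reference, the standard SCS-CN runoff relation that the modified models build on, in SI units; the initial-abstraction ratio of 0.2 is the conventional default, not necessarily the modified value calibrated for Zhanghe:

    ```python
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """SCS-CN direct runoff depth Q [mm] for storm rainfall P [mm]:
        S = 25400/CN - 254 (SI), Ia = ia_ratio*S, Q = (P-Ia)^2/(P-Ia+S)."""
        s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
        ia = ia_ratio * s                 # initial abstraction ("preceding loss")
        return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

    print(scs_cn_runoff(50.0, 75.0))      # ~9.3 mm of runoff from a 50 mm storm
    ```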

  5. Permeability from complex conductivity: an evaluation of polarization magnitude versus relaxation time based geophysical length scales

    NASA Astrophysics Data System (ADS)

    Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.

    2017-12-01

    Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore volume normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field scale geophysical datasets. Two observations favor the implementation of the σ" based model over the τ based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency) whereas τ requires broadband spectral measurements that are extremely challenging and time consuming to accurately measure in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
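
    A minimal sketch of the σ″-based model family in its generic power-law form k = a / (F^b · σ″^c); the constants are placeholders, not the calibrated values from this or any cited study:

    ```python
    def permeability_from_sigma(F, sigma_imag, a=1e-13, b=1.0, c=2.0):
        """Generic power-law permeability estimate from the formation factor F
        and the imaginary conductivity sigma_imag [S/m]. The constants a, b, c
        are illustrative placeholders that must be calibrated to data."""
        return a / (F ** b * sigma_imag ** c)
    ```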

  6. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic justification of the earlier proposed stochastic model exhibiting power-law statistics.
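
    A minimal sketch of the underlying Kirman herding dynamics (fixed event time scale; the paper's variable event time scale extension is not reproduced here):

    ```python
    import numpy as np

    def kirman_step(n, N, eps, c, rng):
        """One switching event in Kirman's herding model: n of N agents hold
        opinion A; switches occur idiosyncratically (eps) or by imitation
        of a randomly met agent of the other type (c)."""
        p_up = (N - n) * (eps + c * n)          # a B-agent converts to A
        p_down = n * (eps + c * (N - n))        # an A-agent converts to B
        return n + 1 if rng.random() < p_up / (p_up + p_down) else n - 1

    rng = np.random.default_rng(1)
    n, N = 50, 100
    fractions = []
    for _ in range(10_000):
        n = kirman_step(n, N, eps=0.002, c=0.02, rng=rng)
        fractions.append(n / N)                 # bimodal herding dynamics
    ```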

  7. Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs

    ERIC Educational Resources Information Center

    Hung, David; Lee, Shu-Shing

    2015-01-01

    Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…

  8. A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission

    PubMed Central

    Parker, Jon; Epstein, Joshua M.

    2013-01-01

    The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120

  9. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  10. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).
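
    FBA itself is a linear program: maximize an objective flux subject to steady-state mass balance Sv = 0 and flux bounds. A toy instance follows; a real FBC-encoded model would supply the stoichiometric matrix, bounds, and objective.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy FBA: maximize v3 subject to S v = 0 and 0 <= v_i <= 10.
    S = np.array([[1.0, -1.0,  0.0],    # metabolite A: made by v1, used by v2
                  [0.0,  1.0, -1.0]])   # metabolite B: made by v2, used by v3
    res = linprog(c=[0.0, 0.0, -1.0],   # linprog minimizes, so negate v3
                  A_eq=S, b_eq=np.zeros(2),
                  bounds=[(0.0, 10.0)] * 3, method="highs")
    print(-res.fun)                     # optimal objective flux: 10.0
    ```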

  11. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-06-01

    Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).

  12. Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics

    NASA Astrophysics Data System (ADS)

    Marcé, R.; Armengol, J.

    2009-01-01

    One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
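
    A minimal sketch of the Nutrient Spiralling metric used as the working quantity here: the uptake length S_w recovered from a first-order longitudinal decline in concentration (steady, uniform conditions assumed):

    ```python
    import numpy as np

    def uptake_length(x_m, conc):
        """Uptake length S_w [m] from ln-linear regression of downstream
        concentrations, assuming C(x) = C0 * exp(-x / S_w) with conc > 0.
        The in-stream loss over a reach of length L is then 1 - exp(-L/S_w)."""
        slope, _ = np.polyfit(np.asarray(x_m, float), np.log(conc), 1)
        return -1.0 / slope
    ```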

  13. A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2016-02-01

    Large-eddy simulation (LES) solves only the large-scale part of turbulent flows by using a scale separation based on a filtering operation. The solution of the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), filtered DNS in the case of classic flows simulated with a pseudo-spectral solver and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
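
    For orientation, the (un-regularized) gradient model that the DRG procedure starts from has a simple closed form; the dynamic regularization itself is not reproduced here:

    ```python
    import numpy as np

    def gradient_model_tau(grad_u, delta):
        """Leading-order gradient (Clark) model for the SGS stress tensor:
        tau_ij = (delta^2 / 12) * (du_i/dx_k)(du_j/dx_k), with grad_u the
        3x3 filtered velocity-gradient tensor and delta the filter width."""
        grad_u = np.asarray(grad_u, float)
        return (delta ** 2 / 12.0) * grad_u @ grad_u.T
    ```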

  14. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model, in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state division problem in multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
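
    For a fixed transition matrix, the remaining-life computation at the core of such Markov approaches reduces to the standard absorbing-chain result sketched below; the paper's FCM state division, dynamic weighting of historical versus real-time data, and multi-scale extension are not reproduced, and the toy matrix is illustrative.

        import numpy as np

        def expected_remaining_life(P, state):
            """Mean number of steps before absorption into the failure state.
            P     : (n+1, n+1) transition matrix; last state = failure (absorbing)
            state : current degradation state index (0..n-1).
            Solves (I - Q) t = 1, the standard absorbing-chain identity."""
            Q = P[:-1, :-1]                       # transitions among transient states
            n = Q.shape[0]
            t = np.linalg.solve(np.eye(n) - Q, np.ones(n))
            return t[state]

        # Toy 3-state degradation chain: healthy -> degraded -> failed.
        P = np.array([[0.90, 0.10, 0.00],
                      [0.00, 0.80, 0.20],
                      [0.00, 0.00, 1.00]])
        print(expected_remaining_life(P, 0))      # 15 steps (10 healthy + 5 degraded)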

  15. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    NASA Astrophysics Data System (ADS)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate- or market-induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches, focusing our efforts on food, energy, and water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national, and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models are well suited to linking and integrating across time and spatial scales, and across a wide variety of models, including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model applied to understand FEW resources and economic patterns at the national scale. First, the MME aggregates individual optimization problems for the relevant players in an energy, electricity, or food market, and captures important food supply chain components of trade and food distribution, accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both the energy and food sectors.

  16. Multi-scale analysis of a household level agent-based model of landcover change.

    PubMed

    Evans, Tom P; Kelley, Hugh

    2004-08-01

    Scale issues have significant implications for the analysis of social and biophysical processes in complex systems. These same scale implications are likewise considerations for the design and application of models of landcover change. Scale issues have wide-ranging effects from the representativeness of data used to validate models to aggregation errors introduced in the model structure. This paper presents an analysis of how scale issues affect an agent-based model (ABM) of landcover change developed for a research area in the Midwest, USA. The research presented here explores how scale factors affect the design and application of agent-based landcover change models. The ABM is composed of a series of heterogeneous agents who make landuse decisions on a portfolio of cells in a raster-based programming environment. The model is calibrated using measures of fit derived from both spatial composition and spatial pattern metrics from multi-temporal landcover data interpreted from historical aerial photography. A model calibration process is used to find a best-fit set of parameter weights assigned to agents' preferences for different landuses (agriculture, pasture, timber production, and non-harvested forest). Previous research using this model has shown how a heterogeneous set of agents with differing preferences for a portfolio of landuses produces the best fit to landcover changes observed in the study area. The scale dependence of the model is explored by varying the resolution of the input data used to calibrate the model (observed landcover), ancillary datasets that affect land suitability (topography), and the resolution of the model landscape on which agents make decisions. To explore the impact of these scale relationships the model is run with input datasets constructed at the following spatial resolutions: 60, 90, 120, 150, 240, 300 and 480 m. The results show that the distribution of landuse-preference weights differs as a function of scale. In addition, with the gradient descent model fitting method used in this analysis the model was not able to converge to an acceptable fit at the 300 and 480 m spatial resolutions. This is a product of the ratio of the input cell resolution to the average parcel size in the landscape. This paper uses these findings to identify scale considerations in the design, development, validation and application of ABMs of landcover change.

  17. Geometrical influence of a deposited particle on the performance of bridged carbon nanotube-based mass detectors

    NASA Astrophysics Data System (ADS)

    Ali-Akbari, H. R.; Ceballes, S.; Abdelkefi, A.

    2017-10-01

    A nonlocal continuum-based model is derived to simulate the dynamic behavior of bridged carbon nanotube-based nano-scale mass detectors. The carbon nanotube (CNT) is modeled as an elastic Euler-Bernoulli beam with von-Kármán-type geometric nonlinearity. In order to achieve better accuracy in the characterization of CNTs, the geometrical properties of an attached nano-scale particle are introduced into the model through its moment of inertia with respect to the central axis of the beam. The inter-atomic long-range interactions within the structure of the CNT are incorporated into the model using Eringen's nonlocal elastic field theory. In this model, the mass can be deposited along an arbitrary length of the CNT. After deriving the full nonlinear equations of motion, the natural frequencies and corresponding mode shapes are extracted from a linear eigenvalue problem. The results show that the geometry of the attached particle has a significant impact on the dynamic behavior of the CNT-based mechanical resonator, especially for resonators with small aspect ratios. The developed model and analysis are beneficial for nano-scale mass identification when a CNT-based mechanical resonator is used as a small-scale bio-mass sensor and the deposited particles, such as proteins, enzymes, cancer cells, DNA and other nano-scale biological objects, have different and complex shapes.
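
    The mass-detection principle can be illustrated with a linear Rayleigh-quotient estimate for a bridged (clamped-clamped) beam carrying a point mass. The sketch below omits the paper's nonlocal elasticity and von-Kármán nonlinearity, uses a simple trial mode shape, and all magnitudes are illustrative rather than fitted CNT properties; the particle's moment of inertia J enters through the slope of the mode shape at the attachment point.

        import numpy as np

        def _trapz(y, x):
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        def fundamental_freq(EI, rhoA, L, m_add=0.0, J=0.0, x0=0.5, n=2001):
            """Rayleigh-quotient estimate of the fundamental frequency (Hz) of a
            clamped-clamped beam with a point mass m_add (and rotary inertia J)
            at relative position x0, using the trial shape
            phi(x) = (1 - cos(2*pi*x/L)) / 2, which satisfies the clamped BCs."""
            x = np.linspace(0.0, L, n)
            phi = 0.5 * (1.0 - np.cos(2 * np.pi * x / L))
            d2phi = 0.5 * (2 * np.pi / L) ** 2 * np.cos(2 * np.pi * x / L)
            phi0 = 0.5 * (1.0 - np.cos(2 * np.pi * x0))           # shape at the mass
            dphi0 = 0.5 * (2 * np.pi / L) * np.sin(2 * np.pi * x0)  # slope at the mass
            num = EI * _trapz(d2phi ** 2, x)
            den = rhoA * _trapz(phi ** 2, x) + m_add * phi0 ** 2 + J * dphi0 ** 2
            return np.sqrt(num / den) / (2.0 * np.pi)

        f0 = fundamental_freq(1e-24, 1e-15, 1e-7)                 # bare beam
        fm = fundamental_freq(1e-24, 1e-15, 1e-7, m_add=1e-24)    # mass-loaded
        print((f0 - fm) / f0)   # relative frequency shift: the detection signal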

  18. The research of selection model based on LOD in multi-scale display of electronic map

    NASA Astrophysics Data System (ADS)

    Zhang, Jinming; You, Xiong; Liu, Yingzhen

    2008-10-01

    This paper proposes a selection model based on level of detail (LOD) to aid the multi-scale display of electronic maps. The ratio of display scale to map scale is regarded as an LOD operator. Rules for setting the LOD operator are also presented: a categorization rule, a classification rule, an elementary rule, and a spatial geometry character rule.

  19. Conceptual design and analysis of a dynamic scale model of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.

    1994-01-01

    This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, scale factor trade studies, and design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance, and cost of a SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended, owing to the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify the critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower-fidelity dynamic similarity scaling can be used for non-critical components.

  20. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. Likely applications include generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
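
    The nudging idea is compact enough to sketch: a relaxation term -mu * (I_h(u_model) - I_h(u_obs)) is added to the model equations, with I_h an interpolant of coarse, large-footprint observations. The toy below uses 1D periodic diffusion and block averages for I_h; it is a schematic of CDA, not the HYDRUS configuration of the study, and all constants are illustrative.

        import numpy as np

        nx, dx, dt, nu, mu = 200, 1.0 / 200, 1e-5, 0.01, 50.0
        x = np.linspace(0.0, 1.0, nx, endpoint=False)

        def step(u, forcing=0.0):
            # explicit step of periodic 1D diffusion plus an optional forcing term
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
            return u + dt * (nu * lap + forcing)

        def coarse_interpolant(u, blocks=10):
            """I_h: average u over coarse blocks, then hold each block value
            constant (the 'large-footprint' observation operator)."""
            return np.repeat(u.reshape(blocks, -1).mean(axis=1), nx // blocks)

        truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)  # reference run
        model = np.zeros(nx)   # assimilated run, started from a wrong state
        free = np.zeros(nx)    # free run, for comparison
        for _ in range(20000):
            obs = coarse_interpolant(truth)
            nudge = -mu * (coarse_interpolant(model) - obs)
            truth = step(truth)
            model = step(model, forcing=nudge)
            free = step(free)
        # the assimilated run tracks the reference far more closely than the free run
        print(np.abs(free - truth).max(), np.abs(model - truth).max())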

  1. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    The increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling at finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model, parameterized by small-scale spatial heterogeneities, for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which serve as a benchmark) and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to move efficiently to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.
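
    One common way to build such a subgrid parameterization (a hedged sketch, not necessarily the authors' exact closure) is to replace a cell's depth-storage and inundated-area terms in the diffusion wave equation with statistics of the sub-grid elevation distribution:

        import numpy as np

        def subgrid_terms(h, z_sub):
            """Given a cell-average ponded water level h (m) and a sample of
            sub-grid surface elevations z_sub (m) within the cell, return
            (storage, flowing_fraction): depression-aware water volume per
            unit area, and the fraction of the cell actually inundated.
            These replace depth h and area fraction 1.0 in the storage and
            flux terms of the surface-flow equation."""
            depth = np.maximum(h - z_sub, 0.0)
            return depth.mean(), float((depth > 0).mean())

        z = np.random.default_rng(0).normal(0.0, 0.05, 10000)  # 5 cm microtopography
        print(subgrid_terms(0.02, z))  # shallow flow: <100% of the cell carries water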

  2. Three Collaborative Models for Scaling Up Evidence-Based Practices

    PubMed Central

    Roberts, Rosemarie; Jones, Helen; Marsenich, Lynne; Sosna, Todd; Price, Joseph M.

    2015-01-01

    The current paper describes three models of research-practice collaboration for scaling up evidence-based practices (EBP): (1) the Rolling Cohort model in England, (2) the Cascading Dissemination model in San Diego County, and (3) the Community Development Team model in 53 California and Ohio counties. Multidimensional Treatment Foster Care (MTFC) and KEEP are the focal evidence-based practices, designed to improve outcomes for children and families in the child welfare, juvenile justice, and mental health systems. The three scale-up models each originated from collaboration between community partners and researchers with the shared goal of widespread implementation and sustainability of MTFC/KEEP. The three models were implemented in a variety of contexts: Rolling Cohort was implemented nationally, Cascading Dissemination was implemented within one county, and Community Development Team was targeted at the state level. The current paper presents an overview of the development of each model, the policy frameworks in which they are embedded, system challenges encountered during scale-up, and lessons learned. Common elements of successful scale-up efforts, barriers to success, factors relating to enduring practice relationships, and future research directions are discussed. PMID:21484449

  3. Preferential flow across scales: how important are plot scale processes for a catchment scale model?

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian

    2017-04-01

    Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale, and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) to a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. We therefore hypothesized that the discharge performance could be improved by using a dual-permeability representation of preferential flow. We used information from bromide irrigation experiments performed on three 1 m2 plots to parameterize preferential flow. In a first step, we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m2 column of the headwater catchment model, varying the dual-permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different, parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) to test the effect of incorporating preferential flow in the headwater-catchment-scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at the plot scale. The simulated discharge time series of the different parameterizations clustered into six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. Yet in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at the plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot-scale processes and heterogeneities are at the catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model through different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of the model's internal processes (fitting profiles at the plot scale, unchanged response at the catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude with certainty that the quantitative discharge performance at the catchment scale could be improved by a dual-permeability approach; this will be tested in a parameter optimization process.

  4. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    PubMed Central

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We demonstrated partial strong MI for both ACT models and partial strict MI for the CRU Scale. Our results suggest that the scores of the German ACT and the CRU Scale for nursing homes are acceptably reliable and valid. However, as the ACT lacked strict MI, observed variables (or scale scores based on them) cannot be compared between provider groups. Rather, group comparisons should be based on latent variable models, which consider the different residual variances of each group. PMID:27656156

  5. Derivation of a GIS-based watershed-scale conceptual model for the St. Jones River Delaware from habitat-scale conceptual models.

    PubMed

    Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub

    2009-08-01

    Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE, is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues, we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.
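
    The numerical core of the aggregation step can be sketched as an area-weighted sum of habitat-level matrices; the matrices, habitat names, and weights below are hypothetical placeholders, and the study's Monte Carlo narrowing step is not shown.

        import numpy as np

        # Hypothetical habitat-scale interaction matrices (entries illustrative).
        habitat_matrices = {
            "forest":      np.array([[0.0, 0.6], [0.2, 0.0]]),
            "agriculture": np.array([[0.0, 0.9], [0.4, 0.0]]),
            "wetland":     np.array([[0.0, 0.3], [0.1, 0.0]]),
        }
        # Weights from LULC coverage fractions of each habitat in the watershed.
        lulc_fraction = {"forest": 0.45, "agriculture": 0.40, "wetland": 0.15}

        watershed = sum(lulc_fraction[h] * M for h, M in habitat_matrices.items())
        print(watershed)   # the numerical component of the watershed-scale model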

  6. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  7. Landscape-based population viability models demonstrate importance of strategic conservation planning for birds

    Treesearch

    Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh; D. Todd Jones-Farland

    2013-01-01

    Efforts to conserve regional biodiversity in the face of global climate change, habitat loss and fragmentation will depend on approaches that consider population processes at multiple scales. By combining habitat and demographic modeling, landscape-based population viability models effectively relate small-scale habitat and landscape patterns to regional population...

  8. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-02

    Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation. Tamer M. Wasfy, Paramsothy Jayakumar, Dave... Only a briefing outline is available in place of an abstract: NRMM; objectives; soft soils; review of physics-based soil models; MBD/DEM modeling formulation (joint and contact constraints, DEM cohesive soil model); cone penetrometer experiment; vehicle-soil model; vehicle mobility DOE procedure; simulation results; concluding remarks.

  9. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
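
    A minimal usage sketch of the web API is shown below; the endpoint paths follow the public documentation at http://bigg.ucsd.edu/data_access, but the exact response fields are assumptions that should be verified against the live service.

        import requests

        base = "http://bigg.ucsd.edu/api/v2"

        # List the curated genome-scale models hosted in the knowledge base.
        models = requests.get(f"{base}/models", timeout=30).json()
        print(models.get("results_count"))        # number of models (assumed field)

        # Retrieve one model record by its standardized BiGG identifier.
        ecoli = requests.get(f"{base}/models/iJO1366", timeout=30).json()
        print(ecoli.get("organism"), ecoli.get("reaction_count"))  # assumed fields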

  10. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
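
    The data-combination step underlying all three approaches is weighted least squares with full noise covariance propagation, which can be sketched generically as below; the SRBF design matrices themselves are not constructed here, and all names and sizes are illustrative.

        import numpy as np

        def combine_weighted_ls(A1, y1, C1, A2, y2, C2):
            """Weighted least-squares combination of two noisy datasets (e.g. a
            satellite-only GGM and local gravity data) observing the same
            coefficient vector x. Returns the estimate and its full noise
            covariance matrix, usable for quality control.
            Normal equations: N = sum_i A_i^T C_i^{-1} A_i, b = sum_i A_i^T C_i^{-1} y_i."""
            W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)
            N = A1.T @ W1 @ A1 + A2.T @ W2 @ A2
            b = A1.T @ W1 @ y1 + A2.T @ W2 @ y2
            cov = np.linalg.inv(N)
            return cov @ b, cov

        rng = np.random.default_rng(0)
        x_true = rng.normal(size=3)
        A1, A2 = rng.normal(size=(10, 3)), rng.normal(size=(6, 3))
        C1, C2 = 0.01 * np.eye(10), 0.04 * np.eye(6)
        y1 = A1 @ x_true + rng.multivariate_normal(np.zeros(10), C1)
        y2 = A2 @ x_true + rng.multivariate_normal(np.zeros(6), C2)
        x_hat, cov = combine_weighted_ls(A1, y1, C1, A2, y2, C2)
        print(x_hat, np.sqrt(np.diag(cov)))   # estimate with formal errors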

  11. Forecasting an invasive species’ distribution with global distribution data, local data, and physiological information

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Young, Nicholas E.; Talbert, Marian; Talbert, Colin

    2018-01-01

    Understanding invasive species distributions and potential invasions often requires broad‐scale information on the environmental tolerances of the species. Further, resource managers are often faced with knowing these broad‐scale relationships as well as nuanced environmental factors related to their landscape that influence where an invasive species occurs and potentially could occur. Using invasive buffelgrass (Cenchrus ciliaris), we developed global models and local models for Saguaro National Park, Arizona, USA, based on location records and literature on physiological tolerances to environmental factors to investigate whether environmental relationships of a species at a global scale are also important at local scales. In addition to correlative models with five commonly used algorithms, we also developed a model using a priori user‐defined relationships between occurrence and environmental characteristics based on a literature review. All correlative models at both scales performed well based on statistical evaluations. The user‐defined curves closely matched those produced by the correlative models, indicating that the correlative models may be capturing mechanisms driving the distribution of buffelgrass. Given climate projections for the region, both global and local models indicate that conditions at Saguaro National Park may become more suitable for buffelgrass. Combining global and local data with correlative models and physiological information provided a holistic approach to forecasting invasive species distributions.

  12. Modeling Framework for Fracture in Multiscale Cement-Based Material Structures

    PubMed Central

    Qian, Zhiwei; Schlangen, Erik; Ye, Guang; van Breugel, Klaas

    2017-01-01

    Multiscale modeling for cement-based materials, such as concrete, is a relatively young subject, but there are already a number of different approaches to study different aspects of these classical materials. In this paper, the parameter-passing multiscale modeling scheme is established and applied to address the multiscale modeling problem for the integrated system of cement paste, mortar, and concrete. The block-by-block technique is employed to solve the length scale overlap challenge between the mortar level (0.1–10 mm) and the concrete level (1–40 mm). The microstructures of cement paste are simulated by the HYMOSTRUC3D model, and the material structures of mortar and concrete are simulated by the Anm material model. Afterwards the 3D lattice fracture model is used to evaluate their mechanical performance by simulating a uniaxial tensile test. The simulated output properties at a lower scale are passed to the next higher scale to serve as input local properties. A three-level multiscale lattice fracture analysis is demonstrated, including cement paste at the micrometer scale, mortar at the millimeter scale, and concrete at centimeter scale. PMID:28772948

  13. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
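
    Of the two inversion techniques, the direct scaling method is simple enough to sketch: each region's prior emissions are rescaled by the ratio of observed to modeled NO2 columns. The numbers below are illustrative, and the DKF/DDM variant, which iterates with sensitivity information, is not reproduced.

        import numpy as np

        def direct_scaling(E_prior, omega_obs, omega_model):
            """Direct scaling top-down emission estimate: multiply each region's
            prior NOx emissions by the ratio of observed to modeled NO2 column
            density (a single-step inversion without error covariances)."""
            return E_prior * (omega_obs / omega_model)

        E_prior   = np.array([120.0, 80.0, 45.0])     # tons/day per region (illustrative)
        omega_obs = np.array([3.2e15, 2.4e15, 0.9e15])  # molecules/cm^2
        omega_mod = np.array([2.5e15, 2.2e15, 1.1e15])
        print(direct_scaling(E_prior, omega_obs, omega_mod))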

  14. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-07-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  15. Incorporating scale into digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Dragut, L. D.; Eisank, C.; Strasser, T.

    2009-04-01

    Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Digital terrain analysis seeks to produce process-based terrain attributes meaningful to the soil properties of interest. Typically, standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of the resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between the scale domains of terrain information and the soil properties of interest, which further propagate biases into soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics over increasing neighborhood sizes. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement in R squared from 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 X 55 m neighborhood. This indicates the optimum size (scale) at which curvature information influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling. Computers & Geosciences.
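
    The workflow of generalizing a terrain derivative with focal statistics and retaining the first correlation peak can be sketched as follows; this is a schematic re-implementation, not the authors' code, and the window sizes and synthetic data are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def best_scale(attribute, yield_map, sizes):
            """Generalize a terrain attribute with focal means over increasing
            window sizes and return the first window at which its correlation
            with crop yield peaks (the scale level to retain)."""
            r = []
            for s in sizes:
                smoothed = uniform_filter(attribute, size=s)
                r.append(np.corrcoef(smoothed.ravel(), yield_map.ravel())[0, 1])
            r = np.asarray(r)
            peaks = np.where((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))[0] + 1
            k = peaks[0] if peaks.size else int(np.argmax(r))
            return sizes[k], r

        rng = np.random.default_rng(0)
        curvature = rng.normal(size=(200, 200))
        # synthetic yield correlated with curvature generalized at ~19 cells
        yield_map = uniform_filter(curvature, size=19) + 0.1 * rng.normal(size=(200, 200))
        print(best_scale(curvature, yield_map, sizes=(3, 9, 15, 21, 27, 33)))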

  16. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    PubMed

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Models following this approach will provide insights into behaviors (including diversity) that take place at the ecosystem scale.
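
    The essence of MO-FBA, trading off the growth of community members under shared resource constraints, can be shown with a toy two-strain linear program; real applications replace the two scalar constraints below with genome-scale stoichiometric matrices, and all yields and budgets here are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        def community_optimum(w, yields=(0.4, 0.6), substrate=10.0):
            """Maximize w*biomass1 + (1-w)*biomass2 subject to a shared substrate
            budget and a fixed yield per strain. Scanning w traces the trade-off
            front that MO-FBA characterizes."""
            # variables: [uptake1, uptake2, biomass1, biomass2]
            c = [0.0, 0.0, -w, -(1.0 - w)]                 # linprog minimizes
            A_ub, b_ub = [[1.0, 1.0, 0.0, 0.0]], [substrate]  # shared substrate
            A_eq = [[-yields[0], 0.0, 1.0, 0.0],           # biomass_i = yield_i * uptake_i
                    [0.0, -yields[1], 0.0, 1.0]]
            b_eq = [0.0, 0.0]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * 4)
            return res.x[2], res.x[3]

        for w in (0.2, 0.5, 0.8):
            print(w, community_optimum(w))   # weight sweep exposes the trade-off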

  17. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is a growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale, one that integrates the catchment heterogeneity and the full measured rainfall variability provided by high-resolution radar technologies, remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the scale-effect challenges in urban hydrological modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
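
    The fractal analysis referred to above typically rests on box counting; a minimal sketch for a binary raster (e.g. impervious cover or a rasterized sewer network) is given below, with synthetic data standing in for the catchment datasets.

        import numpy as np

        def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
            """Box-counting estimate of the fractal dimension of a binary map:
            count the boxes of side s that contain any feature, then fit the
            slope of log N(s) against log(1/s)."""
            counts = []
            h, w = mask.shape
            for s in sizes:
                m = mask[:h - h % s, :w - w % s]
                blocks = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
                counts.append(int(blocks.any(axis=(1, 3)).sum()))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(0)
        mask = rng.random((256, 256)) < 0.1   # placeholder for an impervious-area raster
        print(box_counting_dimension(mask))   # compare across pixel sizes and datasets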

  18. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  19. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  20. A quantum wave based compact modeling approach for the current in ultra-short DG MOSFETs suitable for rapid multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander

    2017-11-01

    Source-to-drain (SD) tunneling degrades device performance in MOSFETs with channel lengths below 10 nm. Modeling quantum mechanical effects, including SD tunneling, has therefore gained importance, especially for compact model developers. The non-equilibrium Green's function (NEGF) method has become a state-of-the-art approach for nano-scale device simulation in recent years. In the sense of a multi-scale simulation approach, it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models, which capture the quantum effects of nano-scale devices. In this work, an NEGF-based analytical model for nano-scale double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution from a classical compact model and a 1D NEGF formalism for calculating the device current, taking quantum mechanical effects into account. The potential calculation omits iterative coupling and allows a straightforward current calculation. The model is based on a ballistic NEGF approach, whereby backscattering effects are considered as a second-order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model are examined in comparison with numerical NanoMOS TCAD data for various channel lengths. With the help of this model, investigations of short-channel and temperature effects are performed.
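
    For orientation, a generic (textbook) 1D ballistic NEGF transmission calculation is sketched below: a tight-binding channel Hamiltonian, analytic self-energies for semi-infinite 1D leads, and T(E) = Tr[Gamma1 G Gamma2 G^dagger]. It illustrates the SD-tunneling physics a compact model must capture, but it is not the paper's closed-form potential solution or backscattering treatment; all parameters are in units of the hopping energy t.

        import numpy as np

        def transmission(E, U, t=1.0, eta=1e-9):
            """Ballistic transmission through a 1D tight-binding channel with
            onsite potential profile U (the barrier that SD tunneling crosses)."""
            n = len(U)
            H = np.diag(2 * t + np.asarray(U)) - t * (np.eye(n, k=1) + np.eye(n, k=-1))
            # lead dispersion E = 2t(1 - cos ka)  =>  surface self-energy -t e^{ika}
            ka = np.arccos(np.clip(1.0 - E / (2 * t), -1.0, 1.0) + 0j)
            sigma = -t * np.exp(1j * ka)
            S1 = np.zeros((n, n), complex); S1[0, 0] = sigma
            S2 = np.zeros((n, n), complex); S2[-1, -1] = sigma
            G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - S1 - S2)
            G1 = 1j * (S1 - S1.conj().T)
            G2 = 1j * (S2 - S2.conj().T)
            return np.trace(G1 @ G @ G2 @ G.conj().T).real

        # A small source-to-drain barrier: finite transmission below the barrier
        # top is exactly the tunneling contribution relevant for sub-10 nm channels.
        barrier = np.concatenate([np.zeros(5), 0.3 * np.ones(6), np.zeros(5)])
        for E in (0.1, 0.25, 0.4):
            print(E, transmission(E, barrier))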

  1. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study; it is therefore not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worthwhile to look at the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, which raises the question of how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies, the process scale, measurement scale and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models at different scales.
    Micro scale (e.g. extent of a plot, field or hillslope):
    (+) enables process research based on controlled experiments (e.g. infiltration, root water uptake, chemical matter transport);
    (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible;
    (+) equations based on first principles, partly of PDE type, are available for several processes (though not for all), because measurement and modelling scales are compatible;
    (-) the spatial model domain is hardly representative of larger spatial entities, including the regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements.
    Meso scale (e.g. extent of a small to large catchment or region):
    (+) the spatial extent of the model domain is approximately the same as that of the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions;
    (+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available;
    (+) data on some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty;
    (+) equations, partly based on simple water budgeting, partly variants of PDE-type equations, are available for most hydrological processes, which enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions and landscapes;
    (-) process scale, measurement scale and modelling scale differ from each other for a number of processes, such as runoff generation;
    (-) the process formulations (usually derived from micro-scale studies) cannot be transferred directly to the modelling domain, and upscaling procedures for this purpose are not readily and generally available.
    Macro scale (e.g. extent of a continent up to global):
    (+) the spatial extent of the model may cover the whole Earth, which enables an attractive global display of model results;
    (+) model results might be technically interchangeable or at least comparable with results from other global models, such as global climate models;
    (-) process scale, measurement scale and modelling scale differ heavily from each other for all hydrological and associated processes;
    (-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken;
    (-) both state-condition and boundary-flux data are hardly available for the whole model domain; water management data and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability;
    (-) since process formulation, and hence modelling reliability, is very limited at this scale, such models can hardly show any explanatory skill or prognostic power;
    (-) since both the entire model domain and its spatial sub-units cover large areas, model results represent values averaged over at least the sub-unit's extent; in many cases, the applied time scale implies long-term averaging in time, too.
    We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. Many results of current global modelling studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate: many hydrological processes are of a non-linear nature, including threshold-type behaviour, and such features cannot be reflected by large-scale entities. The model results can therefore be of little or no use for water resources decisions, and may even be misleading in public debates or decision making. Some recently developed sustainability concepts, e.g. the "Planetary Boundaries" within which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major sustainability problems on Earth, e.g. water scarcity, manifest not on a global but on a regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. The challenge is therefore to derive models and observation programmes for regional scales. In case a global display is desired, future efforts should be directed towards developing a global picture based on a mosaic of sound regional assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed: for which purposes can models at this (global) scale be used?

  2. Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles

    NASA Astrophysics Data System (ADS)

    He, Yu; Lin, Wen-bin

    2017-04-01

    The cosmological model of cold dark matter (CDM) with dark energy and a scale-invariant adiabatic primordial power spectrum has come to be considered the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) are a prominent candidate for the CDM. Many models extending the standard model can provide WIMPs naturally. Standard calculations of the relic abundance of dark matter show that WIMPs agree well with the astronomical observation of Ω_DM h^2 ≈ 0.11. WIMPs have a relatively large mass and a relatively slow velocity, so they aggregate easily into clusters, and the results of numerical simulations based on WIMPs agree well with observations of cosmic large-scale structures. On the experimental side, present accelerator and non-accelerator direct/indirect detection efforts are mostly designed for WIMPs. Thus, wide attention has been paid to the CDM model based on WIMPs. However, the ΛCDM model has a serious problem in explaining small-scale structures below one Mpc. Different dark matter models have been proposed to alleviate this small-scale problem, but so far there is no evidence strong enough to exclude the CDM model. We review the research progress on WIMP-based dark matter models, including the WIMP miracle, numerical simulations, the small-scale problem, and direct/indirect detection; analyze the criteria for discriminating between "cold", "hot", and "warm" dark matter; and present future prospects for this field.

  3. Scaling tunable network model to reproduce the density-driven superlinear relation

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Shan, Xiaoya; Qin, Yuhao; Yu, Senbin; Xu, Lida; Gao, Zi-You

    2018-03-01

    Previous works have shown the universality of allometric scaling of total and density values at the city level, but our understanding of how region size affects this universality remains inadequate. Here, we revisit the scaling relations between gross domestic product (GDP) and population based on total and density values, and we first reveal that the allometric scaling under density values is universal across regions. The scaling exponent β under density values lies in the range (1.0, 2.0], which unexpectedly exceeds the range observed by Pan et al. [Nat. Commun. 4, 1961 (2013)]. To explain this wider range, we propose a network model based on a 2D lattice space with the spatial correlation factor α as a parameter. Numerical experiments show that the scaling exponent β generated by our model is fully tunable through the spatial correlation factor α. Our model provides a general platform for extensive urban and regional studies.
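
    As an aside, a scaling exponent of this kind is conventionally estimated by ordinary least squares in log-log space; a minimal sketch with hypothetical population/GDP data (our illustration, not the authors' network model):

      import numpy as np

      def scaling_exponent(population, gdp):
          """Fit gdp ~ c * population**beta by least squares in log-log space."""
          beta, logc = np.polyfit(np.log(population), np.log(gdp), 1)
          return beta, np.exp(logc)

      # hypothetical data for five regions
      pop = np.array([1e5, 5e5, 1e6, 5e6, 1e7])
      gdp = np.array([2e9, 1.4e10, 3.2e10, 2.1e11, 5.0e11])
      beta, c = scaling_exponent(pop, gdp)
      print(f"beta = {beta:.2f}")  # beta > 1 indicates superlinear scaling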

  4. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are therefore readily applied. These resolve the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous, and this requires a larger local-scale domain to obtain a converged solution. A classical homogenization-based approach might therefore not be feasible for the simulation of very small flows. In order to study small flows, a model allowing feasibly sized local-scale domains even at very small flow rates is developed. This was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model agrees better with the direct deterministic one than the conventional homogenization type of model does, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975

  5. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform an effective low-rank approximation of the kernel matrix, but also span a model suffering minimal information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
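
    The low-rank role of prototypes can be illustrated with a Nystrom-style approximation of a kernel matrix from a small set of prototype points; this is a generic sketch of the technique, not the PVM implementation:

      import numpy as np

      def rbf(X, Y, gamma=0.5):
          """RBF kernel matrix between the rows of X and Y."""
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))               # data points
      P = X[rng.choice(500, 20, replace=False)]   # 20 prototype vectors

      C = rbf(X, P)                               # n x m cross-kernel
      W = rbf(P, P)                               # m x m prototype kernel
      K_approx = C @ np.linalg.pinv(W) @ C.T      # low-rank surrogate for the full kernel

      K = rbf(X, X)
      print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))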

  6. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
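
    A toy sketch of the congruency idea: search for parameters of two stand-in models so that their common output agrees. The paper used a genetic algorithm over 16 bounded parameters; here a crude random search and placeholder model functions illustrate only the objective:

      import numpy as np

      def continuum_output(p):   # toy stand-in for the tissue-level model
          return 2.0 * p[0] + p[1] ** 2

      def abm_output(q):         # toy stand-in for the cell-level model
          return q[0] + 3.0 * q[1]

      rng = np.random.default_rng(1)
      best, best_err = None, np.inf
      for _ in range(10_000):    # crude random search within parameter bounds
          p = rng.uniform(0, 2, size=2)
          q = rng.uniform(0, 2, size=2)
          err = abs(continuum_output(p) - abm_output(q))  # congruency mismatch
          if err < best_err:
              best, best_err = (p, q), err
      print("mismatch after refinement:", best_err)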

  7. Grizzly bear habitat selection is scale dependent.

    PubMed

    Ciarniello, Lana M; Boyce, Mark S; Seip, Dale R; Heard, Douglas C

    2007-07-01

    The purpose of our study is to show how ecologists' interpretation of habitat selection by grizzly bears (Ursus arctos) is altered by the scale of observation and also how management questions would be best addressed using predetermined scales of analysis. Using resource selection functions (RSF) we examined how variation in the spatial extent of availability affected our interpretation of habitat selection by grizzly bears inhabiting mountain and plateau landscapes. We estimated separate models for females and males using three spatial extents: within the study area, within the home range, and within predetermined movement buffers. We employed two methods for evaluating the effects of scale on our RSF designs. First, we chose a priori six candidate models, estimated at each scale, and ranked them using Akaike Information Criteria. Using this method, results changed among scales for males but not for females. For female bears, models that included the full suite of covariates predicted habitat use best at each scale. For male bears that resided in the mountains, models based on forest successional stages ranked highest at the study-wide and home range extents, whereas models containing covariates based on terrain features ranked highest at the buffer extent. For male bears on the plateau, each scale estimated a different highest-ranked model. Second, we examined differences among model coefficients across the three scales for one candidate model. We found that both the magnitude and direction of coefficients were dependent upon the scale examined; results varied between landscapes, scales, and sexes. Greenness, reflecting lush green vegetation, was a strong predictor of the presence of female bears in both landscapes and males that resided in the mountains. Male bears on the plateau were the only animals to select areas that exposed them to a high risk of mortality by humans. Our results show that grizzly bear habitat selection is scale dependent. Further, the selection of resources can be dependent upon the availability of a particular vegetation type on the landscape. From a management perspective, decisions should be based on a hierarchical process of habitat selection, recognizing that selection patterns vary across scales.
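
    Resource selection functions of this kind are commonly fitted as used-versus-available logistic regressions and ranked by AIC; a hedged sketch with synthetic covariates (not the study's data):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      greenness = rng.uniform(size=n)
      terrain = rng.uniform(size=n)
      # used (1) vs available (0) points; use depends on greenness in this toy
      p = 1 / (1 + np.exp(-(2 * greenness - 1)))
      used = rng.binomial(1, p)

      candidates = {
          "greenness": sm.add_constant(greenness),
          "terrain": sm.add_constant(terrain),
          "full": sm.add_constant(np.column_stack([greenness, terrain])),
      }
      for name, X in candidates.items():
          fit = sm.Logit(used, X).fit(disp=0)
          print(name, "AIC =", round(fit.aic, 1))   # lower AIC ranks higher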

  8. Assessment of the scale effect on statistical downscaling quality at a station scale using a weather generator-based model

    USDA-ARS?s Scientific Manuscript database

    The resolution of General Circulation Models (GCMs) is too coarse to assess the fine-scale or site-specific impacts of climate change. Downscaling approaches, including dynamical and statistical downscaling, have been developed to meet this requirement. As the resolution of climate models increases, it...

  9. Transverse Tensile Properties of 3 Dimension-4 Directional Braided Cf/SiC Composite Based on Double-Scale Model

    NASA Astrophysics Data System (ADS)

    Niu, Xuming; Sun, Zhigang; Song, Yingdong

    2017-11-01

    In this paper, a double-scale model for 3-dimension 4-directional (3D-4d) braided C/SiC composites (CMCs) is proposed to investigate their mechanical properties. The double-scale model involves the micro scale, which takes the fiber/matrix/porosity within fiber tows into consideration, and the unit-cell scale, which considers the 3D-4d braiding structure. Based on micro-optical photographs of the composite, we can build a parameterized finite element model that reflects the structure of 3D-4d braided composites. The mechanical properties of fiber tows in the transverse direction are studied by combining the crack band theory for matrix cracking with a cohesive zone model for interface debonding. The transverse tensile process of 3D-4d CMCs can be simulated by introducing the mechanical properties of the fiber tows into the finite element model of the 3D-4d braided CMCs. Quasi-static tensile tests of 3D-4d braided CMCs were performed with a PWS-100 test system. The tensile stress-strain curve predicted by the double-scale model shows good agreement with the experimental results.

  10. [Factor structure of the German version of the BIS/BAS Scales in a population-based sample].

    PubMed

    Müller, A; Smits, D; Claes, L; de Zwaan, M

    2013-02-01

    The Behavioural Inhibition System/Behavioural Activation System Scale (BIS/BAS Scales) developed by Carver and White [1] is a self-rating instrument for assessing the dispositional sensitivity to punishment and reward. The present work examines the factor structure of the German version of the BIS/BAS Scales. In a large German population-based sample (n = 1881), the fit of several factor models was tested using confirmatory factor analyses. The best model fit was found for the 5-factor model with two BIS (anxiety, fear) and three BAS (drive, reward responsiveness, fun seeking) scales, although the BIS-fear, BAS-reward responsiveness, and BAS-fun seeking subscales showed low internal consistency. The BIS/BAS scales were negatively correlated with age, and women reported higher BIS subscale scores than men. The confirmatory factor analyses thus suggest a 5-factor model; however, due to the low internal reliability of some of the subscales, the use of this model is questionable. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory, as well as an inventory created by a ground-level NO2 based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates that the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
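
    The discrete Kalman filter inversion step can be sketched as a single linear update of region-wise emission scaling factors; here H plays the role of the DDM-derived sensitivities, and all numbers are hypothetical:

      import numpy as np

      # state: scaling factors for 3 emission regions (a priori = 1)
      x = np.ones(3)
      P = np.eye(3) * 0.25            # prior error covariance
      H = np.array([[0.8, 0.1, 0.0],  # d(NO2 column)/d(scaling), from sensitivity analysis
                    [0.1, 0.9, 0.1],
                    [0.0, 0.2, 0.7]])
      R = np.eye(3) * 0.05            # observation error covariance
      y_obs = np.array([1.3, 1.1, 0.9])   # observed columns (normalized)
      y_mod = H @ x                       # modeled columns for current factors

      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
      x = x + K @ (y_obs - y_mod)                    # updated scaling factors
      P = (np.eye(3) - K @ H) @ P                    # updated covariance
      print("posterior scaling factors:", np.round(x, 2))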

  12. Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup

    NASA Astrophysics Data System (ADS)

    Pathak, Rohit; Joshi, Satyadhar

    In this paper we address some of the complex modeling aspects of Nano RFID (Radio Frequency Identification) systems, such as multi-scale modeling and MATLAB SUGAR based modeling, and show the complexities involved in their analysis. We present modeling and simulation results and demonstrate some novel ideas and library development for Nano RFID. Multi-scale modeling plays a very important role for nanotech-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging remain among the major hindrances to the practical implementation of Nano RFID based devices, and modeling and simulation will play a very important role in addressing them. CNTs are a future low-power material that may replace CMOS, and their integration with CMOS and MEMS circuitry will be important for realizing the true power of Nano RFID systems. RFID based on innovations in nanotechnology is presented, including MEMS modeling of the antenna and sensors and their integration into the circuitry. Incorporating these elements, a Nano-RFID can be designed for areas such as human implantation and complex banking applications. We propose modeling RFID with the concept of multi-scale modeling to accurately predict its properties, and we also model recently proposed MEMS devices with possible applications in RFID. We further cover the applications and advantages of Nano RFID in various areas. RF MEMS has matured and its devices are being successfully commercialized, but pushing it to the limits of the nano domain and integrating it with single-chip RFID requires the novel approach proposed here. We model a MEMS based transponder and show the distribution for multi-scale modeling of Nano RFID.

  13. Geological modeling of submeter scale heterogeneity and its influence on tracer transport in a fluvial aquifer

    NASA Astrophysics Data System (ADS)

    Ronayne, Michael J.; Gorelick, Steven M.; Zheng, Chunmiao

    2010-10-01

    We developed a new model of aquifer heterogeneity to analyze data from a single-well injection-withdrawal tracer test conducted at the Macrodispersion Experiment (MADE) site on the Columbus Air Force Base in Mississippi (USA). The physical heterogeneity model is a hybrid that combines 3-D lithofacies to represent submeter scale, highly connected channels within a background matrix based on a correlated multivariate Gaussian hydraulic conductivity field. The modeled aquifer architecture is informed by a variety of field data, including geologic core sampling. Geostatistical properties of this hybrid heterogeneity model are consistent with the statistics of the hydraulic conductivity data set based on extensive borehole flowmeter testing at the MADE site. The representation of detailed, small-scale geologic heterogeneity allows for explicit simulation of local preferential flow and slow advection, processes that explain the complex tracer response from the injection-withdrawal test. Based on the new heterogeneity model, advective-dispersive transport reproduces key characteristics of the observed tracer recovery curve, including a delayed concentration peak and a low-concentration tail. Importantly, our results suggest that intrafacies heterogeneity is responsible for local-scale mass transfer.

  14. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    NASA Astrophysics Data System (ADS)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaveric specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. Therefore, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model, coupled with motion capture data, joint kinematics information, and muscle-tendon actuators, is finally used to create a subject-specific musculoskeletal model.

  15. A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model

    PubMed Central

    Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin

    2009-01-01

    A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels is presented in this paper. Such systems can be regarded as combinations of a fast model and a slow model, whose response speeds lie on two time scales. Because most practical models used for control are obtained in the form of transfer function matrices from plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method is effective. PMID:19834542
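
    The fast/slow separation underlying such a controller can be illustrated by a partial-fraction split of a two-pole transfer function into its slow and fast first-order modes (a textbook-style sketch, not the paper's singular perturbation algorithm):

      import numpy as np
      from scipy.signal import residue

      # G(s) = 1 / ((10 s + 1)(0.1 s + 1)): one slow and one fast mode
      num = [1.0]
      den = np.polymul([10.0, 1.0], [0.1, 1.0])
      r, p, _ = residue(num, den)          # partial-fraction expansion

      for ri, pi in zip(r, p):
          tau = -1.0 / pi.real             # time constant of each mode
          label = "slow" if tau > 1.0 else "fast"
          print(f"{label} mode: residue {ri.real:.3f}, time constant {tau:.2f} s")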

  16. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Treesearch

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  17. Towards Characterization, Modeling, and Uncertainty Quantification in Multi-scale Mechanics of Organic-rich Shales

    NASA Astrophysics Data System (ADS)

    Abedi, S.; Mashhadian, M.; Noshadravan, A.

    2015-12-01

    Increasing the efficiency and sustainability of hydrocarbon recovery from organic-rich shales requires a fundamental understanding of the chemomechanical properties of these materials. This understanding takes the form of physics-based predictive models capable of capturing the highly heterogeneous, multi-scale structure of organic-rich shales. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Experiments such as coupled grid nano-indentation and energy-dispersive X-ray spectroscopy, together with micromechanical modeling attributing the role of organic maturity to the texture of the material, allow us to identify clay mechanical properties that are consistent across samples and independent of the maturity of the shale formation and the total organic content. The results then inform a physically based multiscale model for organic-rich shales consisting of three levels, spanning from the elementary building blocks (e.g. clay minerals in clay-dominated formations) to the macroscopic inorganic/organic hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. We therefore take the model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties, and mineralogy mass percent. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of the model inputs based on the available information. Monte Carlo simulation is then carried out to propagate the uncertainty and construct probabilistic descriptions of properties at multiple length-scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of predictions of essential subsurface parameters at the engineering scale.
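
    The Monte Carlo propagation step described above can be sketched generically: sample the uncertain inputs, push each sample through the property model, and summarize the output distribution. The mixing rule and the input distributions below are placeholders, not the paper's micromechanics:

      import numpy as np

      def effective_modulus(porosity, clay_modulus, toc_fraction):
          # placeholder mixing rule standing in for the multiscale model
          return clay_modulus * (1 - porosity) * (1 - 0.5 * toc_fraction)

      rng = np.random.default_rng(42)
      n = 100_000
      porosity = rng.uniform(0.05, 0.25, n)        # uncertain inputs
      clay_E = rng.normal(30.0, 3.0, n)            # GPa
      toc = rng.uniform(0.02, 0.12, n)

      E = effective_modulus(porosity, clay_E, toc) # propagate through the model
      print(f"E = {E.mean():.1f} GPa +/- {E.std():.1f} (5-95%: "
            f"{np.percentile(E, 5):.1f}-{np.percentile(E, 95):.1f})")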

  18. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial coverage of the study area, and lower sampling resolution at small scales, with the resulting local data uncertainties, but better spatial coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence in a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38, correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described by a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, the assessment of model results, and ecological inferences.
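
    The saturation behaviour reported above can be reproduced with a toy virtual-sampling experiment (our illustration, not the study's data): estimate each field mean from n noisy samples and correlate the estimates with the true means.

      import numpy as np

      rng = np.random.default_rng(7)
      true_mean = rng.lognormal(1.0, 0.8, size=60)   # 'true' abundance per field
      small_scale_sd = 2.0                           # strong within-field variability

      for n in (1, 2, 5, 10, 50):
          est = np.array([rng.normal(m, small_scale_sd, n).mean() for m in true_mean])
          r = np.corrcoef(true_mean, est)[0, 1]
          print(f"n = {n:2d} samples/field -> correlation with truth r = {r:.2f}")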

  19. Fine-Scale Exposure to Allergenic Pollen in the Urban Environment: Evaluation of Land Use Regression Approach.

    PubMed

    Hjort, Jan; Hugg, Timo T; Antikainen, Harri; Rusanen, Jarmo; Sofiev, Mikhail; Kukkonen, Jaakko; Jaakkola, Maritta S; Jaakkola, Jouni J K

    2016-05-01

    Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at urban scale. We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was generalized linear model (GLM). GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to estimate retrospectively pollen concentrations to be used for long-term exposure assessments. Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761.
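
    In its simplest form, a LUR model is a regression of (log) concentrations on geospatial predictors; a hedged sketch with synthetic data, using ordinary least squares on the log scale as a stand-in for the paper's GLM:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 16                                  # monitoring sites
      ndvi = rng.uniform(0.2, 0.8, n)         # remote-sensing vegetation index
      grass = rng.uniform(0.0, 1.0, n)        # fraction of grass cover nearby
      pollen = np.exp(1.0 + 2.5 * ndvi + rng.normal(0, 0.3, n))  # synthetic response

      X = sm.add_constant(np.column_stack([ndvi, grass]))
      fit = sm.OLS(np.log(pollen), X).fit()
      print(fit.params, fit.rsquared)         # NDVI should dominate in this toy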

  20. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  1. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud resolving models (CRMs), which are based the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional scale model can be run in grid size similar to cloud resolving model through nesting technique. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a szrper-parameterization or multi-scale modeling -framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign can provide initial conditions as well as validation through utilizing the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists a coupled GCM-CRM (or MMF); a state-of-the-art weather research forecast model (WRF) and a cloud-resolving model (Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.

  2. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions, so the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
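
    The least-squares objective can be made concrete with a greedy toy halftoner that toggles pixels whenever doing so reduces the squared error between the blurred binary and blurred gray images; the Gaussian blur is a crude stand-in for the printer and eye models, and this is not the authors' Viterbi or iterative scheme:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def halftone_ls(gray, sigma=1.0, sweeps=5):
          """Greedy least-squares halftoning: toggle pixels while the error
          between blurred binary and blurred gray images decreases."""
          b = (gray > 0.5).astype(float)            # initial threshold halftone
          target = gaussian_filter(gray, sigma)
          for _ in range(sweeps):
              changed = False
              err = np.sum((gaussian_filter(b, sigma) - target) ** 2)
              for i in range(b.shape[0]):
                  for j in range(b.shape[1]):
                      b[i, j] = 1 - b[i, j]         # trial toggle
                      e = np.sum((gaussian_filter(b, sigma) - target) ** 2)
                      if e < err:
                          err, changed = e, True    # keep the toggle
                      else:
                          b[i, j] = 1 - b[i, j]     # revert
              if not changed:
                  break
          return b

      gray = np.tile(np.linspace(0, 1, 16), (16, 1))   # small test ramp
      print(halftone_ls(gray).mean())                  # average ink coverage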

  3. Scales and erosion

    USDA-ARS?s Scientific Manuscript database

    There is a need to develop scale explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the process of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...

  4. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update cost too much memory and too many computational resources. In this study, we propose a new efficient scheme for its storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can choose how much data to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated exactly. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. This new scheme is first tested on 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
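
    A hedged sketch of the storage idea: keep only covariance entries within a few correlation scales, using an exponential covariance model and SciPy's CSC format. The cutoff and the parameter names are our choices, not the authors' code:

      import numpy as np
      from scipy.sparse import csc_matrix

      def sparse_exp_covariance(x, sill=1.0, corr_len=10.0, cutoff=3.0):
          """Exponential covariance stored sparsely: entries beyond
          cutoff * corr_len are dropped as uninformative."""
          rows, cols, vals = [], [], []
          for i, xi in enumerate(x):
              for j, xj in enumerate(x):
                  h = abs(xi - xj)
                  if h <= cutoff * corr_len:
                      rows.append(i)
                      cols.append(j)
                      vals.append(sill * np.exp(-h / corr_len))
          n = len(x)
          return csc_matrix((vals, (rows, cols)), shape=(n, n))

      x = np.arange(0, 500, 1.0)               # 1D grid coordinates
      C = sparse_exp_covariance(x)
      print(f"stored {C.nnz} of {C.shape[0] ** 2} entries")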

  5. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  6. Multiscale turbulence models based on convected fluid microstructure

    NASA Astrophysics Data System (ADS)

    Holm, Darryl D.; Tronci, Cesare

    2012-11-01

    The Euler-Poincaré approach to complex fluids is used to derive multiscale equations for computationally modeling Euler flows as a basis for modeling turbulence. The model is based on a kinematic sweeping ansatz (KSA) which assumes that the mean fluid flow serves as a Lagrangian frame of motion for the fluctuation dynamics. Thus, we regard the motion of a fluid parcel on the computationally resolvable length scales as a moving Lagrange coordinate for the fluctuating (zero-mean) motion of fluid parcels at the unresolved scales. Even in the simplest two-scale version on which we concentrate here, the contributions of the fluctuating motion under the KSA to the mean motion yields a system of equations that extends known results and appears to be suitable for modeling nonlinear backscatter (energy transfer from smaller to larger scales) in turbulence using multiscale methods.

  7. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    PubMed

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
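
    The power iteration referred to above is a standard routine; a generic sketch for the maximal eigenvalue of a symmetric positive-definite matrix (our illustration, not the authors' reconstruction code):

      import numpy as np

      def max_eigenvalue(A, iters=200, tol=1e-10):
          """Estimate the largest eigenvalue of a symmetric positive-definite
          matrix by power iteration."""
          v = np.random.default_rng(0).normal(size=A.shape[0])
          lam = 0.0
          for _ in range(iters):
              w = A @ v
              lam_new = np.linalg.norm(w)
              v = w / lam_new
              if abs(lam_new - lam) < tol * lam_new:
                  break
              lam = lam_new
          return lam_new

      B = np.random.default_rng(1).normal(size=(50, 50))
      A = B @ B.T                    # symmetric positive semi-definite test matrix
      print(max_eigenvalue(A), np.linalg.eigvalsh(A).max())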

  8. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.

  9. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
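
    The GLA can be sketched with toy crude/refined models: calibrate the scaling factor and its derivative at one point, then let the factor vary linearly with the design variable (our illustration of the idea described above; the model functions are placeholders):

      import numpy as np

      def crude(x):      # cheap model
          return 1.0 + x

      def refined(x):    # expensive model, evaluated only near x0 for calibration
          return 1.0 + x + 0.2 * x ** 2

      x0 = 1.0
      beta0 = refined(x0) / crude(x0)      # conventional constant scaling factor
      h = 1e-6                             # derivative of the ratio by finite differences
      dbeta = (refined(x0 + h) / crude(x0 + h) - beta0) / h

      def gla(x):        # linearly varying scaling factor applied to the crude model
          return (beta0 + dbeta * (x - x0)) * crude(x)

      for x in (1.0, 1.5, 2.0):            # GLA tracks the refined model farther from x0
          print(x, refined(x), beta0 * crude(x), gla(x))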

  10. Low-Lift Drag of the Grumman F9F-9 Airplane as Obtained by a 1/7.5-Scale Rocket-Boosted Model and by Three 1/45.85-Scale Equivalent-Body Models between Mach Numbers of 0.8 and 1.3, TED No. NACA DE 391

    NASA Technical Reports Server (NTRS)

    Stevens, Joseph E.

    1955-01-01

    Low-lift drag data are presented herein for one 1/7.5-scale rocket-boosted model and three 1/45.85-scale equivalent-body models of the Grumman F9F-9 airplane, The data were obtained over a Reynolds number range of about 5 x 10(exp 6) to 10 x 10(exp 6) based on wing mean aerodynamic chord for the rocket model and total body length for the equivalent-body models. The rocket-boosted model showed a drag rise of about 0,037 (based on included wing area) between the subsonic level and the peak supersonic drag coefficient at the maximum Mach number of this test. The base drag coefficient measured on this model varied from a value of -0,0015 in the subsonic range to a maximum of about 0.0020 at a Mach number of 1.28, Drag coefficients for the equivalent-body models varied from about 0.125 (based on body maximum area) in the subsonic range to about 0.300 at a Mach number of 1.25. Increasing the total fineness ratio by a small amount raised the drag-rise Mach number slightly.

  11. Application of Hierarchy Theory to Cross-Scale Hydrologic Modeling of Nutrient Loads

    EPA Science Inventory

    We describe a model called Regional Hydrologic Modeling for Environmental Evaluation (RHyME2) for quantifying annual nutrient loads in stream networks and watersheds. RHyME2 is a cross-scale statistical and process-based water-quality model. The model ...

  12. Combining the Old and the New: Designing a Curriculum Based on the Taba Model and the Global Scale of English

    ERIC Educational Resources Information Center

    Aydin, Belgin; Unver, Meral Melek; Alan, Bülent; Saglam, Sercan

    2017-01-01

    This paper explains the process of designing a curriculum based on the Taba Model and the Global Scale of English (GSE) in an intensive language education program. The Taba Model emphasizing the involvement of the teachers and the learners in the curriculum development process was combined with the GSE, a psychometric tool measuring language…

  13. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack-growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO 2 and comparing the predictions with experiments.

  14. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack-growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO 2 and comparing the predictions with experiments.

  15. Explanatory Power of Multi-scale Physical Descriptors in Modeling Benthic Indices Across Nested Ecoregions of the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.

    2005-05-01

    Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.

  16. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-dependent features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters between different sensors, such as geometric correction, spectral correction, etc. Utilizing a single-sensor image, a fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme is proposed to model the continuous scaling of retrievals. The experimental results indicate that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for the validation of NDVI. These results show that fractal theory is an effective methodology for studying the scaling of quantitative remote sensing.
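
    The continuous-scaling analysis can be mimicked by aggregating a fine-scale field to a series of coarser resolutions and fitting a power law between scale and a nonlinear retrieval. This uses toy data, not the ETM+ analysis, and the fitted slope is only an effective exponent:

      import numpy as np

      rng = np.random.default_rng(5)
      ndvi = rng.uniform(0.1, 0.9, size=(512, 512))   # toy fine-scale NDVI field

      scales, means = [], []
      for k in (1, 2, 4, 8, 16, 32):
          n = 512 // k
          coarse = ndvi[:n * k, :n * k].reshape(n, k, n, k).mean(axis=(1, 3))
          retrieval = (coarse ** 2).mean()  # nonlinear retrieval computed AFTER aggregation
          scales.append(k)
          means.append(retrieval)

      slope, intercept = np.polyfit(np.log(scales), np.log(means), 1)
      print("effective scaling exponent:", slope)     # nonzero slope = scale effect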

  17. Modeling dynamics of western juniper under climate change in a semiarid ecosystem

    NASA Astrophysics Data System (ADS)

    Shrestha, R.; Glenn, N. F.; Flores, A. N.

    2013-12-01

    Modeling future vegetation dynamics in response to climate change and disturbances such as fire relies heavily on model parameterization. Fine-scale field-based measurements can provide the necessary parameters for constraining models at a larger scale. But the time- and labor-intensive nature of field-based data collection leads to sparse sampling and significant spatial uncertainties in retrieved parameters. In this study we quantify the fine-scale carbon dynamics and uncertainty of juniper woodland in the Reynolds Creek Experimental Watershed (RCEW) in southern Idaho, which is a proposed critical zone observatory (CZO) site for soil carbon processes. We leverage field-measured vegetation data along with airborne lidar and timeseries Landsat imagery to initialize a state-and-transition model (VDDT) and a process-based fire-model (FlamMap) to examine the vegetation dynamics in response to stochastic fire events and climate change. We utilize recently developed and novel techniques to measure biomass and canopy characteristics of western juniper at the individual tree scale using terrestrial and airborne laser scanning techniques in RCEW. These fine-scale data are upscaled across the watershed for the VDDT and FlamMap models. The results will immediately improve our understanding of fine-scale dynamics and carbon stocks and fluxes of woody vegetation in a semi-arid ecosystem. Moreover, quantification of uncertainty will also provide a basis for generating ensembles of spatially-explicit alternative scenarios to guide future land management decisions in the region.

  18. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model, using perturbation magnitudes adjusted to conform to the constraints imposed by the perfect gas law and the hydrostatic condition, is described. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and by turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation for use at low latitudes, which does not "blow up" there as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.

  19. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    NASA Astrophysics Data System (ADS)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
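
    For reference, the Fickian benchmark in such comparisons is the classical 1-D advection-dispersion solution for an instantaneous pulse; a sketch of the textbook formula (not the paper's new advective model):

      import numpy as np

      def ade_breakthrough(t, x, v, D, m=1.0):
          """Resident concentration at distance x for an instantaneous pulse
          of mass m in 1-D advection-dispersion (textbook Fickian solution)."""
          return m / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

      t = np.linspace(0.1, 50, 200)            # times
      c = ade_breakthrough(t, x=30.0, v=1.0, D=2.0)
      print("peak arrival near t =", t[np.argmax(c)])   # close to x / v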

  20. Transregional Collaborative Research Centre 32: Patterns in Soil-Vegetation-Atmosphere-Systems

    NASA Astrophysics Data System (ADS)

    Masbou, M.; Simmer, C.; Kollet, S.; Boessenkool, K.; Crewell, S.; Diekkrüger, B.; Huber, K.; Klitzsch, N.; Koyama, C.; Vereecken, H.

    2012-04-01

    The soil-vegetation-atmosphere system is characterized by non-linear exchanges of mass, momentum, and energy, with complex patterns, structures, and processes that act at different temporal and spatial scales. Within the TR32 framework, the characterisation of these structures and patterns will lead to a deeper qualitative and quantitative understanding of the SVA system, and ultimately to better predictions of the SVA state. Research in TR32 rests on three methodological pillars: monitoring, modelling, and data assimilation. Focusing on the Rur catchment (Germany), patterns have been monitored continuously since 2006 using existing and novel geophysical and remote sensing techniques from the local to the catchment scale, based on ground-penetrating radar methods, induced polarization, radiomagnetotellurics, electrical resistivity tomography, boundary-layer scintillometry, lidar techniques, cosmic-ray sensing, microwave radiometry, and precipitation radars with polarization diversity. Modelling approaches involve the development of a scale-consistent coupled model platform: high-resolution numerical weather prediction (NWP; 400 m) and hydrological models (a few metres). In the second phase (2011-2014), the focus is on the integration of models from the groundwater to the atmosphere for both the m- and km-scale, and on the extension of the experimental monitoring with respect to vegetation. The coupled modelling platform is based on the atmospheric model COSMO, the land surface model CLM, and the hydrological model ParFlow. A scale-consistent two-way coupling is performed using the external OASIS coupler. Example work includes the transfer of laboratory methods to the field; measurements of patterns of soil carbon, evapotranspiration, and respiration in the field; catchment-scale modelling of exchange processes; and the setup of an atmospheric boundary-layer monitoring network. These modern and predominantly non-invasive measurement techniques are exploited, in combination with advanced modelling systems, by data assimilation to yield improved numerical models for the prediction of water, energy, and CO2 transfer, accounting for the patterns occurring at various scales.

  1. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other approaches, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in computation quality enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
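
    The speed-up evaluation described above rests on Amdahl's law; a minimal sketch, assuming a hypothetical parallel fraction p for the fine-scale computations:

    ```python
    # Amdahl's-law estimate of the speed-up when only a fraction p of the
    # runtime (here: the fine-scale sub-models) parallelizes over n workers.
    def amdahl_speedup(p: float, n: int) -> float:
        """Ideal speed-up for parallel fraction p on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    # e.g. fine-scale sub-models take 90% of the coupled model's runtime
    for n in (2, 4, 8, 16, 32):
        print(n, round(amdahl_speedup(0.9, n), 2))
    ```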

  2. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    ERIC Educational Resources Information Center

    Supasorn, Saksri

    2015-01-01

This study aimed to develop small-scale experiments involving electrochemistry and a galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, E. P.; Rosner, R., E-mail: eph2001@columbia.edu

In this paper, we provide support for the Rayleigh-Taylor (RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernova explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT-unstable model flame to distinguish between the two main types of subgrid models (RT- or turbulence-dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model. Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.

  4. RADSOURCE. Volume 1, Part 1, A scaling factor prediction computer program technical manual and code validation: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vance, J.N.; Holderness, J.H.; James, D.W.

    1992-12-01

Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10CFR61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
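
    For illustration, a sketch of how a waste-stream scaling factor of this kind is typically applied once derived (the RADSOURCE internals are not reproduced here); the Co-60 activity and the Ni-63/Co-60 factor below are hypothetical:

    ```python
    # A hard-to-measure nuclide is estimated from an easily measured
    # gamma-emitting key nuclide (e.g. Co-60) via a precomputed ratio.
    def scale_activity(key_activity_bq: float, scaling_factor: float) -> float:
        """Estimate a hard-to-measure nuclide's activity from a key nuclide."""
        return scaling_factor * key_activity_bq

    co60_measured = 3.7e6    # Bq, from gamma spectroscopy (hypothetical value)
    sf_ni63_to_co60 = 1.2    # hypothetical Ni-63/Co-60 scaling factor
    print(scale_activity(co60_measured, sf_ni63_to_co60))  # inferred Ni-63 activity
    ```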

  5. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how the choice of inlet location and scaling law affects such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL min⁻¹ was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL min⁻¹. A square law was found to be more consistent with physiological flow rates than a cube law. Although the impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
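
    A sketch of the two scaling laws compared in the study; the 245 mL min⁻¹ reference flow is taken from the abstract, while the reference diameter and the per-case diameter below are hypothetical assumptions:

    ```python
    # Rescale a literature mean ICA flow rate by inlet diameter squared
    # (fixed mean velocity) or cubed (fixed wall shear stress).
    def scaled_inflow(d_case_mm: float, d_ref_mm: float = 4.0,
                      q_ref_ml_min: float = 245.0, exponent: int = 2) -> float:
        """Rescale the reference flow rate by (d/d_ref)**exponent."""
        return q_ref_ml_min * (d_case_mm / d_ref_mm) ** exponent

    d = 4.6  # measured inlet diameter of one case [mm] (hypothetical)
    print(scaled_inflow(d, exponent=2))  # square law: fixed velocity
    print(scaled_inflow(d, exponent=3))  # cube law: fixed wall shear stress
    ```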

  6. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    PubMed

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  7. ScaleNet: A literature-based model of scale insect biology and systematics

    USDA-ARS?s Scientific Manuscript database

Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis, and plant-insect i...

  8. Multiple-length-scale deformation analysis in a thermoplastic polyurethane

    PubMed Central

    Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.

    2015-01-01

    Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945

  9. Evaluation of strategies for nature-based solutions to drought: a decision support model at the national scale

    NASA Astrophysics Data System (ADS)

    Simpson, Mike; Ives, Matthew; Hall, Jim

    2016-04-01

    There is an increasing body of evidence in support of the use of nature based solutions as a strategy to mitigate drought. Restored or constructed wetlands, grasslands and in some cases forests have been used with success in numerous case studies. Such solutions remain underused in the UK, where they are not considered as part of long-term plans for supply by water companies. An important step is the translation of knowledge on the benefits of nature based solutions at the upland/catchment scale into a model of the impact of these solutions on national water resource planning in terms of financial costs, carbon benefits and robustness to drought. Our project, 'A National Scale Model of Green Infrastructure for Water Resources', addresses this issue through development of a model that can show the costs and benefits associated with a broad roll-out of nature based solutions for water supply. We have developed generalised models of both the hydrological effects of various classes and implementations of nature-based approaches and their economic impacts in terms of construction costs, running costs, time to maturity, land use and carbon benefits. Our next step will be to compare this work with our recent evaluation of conventional water infrastructure, allowing a case to be made in financial terms and in terms of security of water supply. By demonstrating the benefits of nature based solutions under multiple possible climate and population scenarios we aim to demonstrate the potential value of using nature based solutions as a component of future long-term water resource plans. Strategies for decision making regarding the selection of nature based and conventional approaches, developed through discussion with government and industry, will be applied to the final model. Our focus is on keeping our work relevant to the requirements of decision-makers involved in conventional water planning. We propose to present the outcomes of our model for the evaluation of nature-based solutions at catchment scale and ongoing results of our national-scale model.

  10. Improvement of distributed snowmelt energy balance modeling with MODIS-based NDSI-derived fractional snow-covered area data

    Treesearch

    Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn

    2011-01-01

Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snow-covered area (SCA) against normalized decreases in snow water equivalent (SWE), are a common approach to scale up snowmelt models....

  11. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Furthermore, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
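
    A simplified sketch in the spirit of the graph-Laplacian route to coarse-graining described above (true diffusion wavelets involve considerably more machinery): for a linear homopolymer, retain the smoothest Laplacian eigenmodes as coarse-grained degrees of freedom. The chain length and mode count are arbitrary choices:

    ```python
    # Spectral coarse-graining of a linear bead chain via its graph Laplacian.
    import numpy as np

    n_beads = 16                        # linear chain of bonded beads
    L = np.zeros((n_beads, n_beads))
    for i in range(n_beads - 1):        # assemble Laplacian of the bond network
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1

    evals, evecs = np.linalg.eigh(L)    # modes ordered from smooth to rough
    n_coarse = 4                        # retain the 4 smoothest modes
    basis = evecs[:, :n_coarse]         # coarse coords = basis.T @ positions

    x = np.random.default_rng(0).normal(size=(n_beads, 3))  # bead positions
    x_coarse = basis.T @ x              # 4 coarse-grained pseudo-particles
    print(x_coarse.shape)               # -> (4, 3)
    ```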

  12. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  13. A microstructural model of motion of macro-twin interfaces in Ni-Mn-Ga 10 M martensite

    NASA Astrophysics Data System (ADS)

    Seiner, Hanuš; Straka, Ladislav; Heczko, Oleg

    2014-03-01

We present a continuum-based model of microstructures forming at the macro-twin interfaces in thermoelastic martensites and apply this model to highly mobile interfaces in 10 M modulated Ni-Mn-Ga martensite. The model is applied at three distinct spatial scales observed in the experiment: meso-scale (modulation twinning), micro-scale (compound a-b lamination), and nano-scale (nanotwinning in the concept of adaptive martensite). We show that the two mobile interfaces (Type I and Type II macro-twins) have different micromorphologies at all considered spatial scales, which can directly explain their different twinning stresses observed in experiments. The results of the model are discussed with respect to various experimental observations at all three considered spatial scales.

  14. Development and Validation of Triarchic Construct Scales from the Psychopathic Personality Inventory

    PubMed Central

    Hall, Jason R.; Drislane, Laura E.; Patrick, Christopher J.; Morano, Mario; Lilienfeld, Scott O.; Poythress, Norman G.

    2014-01-01

    The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy—the Psychopathic Personality Inventory (PPI). A consensus rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and non-offender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy. PMID:24447280

  15. Development and validation of Triarchic construct scales from the psychopathic personality inventory.

    PubMed

    Hall, Jason R; Drislane, Laura E; Patrick, Christopher J; Morano, Mario; Lilienfeld, Scott O; Poythress, Norman G

    2014-06-01

    The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy-the Psychopathic Personality Inventory (PPI). A consensus-rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and nonoffender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy.

  16. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. On the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. A validation suite of model simulations compared to recent high-fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  17. Sharply curved turn around duct flow predictions using spectral partitioning of the turbulent kinetic energy and a pressure modified wall law

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1986-01-01

    Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.

  18. Research Advances on Radiation Transfer Modeling and Inversion for Multi-Scale Land Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Liu, Q.

    2011-09-01

First, research advances on radiation transfer modeling of multi-scale remote sensing data are presented: after a general overview of remote sensing radiation transfer modeling, several recent research advances are described, including a leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, albedo, LAI, soil moisture and ET at 1 km² for remote sensing product validation.

  19. A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Diebler, Corey; Smith, Mark

    2002-01-01

A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers up to a maximum of 3 × 10⁷, and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.

  20. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    NASA Astrophysics Data System (ADS)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

Digital Elevation Models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known surveying methods to acquire topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but also sometimes impossible for remote areas. This study aims to present the importance of accurate representation of topography for flood modelling. Flood modelling for Samsun-Terme in the Black Sea region of Turkey is performed. One DEM is obtained from point observations retrieved from 1/5000-scale orthophotos and 1/1000-scale point elevation data from field surveys at cross-sections. The river banks are corrected by using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM is obtained from bathymetric surveys. A total of 296,538 points and the left/right bank slopes were used to construct this DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared at 27 cross-sections. Between the two DEMs, the maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm. The channel conveyance capacity in the base DEM is larger than in the scaled DEM, and the floodplain is modelled in more detail in the base DEM. MIKE21 with a flexible grid is used for 2-dimensional shallow water flow modelling. The models based on the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. From comparison of the input hydrograph at the upstream end of the river and the output hydrograph at the downstream end, the attenuation is obtained as 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs does not differ between the two DEMs and is obtained as 3 hours. Maximum flood extents differ for the two DEMs; a larger flooded area is simulated from the scaled DEM. The main difference is observed for the braided and meandering parts of the river. For the meandering part, an additional 1.82 × 10⁶ m³ of water (5% of the total volume) is calculated as the flooded volume simulated using the scaled DEM. For the braided part, 0.187 × 10⁶ m³ more water is simulated as the flooded volume by the scaled DEM. The flood extent around the braided part of the river is 27.6 ha larger in the simulated flood map obtained from the scaled DEM compared to that from the base DEM; around the meandering part, the scaled DEM gives 59.8 ha more flooded area. The importance of correct topography of the braided and meandering parts of the river in flood modelling, and the uncertainty it brings to modelling, are discussed in detail.
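
    A sketch of how the attenuation figures quoted above can be computed: the percentage reduction of the flood peak from the upstream input hydrograph to the downstream output hydrograph. The hydrograph arrays are placeholders, not the Terme data:

    ```python
    # Peak attenuation between an input and an output hydrograph.
    import numpy as np

    def peak_attenuation(q_in: np.ndarray, q_out: np.ndarray) -> float:
        """Attenuation (%) of the peak discharge across the reach."""
        return 100.0 * (q_in.max() - q_out.max()) / q_in.max()

    q_upstream = np.array([5.0, 80.0, 240.0, 150.0, 60.0])   # m^3/s (made up)
    q_downstream = np.array([5.0, 30.0, 22.0, 18.0, 12.0])   # m^3/s (made up)
    print(f"attenuation = {peak_attenuation(q_upstream, q_downstream):.0f}%")
    ```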

  1. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

Large-scale, long-term and high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The time-consumption problem limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  2. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
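
    A minimal sketch of one of the quality-control tools named above: a Shewhart-style chart that flags administrations whose mean scale score falls outside three baseline standard deviations. The score series is simulated, including an injected abrupt shift; none of these numbers come from the paper:

    ```python
    # Flag administrations whose mean scale score drifts out of control limits.
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(500.0, 5.0, size=71)   # mean scale score per administration
    scores[60:] += 20.0                        # injected abrupt scale shift

    baseline_mean = scores[:30].mean()         # historical baseline window
    baseline_sd = scores[:30].std(ddof=1)

    for i, s in enumerate(scores):
        if abs(s - baseline_mean) > 3 * baseline_sd:
            print(f"administration {i}: score {s:.1f} outside control limits")
    ```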

  3. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. On the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. An HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and a comparison with experimental data.

  4. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses at macro- to micro-scales. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using object-oriented finite elements (OOF). The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, considering the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  5. Performance of Renormalization Group Algebraic Turbulence Model on Boundary Layer Transition Simulation

    NASA Technical Reports Server (NTRS)

    Ahn, Kyung H.

    1994-01-01

    The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.

  6. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

The aim of this paper is to use Agent-Based Models (ABMs) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedural implementations developed in Matlab to simulate agent-based models in a principal programming language with mathematical theory, executed on clusters; these clusters provide high-performance computation to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  7. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.

  8. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...

  9. The anamorphic universe

    NASA Astrophysics Data System (ADS)

    Ijjas, Anna; Steinhardt, Paul J.

    2015-10-01

We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  10. A review and empirical study of the composite scales of the Das-Naglieri cognitive assessment system.

    PubMed

    McCrea, Simon M

    2009-01-01

    Alexander Luria's model of the working brain consisting of three functional units was formulated through the examination of hundreds of focal brain-injury patients. Several psychometric instruments based on Luria's syndrome analysis and accompanying qualitative tasks have been developed since the 1970s. In the mid-1970s, JP Das and colleagues defined a specific cognitive processes model based directly on Luria's two coding units termed simultaneous and successive by studying diverse cross-cultural, ability, and socioeconomic strata. The cognitive assessment system is based on the PASS model of cognitive processes and consists of four composite scales of Planning-Attention-Simultaneous-Successive (PASS) devised by Naglieri and Das in 1997. Das and colleagues developed the two new scales of planning and attention to more closely model Luria's theory of higher cortical functions. In this paper a theoretical review of Luria's theory, Das and colleagues elaboration of Luria's model, and the neural correlates of PASS composite scales based on extant studies is summarized. A brief empirical study of the neuropsychological specificity of the PASS composite scales in a sample of 33 focal cortical stroke patients using cluster analysis is then discussed. Planning and simultaneous were sensitive to right hemisphere lesions. These findings were integrated with recent functional neuroimaging studies of PASS scales. In sum it was found that simultaneous is strongly dependent on dual bilateral occipitoparietal interhemispheric coordination whereas successive demonstrated left frontotemporal specificity with some evidence of interhemispheric coordination across the prefrontal cortex. Hence, support for the validity of the PASS composite scales was found as well as for the axiom of the independence of code content from code type originally specified in 1994 by Das, Naglieri, and Kirby.

  11. A review and empirical study of the composite scales of the Das–Naglieri cognitive assessment system

    PubMed Central

    McCrea, Simon M

    2009-01-01

    Alexander Luria’s model of the working brain consisting of three functional units was formulated through the examination of hundreds of focal brain-injury patients. Several psychometric instruments based on Luria’s syndrome analysis and accompanying qualitative tasks have been developed since the 1970s. In the mid-1970s, JP Das and colleagues defined a specific cognitive processes model based directly on Luria’s two coding units termed simultaneous and successive by studying diverse cross-cultural, ability, and socioeconomic strata. The cognitive assessment system is based on the PASS model of cognitive processes and consists of four composite scales of Planning–Attention–Simultaneous–Successive (PASS) devised by Naglieri and Das in 1997. Das and colleagues developed the two new scales of planning and attention to more closely model Luria’s theory of higher cortical functions. In this paper a theoretical review of Luria’s theory, Das and colleagues elaboration of Luria’s model, and the neural correlates of PASS composite scales based on extant studies is summarized. A brief empirical study of the neuropsychological specificity of the PASS composite scales in a sample of 33 focal cortical stroke patients using cluster analysis is then discussed. Planning and simultaneous were sensitive to right hemisphere lesions. These findings were integrated with recent functional neuroimaging studies of PASS scales. In sum it was found that simultaneous is strongly dependent on dual bilateral occipitoparietal interhemispheric coordination whereas successive demonstrated left frontotemporal specificity with some evidence of interhemispheric coordination across the prefrontal cortex. Hence, support for the validity of the PASS composite scales was found as well as for the axiom of the independence of code content from code type originally specified in 1994 by Das, Naglieri, and Kirby. PMID:22110322

  12. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

Hydrological science requires the emergence of a consistent theoretical corpus driving the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Understanding scaling is therefore a key issue in advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
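
    A sketch of the point-scale model used in the virtual experiment: the Green-Ampt infiltration rate f = K(1 + ψΔθ/F), integrated here with a crude explicit time step. All soil parameters are illustrative assumptions, not values from the study:

    ```python
    # Green-Ampt ponded infiltration, explicit time stepping (crude sketch).
    K = 1.0e-6     # saturated hydraulic conductivity [m/s] (assumed)
    psi = 0.11     # wetting-front suction head [m] (assumed)
    dtheta = 0.3   # soil moisture deficit [-] (assumed)

    F = 1.0e-4     # small initial cumulative infiltration [m]
    dt = 60.0      # time step [s]
    for step in range(60):                     # one hour of infiltration
        f = K * (1.0 + psi * dtheta / F)       # instantaneous infiltration rate
        F += f * dt
    print(f"cumulative infiltration after 1 h: {F * 1000:.2f} mm")
    ```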

  13. Research Advances on Radiation Transfer Modeling and Inversion for Multi-scale Land Surface Remote Sensing

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.

    2011-12-01

As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a challenging and significant issue, for which radiation transfer modeling and inversion methodologies are the theoretical bases. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent research advances on multi-scale remote sensing radiation transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, albedo, LAI, soil moisture and ET at 1 km² for remote sensing product validation.

  14. Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Marks, D. G.; Gurney, R. J.

    2009-12-01

    The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.

  15. Chemistry resolved kinetic flow modeling of TATB based explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark

    2012-03-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. We present here two variants of our new rate model and comparison with hot, ambient, and cold experimental data for PBX 9502.

  16. Ditching Investigations of Dynamic Models and Effects of Design Parameters on Ditching Characteristics

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J; Hoffman, Edward L

    1958-01-01

    Data from ditching investigations conducted at the Langley Aeronautical Laboratory with dynamic scale models of various airplanes are presented in the form of tables. The effects of design parameters on the ditching characteristics of airplanes, based on scale-model investigations and on reports of full-scale ditchings, are discussed. Various ditching aids are also discussed as a means of improving ditching behavior.

  17. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    PubMed

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.
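
    A minimal deterministic SIR sketch of the shared disease parameterization (R0 and infectious period) on which the two models are synchronized; the full agent-based and GLEaM models add space, mobility and demography on top of this core. All parameter values below are illustrative:

    ```python
    # Deterministic SIR with daily Euler steps, parameterized by R0.
    def sir(r0: float, infectious_days: float, days: int, n: float = 6.0e7):
        gamma = 1.0 / infectious_days      # recovery rate [1/day]
        beta = r0 * gamma                  # transmission rate [1/day]
        s, i, r = n - 100.0, 100.0, 0.0    # seed 100 infectious cases
        peak = 0.0
        for _ in range(days):
            new_inf = beta * s * i / n
            s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
            peak = max(peak, i)
        return peak, r / n                 # peak prevalence, final attack rate

    peak, attack_rate = sir(r0=1.5, infectious_days=3.0, days=365)
    print(f"peak prevalence: {peak:.0f}, final attack rate: {attack_rate:.2f}")
    ```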

  18. Simulation of nitrate reduction in groundwater - An upscaling approach from small catchments to the Baltic Sea basin

    NASA Astrophysics Data System (ADS)

    Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.

    2018-01-01

    This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.

  19. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major, fully coupled components: subgrid-scale turbulence, combustion, soot and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
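
    For reference, a sketch of how the Smagorinsky constant examined above enters the subgrid model: in the standard formulation the eddy viscosity is ν_t = (C_sΔ)²|S|. The strain-rate tensor and filter width below are illustrative assumptions:

    ```python
    # Standard Smagorinsky eddy viscosity from the resolved strain rate.
    import numpy as np

    def smagorinsky_nu_t(c_s: float, delta: float, strain: np.ndarray) -> float:
        """Eddy viscosity from filter width delta and strain-rate tensor S."""
        s_mag = np.sqrt(2.0 * np.sum(strain * strain))  # |S| = sqrt(2 S_ij S_ij)
        return (c_s * delta) ** 2 * s_mag

    S = np.array([[0.0, 5.0, 0.0],      # resolved strain-rate tensor [1/s]
                  [5.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    for c_s in (0.18, 0.20, 0.23):      # constants spanned in the study
        print(c_s, smagorinsky_nu_t(c_s, delta=0.1, strain=S))
    ```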

  20. Computational Modeling of Human Metabolism and Its Application to Systems Biomedicine.

    PubMed

    Aurich, Maike K; Thiele, Ines

    2016-01-01

Modern high-throughput techniques offer immense opportunities to investigate whole-systems behavior, such as that underlying human diseases. However, the complexity of the data presents challenges in interpretation, and new avenues are needed to address the complexity of both diseases and data. Constraint-based modeling is one formalism applied in systems biology. It relies on a genome-scale reconstruction that captures extensive biochemical knowledge regarding an organism. The human genome-scale metabolic reconstruction is increasingly used to understand normal cellular and disease states because metabolism is an important factor in many human diseases. The applications of the human genome-scale reconstruction range from mere querying of the model as a knowledge base to studies that take advantage of the model's topology and, most notably, to functional predictions based on cell- and condition-specific metabolic models built from omics data. An increasing number and diversity of biomedical questions are being addressed using constraint-based modeling and metabolic models. One of the most successful biomedical applications to date is cancer metabolism, but constraint-based modeling also holds great potential for inborn errors of metabolism or obesity. In addition, it offers great prospects for individualized approaches to diagnostics and the design of disease prevention and intervention strategies. Metabolic models support this endeavor by providing easy access to complex high-throughput datasets. Personalized metabolic models have been introduced. Finally, constraint-based modeling can be used to model whole-body metabolism, which will enable the elucidation of metabolic interactions between organs and disturbances of these interactions as either causes or consequences of metabolic diseases. This chapter introduces constraint-based modeling and describes some of its contributions to systems biomedicine.
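
    A toy sketch of the constraint-based (flux balance analysis) formalism described above: maximize a growth flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction network is invented for illustration; real genome-scale models have thousands of reactions:

    ```python
    # Flux balance analysis as a linear program on a toy network.
    import numpy as np
    from scipy.optimize import linprog

    # Columns: uptake, conversion, biomass. Rows: metabolites A and B.
    S = np.array([[ 1.0, -1.0,  0.0],
                  [ 0.0,  1.0, -1.0]])
    bounds = [(0, 10.0), (0, None), (0, None)]   # uptake capped at 10 units
    c = np.array([0.0, 0.0, -1.0])               # maximize biomass flux v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal biomass flux:", -res.fun)     # -> 10.0, limited by uptake
    ```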

  1. Assessing the multi-scale predictive ability of ecosystem functional attributes for species distribution modelling.

    PubMed

    Arenas-Castro, Salvador; Gonçalves, João; Alves, Paulo; Alcaraz-Segura, Domingo; Honrado, João P

    2018-01-01

    Global environmental changes are rapidly affecting species' distributions and habitat suitability worldwide, requiring a continuous update of biodiversity status to support effective decisions on conservation policy and management. In this regard, satellite-derived Ecosystem Functional Attributes (EFAs) offer a more integrative and quicker evaluation of ecosystem responses to environmental drivers and changes than climate and structural or compositional landscape attributes. Thus, EFAs may hold advantages as predictors in Species Distribution Models (SDMs) and for implementing multi-scale species monitoring programs. Here we describe a modelling framework to assess the predictive ability of EFAs as Essential Biodiversity Variables (EBVs) against traditional datasets (climate, land-cover) at several scales. We test the framework with a multi-scale assessment of habitat suitability for two plant species of conservation concern, both protected under the EU Habitats Directive, differing in terms of life history, range and distribution pattern (Iris boissieri and Taxus baccata). We fitted four sets of SDMs for the two test species, calibrated with: interpolated climate variables; landscape variables; EFAs; and a combination of climate and landscape variables. EFA-based models performed very well across the several scales (AUCmedian from 0.881±0.072 to 0.983±0.125), and similarly to traditional climate-based models, individually or in combination with land-cover predictors (AUCmedian from 0.882±0.059 to 0.995±0.083). Moreover, EFA-based models identified additional suitable areas and provided valuable information on functional features of habitat suitability for both test species (narrowly vs. widely distributed), at both coarse and fine scales. Our results suggest a relatively small scale-dependence of the predictive ability of satellite-derived EFAs, supporting their use as meaningful EBVs in SDMs from regional and broader scales to more local and finer scales. Since the evaluation of species' conservation status and habitat quality should, as far as possible, be based on scalable indicators linked to meaningful processes, our framework may guide conservation managers in decision-making related to biodiversity monitoring and reporting schemes.

  2. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
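
    The application programming interface mentioned above is a plain HTTP/JSON service. A hedged sketch of querying it is shown below; the endpoint paths and JSON field names reflect the publicly documented layout of the BiGG API but should be verified against http://bigg.ucsd.edu before relying on them.

```python
import requests

BASE = "http://bigg.ucsd.edu/api/v2"   # public BiGG Models API root (assumed stable)

# List the available genome-scale models
models = requests.get(f"{BASE}/models", timeout=30).json()
print(models.get("results_count"), "models available")

# Fetch a single reaction record from the E. coli core model
rxn = requests.get(f"{BASE}/models/e_coli_core/reactions/PFK", timeout=30).json()
print(rxn.get("name"))                  # e.g. phosphofructokinase
```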

  3. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  4. Guiding Learners into Reengagement through the SCALE Environment: An Empirical Study

    ERIC Educational Resources Information Center

    Verginis, Ilias; Gouli, Evangelia; Gogoulou, Agoritsa; Grigoriadou, Maria

    2011-01-01

    The paper presents the facilities offered by the open learner model maintained in the web-based, adaptive, activity-oriented learning environment SCALE (Supporting Collaboration and Adaptation in a Learning Environment), in order to guide online students who become disengaged and support their reengagement. The open learner model (OLM_SCALE)…

  5. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups corresponding, respectively, to Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
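
    The embedding step of both proposed methods is ordinary MDS applied to a precomputed dissimilarity matrix. A minimal sketch follows; the random symmetric matrix stands in for a cross-sample entropy dissimilarity matrix, and scikit-learn's MDS is a stand-in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.manifold import MDS

# D stands in for a dissimilarity matrix among 18 stock indices, e.g. built
# from Kronecker-delta or permutation cross-sample entropy (random here).
rng = np.random.default_rng(0)
A = rng.random((18, 18))
D = (A + A.T) / 2.0                 # symmetrize
np.fill_diagonal(D, 0.0)            # zero self-dissimilarity

# Embed into 3D for a perceptual map, using the precomputed dissimilarities
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)                 # (18, 3): one point per index
```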

  6. Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Matthaeus, William H.

    1990-01-01

    A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.

  7. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  8. Groundwater recharge from point to catchment scale

    NASA Astrophysics Data System (ADS)

    Leterme, Bertrand; Di Ciacca, Antoine; Laloy, Eric; Jacques, Diederik

    2016-04-01

    Accurate estimation of groundwater recharge is a challenging task, as only a few devices (if any) can measure it directly. In this study, we discuss how groundwater recharge can be calculated at different temporal and spatial scales in the Kleine Nete catchment (Belgium). A small monitoring network is being installed, aimed at monitoring the changes in dominant processes and addressing data availability as one goes from the point to the catchment scale. At the point scale, groundwater recharge is estimated using inversion of soil moisture and/or water potential data and stable isotope concentrations (Koeniger et al. 2015). At the plot scale, it is proposed to monitor the discharge of a small drainage ditch in order to calculate the field groundwater recharge. Electrical conductivity measurements are necessary to separate the shallow from the deeper groundwater contribution to the ditch discharge (see Di Ciacca et al. poster in session HS8.3.4). At this scale, two- or three-dimensional process-based vadose zone models will be used to model subsurface flow. At the catchment scale, though, using a mechanistic, process-based model to estimate groundwater recharge is debatable (because of, e.g., the presence of numerous drainage ditches, mixed land use pixels, etc.). We therefore investigate to what extent various types of surrogate models can be used to make the necessary upscaling from the plot scale to the scale of the whole Kleine Nete catchment. Ref. Koeniger P, Gaj M, Beyer M, Himmelsbach T (2015) Review on soil water isotope based groundwater recharge estimations. Hydrological Processes, DOI: 10.1002/hyp.10775

  9. Toward micro-scale spatial modeling of gentrification

    NASA Astrophysics Data System (ADS)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.

  10. Scaling Characteristics of Mesoscale Wind Fields in the Lower Atmospheric Boundary Layer: Implications for Wind Energy

    NASA Astrophysics Data System (ADS)

    Kiliyanpilakkil, Velayudhan Praju

    Atmospheric motions take place at spatial scales from sub-millimeters to a few thousand kilometers, with temporal changes in the atmospheric variables occurring over fractions of seconds to several years. Consequently, the variations in atmospheric kinetic energy associated with these motions span a broad spectrum of space and time. The mesoscale region acts as an energy-transferring regime between the energy-generating synoptic scale and the energy-dissipating microscale. Therefore, the scaling characterization of mesoscale wind fields is significant for accurate estimation of the atmospheric energy budget. Moreover, precise knowledge of the scaling characteristics of atmospheric mesoscale wind fields is important for the validation of numerical models that focus on wind forecasting, dispersion, diffusion, horizontal transport, and optical turbulence. For these reasons, extensive studies have been conducted in the past to characterize mesoscale wind fields. Nevertheless, the majority of these studies focused on near-surface and upper-atmosphere mesoscale regimes. The present study attempts to identify the existence and to quantify the scaling of mesoscale wind fields in the lower atmospheric boundary layer (ABL; in the wind turbine layer) using wind observations from various research-grade instruments (e.g., sodars, anemometers). The scaling characterization of mesoscale wind speeds over diverse homogeneous flat terrains, conducted using structure-function-based analysis, revealed an altitudinal dependence of the scaling exponents. This altitudinal dependence of the wind speed scaling may be attributed to buoyancy forcing. Subsequently, we use the framework of extended self-similarity (ESS) to characterize the observed scaling behavior. In the ESS framework, the relative scaling exponents of the mesoscale atmospheric boundary layer wind speed exhibit quasi-universal behavior, even far beyond the inertial range of turbulence (Delta t within the 10 minute to 6 hour range). The ESS-based study is extended further to examine its validity over complex terrain. This study, based on multiyear wind observations, demonstrates that ESS holds for the lower-ABL wind speed over complex terrain as well. Another important inference is that the ESS relative scaling exponents corresponding to the mesoscale wind speed closely match the scaling characteristics of inertial-range turbulence, albeit not exactly. The current study proposes a benchmark for the mesoscale modeling community based on the ESS quasi-universal wind speed scaling characteristics in the ABL. Using a state-of-the-art atmospheric mesoscale model in conjunction with different planetary boundary layer (PBL) parameterization schemes, multiple wind speed simulations have been conducted. This study reveals that the ESS scaling characteristics of the model-simulated wind speed time series in the lower ABL vary significantly from their observational counterparts. The study demonstrates that the model-simulated wind speed time series for time intervals Delta t < 2 hours do not capture the ESS-based scaling characteristics. The detailed analysis of model simulations using different PBL schemes leads to the conclusion that significant improvements are needed in the turbulence closure parameterizations adopted in new-generation atmospheric models. This study is unique in that the ESS framework has never before been reported or examined for the validation of PBL parameterizations.
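
    The structure-function and ESS analyses described above are straightforward to sketch. The snippet below computes S_q(tau) = <|u(t + tau) - u(t)|^q> over a range of lags and then estimates relative exponents zeta_q/zeta_3 by regressing log S_q against log S_3, which is the essence of ESS; the random-walk series is a surrogate, not observational wind data.

```python
import numpy as np

def structure_functions(u, lags, orders=(1, 2, 3)):
    """S_q(tau) = <|u(t + tau) - u(t)|**q> for each lag tau and order q."""
    return {q: np.array([np.mean(np.abs(u[l:] - u[:-l])**q) for l in lags])
            for q in orders}

rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(10_000))   # surrogate "wind speed" series
lags = np.arange(1, 200)
S = structure_functions(u, lags)

# ESS: slope of log S_q vs log S_3 gives the relative exponent zeta_q/zeta_3
for q in (1, 2):
    slope = np.polyfit(np.log(S[3]), np.log(S[q]), 1)[0]
    print(f"zeta_{q}/zeta_3 ~ {slope:.2f}")  # ~1/3 and ~2/3 for a random walk
```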

  11. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at the global scale. Such a monitoring/prediction effort is important for understanding vulnerability to extreme hydrological events and for providing early warnings. It can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements are used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling large-scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. A novel thematic science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas. Using the ERA-Interim public dataset as forcing and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70% and root mean squared errors below 1%. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
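
    The Nash-Sutcliffe efficiency used above as the discharge criterion compares the model error against the variance of the observations. A minimal sketch of the computation follows (the sample numbers are placeholders):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    1 is a perfect fit; 0 means no better than predicting the observed mean.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

obs = [120.0, 340.0, 80.0, 210.0]   # observed discharge (placeholder values)
sim = [100.0, 310.0, 100.0, 240.0]  # simulated discharge (placeholder values)
print(f"NSE = {nse(obs, sim):.2f}")
```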

  12. Cost-effectiveness of scaling up voluntary counselling and testing in West-Java, Indonesia.

    PubMed

    Tromp, Noor; Siregar, Adiatma; Leuwol, Barnabas; Komarudin, Dindin; van der Ven, Andre; van Crevel, Reinout; Baltussen, Rob

    2013-01-01

    Objective: to evaluate the cost-effectiveness of scaling up community-based VCT in West-Java. Methods: the Asian epidemic model (AEM) and resource needs model (RNM) were used to calculate incremental costs per HIV infection averted and per disability-adjusted life year (DALY) saved. Locally monitored demographic, epidemiological, behavioral and cost data were used as model input. Results: scaling up community-based VCT in West-Java will reduce the overall population prevalence by 36% in 2030 and costs US$248 per HIV infection averted and US$9.17 per DALY saved. Cost-effectiveness estimates were most sensitive to the impact of VCT on condom use and to the population size of clients of female sex workers (FSWs), but were overall robust. The total costs of scaling up community-based VCT range between US$1.3 and 3.8 million per year and require the number of VCT-integrated clinics at public community health centers to increase from 73 in 2010 to 594 in 2030. Conclusion: scaling up community-based VCT seems both an effective and cost-effective intervention. However, in order to prioritize VCT in HIV/AIDS control in West-Java, issues of budget availability and organizational capacity should be addressed.

  13. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties, arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) estimates of known certainty is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process itself. It is a potentially useful approach for evaluating the accuracy and analyzing the spatio-temporal properties of RS_ET at both basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from EC and LAS, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LAS were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, were used to perform an intercomparison. The results from both RS_ET validation cases showed that the proposed validation methods are reasonable and feasible.

  14. Nonlinear effects of locally heterogeneous hydraulic conductivity fields on regional stream-aquifer exchanges

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Winter, C. L.; Wang, Z.

    2015-08-01

    Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River Basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow-paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW simulation environment, and the PEST tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop log-normally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that the effects of small-scale heterogeneities significantly influence exchanges, with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model. Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow-paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.

  15. The Rangeland Hydrology and Erosion Model

    NASA Astrophysics Data System (ADS)

    Nearing, M. A.

    2016-12-01

    The Rangeland Hydrology and Erosion Model (RHEM) is a process-based model designed to address rangeland conditions. RHEM is designed for government agencies, land managers and conservationists who need sound, science-based technology to model, assess, and predict runoff and erosion rates on rangelands and to assist in evaluating the effects of rangeland conservation practices. RHEM is an event-based model that estimates runoff, erosion, and sediment delivery rates and volumes at the spatial scale of the hillslope and the temporal scale of a single rainfall event. It represents erosion processes under normal and fire-impacted rangeland conditions. Moreover, it adopts a new splash erosion and thin sheet-flow transport equation developed from rangeland data, and it links the model's hydrologic and erosion parameters with the rangeland plant community by providing a new system of parameter estimation equations based on 204 plots at 49 rangeland sites distributed across 15 western U.S. states. A dynamic partial differential sediment continuity equation is used to model the total detachment rate from concentrated flow and from rain splash and sheet flow. RHEM is also designed to be used as a calculator, or "engine", within other watershed-scale models. From the research perspective, RHEM acts as a vehicle for incorporating new scientific findings from rangeland infiltration, runoff, and erosion studies. Current applications of the model include: 1) a web site for general use (conservation planning, research, etc.), 2) National Resource Inventory reports to Congress, 3) a computational engine within watershed-scale models (e.g., KINEROS, HEC), 4) Ecological Site & State and Transition Descriptions, and 5) a proposal in 2015 to become part of the NRCS Desktop applications for field offices.

  16. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.

  17. Validation of a New Conceptual Model of School Connectedness and Its Assessment Measure

    ERIC Educational Resources Information Center

    Hirao, Katsura

    2011-01-01

    A self-report assessment scale of school connectedness was validated in this study based on the data from middle-school children in a northeastern state of the United States (n = 145). The scale was based on the School Bonding Model (Morita, 1991), which was derived reductively from the social control (bond) theory (Hirschi, 1969). This validation…

  18. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
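
    The two BA rules named above, growth and preferential attachment, are reproduced below with a standard library call; the mapping from color pixels to network nodes is the paper's contribution and is not reproduced here, so the sketch only illustrates the underlying scale-free mechanism.

```python
import networkx as nx

# Growth + preferential attachment: each new node attaches to m = 3 existing
# nodes with probability proportional to their current degree.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# The resulting degree distribution is heavy-tailed: a few hub nodes
# (analogous to the "important color pixels" above) dominate.
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("top-5 degrees:", degrees[:5])
```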

  19. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching.

  20. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO2 capture

    DOE PAGES

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...

    2018-03-25

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.

  1. Device-scale CFD modeling of gas-liquid multiphase flow and amine absorption for CO2 capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling

    In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.

  2. The anamorphic universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu

    We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  3. Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid; McKay, Douglas W.

    2006-03-01

    We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon cosmogenic flux model, black hole formation and decay treatments, inclusion of graviton mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds in the interval 0.9 TeV

  4. Validity of thermally-driven small-scale ventilated filling box models

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    The majority of previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical work and new, heat-based experiments.

  5. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than the computational grids of hydrological models. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements so they can be compared with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple zeroth-order estimate of each small-scale model state with each large-scale measurement, using a straightforward method based on Kalman filtering.
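
    A scalar sketch of such a Kalman-filter correction is given below. It assumes, purely for illustration (this is not the authors' formulation), that the coarse microwave pixel measures the mean of the fine-scale soil moisture cells, with a common prior variance across cells.

```python
import numpy as np

def disaggregate_update(x, P, z, R):
    """One Kalman update of fine-scale estimates x (common prior variance P)
    given a coarse measurement z = mean(true fine values) + noise (variance R)."""
    n = x.size
    H = np.full((1, n), 1.0 / n)     # coarse pixel = average of the fine cells
    Pm = P * np.eye(n)               # diagonal prior covariance (illustrative)
    S = H @ Pm @ H.T + R             # innovation variance
    K = Pm @ H.T / S                 # Kalman gain, shape (n, 1)
    return x + (K * (z - H @ x)).ravel()

x0 = np.array([0.20, 0.25, 0.30, 0.35])   # zeroth-order soil moisture guesses
print(disaggregate_update(x0, P=0.01, z=0.33, R=1e-4))
# All cells shift toward the coarse measurement while keeping their pattern.
```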

  6. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  7. Nonlinear effects of locally heterogeneous hydraulic conductivity fields on regional stream-aquifer exchanges

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Winter, C. L.; Wang, Z.

    2015-11-01

    Computational experiments are performed to evaluate the effects of locally heterogeneous conductivity fields on regional exchanges of water between stream and aquifer systems in the Middle Heihe River basin (MHRB) of northwestern China. The effects are found to be nonlinear in the sense that simulated discharges from aquifers to streams are systematically lower than discharges produced by a base model parameterized with relatively coarse effective conductivity. A similar, but weaker, effect is observed for stream leakage. The study is organized around three hypotheses: (H1) small-scale spatial variations of conductivity significantly affect regional exchanges of water between streams and aquifers in river basins, (H2) aggregating small-scale heterogeneities into regional effective parameters systematically biases estimates of stream-aquifer exchanges, and (H3) the biases result from slow paths in groundwater flow that emerge due to small-scale heterogeneities. The hypotheses are evaluated by comparing stream-aquifer fluxes produced by the base model to fluxes simulated using realizations of the MHRB characterized by local (grid-scale) heterogeneity. Levels of local heterogeneity are manipulated as control variables by adjusting coefficients of variation. All models are implemented using the MODFLOW (Modular Three-dimensional Finite-difference Groundwater Flow Model) simulation environment, and the PEST (parameter estimation) tool is used to calibrate effective conductivities defined over 16 zones within the MHRB. The effective parameters are also used as expected values to develop lognormally distributed conductivity (K) fields on local grid scales. Stream-aquifer exchanges are simulated with K fields at both scales and then compared. Results show that the effects of small-scale heterogeneities significantly influence exchanges, with simulations based on local-scale heterogeneities always producing discharges that are less than those produced by the base model. Although aquifer heterogeneities are uncorrelated at local scales, they appear to induce coherent slow paths in groundwater fluxes that in turn reduce aquifer-stream exchanges. Since surface water-groundwater exchanges are critical hydrologic processes in basin-scale water budgets, these results also have implications for water resources management.

  8. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters, which are then used as inputs to a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  9. Scaling Factor Variability and Toxicokinetic Outcomes in Children

    EPA Science Inventory

    Abstract title: Scaling Factor Variability and Toxicokinetic Outcomes in Children. Background: Biotransformation rates (Vmax) extrapolated from in vitro data are used increasingly in human physiologically based pharmacokinetic (PBPK) models. PBPK models are widely used in human hea...

  10. Reducing aeration energy consumption in a large-scale membrane bioreactor: Process simulation and engineering application.

    PubMed

    Sun, Jianyu; Liang, Peng; Yan, Xiaoxu; Zuo, Kuichang; Xiao, Kang; Xia, Junlin; Qiu, Yong; Wu, Qing; Wu, Shijia; Huang, Xia; Qi, Meng; Wen, Xianghua

    2016-04-15

    Reducing the energy consumption of membrane bioreactors (MBRs) is highly important for their wider application in wastewater treatment engineering. Of particular significance is reducing aeration in aerobic tanks to reduce the overall energy consumption. This study proposed an in situ ammonia-N-based feedback control strategy for aeration in aerobic tanks; this was tested via model simulation and through a large-scale (50,000 m(3)/d) engineering application. A full-scale MBR model was developed based on the activated sludge model (ASM) and was calibrated to the actual MBR. The aeration control strategy took the form of a two-step cascaded proportional-integral (PI) feedback algorithm. Algorithmic parameters were optimized via model simulation. The strategy achieved real-time adjustment of aeration amounts based on feedback from effluent quality (i.e., ammonia-N). The effectiveness of the strategy was evaluated through both the model platform and the full-scale engineering application. In the former, the aeration flow rate was reduced by 15-20%. In the engineering application, the aeration flow rate was reduced by 20%, and the overall specific energy consumption was correspondingly reduced by 4% to 0.45 kWh/m(3)-effluent, using the present practice of regulating the angle of the guide vanes of fixed-frequency blowers. Potential energy savings are expected to be higher for MBRs with variable-frequency blowers. This study indicated that the ammonia-N-based aeration control strategy holds promise for application in full-scale MBRs. Copyright © 2016 Elsevier Ltd. All rights reserved.
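
    A two-step cascaded PI loop of the kind described can be sketched as follows. The outer ammonia-to-DO loop and inner DO-to-airflow loop, the gains, and the clamping range are assumptions made for illustration; they are not structure or values taken from the study.

```python
class CascadedPIAeration:
    """Minimal sketch of a two-step cascaded PI aeration controller."""

    def __init__(self, kp_nh=0.8, ki_nh=0.05, kp_do=400.0, ki_do=40.0, dt=60.0):
        self.kp_nh, self.ki_nh = kp_nh, ki_nh   # outer loop gains (assumed)
        self.kp_do, self.ki_do = kp_do, ki_do   # inner loop gains (assumed)
        self.dt = dt                            # control interval, seconds
        self.int_nh = 0.0
        self.int_do = 0.0

    def step(self, nh4_setpoint, nh4_measured, do_measured):
        # Outer loop: effluent ammonia-N error -> dissolved-oxygen setpoint.
        e_nh = nh4_measured - nh4_setpoint      # high NH4 => raise DO target
        self.int_nh += e_nh * self.dt
        do_sp = self.kp_nh * e_nh + self.ki_nh * self.int_nh
        do_sp = min(max(do_sp, 0.5), 3.0)       # clamp to a plausible DO range
        # Inner loop: DO error -> aeration air-flow setpoint.
        e_do = do_sp - do_measured
        self.int_do += e_do * self.dt
        return self.kp_do * e_do + self.ki_do * self.int_do

ctrl = CascadedPIAeration()
print(ctrl.step(nh4_setpoint=1.0, nh4_measured=1.8, do_measured=1.2))
```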

  11. Comparing Time-Dependent Geomagnetic and Atmospheric Effects on Cosmogenic Nuclide Production Rate Scaling

    NASA Astrophysics Data System (ADS)

    Lifton, N. A.

    2014-12-01

    A recently published cosmogenic nuclide production rate scaling model based on analytical fits to Monte Carlo simulations of atmospheric cosmic ray flux spectra (both of which agree well with measured spectra) (Lifton et al., 2014, Earth Planet. Sci. Lett. 386, 149-160; termed the LSD model) provides two main advantages over previous scaling models: identification and quantification of potential sources of bias in the earlier models, and the ability to generate nuclide-specific scaling factors easily for a wide range of input parameters. The new model also provides a flexible framework for exploring the implications of advances in model inputs. In this work, the scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene will be explored. Korte and Constable (2011, Phys. Earth Planet. Int. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models used by Lifton et al. (2014) with paleomagnetic measurements from sediment cores in addition to archeomagnetic and volcanic data. These updated models offer improved accuracy over the previous versions, in part due to increased temporal and spatial data coverage. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC, the standard method for ordering cosmic ray data) yield significantly different time-integrated scaling predictions when compared to the earlier models. These results will be compared to scaling predictions using another recent time-dependent spherical harmonic model of the Holocene geomagnetic field by Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109), based solely on archeomagnetic and volcanic paleomagnetic data but extending to 14 ka. In addition, the potential effects of time-dependent atmospheric models on LSD scaling predictions will be presented. Given the typical dominance of altitudinal over latitudinal scaling effects on cosmogenic nuclide production, incorporating transient global simulations of atmospheric structure (e.g., Liu et al., 2009, Science 325, 310-314) into scaling frameworks may contribute to improved understanding of long-term production rate variations.

  12. Use of Fuzzy rainfall-runoff predictions for claypan watersheds with conservation buffers in Northeast Missouri

    NASA Astrophysics Data System (ADS)

    Anomaa Senaviratne, G. M. M. M.; Udawatta, Ranjith P.; Anderson, Stephen H.; Baffaut, Claire; Thompson, Allen

    2014-09-01

    Fuzzy rainfall-runoff models are often used to forecast floods or water supply in large catchments, while applications at small, field-scale agricultural watersheds are limited. The study objectives were to develop, calibrate, and validate a fuzzy rainfall-runoff model using long-term data from three adjacent field-scale row crop watersheds (1.65-4.44 ha) with intermittent discharge in the claypan soils of Northeast Missouri. The watersheds were monitored for a six-year calibration period starting in 1991 (pre-buffer period). Thereafter, two of them were treated with upland contour grass and agroforestry (tree + grass) buffers (4.5 m wide, 36.5 m apart) to study water quality benefits. The fuzzy system was based on the Mamdani method using MATLAB 7.10.0. The model predicted event-based runoff with model performance coefficients of r2 and Nash-Sutcliffe coefficient (NSC) values greater than 0.65 for calibration and validation. The pre-buffer fuzzy system predicted event-based runoff for corn/soybean watersheds 30-50 times larger, with r2 values of 0.82 and 0.68 and NSC values of 0.77 and 0.53, respectively. The runoff predicted by the fuzzy system agreed closely with values predicted by the physically-based Agricultural Policy Environmental eXtender (APEX) model for the pre-buffer watersheds. The fuzzy rainfall-runoff model has potential for runoff prediction at field-scale watersheds with minimal input. It could also up-scale predictions to large-scale watersheds to evaluate the benefits of conservation practices.

  13. The Gifted Rating Scales-School Form: An Analysis of the Standardization Sample Based on Age, Gender, Race, and Diagnostic Efficiency

    ERIC Educational Resources Information Center

    Pfeiffer, Steven I.; Jarosewich, Tania

    2007-01-01

    This study analyzes the standardization sample of a new teacher rating scale designed to assist in the identification of gifted students. The Gifted Rating Scales-School Form (GRS-S) is based on a multidimensional model of giftedness. Results indicate no age or race/ethnicity differences on any of the scales and small but significant differences…

  14. Natural Scale for Employee's Payment Based on the Entropy Law

    NASA Astrophysics Data System (ADS)

    Cosma, Ioan; Cosma, Adrian

    2009-05-01

    An econophysical model intended to establish an equitable scale of employees' salaries in accordance with the importance and effectiveness of labor is considered. Our model, based on the concept and law of entropy, can designate all the parameters connected to the level of personal incomes and taxation, and also to the distribution of employees versus amount of salary, in any remuneration system. Consistent with the laws of classical and statistical thermodynamics, this scale reveals that personal incomes increase progressively in a natural logarithmic way, in contrast with other scales arbitrarily established by the governments of individual countries or by employing companies.

  15. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper wash-off from copper roofs results in environmental hazards. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff, and it was shown that copper could be removed to a high extent. A model was developed to describe this removal process, based on the Two Region Model (TRM) extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated from data gained in specific column experiments designed for maximum sensitivity to each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates during extended periods of time.

  16. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with the discrete empirical interpolation method (DEIM) and trajectory piecewise-linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
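
    The POD step described above amounts to a truncated SVD of a snapshot matrix whose columns are full-scale states at successive times. A minimal sketch with synthetic low-rank data follows (the DEIM and TPWL treatments of the nonlinear radiative terms are not shown):

```python
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_snap = 5000, 200
# Synthetic rank-5 snapshot matrix standing in for sampled thermal states
X = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))

# POD basis from the SVD; keep enough modes to capture the snapshot energy
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Phi = U[:, :r]                       # reduced basis, n_dof x r

# Project a full state to r coordinates, then lift back to check the error
x = X[:, 0]
x_rec = Phi @ (Phi.T @ x)
print(r, np.linalg.norm(x - x_rec) / np.linalg.norm(x))   # r = 5, tiny error
```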

  17. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
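
    A doubly stochastic Poisson process can be simulated by first drawing a realization of the random intensity and then thinning a homogeneous Poisson process run at the intensity's maximum. The sketch below uses a two-state (dry/wet) Markov-switching intensity purely as an illustrative stand-in; the intensity models used in the paper are more general.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1) Draw a random intensity path: a two-state Markov-switching rate.
T, dt = 48.0, 0.01                      # total hours, time step (hours)
t = np.arange(0.0, T, dt)
lam = (0.2, 6.0)                        # event rates per hour (dry, wet)
p_switch = (0.001, 0.005)               # per-step switching probabilities
rates, state = np.empty_like(t), 0
for i in range(t.size):
    if rng.random() < p_switch[state]:
        state = 1 - state
    rates[i] = lam[state]

# 2) Thinning: candidates at the max rate, kept with probability rate/lam_max.
lam_max = max(lam)
cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
idx = np.minimum(np.searchsorted(t, cand, side="right") - 1, t.size - 1)
arrivals = cand[rng.random(cand.size) < rates[idx] / lam_max]
print(f"{arrivals.size} simulated tip times in {T:.0f} h")
```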

  18. A Bayesian hierarchical latent trait model for estimating rater bias and reliability in large-scale performance assessment

    PubMed Central

    2018-01-01

    We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data covering a period of 5 years. Empirical results suggest that the model provides a good fit both for the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ±2 points on a 50-point scale, while 10% of essays would receive a score differing by at least 5 points from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129
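
    The generative structure of such a rater model can be illustrated with a small forward simulation: observed score = latent essay quality + rater bias + rater noise. The sketch below performs no posterior inference; every distribution and value is an assumption for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy generative model in the spirit of a hierarchical rater model.
    n_essays, n_raters = 2000, 50
    quality = rng.normal(25.0, 6.0, n_essays).clip(0, 50)  # latent quality, 50-pt scale
    bias = rng.normal(0.0, 1.0, n_raters)                  # per-rater systematic bias
    sigma = rng.gamma(4.0, 0.5, n_raters)                  # per-rater unreliability

    rater = rng.integers(0, n_raters, n_essays)            # one rater per essay
    score = quality + bias[rater] + rng.normal(0.0, sigma[rater])

    # Impact of rater effects: difference between observed score and quality.
    impact = score - quality
    print("median |impact|:", np.median(np.abs(impact)).round(2))
    print("90th percentile |impact|:", np.percentile(np.abs(impact), 90).round(2))
    ```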

  19. Model simulations and proxy-based reconstructions for the European region in the past millennium (Invited)

    NASA Astrophysics Data System (ADS)

    Zorita, E.

    2009-12-01

    One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution, some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.

  20. Generating clustered scale-free networks using Poisson based localization of edges

    NASA Astrophysics Data System (ADS)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity of scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapt the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we use a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieve clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.
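
    A minimal sketch of Poisson-based edge localization, assuming a simple ring lattice in which each node's link distances are drawn from a Poisson distribution, so most edges stay local (producing clustering) with occasional longer links. This illustrates the idea only; it is not the paper's exact rewiring routine.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Wire each node to neighbors whose ring distance is 1 + Poisson(lam):
    # short hops dominate, giving locally clustered structure.
    n, k, lam = 1000, 6, 3.0
    edges = set()
    for u in range(n):
        for _ in range(k // 2):
            hop = 1 + rng.poisson(lam)        # mostly short ring distances
            v = (u + hop) % n
            edges.add((min(u, v), max(u, v)))

    hops = [min(abs(u - v), n - abs(u - v)) for u, v in edges]
    print(len(edges), "edges; mean hop length:", np.mean(hops).round(2))
    ```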

  1. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One of the key issues for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and a modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, when developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1,500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson's correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence is still high in most grids regardless of country. However, the model showed comparatively low reproducibility in sloped areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. Because local climate conditions vary widely in complex terrain, such as mountain slopes, the GCM grid-scale weather inputs are likely a major source of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty in the crop model's parameter values associated with local crop production; (2) the method can explicitly account for this parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability distribution of sub-grid scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating simulated sub-grid scale yields to a grid-scale yield, which may explain the model's high performance in capturing inter-annual yield variation.
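
    The perturbed-parameter ensemble logic can be sketched compactly: sample parameter sets from the posterior, run the crop model once per set, and summarize with the ensemble median. In the sketch below, `toy_yield` is a hypothetical stand-in for PRYSBI and the "posterior" is an assumed distribution; nothing here reproduces the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy crop model: yield falls off quadratically away from an optimal
    # temperature. Stand-in for PRYSBI; parameters and form are assumed.
    def toy_yield(params, weather):
        t_opt, sens = params
        return np.maximum(0.0, 10.0 - sens * (weather - t_opt) ** 2)

    years = 27
    weather = rng.normal(24.0, 2.0, years)          # grid-scale summer temperature

    # 1,500 draws standing in for samples from the posterior PDF.
    posterior = np.column_stack([rng.normal(24.0, 1.0, 1500),
                                 rng.gamma(2.0, 0.05, 1500)])

    # One model run per parameter set; summarize with the per-year median.
    ensemble = np.array([toy_yield(p, weather) for p in posterior])
    median_yield = np.median(ensemble, axis=0)
    print(median_yield.round(2))
    ```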

  2. Voluntary EMG-to-force estimation with a multi-scale physiological muscle model

    PubMed Central

    2013-01-01

    Background: EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model in which muscle activation, force-length and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large across different firing frequencies, levels of activation and speeds of contraction, possibly due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model that has a link to the underlying crossbridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods: The ankle torque was measured for plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile parts were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model, and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale relative to the case of maximum voluntary contraction. Results: The estimates obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors improved with the nonlinear Hill model compared to the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error of the multi-scale model was 6.1%, while maintaining a uniform estimation performance in both fast and slow contraction schemes. Conclusions: We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, by integrating the neural activation frequency property and the force-velocity relationship through cross-bridge dynamics. PMID:24007560

  3. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments varying a single factor: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated from published nonlinear relations between transverse dispersion coefficients and the Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.

  4. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework, based on a state-, model-, and goal-based architecture for semi-autonomous control systems, that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  5. Principal Components Analyses of the MMPI-2 PSY-5 Scales. Identification of Facet Subscales

    ERIC Educational Resources Information Center

    Arnau, Randolph C.; Handel, Richard W.; Archer, Robert P.

    2005-01-01

    The Personality Psychopathology Five (PSY-5) is a five-factor personality trait model designed for assessing personality pathology using quantitative dimensions. Harkness, McNulty, and Ben-Porath developed Minnesota Multiphasic Personality Inventory-2 (MMPI-2) scales based on the PSY-5 model, and these scales were recently added to the standard…

  6. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning

    NASA Astrophysics Data System (ADS)

    Zuberi, M. AH; Pratt, R. G.

    2018-04-01

    The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
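
    A minimal 1-D sketch of Sobolev-type gradient preconditioning, assuming the common implicit-smoothing form in which the preconditioned gradient g_s solves (I - alpha * d^2/dx^2) g_s = g. This damps high model-wavenumber components of the raw gradient without changing the objective function; the paper's scaled Sobolev inner product is more general, and alpha here is an assumed tuning parameter.

    ```python
    import numpy as np

    # Raw FWI gradient stand-in (toy data).
    n, alpha, h = 200, 25.0, 1.0
    g = np.random.default_rng(5).standard_normal(n)

    # Assemble the tridiagonal operator I - alpha * D2 (Dirichlet ends),
    # where D2 is the second-difference matrix with grid spacing h.
    A = np.eye(n) * (1.0 + 2.0 * alpha / h**2)
    A += np.diag(np.full(n - 1, -alpha / h**2), 1)
    A += np.diag(np.full(n - 1, -alpha / h**2), -1)

    # Solving the system yields the smoothed (scale-separated) update.
    g_smooth = np.linalg.solve(A, g)
    print("high-wavenumber energy before/after:",
          np.sum(np.diff(g) ** 2).round(2), np.sum(np.diff(g_smooth) ** 2).round(2))
    ```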

  7. A new time scale based k-epsilon model for near wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1992-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of $R_y = \sqrt{k}\,y/\nu$ instead of $y^+$. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
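
    One common way to realize such a bounded-time-scale eddy viscosity is sketched below; this is an assumed form for illustration, not necessarily the authors' exact coefficients or damping function:

    ```latex
    \nu_t = C_\mu\, f_\mu(R_y)\, k\, T,
    \qquad
    T = \frac{k}{\varepsilon} + \sqrt{\frac{\nu}{\varepsilon}},
    \qquad
    R_y = \frac{\sqrt{k}\, y}{\nu}
    ```

    The $\sqrt{\nu/\varepsilon}$ term enforces the Kolmogorov lower bound on $T$: as $k \to 0$ at the wall, $T$ tends to the Kolmogorov time scale rather than to zero, so the reformulated dissipation equation remains non-singular.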

  8. New time scale based k-epsilon model for near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of $R_y = \sqrt{k}\,y/\nu$ instead of $y^+$. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.

  9. Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis

    NASA Astrophysics Data System (ADS)

    Mills, D. A.

    2017-10-01

    In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated method, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to demonstrate an accurate reflection of vaccination capabilities. Recent studies suggest that, despite preventing major symptoms, the acellular pertussis vaccine does not prevent the colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the pertussis bacteria, rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.

  10. Scaling for the SOL/separatrix χ ⊥ following from the heuristic drift model for the power scrape-off layer width

    NASA Astrophysics Data System (ADS)

    Huber, A.; Chankin, A. V.

    2017-06-01

    A simple two-point representation of the tokamak scrape-off layer (SOL) in the conduction-limited regime, based on the parallel and perpendicular energy balance equations in combination with the heat flux width predicted by a heuristic drift-based model, was used to derive a scaling for the cross-field thermal diffusivity $\chi_\perp$. For fixed plasma shape, and neglecting weak power dependencies with indices of 1/8, the scaling $\chi_\perp \propto P_{\mathrm{SOL}}/(n B_\theta R^2)$ is derived.

  11. Modeling specific action potentials in the human atria based on a minimal single-cell model.

    PubMed

    Richter, Yvonne; Lind, Pedro G; Maass, Philipp

    2018-01-01

    We present an effective method to model empirical action potentials of specific patients in the human atria, based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic currents are introduced, each governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width, and shape. Possible applications for supporting clinical diagnosis are pointed out.
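
    The time-scale fitting step can be illustrated with a generic nonlinear least-squares optimization. The bi-exponential template below is a toy stand-in for the adapted Bueno-Orovio model, and all parameter values and bounds are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy action-potential template with rise and decay time constants.
    def template(t, amp, tau_rise, tau_decay):
        return amp * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

    t = np.linspace(0.0, 400.0, 400)                       # time in ms
    true = template(t, 1.2, 2.0, 120.0)                    # "patient" trace
    data = true + 0.02 * np.random.default_rng(6).standard_normal(t.size)

    # Residuals between template and the measured action potential.
    def residual(p):
        return template(t, *p) - data

    fit = least_squares(residual, x0=[1.0, 5.0, 80.0],
                        bounds=([0.1, 0.1, 10.0], [5.0, 20.0, 500.0]))
    print("estimated (amp, tau_rise, tau_decay):", fit.x.round(2))
    ```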

  12. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment-scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation databases and of information extracted from these databases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.
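
    For reference, the governing equation named above is commonly written in the mixed form below; the notation is assumed (pressure head $\psi$, moisture content $\theta$, hydraulic conductivity $K$, elevation $z$, source/sink $S$) and may differ from the authors':

    ```latex
    \frac{\partial \theta(\psi)}{\partial t}
      = \nabla \cdot \bigl[ K(\psi)\, \nabla (\psi + z) \bigr] + S
    ```

    The dependence of both $\theta$ and $K$ on $\psi$ makes the equation strongly nonlinear, which is why transient variably saturated simulations of this kind are computationally demanding at catchment scale.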

  13. Cyanobacterial Biofuels: Strategies and Developments on Network and Modeling.

    PubMed

    Klanchui, Amornpan; Raethong, Nachon; Prommeenate, Peerada; Vongsangnak, Wanwipa; Meechai, Asawin

    Cyanobacteria, phototrophic microorganisms, have attracted much attention recently as a promising source for environmentally sustainable biofuels production. However, the principal barrier to commercial markets for cyanobacteria-based biofuels is economic feasibility. Miscellaneous strategies for improving the production performance of cyanobacteria have thus been developed. Among these, simple ad hoc strategies often fail to fully optimize cell growth coupled with the desired product yield. With the advancement of genomics and systems biology, a new paradigm of systems metabolic engineering has been recognized. In particular, genome-scale metabolic network reconstruction and modeling is a crucial systems-based tool for whole-cell-wide investigation and prediction. In this review, cyanobacterial genome-scale metabolic models, which offer a system-level understanding of cyanobacterial metabolism, are described. The main steps of metabolic network reconstruction and modeling of cyanobacteria are summarized. Strategies and developments in genome-scale network modeling through the systems metabolic engineering approach are discussed as routes toward efficient cyanobacteria-based biofuels production.
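
    The core computation in genome-scale metabolic modeling, flux balance analysis (FBA), is a linear program: maximize a flux of interest subject to steady-state mass balance S v = 0 and flux bounds. The self-contained toy sketch below uses three metabolites instead of thousands; the network and bounds are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix S: rows are metabolites, columns are reactions
    #            uptake  rxn1  rxn2  product_export
    S = np.array([[ 1,   -1,    0,    0],    # metabolite A
                  [ 0,    1,   -1,    0],    # metabolite B
                  [ 0,    0,    1,   -1]])   # product P
    bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10

    # linprog minimizes, so negate the objective to maximize product export.
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    print("optimal product flux:", -res.fun)
    ```

    In a genome-scale model the same LP structure simply has thousands of reactions and metabolites, with the objective typically set to biomass formation or a target product flux.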

  14. Multiscale functions, scale dynamics, and applications to partial differential equations

    NASA Astrophysics Data System (ADS)

    Cresson, Jacky; Pierret, Frédéric

    2016-05-01

    Modeling phenomena from experimental data always begins with a choice of hypotheses on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definitions of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions are illustrated on the multi-scale Okamoto functions. Scale equations are analysed using scale regimes and the notion of an asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.

  15. NATIONAL-SCALE ASSESSMENT OF AIR TOXICS RISKS ...

    EPA Pesticide Factsheets

    The national-scale assessment of air toxics risks is a modeling assessment which combines emission inventory development, atmospheric fate and transport modeling, exposure modeling, and risk assessment to characterize the risk associated with inhaling air toxics from outdoor sources. This national-scale effort will be initiated for the base year 1996 and repeated every three years thereafter to track trends and inform program development. The goal is to provide a broad-scale understanding of inhalation risks for a subset of atmospherically emitted air toxics, to inform further data-gathering efforts and priority-setting for the EPA's Air Toxics Programs.

  16. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport, coupled to models of hydrological and overland flow processes, has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and to differentiate the bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM, the Triangulated irregular network-based Real-time Integrated Basin Simulator-Overland Flow Model. The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations, including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests on grid resolution and the number of particle sizes used have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Because the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes discussed. The integrated formulation is expected to reduce uncertainties associated with typical parameterizations of flow and erosion processes, and a potential for more credible modeling of earth-surface processes is thus anticipated.

  17. Generalized probabilistic scale space for image restoration.

    PubMed

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  18. Assessment of health surveys: fitting a multidimensional graded response model.

    PubMed

    Depaoli, Sarah; Tiemensma, Jitske; Felt, John M

    The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components for implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed as worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, be used with relatively lower sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.
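
    For illustration, Samejima's graded response model gives category probabilities as differences of adjacent cumulative logistic curves, P(X >= k | theta) = logistic(a(theta - b_k)) with ordered thresholds b_k. The sketch below uses assumed item parameters, not CushingQoL estimates.

    ```python
    import numpy as np

    # Graded response model category probabilities for one item.
    def grm_probs(theta, a, b):
        # Cumulative curves P(X >= k) for k = 1..K, padded with P(X >= 0) = 1
        # and P(X >= K+1) = 0; category probabilities are adjacent differences.
        cum = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
        cum = np.hstack([np.ones((theta.size, 1)), cum, np.zeros((theta.size, 1))])
        return cum[:, :-1] - cum[:, 1:]          # P(X = k), k = 0..K

    theta = np.linspace(-3, 3, 7)                # latent trait grid
    probs = grm_probs(theta, a=1.5, b=np.array([-1.0, 0.0, 1.2]))
    print(probs.round(3))
    print("row sums:", probs.sum(axis=1))        # each row sums to 1
    ```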

  19. A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function

    PubMed Central

    Röhrle, O.; Davidson, J. B.; Pullan, A. J.

    2012-01-01

    Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509

  20. A Spatial Framework to Map Heat Health Risks at Multiple Scales.

    PubMed

    Ho, Hung Chak; Knudby, Anders; Huang, Wei

    2015-12-18

    In the last few decades, extreme heat events have led to substantial excess mortality, most dramatically in Central Europe in 2003, in Russia in 2010, and even in typically cool locations such as Vancouver, Canada, in 2009. Heat-related morbidity and mortality is expected to increase over the coming centuries as the result of climate-driven global increases in the severity and frequency of extreme heat events. Spatial information on heat exposure and population vulnerability may be combined to map the areas of highest risk and focus mitigation efforts there. However, a mismatch in spatial resolution between heat exposure and vulnerability data can cause spatial scale issues such as the Modifiable Areal Unit Problem (MAUP). We used a raster-based model to integrate heat exposure and vulnerability data in a multi-criteria decision analysis, and compared it to the traditional vector-based model. We then used the Getis-Ord Gi* index to generate spatially smoothed heat risk hotspot maps from fine to coarse spatial scales. The raster-based model allowed production of maps at finer spatial resolution, better described local-scale heat risk variability, and identified heat-risk areas not found with the vector-based approach. Spatial smoothing with the Getis-Ord Gi* index produced heat risk hotspots from local to regional spatial scales. The approach is a framework for reducing spatial scale issues in future heat risk mapping, and for identifying heat risk hotspots at spatial scales ranging from the block level to the municipality level.
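
    A minimal raster implementation of the Getis-Ord Gi* statistic with binary moving-window weights is sketched below; the window size, weights, and toy risk surface are assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np

    # Getis-Ord Gi*: for each cell, compare the local windowed sum of values
    # against the global mean; large positive z-scores indicate hotspots.
    def getis_ord_gi_star(x, w=1):
        n = x.size
        mean, s = x.mean(), x.std()
        pad = np.pad(x, w, mode="edge")
        z = np.empty_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                win = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
                wsum = win.size                  # binary weights inside window
                num = win.sum() - mean * wsum
                den = s * np.sqrt((n * wsum - wsum ** 2) / (n - 1))
                z[i, j] = num / den
        return z

    risk = np.random.default_rng(7).random((50, 50))   # toy heat-risk raster
    print("max Gi* z-score:", getis_ord_gi_star(risk).max().round(2))
    ```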

  1. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.

  2. Parcels versus pixels: modeling agricultural land use across broad geographic regions using parcel-based field boundaries

    USGS Publications Warehouse

    Sohl, Terry L.; Dornbierer, Jordan; Wika, Steve; Sayler, Kristi L.; Quenzer, Robert

    2017-01-01

    Land use and land cover (LULC) change occurs at a local level within contiguous ownership and management units (parcels), yet LULC models primarily use pixel-based spatial frameworks. The few parcel-based models in use overwhelmingly focus on small geographic areas, limiting the ability to assess LULC change impacts at regional to national scales. We developed a modified version of the Forecasting Scenarios of land use change model to project parcel-based agricultural change across a large region in the United States Great Plains. An agricultural biofuel scenario was modeled from 2012 to 2030, using real parcel boundaries based on contiguous ownership and land management units. The resulting LULC projection provides a vastly improved representation of landscape pattern over existing pixel-based models, while simultaneously providing an unprecedented combination of thematic detail and broad geographic extent. The conceptual approach is practical and scalable, with potential use for national-scale projections.

  3. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two ordinal-variable clustering examples, as ordinal variables are more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  4. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters

    PubMed Central

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-01-01

    Accurate scale estimation and occlusion handling are challenging problems in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and a multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475

  5. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters.

    PubMed

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-02-23

    Accurate scale estimation and occlusion handling are challenging problems in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and a multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved.

  6. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is transferring the process parameters developed on a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach for predicting commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients vary across lyophilization scales; hence, we present an approach for using appropriate factors when scaling up from lab scale to commercial scale. As a result, one can predict the commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because of its simple and minimalistic approach, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
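
    The primary-drying calculation described here is often reduced to a sublimation-rate balance, dm/dt = Ap (P_ice(Tp) - P_ch) / Rp, integrated until the ice is exhausted. The hedged sketch below uses a widely cited vapor-pressure-of-ice correlation and illustrative parameter values; none of the numbers are from the paper.

    ```python
    import numpy as np

    # Vapor pressure of ice in Torr (common correlation, T in kelvin).
    def p_ice_torr(T):
        return 2.698e10 * np.exp(-6144.96 / T)

    # Illustrative vial and process parameters (assumptions, not the paper's).
    Ap, Rp = 3.8, 4.0          # vial area (cm^2), product resistance (cm^2*Torr*h/g)
    Tp, P_ch = 233.0, 0.06     # product temperature (K), chamber pressure (Torr)
    m_ice = 2.5                # grams of ice per vial

    # Sublimation rate and the resulting primary drying time.
    rate = Ap * (p_ice_torr(Tp) - P_ch) / Rp      # g/h
    print("sublimation rate %.3f g/h; drying time %.1f h" % (rate, m_ice / rate))
    ```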

  7. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2016-12-01


  8. Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.

    2017-12-01

    Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.

  9. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for estimating biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations.

  10. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical, and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of the systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance, and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and the factors which affect the accuracy of scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, the identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting a certain type of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, as well as cylindrical bending of beam-plates.

  11. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
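
    A peaks-over-threshold fit with a GPD, plus a duration rescaling using a scaling exponent, can be sketched as follows. The paper estimates the exponent and its uncertainty in a Bayesian framework; here the exponent, the toy rainfall series, and the threshold quantile are all fixed assumptions for illustration.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(8)

    # Toy daily rainfall series (mm) and a threshold from a high quantile.
    daily = rng.gamma(0.5, 8.0, 20 * 365)
    u = np.quantile(daily[daily > 0], 0.95)
    exc = daily[daily > u] - u                     # peaks-over-threshold series

    # Fit the GPD to the exceedances (location fixed at zero).
    xi, loc, sigma = genpareto.fit(exc, floc=0.0)

    # Rescale the GPD scale parameter to a shorter duration with an assumed
    # scaling exponent (24 h -> 6 h); the paper treats eta probabilistically.
    eta = 0.7
    scale_factor = (6.0 / 24.0) ** eta
    print("GPD shape %.2f, scale %.2f; 6-h scale %.2f"
          % (xi, sigma, sigma * scale_factor))
    ```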

  12. A Physically Based Runoff Routing Model for Land Surface and Earth System Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongyi; Wigmosta, Mark S.; Wu, Huan

    2013-06-13

    A new physically based runoff routing model, called the Model for Scale Adaptive River Transport (MOSART), has been developed to be applicable across local, regional, and global scales. Within each spatial unit, surface runoff is first routed across hillslopes and then discharged along with subsurface runoff into a "tributary subnetwork" before entering the main channel. The spatial units are thus linked via routing through the main channel network, which is constructed in a scale-consistent way across different spatial resolutions. All model parameters are physically based, and only a small subset requires calibration. MOSART has been applied to the Columbia River basin at 1/16°, 1/8°, 1/4°, and 1/2° spatial resolutions and was evaluated using naturalized or observed streamflow at a number of gauge stations. MOSART is compared to two other routing models widely used with land surface models, the River Transport Model (RTM) in the Community Land Model (CLM) and the Lohmann routing model, included as a postprocessor in the Variable Infiltration Capacity (VIC) model package, yielding consistent performance at multiple resolutions. MOSART is further evaluated using channel velocities derived from field measurements or a hydraulic model at various locations and is shown to be capable of reproducing the seasonal variation and magnitude of channel velocities reasonably well at different resolutions. Moreover, the impacts of spatial resolution on model simulations are systematically examined at local and regional scales. Finally, the limitations of MOSART and future directions for improvements are discussed.

  13. The 0.040-scale space shuttle orbiter base heating model tests in the Lewis Research Center space power facility

    NASA Technical Reports Server (NTRS)

    Dezelick, R. A.

    1976-01-01

    Space shuttle base heating tests were conducted using a 0.040-scale model in the Plum Brook Space Power Facility of the NASA Lewis Research Center. The tests measured heat transfer rates, pressure distributions, and gas recovery temperatures on the orbiter vehicle 2A base configuration resulting from engine plume impingement. One hundred and sixty-eight hydrogen-oxygen engine firings were made at simulated flight altitudes ranging from 120,000 to 360,000 feet.

  14. Accounting for microbial habitats in modeling soil organic matter dynamics

    NASA Astrophysics Data System (ADS)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes that are more or less interconnected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, treating either aggregates or pores as surrogates for microbial habitats. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e., at µm to mm scales: the pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here that describe the heterotrophic activity of either bacteria or fungi and that rely on different strategies to represent the complex soil pore system (Mosaic, LBios and µFun). These models make it possible to rank the factors controlling microbial activity in the soil's heterogeneous architecture. The present limits of and challenges for these approaches are discussed, regarding both the extensive information required about soils at the microscale and the up-scaling of microbial functioning from the pore scale to the core scale.

  15. Predicting Future-Year Ozone Concentrations: Integrated Observational-Modeling Approach for Probabilistic Evaluation of the Efficacy of Emission Control Strategies

    EPA Science Inventory

    Regional-scale air quality models are being used to demonstrate attainment of the ozone air quality standard. In current regulatory applications, a regional-scale air quality model is applied for a base year and a future year with reduced emissions using the same meteorological ...

  16. Comparison of Drainmod Based Watershed Scale Models

    Treesearch

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2004-01-01

    Watershed scale hydrology and water quality models (DRAINMOD-DUFLOW, DRAINMOD-W, DRAINMOD-GIS and WATGIS) that describe the nitrogen loadings at the outlet of poorly drained watersheds were examined with respect to their accuracy and uncertainty in model predictions. Latin Hypercube Sampling (LHS) was applied to determine the impact of uncertainty in estimating field...

  17. Multiscale Modeling of Cell Interaction in Angiogenesis: From the Micro- to Macro-scale

    NASA Astrophysics Data System (ADS)

    Pillay, Samara; Maini, Philip; Byrne, Helen

    Solid tumors require a supply of nutrients to grow in size. To this end, tumors induce the growth of new blood vessels from existing vasculature through the process of angiogenesis. In this work, we use a discrete agent-based approach to model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death processes. We use the transition probabilities associated with the discrete models to determine collective cell behavior, in terms of partial differential equations, using a Markov chain and master equation framework. We find that the cell-level dynamics gives rise to a migrating cell front in the form of a traveling wave on the macro-scale. The behavior of this front depends on the cell interactions that are included and the extent to which volume exclusion is taken into account in the discrete micro-scale model. We also find that well-established continuum models of angiogenesis cannot distinguish between certain types of cell behavior on the micro-scale. This may impact drug development strategies based on these models.
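
    A minimal sketch of the discrete-to-continuum idea is given below, assuming a 1-D lattice exclusion process with proliferation (a simplification of the 2-D angiogenesis model; all rates are illustrative). In the mean-field limit this process is described by the Fisher-KPP equation, du/dt = D d2u/dx2 + lambda u(1 - u), whose solutions include the travelling fronts referred to above.

        # Toy 1-D agent-based model with volume exclusion and proliferation.
        # Mean-field occupancy obeys Fisher-KPP, which supports travelling fronts.
        import numpy as np

        rng = np.random.default_rng(0)
        L, steps, p_move, p_prolif = 400, 4000, 1.0, 0.01
        lattice = np.zeros(L, dtype=bool)
        lattice[:20] = True                      # initial column of cells on the left

        for _ in range(steps):
            occupied = np.flatnonzero(lattice)
            for i in rng.permutation(occupied):  # random sequential update
                if rng.random() < p_move:        # unbiased move, blocked if target full
                    j = i + rng.choice([-1, 1])
                    if 0 <= j < L and not lattice[j]:
                        lattice[i], lattice[j] = False, True
                        i = j
                if rng.random() < p_prolif:      # divide into an empty neighbour
                    j = i + rng.choice([-1, 1])
                    if 0 <= j < L and not lattice[j]:
                        lattice[j] = True

        print("front position:", np.flatnonzero(lattice).max())

    Switching off the exclusion check (allowing moves into occupied sites) changes the macroscopic equation the simulation converges to, which is the kind of micro-scale distinction the abstract notes is invisible to well-established continuum models.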

  18. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    NASA Astrophysics Data System (ADS)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale but through a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic and environmental models and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential behaviors are implemented autonomously by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank and are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change over time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can exploit the full potential of the ABM to understand the driving mechanisms of urbanization and provide decision-making support for urban management.

  19. Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales

    NASA Astrophysics Data System (ADS)

    Abiodun, Olanrewaju O.; Guan, Huade; Post, Vincent E. A.; Batelaan, Okke

    2018-05-01

    In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance and are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remotely sensed ET algorithms and distributed hydrological models has provided improved, spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares ET from the MODIS remotely sensed ET dataset (MOD16) with ET estimates from a SWAT hydrological model at graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models is further compared with the coarser-resolution AWRA-L model at the catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000-2005) and a 7-year validation period (2007-2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9% were observed at spatial resolutions of 1, 4, 9, 16 and 25 km2, respectively. Based on the results of the study, a spatial scale of confidence of 4 km2 is suggested for catchment-scale evapotranspiration in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the semi-distributed SWAT model were identified as the principal sources of weaker correlations at higher spatial resolutions.

  1. Foraging Behaviour in Magellanic Woodpeckers Is Consistent with a Multi-Scale Assessment of Tree Quality.

    PubMed

    Vergara, Pablo M; Soto, Gerardo E; Moreira-Arce, Darío; Rodewald, Amanda D; Meneses, Luis O; Pérez-Hernández, Christian G

    2016-01-01

    Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox's proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales.

  2. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical Multiplicative Random Cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that model combinations improve the performance in key statistics, such as the probability distribution of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, compared with single models, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
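
    The microcanonical Multiplicative Random Cascade used for disaggregation can be sketched in a few lines. In the version below, each interval's depth is split between its two halves by a symmetric Beta-distributed weight, so mass is conserved exactly at every branching (the defining microcanonical property); the Beta parameter and the number of cascade levels are illustrative assumptions, not the calibrated values of the study.

        # Minimal microcanonical multiplicative random cascade: each interval's
        # depth is split by a random weight W ~ Beta(a, a), so the sum over the
        # children equals the parent exactly at every level.
        import numpy as np

        rng = np.random.default_rng(1)

        def cascade(depths, levels, a=1.5):
            for _ in range(levels):
                w = rng.beta(a, a, size=depths.size)
                depths = np.column_stack((w * depths, (1.0 - w) * depths)).ravel()
            return depths

        daily_total = np.array([24.0])                # one day, 24 mm
        fine = cascade(daily_total, levels=4)         # 2^4 = 16 sub-intervals
        print(fine.round(2), fine.sum())              # the sum is still exactly 24.0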

  3. Spatial Modeling and Uncertainty Assessment of Fine Scale Surface Processes Based on Coarse Terrain Elevation Data

    NASA Astrophysics Data System (ADS)

    Rasera, L. G.; Mariethoz, G.; Lane, S. N.

    2017-12-01

    Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.

  4. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from surrogate data of much smaller scale than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  5. Still searching for the Holy Grail: on the use of effective soil parameters for Parflow-CLM.

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Schalge, Bernd; Rihani, Jehan; Attinger, Sabine

    2015-04-01

    In recent decades, advances in computer science have led to a growing number of coupled and distributed hydrological models based on Richards' equation. Several studies have been conducted to understand hydrological processes at different spatial and temporal scales, and they have shown promising uses of these types of models in practical applications as well. However, these models are generally applied at scales different from those at which the equation is derived and validated. For this reason, the models are implemented with effective soil parameters that, in principle, should preserve the water fluxes that would have been estimated at the finer resolution scale. In this context, the reduction in spatial discretization becomes a trade-off between model complexity and performance. The aim of the present contribution is to assess the performance of Parflow-CLM implemented at different spatial scales. A virtual experiment based on data available for the Neckar catchment (Germany) is used as a reference at 100x100 m resolution. Different upscaling rules for the soil hydraulic parameters are used to coarsen the model up to 1x1 km. The analysis is carried out on different model outputs, e.g., river discharge, evapotranspiration, soil moisture and groundwater recharge. The effects on the simulation results of soil variability, correlation length and the spatial distribution relative to the water flow direction are discussed. Further research aims to quantify the related uncertainty in model output and the possibility of compensating for model structure inadequacy with data assimilation techniques.

  6. Multi-scale individual-based model of microbial and bioconversion dynamics in aerobic granular sludge.

    PubMed

    Xavier, Joao B; De Kreuk, Merle K; Picioreanu, Cristian; Van Loosdrecht, Mark C M

    2007-09-15

    Aerobic granular sludge is a novel compact biological wastewater treatment technology for the integrated removal of COD (chemical oxygen demand), nitrogen, and phosphate loads. We present here a multiscale model of aerobic granular sludge sequencing batch reactors (GSBR) describing the complex dynamics of populations and nutrient removal. The macro scale describes bulk concentrations and effluent composition for six solutes (oxygen, acetate, ammonium, nitrite, nitrate, and phosphate). A finer scale, that of one granule (1.1 mm in diameter), describes the two-dimensional spatial arrangement of four bacterial groups--heterotrophs, ammonium oxidizers, nitrite oxidizers, and phosphate accumulating organisms (PAO)--using individual-based modeling (IbM) with species-specific kinetic models. The model for PAO includes three internal storage compounds: polyhydroxyalkanoates (PHA), polyphosphate, and glycogen. Simulations of long-term reactor operation show how the microbial population and activity depend on the operating conditions. Short-term dynamics of solute bulk concentrations are also generated, with results comparable to experimental data from lab-scale reactors. Our results suggest that N-removal in GSBR occurs mostly via alternating nitrification/denitrification rather than simultaneous nitrification/denitrification, supporting an alternative strategy to improve N-removal in this promising wastewater treatment process.

  7. Scale effects in the response and failure of fiber reinforced composite laminates loaded in tension and in flexure

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Kellas, Sotiris; Morton, John

    1992-01-01

    The feasibility of using scale model testing for predicting the full-scale behavior of flat composite coupons loaded in tension and beam-columns loaded in flexure is examined. Classical laws of similitude are applied to fabricate and test replica model specimens to identify scaling effects in the load response, strength, and mode of failure. Experiments were performed on graphite-epoxy composite specimens having different laminate stacking sequences and a range of scaled sizes. From the experiments it was deduced that the elastic response of scaled composite specimens was independent of size. However, a significant scale effect in strength was observed. In addition, a transition in failure mode was observed among scaled specimens of certain laminate stacking sequences. A Weibull statistical model and a fracture mechanics based model were applied to predict the strength scale effect since standard failure criteria cannot account for the influence of absolute specimen size on strength.
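
    The Weibull statistical model mentioned above predicts a strength knockdown with increasing stressed volume. A minimal sketch, assuming geometric similarity and a hypothetical Weibull modulus and coupon strength:

        # Weakest-link Weibull size effect: for two geometrically similar specimens
        # with stressed volumes V1 and V2, equal failure probability implies
        #   sigma1 / sigma2 = (V2 / V1) ** (1 / m),
        # so larger specimens are statistically weaker. All numbers are made up.
        m = 20.0                       # Weibull modulus (illustrative for a laminate)
        scale = 4.0                    # full-scale prototype is 4x the model size
        volume_ratio = scale**3        # geometric similarity: volume ~ size^3

        strength_model = 600.0         # MPa, measured on the sub-scale coupon
        strength_prototype = strength_model * volume_ratio ** (-1.0 / m)
        print(strength_prototype)      # ~487 MPa: the predicted strength knockdown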

  8. Developing and testing a global-scale regression model to quantify mean annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying the mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which may be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, ranging in size from 2 to 10^6 km2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment-averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
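
    The general form of such a regression model can be sketched as an ordinary-least-squares fit in log space. The Python example below recovers power-law coefficients from synthetic data; the predictors, coefficients and noise level are invented for illustration and are not the published model.

        # Sketch of an empirical power-law regression for mean annual flow (MAF),
        #   ln MAF = b0 + b1*ln(area) + b2*ln(precip) + b3*T,
        # fitted with NumPy least squares on synthetic catchments.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 500
        area = 10 ** rng.uniform(0, 6, n)          # km^2
        precip = rng.uniform(300, 2500, n)         # mm/yr
        temp = rng.uniform(-5, 25, n)              # deg C

        # Synthetic "truth" to recover: MAF ~ area * precip^1.8 * exp(-0.02*T)
        maf = 1e-9 * area * precip**1.8 * np.exp(-0.02 * temp) * rng.lognormal(0, 0.3, n)

        X = np.column_stack([np.ones(n), np.log(area), np.log(precip), temp])
        y = np.log(maf)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
        print(beta.round(3), round(r2, 3))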

  9. Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data

    USGS Publications Warehouse

    McKenzie, D.; Peterson, D.L.; Agee, James K.

    2000-01-01

    Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.
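
    The quality weighting described above maps naturally onto sample weights in a regression tree. A minimal sketch with scikit-learn, in which the predictors, weights and data are hypothetical stand-ins for the georeferenced fire history database:

        # Quality-weighted regression tree for fire return intervals, in the
        # spirit of the CART models described above; everything here is synthetic.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(3)
        n = 300
        elevation = rng.uniform(200, 2500, n)          # m
        precip = rng.uniform(250, 2000, n)             # mm/yr
        veg_class = rng.integers(0, 5, n)              # coded potential vegetation

        # Synthetic fire return interval increasing with elevation and precipitation.
        fri = 5 + 0.05 * elevation + 0.04 * precip + rng.normal(0, 15, n)

        # Down-weight poor-quality records (e.g., single-tree, non-cross-dated sites).
        quality = rng.choice([0.25, 1.0], size=n, p=[0.3, 0.7])

        X = np.column_stack([elevation, precip, veg_class])
        tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10)
        tree.fit(X, fri, sample_weight=quality)
        print(tree.predict([[1500.0, 800.0, 2]]))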

  10. A tool for multi-scale modelling of the renal nephron

    PubMed Central

    Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.

    2011-01-01

    We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210

  11. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview of scale problems in groundwater flow is presented, with emphasis on the upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches. The focus is on the essential aspects, which may be an advantage compared with the occasionally extremely extensive summaries in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) the spatial extent and geometry of the hydrogeological system, (ii) the spatial continuity and granularity of both natural and man-made objects within the system, (iii) the duration of the system and (iv) the continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - the scale of pores, meso-scale - the scale of a laboratory sample, macro-scale - the scale of typical blocks in numerical models of groundwater flow, local-scale - the scale of an aquifer/aquitard and regional-scale - the scale of a series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. The latest results on downscaling from the block scale to the sample scale are also briefly discussed.
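
    For the deterministic upscaling of hydraulic conductivity discussed above, the classical results can be stated compactly: the harmonic mean of sample-scale conductivities (flow normal to layering) and the arithmetic mean (flow parallel to layering) bound the effective block value, with the geometric mean in between (and exact for 2-D isotropic lognormal media). A small numerical illustration with made-up sample values:

        # Classical deterministic bounds for upscaling hydraulic conductivity
        # from sample-scale values to a coarse numerical block.
        import numpy as np

        k_samples = np.array([1e-6, 5e-5, 2e-4, 8e-6, 3e-5])   # m/s, sample-scale

        k_arith = k_samples.mean()                         # upper (Wiener) bound
        k_harm = k_samples.size / np.sum(1.0 / k_samples)  # lower (Wiener) bound
        k_geom = np.exp(np.mean(np.log(k_samples)))        # common working estimate

        print(f"harmonic {k_harm:.2e} <= geometric {k_geom:.2e} <= arithmetic {k_arith:.2e}")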

  12. The small length scale effect for a non-local cantilever beam: a paradox solved.

    PubMed

    Challamel, N; Wang, C M

    2008-08-27

    Non-local continuum mechanics allows one to account for the small length scale effect that becomes significant when dealing with microstructures or nanostructures. This paper presents some simplified non-local elastic beam models, for the bending analyses of small scale rods. Integral-type or gradient non-local models abandon the classical assumption of locality, and admit that stress depends not only on the strain value at that point but also on the strain values of all points on the body. There is a paradox still unresolved at this stage: some bending solutions of integral-based non-local elastic beams have been found to be identical to the classical (local) solution, i.e. the small scale effect is not present at all. One example is the Euler-Bernoulli cantilever nanobeam model with a point load which has application in microelectromechanical systems and nanoelectromechanical systems as an actuator. In this paper, it will be shown that this paradox may be overcome with a gradient elastic model as well as an integral non-local elastic model that is based on combining the local and the non-local curvatures in the constitutive elastic relation. The latter model comprises the classical gradient model and Eringen's integral model, and its application produces small length scale terms in the non-local elastic cantilever beam solution.
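
    The origin of the paradox can be sketched in a few lines (standard beam sign conventions; this is a compressed reading of the argument, not the paper's full derivation). For a statically determinate cantilever with a tip load, the bending moment is linear in x, so the gradient term in Eringen's differential constitutive law vanishes identically and the local solution is recovered:

        % Eringen-type differential (stress-gradient) constitutive law for bending:
        M(x) - (e_0 a)^2 \, M''(x) = EI \, w''(x)
        % Cantilever of length L with tip load P is statically determinate:
        M(x) = -P \, (L - x) \;\Rightarrow\; M''(x) = 0
        % Hence the non-local term drops out and the classical law survives:
        M(x) = EI \, w''(x)

    Constitutive relations that combine local and non-local curvature, as proposed in the paper, reintroduce a genuine small length scale contribution for this load case.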

  13. Application of SIR-C SAR to Hydrology

    NASA Technical Reports Server (NTRS)

    Engman, Edwin T.; ONeill, Peggy; Wood, Eric; Pauwels, Valentine; Hsu, Ann; Jackson, Tom; Shi, J. C.; Prietzsch, Corinna

    1996-01-01

    The progress, results and future plans regarding the following objectives are presented: (1) determine and compare soil moisture patterns within one or more humid watersheds using SAR data, ground-based measurements, and hydrologic modeling; (2) use radar data to characterize the hydrologic regime within a catchment and to identify the runoff-producing characteristics of humid-zone watersheds; and (3) use radar data as the basis for scaling up from small-scale, near-point process models to the larger-scale water balance models needed to define and quantify the land phase of GCMs (global circulation models).

  14. A two-scale model for dynamic damage evolution

    NASA Astrophysics Data System (ADS)

    Keita, Oumar; Dascalu, Cristian; François, Bertrand

    2014-03-01

    This paper presents a new micro-mechanical damage model accounting for inertial effect. The two-scale damage model is fully deduced from small-scale descriptions of dynamic micro-crack propagation under tensile loading (mode I). An appropriate micro-mechanical energy analysis is combined with homogenization based on asymptotic developments in order to obtain the macroscopic evolution law for damage. Numerical simulations are presented in order to illustrate the ability of the model to describe known behaviors like size effects for the structural response, strain-rate sensitivity, brittle-ductile transition and wave dispersion.

  15. Comparing niche- and process-based models to reduce prediction uncertainty in species range shifts under climate change.

    PubMed

    Morin, Xavier; Thuiller, Wilfried

    2009-05-01

    Obtaining reliable predictions of species range shifts under climate change is a crucial challenge for ecologists and stakeholders. At the continental scale, niche-based models have been widely used in the last 10 years to predict the potential impacts of climate change on species distributions all over the world, although these models do not include any mechanistic relationships. In contrast, species-specific, process-based predictions remain scarce at the continental scale. This is regrettable because to secure relevant and accurate predictions it is always desirable to compare predictions derived from different kinds of models applied independently to the same set of species and using the same raw data. Here we compare predictions of range shifts under climate change scenarios for 2100 derived from niche-based models with those of a process-based model for 15 North American boreal and temperate tree species. A general pattern emerged from our comparisons: niche-based models tend to predict a stronger level of extinction and a greater proportion of colonization than the process-based model. This result likely arises because niche-based models do not take phenotypic plasticity and local adaptation into account. Nevertheless, as the two kinds of models rely on different assumptions, their complementarity is revealed by common findings. Both modeling approaches highlight a major potential limitation on species tracking their climatic niche because of migration constraints and identify similar zones where species extirpation is likely. Such convergent predictions from models built on very different principles provide a useful way to offset uncertainties at the continental scale. This study shows that the use in concert of both approaches with their own caveats and advantages is crucial to obtain more robust results and that comparisons among models are needed in the near future to gain accuracy regarding predictions of range shifts under climate change.

  16. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation at the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more so in upscaling procedures for meso-scale applications, where the parameters must be estimated on an area-wide regional basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied at the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling at the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models like BT-FLEMO, used in this study, which inherently provide uncertainty information, is the way forward.

  17. Dynamic and Thermal Turbulent Time Scale Modelling for Homogeneous Shear Flows

    NASA Technical Reports Server (NTRS)

    Schwab, John R.; Lakshminarayana, Budugur

    1994-01-01

    A new turbulence model, based upon dynamic and thermal turbulent time scale transport equations, is developed and applied to homogeneous shear flows with constant velocity and temperature gradients. The new model comprises transport equations for k, the turbulent kinetic energy; tau, the dynamic time scale; k(sub theta), the fluctuating temperature variance; and tau(sub theta), the thermal time scale. It offers conceptually parallel modeling of the dynamic and thermal turbulence at the two equation level, and eliminates the customary prescription of an empirical turbulent Prandtl number, Pr(sub t), thus permitting a more generalized prediction capability for turbulent heat transfer in complex flows and geometries. The new model also incorporates constitutive relations, based upon invariant theory, that allow the effects of nonequilibrium to modify the primary coefficients for the turbulent shear stress and heat flux. Predictions of the new model, along with those from two other similar models, are compared with experimental data for decaying homogeneous dynamic and thermal turbulence, homogeneous turbulence with constant temperature gradient, and homogeneous turbulence with constant temperature gradient and constant velocity gradient. The new model offers improvement in agreement with the data for most cases considered in this work, although it was no better than the other models for several cases where all the models performed poorly.

  18. Scaling an in situ network for high resolution modeling during SMAPVEX15

    NASA Astrophysics Data System (ADS)

    Coopersmith, E. J.; Cosh, M. H.; Jacobs, J. M.; Jackson, T. J.; Crow, W. T.; Holifield Collins, C.; Goodrich, D. C.; Colliander, A.

    2015-12-01

    Among the greatest challenges in the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher-resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling by coupling in situ networks, temporary networks, and aerial mapping of soil moisture. During the Soil Moisture Active Passive Validation Experiment in 2015 (SMAPVEX15) in and around the USDA-ARS Walnut Gulch Experimental Watershed and LTAR site in southeastern Arizona, USA, a high-density network of soil moisture stations was deployed across a sparse, permanent in situ network in coordination with intensive soil moisture sampling and an aircraft campaign. This watershed is also densely instrumented with precipitation gauges (one gauge per 0.57 km2) to monitor the North American Monsoon System, which dominates the hydrologic cycle during the summer months in this region. Using the precipitation and soil moisture time series provided, a physically based model is calibrated that will provide estimates at the 3 km, 9 km, and 36 km scales. The results from this model will be compared with the point-scale gravimetric samples, the aircraft-based sensor, and the satellite-based products retrieved from NASA's Soil Moisture Active Passive mission.

  19. Linking a modified EPIC-based growth model (UPGM) with a component-based watershed model (AGES-W)

    USDA-ARS?s Scientific Manuscript database

    Agricultural models and decision support systems (DSS) for assessing water use and management are increasingly being applied to diverse geographic regions at different scales. This requires models that can simulate different crops, however, very few plant growth models are available that “easily” ...

  20. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  1. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting-state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole-brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators for "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities of the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.

  2. Quality by design: scale-up of freeze-drying cycles in pharmaceutical industry.

    PubMed

    Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Rastelli, Massimo

    2013-09-01

    This paper shows the application of mathematical modeling to scale up a cycle, developed with lab-scale equipment, to two different production units. The method is based on a simplified model of the process parameterized with experimentally determined heat and mass transfer coefficients. In this study, the overall heat transfer coefficient between product and shelf was determined using the gravimetric procedure, while the dried-product resistance to vapor flow was determined through the pressure rise test technique. Once the model parameters were determined, the freeze-drying cycle of a parenteral product was developed via dynamic design space for a lab-scale unit. Mathematical modeling was then used to scale up this cycle to the production equipment. In this way, appropriate values were determined for the processing conditions, allowing the product dynamics observed in the small-scale freeze-dryer to be replicated in the industrial unit. This study also showed how inter-vial variability, as well as model parameter uncertainty, can be taken into account during scale-up calculations.
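
    The type of simplified model referred to above is commonly a quasi-steady heat/mass balance over the vial. The Python sketch below solves such a balance for the product temperature by bisection; the heat transfer coefficient Kv, the dried-layer resistance Rp and the operating conditions are illustrative numbers, not values from the study.

        # Quasi-steady freeze-drying balance (a sketch, not the authors' model):
        # heat delivered through the vial equals heat consumed by sublimation,
        #   Kv * (T_shelf - Tp) = dHs * (P_ice(Tp) - P_c) / Rp
        import math

        def p_ice(T):
            """Vapour pressure over ice (Pa); standard correlation, T in kelvin."""
            return math.exp(-6139.9 / T + 28.8912)

        Kv = 18.0        # vial heat transfer coefficient, W m-2 K-1 (made up)
        Rp = 1.2e5       # dried-layer resistance, Pa m2 s kg-1 (made up)
        dHs = 2.8e6      # heat of sublimation, J kg-1
        T_shelf = 263.0  # shelf temperature, K
        P_c = 10.0       # chamber pressure, Pa

        def residual(Tp):
            return Kv * (T_shelf - Tp) - dHs * (p_ice(Tp) - P_c) / Rp

        lo, hi = 210.0, T_shelf          # bisection for the product temperature
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)

        Tp = 0.5 * (lo + hi)
        flux = (p_ice(Tp) - P_c) / Rp    # sublimation flux, kg m-2 s-1
        print(Tp, flux)

    Re-running the same balance with the production unit's Kv and Rp is, in essence, how processing conditions can be chosen so that the industrial unit reproduces the product temperature history observed at lab scale.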

  3. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  4. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to discretize the grid representations appropriately. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that the pore-scale models were all able to simulate the experiments satisfactorily. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.

  5. The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data

    DOE PAGES

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; ...

    2016-01-26

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m-2; 0.65), followed closely by GLEAM (0.68; 64 W m-2; 0.62), with values in parentheses representing the R^2, RMSD and Nash–Sutcliffe efficiency (NSE), respectively. PM-Mu (0.51; 78 W m-2; 0.45) tended to underestimate fluxes, while SEBS (0.72; 101 W m-2; 0.24) overestimated values relative to observations. A focused analysis across specific biome types and climate zones showed considerable variability in the performance of all models, with no single model consistently able to outperform any other. Results also indicated that the global gridded data tended to reduce the performance for all of the studied models when compared to the tower data, likely a response to scale mismatch and issues related to forcing quality. Rather than relying on any single model simulation, the spatial and temporal variability at both the tower- and grid-scale highlighted the potential benefits of developing an ensemble or blended evaporation product for global-scale LandFlux applications. Hence, challenges related to the robust assessment of the LandFlux product are also discussed.
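
    The three skill scores quoted in parentheses above (R^2, RMSD and NSE) are straightforward to compute against tower observations. A self-contained Python sketch with synthetic flux series:

        # Skill scores for a simulated flux series against tower observations;
        # both series here are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(5)
        obs = np.maximum(rng.normal(120, 60, 1000), 0.0)   # latent heat flux, W m-2
        sim = 0.9 * obs + rng.normal(10, 40, 1000)         # a hypothetical model

        def skill(sim, obs):
            r2 = np.corrcoef(sim, obs)[0, 1] ** 2
            rmsd = np.sqrt(np.mean((sim - obs) ** 2))
            nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return r2, rmsd, nse

        print(["%.2f" % v for v in skill(sim, obs)])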

  7. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which may be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with that of existing GHMs. Yet our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  8. Full-Scale Numerical Modeling of Turbulent Processes in the Earth's Ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliasson, B.; Stenflo, L.; Department of Physics, Linkoeping University, SE-581 83 Linkoeping

    2008-10-15

    We present a full-scale simulation study of ionospheric turbulence by means of a generalized Zakharov model based on the separation of variables into high-frequency and slow time scales. The model includes realistic length scales of the ionospheric profile and of the electromagnetic and electrostatic fields, and uses ionospheric plasma parameters relevant for high-latitude radio facilities such as Eiscat and HAARP. A nested grid numerical method has been developed to resolve the different length-scales, while avoiding severe restrictions on the time step. The simulation demonstrates the parametric decay of the ordinary mode into Langmuir and ion-acoustic waves, followed by a Langmuir wave collapse and short-scale caviton formation, as observed in ionospheric heating experiments.

  9. Automated riverine landscape characterization: GIS-based tools for watershed-scale research, assessment, and management.

    PubMed

    Williams, Bradley S; D'Amico, Ellen; Kastens, Jude H; Thorp, James H; Flotemersch, Joseph E; Thoms, Martin C

    2013-09-01

    River systems consist of hydrogeomorphic patches (HPs) that emerge at multiple spatiotemporal scales. Functional process zones (FPZs) are HPs that exist at the river valley scale and are important strata for framing whole-watershed research questions and management plans. Hierarchical classification procedures aid in HP identification by grouping sections of river based on their hydrogeomorphic character; however, collecting the data required for such procedures with field-based methods is often impractical. We developed a set of GIS-based tools that facilitate rapid, low-cost riverine landscape characterization and FPZ classification. Our tools, termed RESonate, consist of a custom toolbox designed for ESRI ArcGIS®. RESonate automatically extracts 13 hydrogeomorphic variables from readily available geospatial datasets and datasets derived from modeling procedures. An advanced 2D flood model, FLDPLN, designed for MATLAB®, is used to determine valley morphology by systematically flooding river networks. When used in conjunction with other modeling procedures, RESonate and FLDPLN can assess the character of large river networks quickly and at very low cost. Here we describe tool and model functions in addition to their benefits, limitations, and applications.

  10. Downscaling Soil Moisture in the Southern Great Plains Through a Calibrated Multifractal Model for Land Surface Modeling Applications

    NASA Technical Reports Server (NTRS)

    Mascaro, Giuseppe; Vivoni, Enrique R.; Deidda, Roberto

    2010-01-01

    Accounting for small-scale spatial heterogeneity of soil moisture (theta) is required to enhance the predictive skill of land surface models. In this paper, we present the results of the development, calibration, and performance evaluation of a downscaling model based on multifractal theory using aircraft-based (800 m) theta estimates collected during the Southern Great Plains experiment in 1997 (SGP97). We first demonstrate the presence of scale invariance and multifractality in theta fields of nine square domains of size 25.6 × 25.6 km, approximately a satellite footprint. Then, we estimate the downscaling model parameters and evaluate the model performance using a set of different calibration approaches. Results reveal that small-scale theta distributions are adequately reproduced across the entire region when coarse predictors include a dynamic component (i.e., the spatial mean soil moisture) and a stationary contribution accounting for static features (i.e., topography, soil texture, vegetation). For wet conditions, we found similar multifractal properties of soil moisture across all domains, which we ascribe to the signature of rainfall spatial variability. For drier states, the theta fields in the northern domains are more intermittent than in the southern domains, likely because of differences in the distribution of vegetation coverage. Through our analyses, we propose a regional downscaling relation for coarse, satellite-based soil moisture estimates, based on ancillary information (static and dynamic landscape features), which can be used in the study area to characterize statistical properties of the small-scale theta distribution required by land surface models and data assimilation systems.
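
    The generic building block behind such multifractal downscaling is a multiplicative random cascade; the sketch below disaggregates one coarse soil-moisture pixel with mean-one log-normal weights. This is not the calibrated SGP97 model — the weight distribution and sigma are illustrative.

```python
import numpy as np

def multiplicative_cascade(coarse_value, levels, sigma=0.4, seed=None):
    """Disaggregate one coarse pixel into a 2**levels x 2**levels field
    using a log-normal multiplicative cascade with mean-one weights."""
    rng = np.random.default_rng(seed)
    field = np.array([[float(coarse_value)]])
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))                 # refine grid by 2
        w = rng.lognormal(-0.5 * sigma**2, sigma, field.shape)  # E[w] = 1
        field *= w
    return field
```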

  11. Factor structure and internal reliability of an exercise health belief model scale in a Mexican population.

    PubMed

    Villar, Oscar Armando Esparza-Del; Montañez-Alvarado, Priscila; Gutiérrez-Vega, Marisela; Carrillo-Saucedo, Irene Concepción; Gurrola-Peña, Gloria Margarita; Ruvalcaba-Romero, Norma Alicia; García-Sánchez, María Dolores; Ochoa-Alcaraz, Sergio Gabriel

    2017-03-01

    Mexico is one of the countries with the highest rates of overweight and obesity around the world, with 68.8% of men and 73% of women overweight or obese. This is a public health problem, since there are several health-related consequences of not exercising, such as cardiovascular disease and some types of cancer. These problems can be prevented by promoting exercise, so it is important to evaluate models of health behaviors to achieve this goal. Among several models, the Health Belief Model (HBM) is one of the most studied for promoting health-related behaviors. This study validates the first exercise scale based on the HBM in Mexicans, with the objective of studying and analyzing this model in Mexico. Items for the scale, called the Exercise Health Belief Model Scale (EHBMS), were developed by a health research team, and the items were then administered to a sample of 746 participants, male and female, from five cities in Mexico. The factor structure of the items was analyzed with an exploratory factor analysis and the internal reliability with Cronbach's alpha. The exploratory factor analysis reported the expected factor structure based on the HBM. The KMO index (0.92) and Bartlett's sphericity test (p < 0.01) indicated that the sample was adequate for factor analysis. Items had adequate factor loadings, ranging from 0.31 to 0.92, and the internal consistencies of the factors were also acceptable, with alpha values ranging from 0.67 to 0.91. The EHBMS is a validated scale that can be used to measure exercise based on the HBM in Mexican populations.
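
    For reference, Cronbach's alpha — the internal-consistency statistic reported above — can be computed from an items matrix as follows (a generic sketch, not the authors' code).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances / total_variance)
```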

  12. Application of physical scaling towards downscaling climate model precipitation data

    NASA Astrophysics Data System (ADS)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

    The physical scaling (SP) method downscales climate model data to local or regional scales taking into consideration physical characteristics of the area under analysis. In this study, multiple SP-method-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that SP-method-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best-performing models are thereafter used to downscale future precipitation projections from three global circulation models (GCMs) following two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4-day), and long (more than 5-day) precipitation events is projected.

  13. Modelling disease outbreaks in realistic urban social networks

    NASA Astrophysics Data System (ADS)

    Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan

    2004-05-01

    Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
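
    The people-locations structure described above is a bipartite graph; the toy sketch below (hypothetical visit edges, networkx) shows the hub-ranking step that motivates placing outbreak-detection sensors at high-degree locations.

```python
import networkx as nx

# Hypothetical (person, location) visit edges of the bipartite contact graph.
visits = [("p1", "school"), ("p2", "school"), ("p2", "mall"),
          ("p3", "mall"), ("p3", "office"), ("p4", "office")]

G = nx.Graph()
people = {p for p, _ in visits}
locations = {loc for _, loc in visits}
G.add_nodes_from(people, bipartite=0)
G.add_nodes_from(locations, bipartite=1)
G.add_edges_from(visits)

# Rank locations by degree; in a scale-free locations graph the top
# hubs are the natural sites for sensors.
hubs = sorted(locations, key=G.degree, reverse=True)
print(hubs[:2])
```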

  14. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with a droplet mass loading of 0.2. The gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  15. Nonlinear Analysis and Scaling Laws for Noncircular Composite Structures Subjected to Combined Loads

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Rose, Cheryl A.; Starnes, James H., Jr.

    2001-01-01

    Results from an analytical study of the response of a built-up, multi-cell noncircular composite structure subjected to combined internal pressure and mechanical loads are presented. Nondimensional parameters and scaling laws based on a first-order shear-deformation plate theory are derived for this noncircular composite structure. The scaling laws are used to design sub-scale structural models for predicting the structural response of a full-scale structure representative of a portion of a blended-wing-body transport aircraft. Because of the complexity of the full-scale structure, some of the similitude conditions are relaxed for the sub-scale structural models. Results from a systematic parametric study are used to determine the effects of relaxing selected similitude conditions on the sensitivity of the effectiveness of using the sub-scale structural model response characteristics for predicting the full-scale structure response characteristics.

  16. Modeling the intersections of Food, Energy, and Water in climate-vulnerable Ethiopia with an application to small-scale irrigation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sankaranarayanan, S.; Zaitchik, B. F.; Siddiqui, S.

    2017-12-01

    Africa is home to some of the most climate-vulnerable populations in the world. Energy and agricultural development have diverse impacts on the region's food security and economic well-being from the household to the national level, particularly considering climate variability and change. Our ultimate goal is to understand coupled Food-Energy-Water (FEW) dynamics across spatial scales in order to quantify the sensitivity of critical human outcomes to FEW development strategies in Ethiopia. We are developing bottom-up and top-down multi-scale models, spanning local, sub-national and national scales, to capture the FEW linkages across communities and climatic adaptation zones. The focus of this presentation is the sub-national-scale multi-player micro-economic (MME) partial-equilibrium model with coupled food and energy sectors for Ethiopia. With fixed large-scale economic, demographic, and resource factors from the national-scale computable general equilibrium (CGE) model and inferences of behavior parameters from the local-scale agent-based model (ABM), the MME studies how shocks such as drought (crop failure) and the development of resilience technologies would influence the FEW system at a sub-national scale. The MME model is based on aggregating individual optimization problems for relevant players. It includes production, storage, and consumption of food and energy at spatially disaggregated zones, and transportation in between with endogenously modeled infrastructure. The aggregated players in each zone have different roles, such as crop producers, storage managers, and distributors, who make decisions according to their own but interdependent objective functions. The food and energy supply chain across zones is therefore captured. Ethiopia is dominated by rain-fed agriculture, with only 2% of farmland irrigated. Small-scale irrigation has been promoted as a resilience technology that could potentially play a critical role in food security and economic well-being in Ethiopia, but it also intersects with energy and water consumption. Here, we focus on the energy usage for small-scale irrigation and the collective impact on crop production and water resources across zones in the MME model.

  17. Posttraumatic Stress Disorder: Diagnostic Data Analysis by Data Mining Methodology

    PubMed Central

    Marinić, Igor; Supek, Fran; Kovačić, Zrnka; Rukavina, Lea; Jendričko, Tihana; Kozarić-Kovačić, Dragica

    2007-01-01

    Aim To use data mining methods in assessing diagnostic symptoms in posttraumatic stress disorder (PTSD). Methods The study included 102 inpatients: 51 with a diagnosis of PTSD and 51 with psychiatric diagnoses other than PTSD. Several models for predicting diagnosis were built using the random forest classifier, one of the intelligent data analysis methods. The first prediction model was based on a structured psychiatric interview, the second on psychiatric scales (Clinician-Administered PTSD Scale – CAPS, Positive and Negative Syndrome Scale – PANSS, Hamilton Anxiety Scale – HAMA, and Hamilton Depression Scale – HAMD), and the third on combined data from both sources. Additional models placing more weight on one of the classes (PTSD or non-PTSD) were trained, and prototypes representing subgroups in the classes constructed. Results The first model was the most relevant for distinguishing the PTSD diagnosis from comorbid diagnoses such as neurotic, stress-related, and somatoform disorders. The second model pointed out the scores obtained on the CAPS and the additional PANSS scales, together with comorbid diagnoses of neurotic, stress-related, and somatoform disorders, as most relevant. In the third model, psychiatric scales and the same group of comorbid diagnoses were found to be most relevant. Specialized models placing more weight on either the PTSD or non-PTSD class were able to better predict their targeted diagnoses at some expense of overall accuracy. Class subgroup prototypes mainly differed in values achieved on psychiatric scales and frequency of comorbid diagnoses. Conclusion Our work demonstrated the applicability of data mining methods for the analysis of structured psychiatric data for PTSD. In all models, the group of comorbid diagnoses, including neurotic, stress-related, and somatoform disorders, surfaced as important. The important attributes of the data based on the structured psychiatric interview were the current symptoms and conditions such as presence and degree of disability, hospitalizations, and duration of military service during the war, while CAPS total scores, symptoms of increased arousal, and PANSS additional criteria scores were indicated as relevant from the psychiatric symptom scales. PMID:17436383
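
    A minimal sketch of the class-weighting idea behind the specialized models mentioned above, using scikit-learn's random forest; the feature matrix, labels and weight values are placeholders, not the study's data or settings.

```python
from sklearn.ensemble import RandomForestClassifier

def fit_ptsd_model(X, y, favour_ptsd=False):
    """Random forest for binary PTSD/non-PTSD prediction; class_weight
    shifts the error trade-off toward one class, mimicking models that
    place more weight on PTSD (the weights here are illustrative)."""
    weights = {1: 3, 0: 1} if favour_ptsd else "balanced"
    clf = RandomForestClassifier(n_estimators=500, class_weight=weights,
                                 random_state=0)
    return clf.fit(X, y)
```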

  18. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    PubMed Central

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading, and chondrocytes within the cartilage are not an exception. However, understanding how joint-level loads relate to cell-level deformations, e.g., in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint-mechanics-based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based representation of the middle zone of tibiofemoral cartilage (an 11-cell model of the same volume). In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11-cell case, all cells experienced less deformation than in the single-cell case, and deformation varied markedly among cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence, which allowed for exploitation of distributed-memory computing architecture. The method's generalized nature also allows for substitution of any macro-scale and/or micro-scale model, providing application for other multi-scale continuum mechanics problems. PMID:22649535

  19. Vertical Scaling with the Rasch Model Utilizing Default and Tight Convergence Settings with WINSTEPS and BILOG-MG

    ERIC Educational Resources Information Center

    Custer, Michael; Omar, Md Hafidz; Pomplun, Mark

    2006-01-01

    This study compared vertical scaling results for the Rasch model from BILOG-MG and WINSTEPS. The item and ability parameters for the simulated vocabulary tests were scaled across 11 grades; kindergarten through 10th. Data were based on real data and were simulated under normal and skewed distribution assumptions. WINSTEPS and BILOG-MG were each…

  20. A hybrid modeling with data assimilation to evaluate human exposure level

    NASA Astrophysics Data System (ADS)

    Koo, Y. S.; Cheong, H. K.; Choi, D.; Kim, A. L.; Yun, H. Y.

    2015-12-01

    Exposure models are designed to better represent human contact with PM (particulate matter) and other air pollutants such as CO, SO2, O3, and NO2. The exposure concentrations of air pollutants to humans are determined by long-range transport at global and regional scales, from Europe and China, as well as by local emissions from urban and road vehicle sources. To assess the exposure level in detail, the multiple-scale influence from background to local sources should be considered. A hybrid air quality modeling methodology combining a grid-based chemical transport model with a local plume dispersion model was used to provide spatially and temporally resolved air quality concentrations for human exposure levels in Korea. In the hybrid modeling approach, concentrations from a grid-based chemical transport model and a local plume dispersion model are added to provide contributions from photochemical interactions, long-range (regional) transport and local-scale dispersion. CAMx (Comprehensive Air quality Model with extensions) was used for the background concentrations from anthropogenic and natural emissions in East Asia, including Korea, while the road-level dispersion of vehicle emissions was calculated with the CALPUFF model. The total exposure level of the pollutants was finally assessed by summing the background and road contributions. In the hybrid modeling, a data assimilation method based on optimal interpolation was applied to overcome the discrepancies between the model-predicted concentrations and observations; the observations were taken from the air quality monitoring stations in Korea. The spatial resolution of the hybrid model was 50 m for the Seoul Metropolitan Area. This example clearly demonstrates that the exposure level can be estimated at fine scale for exposure assessment by using the hybrid modeling approach with data assimilation.
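
    The optimal-interpolation update used for such data assimilation has a standard closed form; the sketch below shows a single analysis step under assumed background (B) and observation (R) error covariances — the study's operational configuration is not reproduced here.

```python
import numpy as np

def optimal_interpolation(xb, yo, H, B, R):
    """One OI analysis step: blend the background field xb (model
    concentrations) with observations yo, given observation operator H,
    background error covariance B and observation error covariance R."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (yo - H @ xb)                  # analysis field
```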

  1. Application of a flexible lattice Boltzmann method based simulation tool for modelling physico-chemical processes at different scales

    NASA Astrophysics Data System (ADS)

    Patel, Ravi A.; Perko, Janez; Jacques, Diederik

    2017-04-01

    Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the relevant spatial and temporal scales on which we need to provide information are larger than the scale where the processes actually occur. Usual techniques used to deal with these problems assume the existence of a representative elementary volume (REV). However, in order to understand the behavior at larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to limitations of resources (time, memory), the downscaling can only be carried down to a certain lower scale. At this lower scale, several scales may still co-exist: the scale which can be explicitly described, and a scale which needs to be conceptualized by effective properties. Hence, models which are supposed to provide effective properties on relevant scales should be flexible enough to represent complex pore structure by explicit geometry on the one hand, and differently defined processes (e.g., by effective properties) which emerge at the lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which makes it possible to account for a wide range of geochemical reaction networks through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability and effective reaction rates at the continuum scale based on the pore-scale geometry.
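
    As a toy stand-in for the transport solver described above, the sketch below implements a minimal D1Q3 lattice Boltzmann scheme for pure diffusion of a scalar, with BGK relaxation and periodic boundaries; the actual tool couples advection-diffusion to a geochemical solver, which is not attempted here.

```python
import numpy as np

def lbm_d1q3_diffusion(c0, steps, tau=1.0):
    """Minimal D1Q3 lattice Boltzmann solver for 1-D diffusion of a
    concentration field c0, with BGK collision and periodic streaming;
    the diffusivity is cs^2 * (tau - 1/2) with cs^2 = 1/3."""
    w = np.array([2/3, 1/6, 1/6])           # D1Q3 lattice weights
    f = np.outer(w, np.asarray(c0, float))  # equilibrium initialisation
    for _ in range(steps):
        c = f.sum(axis=0)                   # macroscopic concentration
        f += (np.outer(w, c) - f) / tau     # BGK collision
        f[1] = np.roll(f[1], 1)             # stream right
        f[2] = np.roll(f[2], -1)            # stream left
    return f.sum(axis=0)
```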

  2. Hybrid network modeling and the effect of image resolution on digitally-obtained petrophysical and two-phase flow properties

    NASA Astrophysics Data System (ADS)

    Aghaei, A.

    2017-12-01

    Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, such as in carbonates and tight sandstones. Multi-scale imaging and construction of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.

  3. Use of an uncertainty analysis for genome-scale models as a prediction tool for microbial growth processes in subsurface environments.

    PubMed

    Klier, Christine

    2012-03-06

    The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.
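
    The Michaelis-Menten parametrization discussed above maps an external substrate concentration to a specific uptake rate, which can then bound the corresponding exchange flux; the sketch and parameter values below are illustrative, not those of the P. putida model.

```python
def mm_uptake_bound(v_max, K_m, substrate_conc):
    """Michaelis-Menten specific uptake rate, usable as an upper bound
    on an exchange flux in a constraint-based model; units follow v_max
    (e.g. mmol gDW^-1 h^-1), with K_m and concentration in mM."""
    return v_max * substrate_conc / (K_m + substrate_conc)

# Example: bound the toluene uptake flux at 0.1 mM external toluene.
v_toluene = mm_uptake_bound(v_max=15.0, K_m=0.05, substrate_conc=0.1)
```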

  4. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.

  5. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535

  6. Development of the Internet addiction scale based on the Internet Gaming Disorder criteria suggested in DSM-5.

    PubMed

    Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin

    2014-09-01

    This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnosis criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorder, 5th edition (DSM-5). Items based on the IGD diagnosis criteria were developed using items of the previous Internet addiction scales. Data were collected from a community sample. The data were divided into two sets, and confirmatory factor analysis (CFA) was performed repeatedly. The model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. The internal consistency reliability was generally good. The items that showed significantly low correlation values based on the item-total correlation of each factor were excluded. After the first CFA was performed, some factors and items were excluded. Seven factors and 26 items were prepared for the final model. The second CFA results showed good general factor loading, Squared Multiple Correlation (SMC) and model fit. The model fit of the final model was good, but some factors were very highly correlated. It is recommended that some of the factors be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.

  7. A Hydrologic Routing Model Based on Geomorphological Characteristics of the River Network

    NASA Astrophysics Data System (ADS)

    Krajewski, W. F.; Quintero, F.; Ghimire, G.; Rojas, M.

    2017-12-01

    The Iowa Flood Center (IFC) provides streamflow forecasts for about 2000 locations in Iowa using a real-time distributed hydrologic model, forced with radar and raingage rainfall information. The model structure is based on ordinary differential equations that represent the flow of water from the hillslopes to the channels of the river network. The formulation of the routing of water across the rivers constitutes a fundamental aspect of the model, because this component is mostly responsible for providing estimates of the time-to-peak and peak magnitude. The routing model structure of the system is based on the scaling properties of river velocity with the discharge and drainage area of the channel, which can be written in terms of a power-law function. This study examines how this scaling relation is connected to the Horton-Strahler order of the channel network. This evaluation represents a step towards formulating model structures that are based on characteristics that are invariant across spatial scales. We propose a routing model for each Horton order of the network by fitting a power-law function to available observations of velocity and discharge provided by the USGS. The models were implemented in the Hillslope-Link Model (HLM) of the IFC for offline evaluation. Model simulations were compared to discharge observations to assess their performance, and compared to simulations obtained with other hydrologic routing schemes to determine whether the new formulation improves the performance of the model.
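
    The per-order power law v = aQ^b can be fitted by ordinary least squares in log-log space; a minimal sketch, assuming paired velocity/discharge observations already grouped by Horton order.

```python
import numpy as np

def fit_velocity_power_law(discharge, velocity):
    """Fit v = a * Q**b by least squares in log-log space from paired
    discharge (Q) and velocity (v) observations for one Horton order."""
    slope, intercept = np.polyfit(np.log(discharge), np.log(velocity), 1)
    return np.exp(intercept), slope   # a, b
```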

  8. Performance of distributed multiscale simulations

    PubMed Central

    Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.

    2014-01-01

    Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258

  9. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Na; Zhang, Peng; Kang, Wei

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately.
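
    For reference, the classical Morse pair potential underlying the model is U(r) = D_e [e^(-2α(r−r0)) − 2 e^(-α(r−r0))]; the sketch below evaluates the corresponding pair force, with a generic scale factor standing in for the paper's effective-mass parameterization, which the abstract does not spell out.

```python
import numpy as np

def morse_pair_force(r, D_e, alpha, r0, scale=1.0):
    """Force magnitude F = -dU/dr for the Morse potential
    U(r) = D_e * (exp(-2*alpha*(r - r0)) - 2*exp(-alpha*(r - r0)));
    `scale` is a placeholder for the paper's effective parameterization."""
    x = r - r0
    return scale * 2.0 * alpha * D_e * (np.exp(-2.0 * alpha * x)
                                        - np.exp(-alpha * x))
```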

  11. Reply to 'Comments on upscaling geochemical reaction rates using pore-scale network modeling' by Peter C. Lichtner and Qinjun Kang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Li; Peters, Catherine A.; Celia, Michael A.

    2006-05-03

    Our paper "Upscaling geochemical reaction rates usingpore-scale network modeling" presents a novel application of pore-scalenetwork modeling to upscale mineral dissolution and precipitationreaction rates from the pore scale to the continuum scale, anddemonstrates the methodology by analyzing the scaling behavior ofanorthite and kaolinite reaction kinetics under conditions related to CO2sequestration. We conclude that under highly acidic conditions relevantto CO2 sequestration, the traditional continuum-based methodology may notcapture the spatial variation in concentrations from pore to pore, andscaling tools may be important in correctly modeling reactive transportprocesses in such systems. This work addresses the important butdifficult question of scaling mineral dissolution and precipitationreactionmore » kinetics, which is often ignored in fields such as geochemistry,water resources, and contaminant hydrology. Although scaling of physicalprocesses has been studied for almost three decades, very few studieshave examined the scaling issues related to chemical processes, despitetheir importance in governing the transport and fate of contaminants insubsurface systems.« less

  12. Cultural consensus modeling to measure transactional sex in Swaziland: Scale building and validation.

    PubMed

    Fielding-Miller, Rebecca; Dunkle, Kristin L; Cooper, Hannah L F; Windle, Michael; Hadley, Craig

    2016-01-01

    Transactional sex is associated with increased risk of HIV and gender-based violence in southern Africa and around the world. However, the typical quantitative operationalization, "the exchange of gifts or money for sex," can be at odds with the wide array of relationship types and motivations described in qualitative explorations. To build on the strengths of both qualitative and quantitative research streams, we used cultural consensus models to identify distinct models of transactional sex in Swaziland. The process allowed us to build and validate emic scales of transactional sex, while identifying key informants for qualitative interviews within each model to contextualize women's experiences and risk perceptions. We used logistic and multinomial logistic regression models to measure associations with condom use and social status outcomes. Fieldwork was conducted between November 2013 and December 2014 in the Hhohho and Manzini regions. We identified three distinct models of transactional sex in Swaziland based on 124 Swazi women's emic valuation of what they hoped to receive in exchange for sex with their partners. In a clinic-based survey (n = 406), consensus model scales were more sensitive to condom use than the etic definition. Model consonance had distinct effects on social status for the three different models. Transactional sex is better measured as an emic spectrum of expectations within a relationship, rather than as an etic binary relationship type. Cultural consensus models allowed us to blend qualitative and quantitative approaches to create an emically valid quantitative scale grounded in qualitative context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to computational efficiency, while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of the recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART against remotely sensed soil moisture observations using spatially based model evaluation metrics.

  14. Cosmological signatures of a UV-conformal standard model.

    PubMed

    Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel

    2014-09-19

    Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal in possible reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.

  15. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  16. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have small marginal effects in the population and are unlikely to be retained after the univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a more robust way of screening SNPs at the scale of thousands. However, for larger-scale data, i.e., Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome, large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.
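
    A plausible reading of such a screen, sketched with scikit-learn's gradient boosting (the record does not name the exact boosting variant): fit shallow boosted trees and keep the top-ranked SNPs by feature importance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def boosting_screen(genotypes, phenotype, keep=1000):
    """Rank SNP columns by importance in a boosted ensemble of decision
    stumps and return the indices of the top candidates for further,
    more complex modeling."""
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=1,
                                     learning_rate=0.1, random_state=0)
    clf.fit(genotypes, phenotype)
    return np.argsort(clf.feature_importances_)[::-1][:keep]
```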

  17. Compendium of Anomaly Detection and Reaction Tools and Projects

    DTIC Science & Technology

    2000-05-17

    identify changes to the risk levels of business network functions based on proposed modifications. Expert can model networks as well (see special...can easily scale to support any size network from departmental systems to enterprise-wide environments. ACX is scaled with the use of a Policy Model ...Defender is a host-based intrusion detector designed for use on home or small business systems. It scans all inbound and outbound Internet traffic for

  18. To have or to be? A comparison of materialism-based theories and self-determination theory as explanatory frameworks of prejudice.

    PubMed

    Van Hiel, Alain; Cornelis, Ilse; Roets, Arne

    2010-06-01

    The present study aimed to delineate the psychological structure of materialism and intrinsic and extrinsic value pursuit. Moreover, we compared models based on self-determination theory (SDT), Fromm's marketing character, and Inglehart's theory of social change to account for racial prejudice. In a sample of undergraduate students (n=131) and adults (n=176), it was revealed that the extrinsic value pursuit Financial Success/Materialism could be distinguished from the extrinsic value scales Physical Appeal and Social Recognition, and Community Concern could be distinguished from the intrinsic value pursuit scales Self-acceptance and Affiliation. Moreover, Financial Success/Materialism and Community Concern were consistently and significantly related to prejudice, whereas the other SDT facet scales yielded weaker relationships with prejudice. Structural models based on SDT and Inglehart were not corroborated; instead, the present data supported a mediation model based on Fromm's work in which the effect of Community Concern was mediated by Financial Success/Materialism. Broader implications for SDT are critically assessed.

  19. Scale-Dependent Stiffness and Internal Tension of a Model Brush Polymer

    NASA Astrophysics Data System (ADS)

    Berezney, John P.; Marciel, Amanda B.; Schroeder, Charles M.; Saleh, Omar A.

    2017-09-01

    Bottle-brush polymers exhibit closely grafted side chains that interact by steric repulsion, thereby causing stiffening of the main polymer chain. We use single-molecule elasticity measurements of model brush polymers to quantify this effect. We find that stiffening is only significant on long length scales, with the main chain retaining flexibility on short scales. From the elasticity data, we extract an estimate of the internal tension generated by side-chain repulsion; this estimate is consistent with the predictions of blob-based scaling theories.

  20. Agent-based modeling of malaria vectors: the importance of spatial simulation.

    PubMed

    Bomblies, Arne

    2014-07-03

    The modeling of malaria vector mosquito populations yields great insight into drivers of malaria transmission at the village scale. Simulation of individual mosquitoes as "agents" in a distributed, dynamic model domain may be greatly beneficial for simulation of spatial relationships of vectors and hosts. In this study, an agent-based model is used to simulate the life cycle and movement of individual malaria vector mosquitoes in a Niger Sahel village, with individual simulated mosquitoes interacting with their physical environment as well as humans. Various processes that are known to be epidemiologically important, such as the dependence of parity on flight distance between developmental habitat and blood meal hosts and therefore spatial relationships of pools and houses, are readily simulated using this modeling paradigm. Impacts of perturbations can be evaluated on the basis of vectorial capacity, because the interactions between individuals that make up the population-scale metric vectorial capacity can be easily tracked for simulated mosquitoes and human blood meal hosts, without the need to estimate vectorial capacity parameters. As expected, model results show pronounced impacts of pool source reduction from larvicide application and draining, but with varying degrees of impact depending on the spatial relationship between pools and human habitation. Results highlight the importance of spatially-explicit simulation that can model individuals such as in an agent-based model. The impacts of perturbations on village scale malaria transmission depend on spatial locations of individual mosquitoes, as well as the tracking of relevant life cycle events and characteristics of individual mosquitoes. This study demonstrates advantages of using an agent-based approach for village-scale mosquito simulation to address questions in which spatial relationships are known to be important.
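
    A minimal sketch of the agent-based idea — individual mosquitoes moving between breeding pools and blood-meal hosts on an explicit map; the movement rule, mortality rate and distances are invented for illustration and are far simpler than the published model.

```python
import random

class Mosquito:
    """Toy agent: a female mosquito commuting from its natal pool
    toward the nearest house in search of a blood meal."""

    def __init__(self, pool_xy):
        self.x, self.y = pool_xy
        self.alive = True

    def step(self, houses, daily_mortality=0.1, step_len=50.0):
        if random.random() < daily_mortality:   # daily survival draw
            self.alive = False
            return
        # Fly toward the nearest house (host seeking).
        hx, hy = min(houses,
                     key=lambda h: (h[0] - self.x)**2 + (h[1] - self.y)**2)
        dx, dy = hx - self.x, hy - self.y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        self.x += step_len * dx / dist
        self.y += step_len * dy / dist
```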

  1. Assessing the radar rainfall estimates in watershed-scale water quality model

    USDA-ARS?s Scientific Manuscript database

    Watershed-scale water quality models are effective science-based tools for interpreting change in complex environmental systems that affect the hydrologic cycle, soil erosion, and nutrient fate and transport in watersheds. Precipitation is one of the primary input data to achieve a precise rainfall-runoff ...

  2. MODELLING OF POLYCHLORINATED-DIOXIN AND -FURAN CONGENER PROFILES FROM MUNICIPAL WASTE COMBUSTION

    EPA Science Inventory

    The paper discusses a model, based on experimental data, that was developed to describe the profile of polychlorinated dibenzo-p-dioxin (PCDD) and polychlorinated dibenzofuran (PCDF) congeners formed in the duct region of a pilot-scale combustor simulating large-scale municipal w...

  3. MODELING SCALE-DEPENDENT LANDSCAPE PATTERN, DISPERSAL, AND CONNECTIVITY FROM THE PERSPECTIVE OF THE ORGANISM

    EPA Science Inventory

    Effects of fine- to broad-scale patterns of landscape heterogeneity on dispersal success were examined for organisms varying in life history traits. To systematically control spatial pattern, a landscape model was created by merging physiographically-based maps of simulated land...

  4. Protein docking by the interface structure similarity: how much structure is needed?

    PubMed

    Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A

    2012-01-01

    The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in the detection of remote target-template similarities. The size of the structure involved in the alignment is important for success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for structure alignment-based docking applications. The results showed that structural areas corresponding to cutoff values < 12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher-accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and is a likely best choice for large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for docking approaches, including high-throughput applications to modeled structures.

  5. Integration of climatic water deficit and fine-scale physiography in process-based modeling of forest landscape resilience to large-scale tree mortality

    NASA Astrophysics Data System (ADS)

    Yang, J.; Weisberg, P.; Dilts, T.

    2016-12-01

    Climate warming can lead to large-scale drought-induced tree mortality events and greatly affect forest landscape resilience. Climatic water deficit (CWD) and its physiographic variations provide a key mechanism in driving landscape dynamics in response to climate change. Although CWD has been successfully applied in niche-based species distribution models, its application in process-based forest landscape models is still scarce. Here we present a framework incorporating fine-scale influence of terrain on ecohydrology in modeling forest landscape dynamics. We integrated CWD with a forest landscape succession and disturbance model (LANDIS-II) to evaluate how tree species distribution might shift in response to different climate-fire scenarios across an elevation-aspect gradient in a semi-arid montane landscape of northeastern Nevada, USA. Our simulations indicated that drought-intolerant tree species such as quaking aspen could experience greatly reduced distributions in the more arid portions of their existing ranges due to water stress limitations under future climate warming scenarios. However, even at the most xeric portions of its range, aspen is likely to persist in certain environmental settings due to unique and often fine-scale combinations of resource availability, species interactions and disturbance regime. The modeling approach presented here allowed identification of these refugia. In addition, this approach helped quantify how the direction and magnitude of fire influences on species distribution would vary across topoclimatic gradients, as well as furthers our understanding on the role of environmental conditions, fire, and inter-specific competition in shaping potential responses of landscape resilience to climate change.

  6. Developing Flexible, Integrated Hydrologic Modeling Systems for Multiscale Analysis in the Midwest and Great Lakes Region

    NASA Astrophysics Data System (ADS)

    Hamlet, A. F.; Chiu, C. M.; Sharma, A.; Byun, K.; Hanson, Z.

    2016-12-01

    Physically based hydrologic modeling of surface and groundwater resources that can be flexibly and efficiently applied to support water resources policy/planning/management decisions at a wide range of spatial and temporal scales is greatly needed in the Midwest, where stakeholder access to such tools is currently a fundamental barrier to basic climate change assessment and adaptation efforts, and also to the co-production of useful products to support detailed decision making. Based on earlier pilot studies in the Pacific Northwest Region, we are currently assembling a suite of end-to-end tools and resources to support various kinds of water resources planning and management applications across the region. One of the key aspects of these integrated tools is that the user community can access gridded products at any point along the end-to-end chain of models, looking backwards in time about 100 years (1915-2015), and forwards in time about 85 years using CMIP5 climate model projections. The integrated model is composed of historical and projected future meteorological data based on station observations and statistically and dynamically downscaled climate model output, respectively. These gridded meteorological data sets serve as forcing data for the macro-scale VIC hydrologic model implemented over the Midwest at 1/16 degree resolution. High-resolution climate model (4km WRF) output provides inputs for the analyses of urban impacts, hydrologic extremes, agricultural impacts, and impacts to the Great Lakes. Groundwater recharge estimated by the surface water model provides input data for fine-scale and macro-scale groundwater models needed for specific applications. To highlight the multi-scale use of the integrated models in support of co-production of scientific information for decision making, we briefly describe three current case studies addressing different spatial scales of analysis: 1) Effects of climate change on the water balance of the Great Lakes, 2) Future hydropower resources in the St. Joseph River basin, 3) Effects of climate change on carbon cycling in small lakes in the Northern Highland Lakes District.

  7. Customization of a generic 3D model of the distal femur using diagnostic radiographs.

    PubMed

    Schmutz, B; Reynolds, K J; Slavotinek, J P

    2008-01-01

    A method for the customization of a generic 3D model of the distal femur is presented. The customization method involves two steps: acquisition of calibrated orthogonal planar radiographs; and linear scaling of the generic model based on the width of a subject's femoral condyles as measured on the planar radiographs. Planar radiographs of seven intact lower cadaver limbs were obtained. The customized generic models were validated by comparing their surface geometry with that of CT-reconstructed reference models. The overall mean error was 1.2 mm. The results demonstrate that uniform scaling as a first step in the customization process produced a base model of accuracy comparable to other models reported in the literature.
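
    A minimal sketch of the scaling step, assuming the generic model is stored as an array of vertex coordinates and that condylar widths have been measured on the calibrated radiographs (all names and numbers below are illustrative):

```python
import numpy as np

def scale_generic_femur(vertices, generic_width, subject_width):
    """Uniform (isotropic) scaling of a generic distal-femur surface mesh
    by the ratio of condylar widths measured on calibrated radiographs."""
    return np.asarray(vertices) * (subject_width / generic_width)

# Toy mesh: three vertices; generic condylar width 75 mm, subject 81 mm
verts = np.array([[0.0, 0.0, 0.0], [75.0, 0.0, 0.0], [0.0, 40.0, 10.0]])
print(scale_generic_femur(verts, generic_width=75.0, subject_width=81.0))
```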

  8. Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale

    NASA Astrophysics Data System (ADS)

    Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.

    2015-09-01

    The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied assuming the unavailability of in situ soil moisture measurements, using instead data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated against in situ measurements, with event observations at plot scale available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, soil moisture being an effective alternative to estimated runoff in the prediction of event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) equal to ~0.35 and a root mean square error (RMSE) of ~2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, soil moisture is currently a relatively simple measurement at the field scale, and remote sensing data are widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
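
    The abstract does not give the exact SM4E formulation, so the following is only a schematic USLE-style event estimate in which the rainfall erosivity term is modulated by antecedent relative soil moisture; the multiplicative functional form and the exponent `beta` are assumptions for illustration, not published values.

```python
def event_soil_loss(EI30, theta, K, LS, C, P, beta=1.5):
    """Schematic SM4E-style event estimate: the USLE erosivity term EI30
    is modulated by antecedent relative soil moisture theta (0-1).
    The moisture-modulation form and exponent beta are illustrative."""
    R_eff = EI30 * theta ** beta      # moisture-modulated event erosivity
    return R_eff * K * LS * C * P     # event soil loss via the USLE chain

# Hypothetical event on a bare fallow plot
print(event_soil_loss(EI30=120.0, theta=0.6, K=0.03, LS=1.2, C=1.0, P=1.0))
```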

  9. Differential item functioning magnitude and impact measures from item response theory models.

    PubMed

    Kleinman, Marjorie; Teresi, Jeanne A

    2016-01-01

    Measures of magnitude and impact of differential item functioning (DIF) at the item and scale level, respectively, are presented and reviewed in this paper. Most measures are based on item response theory models. Magnitude refers to item-level effect sizes, whereas impact refers to differences between groups at the scale score level. Reviewed are magnitude measures based on group differences in the expected item scores and impact measures based on differences in the expected scale scores. The similarities among these indices are demonstrated. Various software packages that provide magnitude and impact measures are described, and new software is presented that conveniently computes all of the available statistics in one program, with explanations of their relationships to one another.
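
    A minimal example of an item-level magnitude measure of the kind reviewed here: the difference in expected item scores between reference and focal groups under a 2PL model, averaged over an ability distribution. The item parameters and the normal weighting are illustrative, not taken from the paper.

```python
import numpy as np

def expected_score_2pl(theta, a, b):
    """Expected score (probability correct) for a dichotomous 2PL item."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Magnitude: group difference in expected item scores, averaged over a
# standard-normal ability distribution (a simple signed effect size).
theta = np.linspace(-4, 4, 401)
w = np.exp(-0.5 * theta**2)
w /= w.sum()                                      # discrete normal weights
ref = expected_score_2pl(theta, a=1.2, b=0.0)     # reference-group item curve
foc = expected_score_2pl(theta, a=1.2, b=0.4)     # focal-group item curve
magnitude = np.sum(w * (ref - foc))               # item-level DIF magnitude
print(round(magnitude, 3))
```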

  10. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  11. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, requiring prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly variable in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  12. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent) 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the near-future national deployment of large-scale digital orthophotos and the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing, (2) spatial object/feature extraction through surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) algorithm development for generation of DTM-based and DBM-based orthophotos, (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos, and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  13. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite-element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature, and time.

  14. A comprehensive study on urban true orthorectification

    USGS Publications Warehouse

    Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao

    2005-01-01

    To provide some advanced technical bases (algorithms and procedures) and the experience needed for national large-scale digital orthophoto generation and the revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study of the theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merging for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, the optimization of seamlines for automatic mosaicking, and the radiometric balance of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and the relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. © 2005 IEEE.

  15. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    Rapid advances in remote sensing technologies have established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS.
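
    The point-based parallel projection model referred to here is commonly written as a 3D affine (8-parameter) transformation. A hedged least-squares sketch, assuming arrays of ground control coordinates and their image measurements (the synthetic check data are invented):

```python
import numpy as np

def fit_parallel_projection(ground_xyz, image_xy):
    """Least-squares fit of the 8-parameter (3D affine) parallel projection
        x = A1*X + A2*Y + A3*Z + A4,   y = A5*X + A6*Y + A7*Z + A8,
    from >= 4 well-distributed ground control points."""
    n = ground_xyz.shape[0]
    G = np.hstack([ground_xyz, np.ones((n, 1))])           # [X Y Z 1]
    px, *_ = np.linalg.lstsq(G, image_xy[:, 0], rcond=None)
    py, *_ = np.linalg.lstsq(G, image_xy[:, 1], rcond=None)
    return px, py                                          # (A1..A4), (A5..A8)

# Synthetic check: recover a known transform from 6 control points
rng = np.random.default_rng(0)
gcp = rng.uniform(0, 1000, size=(6, 3))
true_px = np.array([0.5, 0.1, 0.02, 40.0])
true_py = np.array([-0.1, 0.6, 0.01, 7.0])
img = np.column_stack([gcp @ true_px[:3] + true_px[3],
                       gcp @ true_py[:3] + true_py[3]])
px, py = fit_parallel_projection(gcp, img)
print(np.allclose(px, true_px), np.allclose(py, true_py))
```

    The line-based variant developed in the article replaces these point correspondences with line observations; only the point-based form is sketched here.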

  16. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    NASA Astrophysics Data System (ADS)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

    The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  17. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
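
    The traditional analytical solution mentioned is the Theis equation; a compact implementation uses the exponential integral from SciPy. Units below are illustrative (Q in m3/day, T in m2/day, S dimensionless); the aquifer is assumed confined and homogeneous, as the Theis solution requires.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) drawdown s = Q/(4*pi*T) * W(u), with u = r^2 S / (4 T t),
    for a fully penetrating well in a confined, homogeneous aquifer.
    W(u) is the well function, i.e., the exponential integral E1(u)."""
    u = (r**2 * S) / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Example: 500 m3/day pumping, observation well 50 m away, after 1 day
print(theis_drawdown(r=50.0, t=1.0, Q=500.0, T=250.0, S=1e-4))
```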

  18. Physical Model Study of Flowerpot Discharge Outlet, Western Closure Complex, New Orleans, Louisiana

    DTIC Science & Technology

    2013-05-01

    [Only report front matter survives in this record: the figure list references a 1:20.377-scale preliminary FPDO (Flowerpot Discharge Outlet) model with a straight pipe, a 90-degree elbow, a 7-ft-long PVC pipe, and a 1.3 in. lip, with sealant at the base of the pipe.]

  19. Multi-Scale Modeling, Surrogate-Based Analysis, and Optimization of Lithium-Ion Batteries for Vehicle Applications

    NASA Astrophysics Data System (ADS)

    Du, Wenbo

    A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
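
    A toy version of the surrogate-plus-gradient workflow described above: sample a stand-in "simulator" of cell energy density over cathode thickness and porosity, fit a quadratic surrogate by least squares, and optimize it with a bounded gradient-based method. The response function, variable ranges, and coefficients are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def cell_model(x):
    """Hypothetical stand-in for an expensive battery simulation; returns
    the NEGATIVE of an energy-density-like quantity so that minimizing
    it maximizes energy density."""
    thickness, porosity = x
    return -(thickness * (1 - porosity)) * np.exp(-thickness / 120.0) * porosity**0.3

# Sample the "simulator" and fit a full quadratic surrogate by least squares
rng = np.random.default_rng(1)
X = rng.uniform([50, 0.2], [200, 0.6], size=(40, 2))
y = np.array([cell_model(x) for x in X])
A = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    feats = np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0] * x[1]])
    return feats @ coef

# Gradient-based optimization of the cheap surrogate within bounds
res = minimize(surrogate, x0=[100, 0.4], bounds=[(50, 200), (0.2, 0.6)])
print(res.x)   # "optimal" thickness (um) and porosity under the surrogate
```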

  20. Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    2013-01-01

    The main objective of this paper is to construct a turbulence model with a more reliable second equation for simulating the length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta shows that a reliable second equation can be formed as an exact transport equation from the turbulent length scale L and the kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared to other approaches. The most important difference is that the formulation leads to a natural inclusion of higher-order velocity derivatives into the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) methods to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence; near-wall and shear flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than those of the well-known SST turbulence model, and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.

  1. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, the activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g., permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose, altering the trajectory of the soil biogeochemical response to perturbations. Identifying where scale-invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale-invariant and scale-dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential thresholds where shifts in ecosystem function may be expected. Finally, we discuss the mechanistic insight that can be applied in process-based models and data sets that can evaluate models across spatial and temporal scales.

  2. Modeling hydrology and in-stream transport on drained forested lands in coastal Carolinas, U.S.A.

    Treesearch

    Devendra Amatya

    2005-01-01

    This study summarizes the successive development and testing of forest hydrologic models based on DRAINMOD that predict the hydrology of low-gradient, poorly drained watersheds as affected by land management and climatic variation. The field-scale (DRAINLOB) and watershed-scale in-stream routing (DRAINWAT) models were successfully tested with water table and outflow...

  3. Multi-Scale Characterization of Orthotropic Microstructures

    DTIC Science & Technology

    2008-04-01

    [Only report front matter survives in this record: a citation to D. Valiveti, S. J. Harris, and J. Boileau, "A domain partitioning based pre-processor for multi-scale modelling of cast aluminium alloys," a journal article submitted to Modelling and Simulation in Materials Science and Engineering (PAO Case Number WPAFB 08-3362), and a fragment on choosing a representative element for characterization or simulation to avoid misleading predictions of macroscopic deformation, fracture, or transport behavior.]

  4. Longitudinal Evaluation of a Scale-up Model for Teaching Mathematics with Trajectories and Technologies: Persistence of Effects in the Third Year

    ERIC Educational Resources Information Center

    Clements, Douglas H.; Sarama, Julie; Wolfe, Christopher B.; Spitler, Mary Elaine

    2013-01-01

    Using a cluster randomized trial design, we evaluated the persistence of effects of a research-based model for scaling up educational interventions. The model was implemented in 42 schools in two city districts serving low-resource communities, randomly assigned to three conditions. In pre-kindergarten, the two experimental interventions were…

  5. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observational data from 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial extent of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Danube and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
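
    The core of derived flood frequency analysis by continuous simulation can be sketched in a few lines: take a long daily discharge series, extract annual maxima, and read flood quantiles off the empirical distribution. The synthetic gamma-distributed discharge below merely stands in for the weather-generator-plus-SWIM model chain.

```python
import numpy as np

def return_levels(daily_q, years, T=(10, 100, 1000)):
    """Empirical flood quantiles from a long continuous simulation:
    block the series into years, take annual maxima, and evaluate the
    non-exceedance quantiles corresponding to return periods T."""
    amax = np.sort(daily_q.reshape(years, -1).max(axis=1))
    p = 1.0 - 1.0 / np.asarray(T, dtype=float)   # non-exceedance probability
    return np.quantile(amax, p)

# 10,000 simulated years of synthetic daily discharge (365-day years)
rng = np.random.default_rng(2)
q = rng.gamma(shape=2.0, scale=50.0, size=10_000 * 365)
print(return_levels(q, years=10_000))            # 10-, 100-, 1000-yr floods
```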

  6. A novel land surface-hydrologic-sediment dynamics model for stream corridor conservation assessment and its first application

    NASA Astrophysics Data System (ADS)

    Smithgall, K.; Shen, C.; Langendoen, E. J.; Johnson, P. A.

    2015-12-01

    Nationally and in the Chesapeake Bay (CB), stream corridor restoration consumes an unsustainable amount of public resources, yet decisions are often made with inadequate knowledge of regional-scale system behavior. Bank erosion is a significant issue relevant to sediment and nutrient pollution, aquatic and riparian habitat, and stream health. Existing modeling efforts either focus only on reach-scale responses or oversimplify the description of bank failure mechanics. In this work we present a novel regional-scale process model integrating hydrology, vegetation dynamics, hydraulics, bank mechanics and sediment transport, based on a coupling between the Community Land Model, the Process-based Adaptive Watershed Simulator and the CONservational Channel Evolution and Pollutant Transport System (CLM + PAWS + CONCEPTS, CPC). We illustrate the feasibility of this modeling platform in a Valley and Ridge basin in Pennsylvania, USA, with channel geometry data collected in 2004 and 2014. The simulations are able to reproduce the essential patterns of the observed trends. We study the causes of the noticeable evolution of a relocated channel and its hydrologic controls. Bridging processes on multiple scales, the CPC model creates a new, integrated system that may serve as a confluence point for inter-disciplinary research.

  7. Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma

    PubMed Central

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. It demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
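
    For reference, the unmodified Morse pair potential and its force are given below; the CG blood-plasma model reparameterizes a modified form of this potential with effective mass scales, and that parameterization itself is not reproduced here.

```python
import numpy as np

def morse_potential(r, De, alpha, r0):
    """Classical Morse pair potential U(r) = De*[(1 - exp(-alpha*(r - r0)))^2 - 1],
    with well depth De, stiffness alpha, and equilibrium separation r0."""
    x = np.exp(-alpha * (r - r0))
    return De * ((1.0 - x)**2 - 1.0)

def morse_force(r, De, alpha, r0):
    """Pair force F(r) = -dU/dr for the Morse potential above."""
    x = np.exp(-alpha * (r - r0))
    return -2.0 * De * alpha * x * (1.0 - x)

# At the equilibrium separation the potential is -De and the force vanishes
print(morse_potential(1.0, De=1.0, alpha=2.0, r0=1.0),
      morse_force(1.0, De=1.0, alpha=2.0, r0=1.0))
```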

  8. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    The Scaled SLW model for the prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach to the construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method, special attention is paid to explicit non-iterative methods for calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, the Planck mean, and the first inverse moment, the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, including both the discrete gray gases and the continuous formulation, is presented. Application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions of radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, and to predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.

  9. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is significant for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoons or tornadoes. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF simulation. In LES, when the grid is suddenly refined at an interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disaster, the peak wind acting on buildings and structures has to be correctly predicted. In meteorological models, the velocity fluctuations tend to vary diffusively, lacking high-frequency components, due to numerical filtering effects. In order to predict the peak value of wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.
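
    The paper's specific generation method is not detailed in the abstract; the following generic spectral-synthesis sketch only illustrates the idea of augmenting a smooth meso-scale series with random-phase fluctuations above an assumed cutoff frequency. The Kolmogorov-like -5/3 spectral slope and the fluctuation intensity are assumptions for illustration.

```python
import numpy as np

def add_high_freq(u_meso, dt, slope=-5.0 / 3.0, f_split=None, seed=0):
    """Augment a low-pass meso-scale velocity series with synthetic
    fluctuations above an effective cutoff frequency, using random phases
    and an assumed spectral slope (E(f) ~ f^slope => |u_hat| ~ f^(slope/2))."""
    n = len(u_meso)
    f = np.fft.rfftfreq(n, dt)
    f_split = f_split if f_split is not None else f[-1] / 4.0
    amp = np.zeros_like(f)
    hf = f > f_split
    amp[hf] = f[hf] ** (slope / 2.0)
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, size=f.shape)
    u_hf = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    # Scale to a small, assumed fluctuation intensity (5% of meso std)
    u_hf *= 0.05 * u_meso.std() / max(u_hf.std(), 1e-12)
    return u_meso + u_hf

# Smooth "model" wind sampled at 1 Hz, augmented with synthetic gusts
t = np.arange(0, 3600, 1.0)
u = 8.0 + 0.5 * np.sin(2 * np.pi * t / 600.0)
u_aug = add_high_freq(u, dt=1.0)
print(u.std(), u_aug.std())
```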

  10. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
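
    The hand-off between scale levels can be mimicked as follows: given Weibull parameters characterized at the previous level, draw effective Young's modulus and strength for each automaton at the next level. The parameter names, values, and the use of NumPy's Weibull sampler are illustrative only.

```python
import numpy as np

def sample_effective_properties(n, m_E, E0, m_s, s0, seed=0):
    """Draw per-automaton Young's modulus and strength for the next scale
    level from Weibull distributions characterized at the previous level
    (m_* are Weibull moduli; E0, s0 are scale parameters)."""
    rng = np.random.default_rng(seed)
    E = E0 * rng.weibull(m_E, size=n)            # effective moduli, GPa
    strength = s0 * rng.weibull(m_s, size=n)     # effective strengths, MPa
    return E, strength

# Illustrative values loosely inspired by dense zirconia ceramics
E, s = sample_effective_properties(n=10_000, m_E=12.0, E0=180.0,
                                   m_s=8.0, s0=350.0)
print(E.mean().round(1), s.mean().round(1))
```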

  11. Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas

    NASA Astrophysics Data System (ADS)

    Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.

    2012-12-01

    Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a "double conflict of scales": the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this "double conflict" in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5-8 years.

  12. Relationships between aquatic vegetation and water turbidity: A field survey across seasons and spatial scales

    PubMed Central

    Austin, Åsa N.; Hansen, Joakim P.; Donadi, Serena; Eklöf, Johan S.

    2017-01-01

    Field surveys often show that high water turbidity limits cover of aquatic vegetation, while many small-scale experiments show that vegetation can reduce turbidity by decreasing water flow, stabilizing sediments, and competing with phytoplankton for nutrients. Here we bridged these two views by exploring the direction and strength of causal relationships between aquatic vegetation and turbidity across seasons (spring and late summer) and spatial scales (local and regional), using causal modeling based on data from a field survey along the central Swedish Baltic Sea coast. The two best-fitting regional-scale models both suggested that in spring, high cover of vegetation reduces water turbidity. In summer, the relationships differed between the two models; in the first model high vegetation cover reduced turbidity; while in the second model reduction of summer turbidity by high vegetation cover in spring had a positive effect on summer vegetation which suggests a positive feedback of vegetation on itself. Nitrogen load had a positive effect on turbidity in both seasons, which was comparable in strength to the effect of vegetation on turbidity. To assess whether the effect of vegetation was primarily caused by sediment stabilization or a reduction of phytoplankton, we also tested models where turbidity was replaced by phytoplankton fluorescence or sediment-driven turbidity. The best-fitting regional-scale models suggested that high sediment-driven turbidity in spring reduces vegetation cover in summer, which in turn has a negative effect on sediment-driven turbidity in summer, indicating a potential positive feedback of sediment-driven turbidity on itself. Using data at the local scale, few relationships were significant, likely due to the influence of unmeasured variables and/or spatial heterogeneity. In summary, causal modeling based on data from a large-scale field survey suggested that aquatic vegetation can reduce turbidity at regional scales, and that high vegetation cover vs. high sediment-driven turbidity may represent two self-enhancing, alternative states of shallow bay ecosystems. PMID:28854185

  13. Scaling approach of terrestrial carbon cycle over Alaska's black spruce forests: a synthesis of field observation, remote sensing, and ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Date, T.; Harazono, Y.; Ichii, K.

    2007-12-01

    Spatio-temporal scale-up of eddy covariance data is an important challenge, especially in northern high-latitude ecosystems, since continuous ground observations are rarely conducted. In this study, we measured carbon fluxes at a black spruce forest in interior Alaska, and then scaled the eddy covariance data up to spatio-temporal variations in the regional carbon budget using satellite remote sensing data and a process-based ecosystem model, Biome-BGC. At the point scale, both the satellite-based empirical model and Biome-BGC could reproduce seasonal and interannual variations in GPP/RE/NEE. The magnitude of GPP/RE is also consistent among the models. However, spatial patterns in GPP/RE differ among the models: high productivity in low-elevation areas is estimated by the satellite-based model, whereas an insignificant relationship is simulated by Biome-BGC. Long-term satellite records, AVHRR and MODIS, show a gradual decline of NDVI in Alaska's black spruce forests between 1981 and 2006, resulting in a general trend of decreasing GPP/RE for these forests. These trends are consistent with the Biome-BGC simulation. The trend in carbon budget is also consistent among the models: the carbon budget of black spruce forests did not significantly change over the period. The simulated results suggest that the carbon fluxes in black spruce forests could be more sensitive to water availability than to air temperature.

  14. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  15. Hyperviscosity for unstructured ALE meshes

    NASA Astrophysics Data System (ADS)

    Cook, Andrew W.; Ulitsky, Mark S.; Miller, Douglas S.

    2013-01-01

    An artificial viscosity, originally designed for Eulerian schemes, is adapted for use in arbitrary Lagrangian-Eulerian simulations. Changes to the Eulerian model (dubbed 'hyperviscosity') are discussed, which enable it to work within a Lagrangian framework. New features include a velocity-weighted grid scale and a generalised filtering procedure, applicable to either structured or unstructured grids. The model employs an artificial shear viscosity for treating small-scale vorticity and an artificial bulk viscosity for shock capturing. The model is based on the Navier-Stokes form of the viscous stress tensor, including the diagonal rate-of-expansion tensor. A second-order version of the model is presented, in which Laplacian operators act on the velocity divergence and the grid-weighted strain-rate magnitude to ensure that the velocity field remains smooth at the grid scale. Unlike sound-speed-based artificial viscosities, the hyperviscosity model is compatible with the low Mach number limit. The new model outperforms a commonly used Lagrangian artificial viscosity on a variety of test problems.

  16. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
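
    Light and temperature algorithms of the kind referenced usually follow the widely used Guenther-type formulation, E = EF * C_L * C_T. A sketch with the standard published constants is given below; the paper's exact coefficients may differ, and the example inputs are illustrative.

```python
import numpy as np

def isoprene_flux(Q, T, EF, alpha=0.0027, cL1=1.066,
                  CT1=95000.0, CT2=230000.0, Ts=303.0, TM=314.0):
    """Guenther-type isoprene emission E = EF * C_L * C_T, with PAR Q
    (umol m-2 s-1), leaf temperature T (K), and ecosystem emission
    factor EF. Constants follow the widely used G93/G95 formulation."""
    R = 8.314                                       # J mol-1 K-1
    CL = alpha * cL1 * Q / np.sqrt(1.0 + alpha**2 * Q**2)
    CT = np.exp(CT1 * (T - Ts) / (R * Ts * T)) / \
         (1.0 + np.exp(CT2 * (T - TM) / (R * Ts * T)))
    return EF * CL * CT

# Near-standard conditions (Q = 1000 umol m-2 s-1, T = 303 K) the
# correction factors are close to 1, so E is close to EF.
print(isoprene_flux(Q=1000.0, T=303.0, EF=24.0))
```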

  17. Hypersonic Glider Model in Full Scale Tunnel 1957

    NASA Image and Video Library

    1957-09-07

    L57-1439 A model based on Langley's concept of a hypersonic glider was test flown on an umbilical cord inside the Full Scale Tunnel in 1957. Photograph published in Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958 by James R. Hansen. Page 374.

  18. Landscape habitat suitability index software

    Treesearch

    William D. Dijak; Chadwick D. Rittenhouse; Michael A. Larson; Frank R. III Thompson; Joshua J. Millspaugh

    2007-01-01

    Habitat suitability index (HSI) models are traditionally used to evaluate habitat quality for wildlife at a local scale. Rarely have such models incorporated spatial relationships of habitat components. We introduce Landscape HSImodels, a new Microsoft Windows (Microsoft, Redmond, WA) based program that incorporates local habitat as well as landscape-scale attributes...

  19. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  1. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  2. Toward seamless hydrologic predictions across spatial scales

    NASA Astrophysics Data System (ADS)

    Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine

    2017-09-01

    Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
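
    In outline, MPR applies a global transfer function to fine-scale predictor fields and then upscales the resulting parameter field to the model resolution, so the same transfer-function coefficients serve every scale. A minimal sketch follows, with an invented transfer function and harmonic-mean upscaling for a conductivity-like parameter; the coefficients and block averaging are illustrative, not the mHM implementation.

```python
import numpy as np

def mpr_upscale(sand, clay, block=10, g=(2.0, -0.02, 0.01)):
    """Multiscale parameter regionalization sketch: a global transfer
    function (coefficients g) maps fine-scale predictors (sand, clay, in
    percent) to a fine-scale parameter field, which is then upscaled to
    the model grid by harmonic-mean block averaging."""
    k_fine = np.exp(g[0] + g[1] * clay + g[2] * sand)    # fine-scale field
    h, w = k_fine.shape
    kb = k_fine.reshape(h // block, block, w // block, block)
    return 1.0 / np.mean(1.0 / kb, axis=(1, 3))          # harmonic mean

# 100x100 fine grid of soil predictors -> 10x10 coarse parameter field;
# changing `block` changes resolution but not the transfer coefficients.
rng = np.random.default_rng(3)
sand = rng.uniform(10, 80, size=(100, 100))
clay = rng.uniform(5, 50, size=(100, 100))
print(mpr_upscale(sand, clay).shape)
```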

  3. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  4. Photogrammetric Recording and Reconstruction of Town Scale Models - the Case of the Plan-Relief of Strasbourg

    NASA Astrophysics Data System (ADS)

    Macher, H.; Grussenmeyer, P.; Landes, T.; Halin, G.; Chevrier, C.; Huyghe, O.

    2017-08-01

    The French collection of Plan-Reliefs, scale models of fortified towns, constitutes a precious testimony of the history of France. The aim of the URBANIA project is the valorisation and diffusion of this heritage through the creation of virtual models. The town scale model of Strasbourg at 1/600, currently exhibited in the Historical Museum of Strasbourg, was selected as a case study. In this paper, the photogrammetric recording of this scale model is first presented. The acquisition protocol as well as the data post-processing are detailed. Then, the modelling of the city, and more specifically of building blocks, is investigated. Based on point clouds of the scale model, the extraction of roof elements is considered. This involves first segmenting the point cloud into building blocks. Then, for each block, points belonging to roofs are identified and the extraction of chimney point clouds as well as roof ridges and roof planes is performed. Finally, the 3D parametric modelling of the building blocks is studied by considering roof polygons and polylines describing chimneys as input. As future work, the semantic enrichment and potential usage scenarios of the scale model are envisaged.

  5. Accuracy and Reliability of Marker-Based Approaches to Scale the Pelvis, Thigh, and Shank Segments in Musculoskeletal Models.

    PubMed

    Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P

    2017-10-01

    Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population had not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developing participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture sessions. The inclusion of joint centers in the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches that included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches, with a mean ICC of 0.97. The pelvis should be scaled using pelvic width and height, and the thigh and shank segments should be scaled using the proximal and distal joint centers.
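
    The recommended joint-center approach reduces, for each segment, to the ratio of the subject's joint-center distance to the generic model's. A minimal sketch (the coordinates below are illustrative, not data from the study):

      import numpy as np

      def segment_scale_factor(prox_subj, dist_subj, prox_generic, dist_generic):
          # linear scale factor from proximal/distal joint centers
          return (np.linalg.norm(dist_subj - prox_subj) /
                  np.linalg.norm(dist_generic - prox_generic))

      # thigh example: hip and knee joint centers from motion capture vs.
      # the unscaled generic model (illustrative coordinates, in meters)
      hip_s, knee_s = np.array([0.0, 0.92, 0.0]), np.array([0.02, 0.52, 0.01])
      hip_g, knee_g = np.array([0.0, 0.94, 0.0]), np.array([0.0, 0.54, 0.0])
      print(segment_scale_factor(hip_s, knee_s, hip_g, knee_g))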

  6. High-resolution, regional-scale crop yield simulations for the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.

    2012-12-01

    Over the past few decades, many process-based crop models have been developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data-intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and to analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM-driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and maximum temperature), beyond which yields were negatively affected. These results are now being used for further regional-scale yield analysis, as the aforementioned framework is adaptable to multiple geographic regions and crop types.
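
    The RCM-to-APSIM coupling described above is, at its core, a format-conversion pipeline: gridded daily meteorology is sliced per grid cell and rewritten as point-scale weather tables for the crop model. The sketch below illustrates that step only; the column names and CSV layout are placeholders, not the actual APSIM .met file specification.

      import csv
      import os
      import numpy as np

      def write_met_files(tmin, tmax, rain, radn, days, outdir="met"):
          # slice (day, lat, lon) RCM grids into one weather table per cell
          os.makedirs(outdir, exist_ok=True)
          _, nlat, nlon = tmin.shape
          for i in range(nlat):
              for j in range(nlon):
                  with open(f"{outdir}/cell_{i}_{j}.csv", "w", newline="") as f:
                      w = csv.writer(f)
                      w.writerow(["day", "mint", "maxt", "rain", "radn"])
                      for d, day in enumerate(days):
                          w.writerow([day, tmin[d, i, j], tmax[d, i, j],
                                      rain[d, i, j], radn[d, i, j]])

      # toy RCM output: 5 days on a 2x2 grid
      rng = np.random.default_rng(0)
      shape = (5, 2, 2)
      write_met_files(rng.uniform(5, 15, shape), rng.uniform(20, 35, shape),
                      rng.uniform(0, 10, shape), rng.uniform(10, 30, shape),
                      days=range(1, 6))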

  7. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvement in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  8. Seasonal Habitat Use by Greater Sage-Grouse (Centrocercus urophasianus) on a Landscape with Low Density Oil and Gas Development.

    PubMed

    Rice, Mindy B; Rossi, Liza G; Apa, Anthony D

    2016-01-01

    Fragmentation of the sagebrush (Artemisia spp.) ecosystem has led to concern about a variety of sagebrush obligates, including the greater sage-grouse (Centrocercus urophasianus). Given the increase of energy development within greater sage-grouse habitats, mapping seasonal habitats in pre-development populations is critical. The North Park population in Colorado is one of the largest and most stable in the state and provides a unique case study for investigating resource selection at a relatively low level of energy development compared to other populations both within and outside the state. We used locations from 117 radio-marked female greater sage-grouse in North Park, Colorado to develop seasonal resource selection models. We then added energy development variables to the base models at both a landscape and a local scale to determine if energy variables improved the fit of the seasonal models. The base models for breeding and winter resource selection predicted greater use in large expanses of sagebrush, whereas the base summer model predicted greater use along the edge of riparian areas. Energy development variables did not improve the winter or the summer models at either scale of analysis, but distance to oil/gas roads slightly improved model fit at both scales in the breeding season, albeit in opposite ways: at the landscape scale, greater sage-grouse were closer to oil/gas roads during the breeding season, whereas at the local scale they were further from them. Although we found limited effects from low-level energy development in the breeding season, the scale of analysis can influence the interpretation of effects. The lack of strong effects may indicate that energy development at current levels is not impacting greater sage-grouse in North Park. Our baseline seasonal resource selection maps can be used for conservation planning to help identify ways of minimizing the effects of energy development.
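
    Seasonal resource selection models of this kind are commonly fitted as used-versus-available logistic regressions on habitat covariates. A self-contained sketch with synthetic data (the covariates and their distributions are illustrative, not the study's):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      # used locations vs. available (background) locations, two covariates:
      # sagebrush cover fraction and distance to the nearest oil/gas road (km)
      X_used = np.column_stack([rng.normal(0.7, 0.2, n), rng.normal(2.0, 1.0, n)])
      X_avail = np.column_stack([rng.normal(0.4, 0.2, n), rng.normal(1.5, 1.0, n)])
      X = np.vstack([X_used, X_avail])
      y = np.r_[np.ones(n), np.zeros(n)]

      rsf = LogisticRegression().fit(X, y)
      print(rsf.coef_)   # sign and size indicate selection for each covariate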

  9. Application of particle and lattice codes to simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Damjanac, Branko; Detournay, Christine; Cundall, Peter A.

    2016-04-01

    With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding of, and the capability to model, the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs has become very important for the petroleum industry (and also for other industries such as mining and geothermal energy). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable model sizes and computer run times) to coupled hydro-mechanical problems at the time and length scales relevant for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparison with available analytical asymptotic solutions (penny-shaped crack) and some examples of field application (e.g., interaction with a DFN) are also included.
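
    The simple force criterion referred to above amounts to deleting a bond between two particles once its tensile force exceeds a strength threshold. A minimal sketch of that rule for spring-like bonds (parameters illustrative, not the published lattice-code implementation):

      import numpy as np

      def update_bonds(positions, bonds, k, r0, f_crit):
          # keep a bond only while its tensile force k*(r - r0) is below f_crit
          alive = []
          for i, j in bonds:
              r = np.linalg.norm(positions[j] - positions[i])
              if k * (r - r0) <= f_crit:
                  alive.append((i, j))
          return alive

      pos = np.array([[0.0, 0.0], [1.2, 0.0], [2.2, 0.0]])
      # the stretched bond (0, 1) fails; bond (1, 2) at rest length survives
      print(update_bonds(pos, [(0, 1), (1, 2)], k=1.0e3, r0=1.0, f_crit=150.0))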

  10. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  11. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large-scale structures in these flows, using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  12. SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R

    2007-10-29

    Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions, which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.

  13. Coupled land surface/hydrologic/atmospheric models

    NASA Technical Reports Server (NTRS)

    Pielke, Roger; Steyaert, Lou; Arritt, Ray; Lahtakia, Mercedes; Smith, Chris; Ziegler, Conrad; Soong, Su Tzai; Avissar, Roni; Wetzel, Peter; Sellers, Piers

    1993-01-01

    The topics covered include the following: prototype land cover characteristics data base for the conterminous United States; surface evapotranspiration effects on cumulus convection and implications for mesoscale models; the use of complex treatment of surface hydrology and thermodynamics within a mesoscale model and some related issues; initialization of soil-water content for regional-scale atmospheric prediction models; impact of surface properties on dryline and MCS evolution; a numerical simulation of heavy precipitation over the complex topography of California; representing mesoscale fluxes induced by landscape discontinuities in global climate models; emphasizing the role of subgrid-scale heterogeneity in surface-air interaction; and problems with modeling and measuring biosphere-atmosphere exchanges of energy, water, and carbon on large scales.

  14. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    NASA Astrophysics Data System (ADS)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  15. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based … multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.

  16. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth-observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change and solid-Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  17. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
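
    The water-balance estimate in (i) is a one-line residual once precipitation and discharge are expressed in the same units; on annual time scales the storage term is commonly neglected. A minimal sketch (numbers illustrative):

      def basin_et_residual(precip_mm, discharge_m3, basin_area_m2,
                            storage_change_mm=0.0):
          # annual basin ET (mm) as the residual ET = P - Q - dS,
          # with discharge converted from volume to basin-average depth
          q_mm = discharge_m3 / basin_area_m2 * 1000.0
          return precip_mm - q_mm - storage_change_mm

      # 800 mm precipitation, 3.0e9 m3 of runoff from a 1.0e10 m2 basin
      print(basin_et_residual(800.0, 3.0e9, 1.0e10))   # -> 500.0 mm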

  18. Modelling remediation scenarios in historical mining catchments.

    PubMed

    Gamarra, Javier G P; Brewer, Paul A; Macklin, Mark G; Martin, Katherine

    2014-01-01

    Local remediation measures, particularly those undertaken in historical mining areas, can often be ineffective or even deleterious because erosion and sedimentation processes operate at spatial scales beyond those typically used in point-source remediation. Based on realistic simulations of a hybrid landscape evolution model combined with stochastic rainfall generation, we demonstrate that similar remediation strategies may result in differing effects across three contrasting European catchments depending on their topographic and hydrologic regimes. Based on these results, we propose a conceptual model of catchment-scale remediation effectiveness based on three basic catchment characteristics: the degree of contaminant source coupling, the ratio of contaminated to non-contaminated sediment delivery, and the frequency of sediment transport events.

  19. Upscaling of Hydraulic Conductivity using the Double Constraint Method

    NASA Astrophysics Data System (ADS)

    El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke

    2013-04-01

    The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured at the scale of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. There is therefore a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s, different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity into a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify coarse-scale conductivities and so decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. The method is exemplified for the Kleine Nete catchment, Belgium. As a result, we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run requires less computation time and less memory. In addition, ranking of models was investigated.
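
    In one dimension the DCM inversion step reduces to Darcy's law applied across the two forward runs: the flux field from the flux-constrained run is divided by the head gradient from the head-constrained run. A minimal sketch of that step on synthetic fields (the catchment application of course uses a full 3-D flow code such as MODFLOW):

      import numpy as np

      def dcm_conductivity_1d(q_flux_run, h_head_run, dx):
          # Darcy's law K = -q / (dh/dx), mixing the flux from run 1 (flux BCs)
          # with the head gradient from run 2 (head BCs)
          return -q_flux_run / np.gradient(h_head_run, dx)

      x = np.linspace(0.0, 100.0, 11)
      q = np.full(x.size, 0.5)        # run 1: uniform Darcy flux (flow in +x)
      h = 10.0 - 0.05 * x             # run 2: head field honoring measured heads
      print(dcm_conductivity_1d(q, h, x[1] - x[0]))   # -> K = 10 everywhere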

  20. Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale

    NASA Astrophysics Data System (ADS)

    Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-08-01

    This study presents the simulation of hydrological processes and nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical, semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based, fully distributed numerical model. This work shows that both models reproduce water and nitrate export at the watershed scale satisfactorily on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model capable of reproducing the spatial variation of nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.

  1. Pattern formation in individual-based systems with time-varying parameters

    NASA Astrophysics Data System (ADS)

    Ashcroft, Peter; Galla, Tobias

    2013-12-01

    We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.

  2. Toward a periodic table of personality: Mapping personality scales between the five-factor model and the circumplex model.

    PubMed

    Woods, Stephen A; Anderson, Neil R

    2016-04-01

    In this study, we examine the structures of 10 personality inventories (PIs) widely used for personnel assessment by mapping the scales of PIs to the lexical Big Five circumplex model resulting in a Periodic Table of Personality. Correlations between 273 scales from 10 internationally popular PIs with independent markers of the lexical Big Five are reported, based on data from samples in 2 countries (United Kingdom, N = 286; United States, N = 1,046), permitting us to map these scales onto the Abridged Big Five Dimensional Circumplex model (Hofstee, de Raad, & Goldberg, 1992). Emerging from our findings we propose a common facet framework derived from the scales of the PIs in our study. These results provide important insights into the literature on criterion-related validity of personality traits, and enable researchers and practitioners to understand how different PI scales converge and diverge and how compound PI scales may be constructed or replicated. Implications for research and practice are considered.

  3. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  4. Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.

    Downscaling of water withdrawals from regional/national to local scales is a fundamental step, and also a common problem, when integrating large-scale economic and integrated assessment models with high-resolution, detailed sectoral models. Tethys, an open-access software package written in Python, implements statistical downscaling algorithms to downscale water withdrawal data spatially and temporally to a finer scale. The spatial resolution is downscaled from the region/basin scale to a grid (0.5 geographic degree) scale, and the temporal resolution from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).

  6. Universal scaling laws in metro area election results.

    PubMed

    Bokányi, Eszter; Szállási, Zoltán; Vattay, Gábor

    2018-01-01

    We explain the anomaly of election results between large cities and rural areas in terms of urban scaling in the 1948-2016 US elections and in the 2016 EU referendum of the UK. The scaling curves are all universal and depend on a single parameter only, and one of the parties always shows superlinear scaling and drives the process, while the sublinear exponent of the other party is merely the consequence of probability conservation. Based on the recently developed model of urban scaling, we give a microscopic model of voter behavior in which we replace diversity characterizing humans in creative aspects with social diversity and tolerance. The model can also predict new political developments such as the fragmentation of the left and the immigration paradox.
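
    Urban scaling relations of the form Y = c N^beta are conventionally estimated as straight-line fits on log-log axes, with beta > 1 indicating the superlinear party. A minimal sketch on synthetic data (the exponent and prefactor below are invented for illustration, not the paper's estimates):

      import numpy as np

      def scaling_exponent(population, votes):
          # fit votes ~ c * population**beta on log-log axes
          beta, logc = np.polyfit(np.log(population), np.log(votes), 1)
          return beta, np.exp(logc)

      pop = np.array([1e4, 5e4, 2e5, 1e6, 5e6])
      votes = 0.3 * pop ** 1.1              # synthetic superlinear party
      print(scaling_exponent(pop, votes))   # -> (approx. 1.1, approx. 0.3)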

  7. Local air temperature tolerance: a sensible basis for estimating climate variability

    NASA Astrophysics Data System (ADS)

    Kärner, Olavi; Post, Piia

    2016-11-01

    The customary representation of climate using sample moments is generally biased due to the noticeably nonstationary behaviour of many climate series. In this study, we introduce a moment-free climate representation based on a statistical model fitted to a long-term daily air temperature anomaly series. This model allows us to separate the climate and weather scale variability in the series. As a result, the climate scale can be characterized using the mean annual cycle of series and local air temperature tolerance, where the latter is computed using the fitted model. The representation of weather scale variability is specified using the frequency and the range of outliers based on the tolerance. The scheme is illustrated using five long-term air temperature records observed by different European meteorological stations.

  8. Scale dependent inference in landscape genetics

    Treesearch

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  9. Friendship Quality Scale: Conceptualization, Development and Validation

    ERIC Educational Resources Information Center

    Thien, Lei Mee; Razak, Nordin Abd; Jamil, Hazri

    2012-01-01

    The purpose of this study is twofold: (1) to initialize a new conceptualization of positive feature based Friendship Quality (FQUA) scale on the basis of four dimensions: Closeness, Help, Acceptance, and Safety; and (2) to develop and validate FQUA scale in the form of reflective measurement model. The scale development and validation procedures…

  10. The Resiliency Scale for Young Adults

    ERIC Educational Resources Information Center

    Prince-Embury, Sandra; Saklofske, Donald H.; Nordstokke, David W.

    2017-01-01

    The Resiliency Scale for Young Adults (RSYA) is presented as an upward extension of the Resiliency Scales for Children and Adolescents (RSCA). The RSYA is based on the "three-factor model of personal resiliency" including "mastery," "relatedness," and "emotional reactivity." Several stages of scale…

  11. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
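
    For concreteness, a toy fixed-cost grid-location instance can be written directly as an ILP. The sketch below (PuLP with its bundled CBC solver; the costs and the simple coverage rule are illustrative, not the paper's formulation) states the exact model that a decomposition heuristic would then split into smaller sub-grids:

      import pulp

      # open facilities on a 4x4 grid so that every cell is covered, at minimum
      # total fixed cost; a facility covers cells within Manhattan distance 1
      cells = [(i, j) for i in range(4) for j in range(4)]
      cost = {c: 10 + 3 * ((c[0] + c[1]) % 3) for c in cells}
      covers = {c: [d for d in cells
                    if abs(d[0] - c[0]) + abs(d[1] - c[1]) <= 1] for c in cells}

      prob = pulp.LpProblem("grid_location", pulp.LpMinimize)
      x = pulp.LpVariable.dicts("open", cells, cat="Binary")
      prob += pulp.lpSum(cost[c] * x[c] for c in cells)
      for d in cells:                      # every demand cell covered
          prob += pulp.lpSum(x[c] for c in cells if d in covers[c]) >= 1
      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print([c for c in cells if x[c].value() == 1])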

  12. Module-based multiscale simulation of angiogenesis in skeletal muscle

    PubMed Central

    2011-01-01

    Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529

  13. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Adaptive behaviour towards flood risk reduction and the interaction between governments, insurers and individuals have, however, hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European member states, insurance and reinsurance markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution toward overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
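
    One behavioural ingredient such a model needs is boundedly rational adoption of protective measures that responds to experienced floods. A deliberately minimal, single-region sketch of that mechanism is given below (all numbers are illustrative; in the actual model such agents are coupled to flood damage models and to government and insurer agents):

      import numpy as np

      rng = np.random.default_rng(1)
      n_agents, years = 10_000, 50
      p_flood = 0.02                                 # annual flood probability
      damage, cost, eff = 50_000.0, 8_000.0, 0.6     # loss, measure cost, effectiveness
      p_perceived = np.full(n_agents, p_flood)       # start at the objective risk
      protected = np.zeros(n_agents, dtype=bool)

      for year in range(years):
          if rng.random() < p_flood:                 # a regional flood event
              hit = rng.random(n_agents) < 0.3       # 30% of households flooded
              p_perceived[hit] = np.minimum(
                  1.0, p_perceived[hit] * rng.uniform(2.0, 4.0, hit.sum()))
          # perception decays back toward the objective risk between events
          p_perceived = p_flood + (p_perceived - p_flood) * 0.95
          # adopt when perceived avoided damage over a 10-year horizon beats cost
          protected |= p_perceived * damage * eff * 10 > cost

      print(f"protected households after {years} years: {protected.mean():.0%}")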

  14. Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling

    DTIC Science & Technology

    2016-07-05

    developed by NASA Glenn Research Center based on Aboudi's micromechanics theories [5] that provides a wide range of capabilities for modeling … Moreover, the analyses will give a general … interface of heterogeneous materials, but also help engineers to use appropriate models for related problems based on the capability of the corresponding approaches.

  15. A tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.

    1992-01-01

    A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.

  16. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1988-01-01

    The paper presents a multiple-time-scale turbulence model based on a single-point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.

  17. Incorporating linguistic knowledge for learning distributed word representations.

    PubMed

    Wang, Yan; Liu, Zhiyuan; Sun, Maosong

    2015-01-01

    Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experimental results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.

  19. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H.; Hu, X.Y.; Adams, N.A.

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.

  20. Evaluation of coral reef carbonate production models at a global scale

    NASA Astrophysics Data System (ADS)

    Jones, N. S.; Ridgwell, A.; Hendy, E. J.

    2014-09-01

    Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle; it is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are essential for understanding how carbonate deposition responds to environmental conditions, including future atmospheric CO2 concentrations, but these models must first be evaluated in terms of their skill in recreating present-day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales, by comparing carbonate budget outputs with independent estimates. We also compile available global data on reef calcification to produce an observation-based dataset for the model evaluation. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature and were implemented within a specifically developed global framework, the Global Reef Accretion Model (GRAM). None of the four models correlated with independent rate estimates of whole-reef calcification. The temperature-only approach was the only model whose output significantly correlated with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modeling approach, accounting for population dynamics in terms of mortality and recruitment and hence coral cover, in estimating global reef carbonate budgets. In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.

  1. Testing of the SEE and OEE post-hip fracture.

    PubMed

    Resnick, Barbara; Orwig, Denise; Zimmerman, Sheryl; Hawkes, William; Golden, Justine; Werner-Bronzert, Michelle; Magaziner, Jay

    2006-08-01

    The purpose of this study was to test the reliability and validity of the Self-Efficacy for Exercise (SEE) and the Outcome Expectations for Exercise (OEE) scales in a sample of 166 older women post-hip fracture. There was some evidence for the validity of the SEE and OEE based on confirmatory factor analysis, Rasch model testing, and criterion-based and convergent validity, as well as evidence of internal consistency based on alpha coefficients and separation indices, and of reliability based on R2 estimates. Rasch model testing demonstrated that some items had high variability. Based on these findings, suggestions are made for how items could be revised and the scales improved for future use.

  2. A Liver-Centric Multiscale Modeling Framework for Xenobiotics.

    PubMed

    Sluka, James P; Fu, Xiao; Swat, Maciej; Belmonte, Julio M; Cosmanescu, Alin; Clendenon, Sherry G; Wambaugh, John F; Glazier, James A

    2016-01-01

    We describe a multi-scale, liver-centric in silico modeling framework for acetaminophen pharmacology and metabolism. We focus on a computational model to characterize whole-body uptake and clearance, liver transport, and phase I and phase II metabolism. We do this by incorporating sub-models that span three scales: Physiologically Based Pharmacokinetic (PBPK) modeling of acetaminophen uptake and distribution at the whole-body level, cell and blood flow modeling at the tissue/organ level, and metabolism at the sub-cellular level. We have used standard modeling modalities at each of the three scales. In particular, we have used the Systems Biology Markup Language (SBML) to create both the whole-body and sub-cellular scales. Our modeling approach allows us to run the individual sub-models separately and to easily exchange models at a particular scale without the need to extensively rework the sub-models at other scales. In addition, the use of SBML greatly facilitates the inclusion of biological annotations directly in the model code. The model was calibrated using human in vivo data for acetaminophen and its sulfate and glucuronate metabolites. We then carried out extensive parameter sensitivity studies, including the pairwise interaction of parameters. We also simulated population variation in exposure and sensitivity to acetaminophen. Our modeling framework can be extended to the prediction of liver toxicity following acetaminophen overdose, or used as a general-purpose pharmacokinetic model for xenobiotics.
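
    At the whole-body scale, a PBPK sub-model of this kind is a small system of compartment ODEs. The following deliberately reduced gut/central/liver sketch conveys the structure (compartments, flows and all parameter values are illustrative, not the calibrated acetaminophen model of the paper):

      import numpy as np
      from scipy.integrate import solve_ivp

      def pbpk_rhs(t, y, ka, q_h, v_c, v_l, cl_int):
          # gut amount, central and liver concentrations; first-order gut
          # absorption, hepatic blood flow q_h, intrinsic clearance cl_int
          gut, c_c, c_l = y
          dgut = -ka * gut
          dcc = (ka * gut + q_h * (c_l - c_c)) / v_c
          dcl = (q_h * (c_c - c_l) - cl_int * c_l) / v_l
          return [dgut, dcc, dcl]

      # 1000 mg oral dose; parameters (1/h, L/h, L) are illustrative only
      sol = solve_ivp(pbpk_rhs, (0.0, 24.0), [1000.0, 0.0, 0.0],
                      args=(1.5, 90.0, 42.0, 1.8, 60.0), dense_output=True)
      print(sol.sol(np.linspace(0.0, 24.0, 5))[1])   # central concentration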

  4. Dynamic testing for shuttle design verification

    NASA Technical Reports Server (NTRS)

    Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.

    1972-01-01

    Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.

  5. Large scale shell model study of the evolution of mixed-symmetry states in chains of nuclei around 132Sn

    NASA Astrophysics Data System (ADS)

    Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.

    2012-10-01

    Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.

  6. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and they regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies across spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes these problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time, it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach, we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given, which enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
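
    The core idea is compact: every sub-grid retention curve is treated as the grid cell's effective van Genuchten curve evaluated at a locally scaled pressure head, so one effective parameter set plus a field of scale factors carries the sub-grid variability. A sketch (parameter values illustrative; the data set itself derives parameters with ROSETTA and uses Warrick's relaxed form of the scaling):

      import numpy as np

      def vg_theta(h, theta_r, theta_s, alpha, n):
          # Mualem-van Genuchten water retention curve theta(h), suction h > 0
          m = 1.0 - 1.0 / n
          return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

      h = np.logspace(0, 4, 60)                        # suction in cm
      theta_eff = vg_theta(h, 0.05, 0.40, 0.02, 1.6)   # effective grid-cell curve

      # Miller-Miller-type similitude: a sub-grid curve is the effective curve
      # at a scaled suction h/lambda; the spread of lambda across sub-grid
      # cells preserves the variability that plain averaging would destroy.
      i100 = h.searchsorted(100.0)
      for lam in (0.5, 1.0, 2.0):                      # illustrative local factors
          theta_local = vg_theta(h / lam, 0.05, 0.40, 0.02, 1.6)
          print(f"lambda={lam}: theta(100 cm) = {theta_local[i100]:.3f}")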

  7. Development and initial validation of a cognitive-based work-nonwork conflict scale.

    PubMed

    Ezzedeen, Souha R; Swiercz, Paul M

    2007-06-01

    Current research related to work and life outside work specifies three types of work-nonwork conflict: time-based, strain-based, and behavior-based. Overlooked in these models is a cognitive-based type of conflict whereby individuals experience work-nonwork conflict through cognitive preoccupation with work. Four studies on six different groups (N=549) were undertaken to develop and validate an initial measure of this construct. Structural equation modeling confirmed a two-factor, nine-item scale. Hypotheses regarding cognitive-based conflict's relationships with life satisfaction, work involvement, work-nonwork conflict, and work hours were supported. The relationship with knowledge work was partially supported in that only the cognitive dimension of cognitive-based conflict was related to extent of knowledge work. Hypotheses regarding cognitive-based conflict's relationship with family demands were rejected in that the cognitive dimension correlated positively, rather than negatively, with number of dependent children and perceived family demands. The study provides encouraging preliminary evidence of scale validity.

  8. Predicting in-hospital mortality of traffic victims: A comparison between AIS- and ICD-9-CM-related injury severity scales when only ICD-9-CM is reported.

    PubMed

    Van Belleghem, Griet; Devos, Stefanie; De Wit, Liesbet; Hubloue, Ives; Lauwaert, Door; Pien, Karen; Putman, Koen

    2016-01-01

    Injury severity scores are important in the context of developing European and national goals on traffic safety, health-care benchmarking and improving patient communication. Various severity scores are available and are mostly based on the Abbreviated Injury Scale (AIS) or the International Classification of Diseases (ICD). The aim of this paper is to compare the predictive value for in-hospital mortality of the various severity scores when only the International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) is reported. To estimate severity scores based on the AIS lexicon, ICD-9-CM codes were converted with the ICD Programmes for Injury Categorization (ICDPIC) and four AIS-based severity scores were derived: Maximum AIS (MaxAIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Exponential Injury Severity Score (EISS). Based on ICD-9-CM, six severity scores were calculated. Determined by the number of injuries taken into account and the means by which survival risk ratios (SRRs) were calculated, four different approaches were used to calculate the ICD-9-based Injury Severity Scores (ICISS). The Trauma Mortality Prediction Model (TMPM) was calculated with the ICD-9-CM-based model-averaged regression coefficients (MARC) for both the single worst injury and multiple injuries. Severity scores were compared via model discrimination and calibration. Model comparisons were performed separately for the severity scores based on the single worst injury and on multiple injuries. For ICD-9-based scales, the estimated area under the receiver operating characteristic curve (AUROC) ranges between 0.94 and 0.96, while for AIS-based scales it ranges between 0.72 and 0.76. The intercept in the calibration plots is not significantly different from 0 for MaxAIS, ICISS and TMPM. When only ICD-9-CM codes are reported, ICD-9-CM-based severity scores perform better than severity scores based on the conversion to AIS. Copyright © 2015 Elsevier Ltd. All rights reserved.
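
    As a brief reminder of the standard definitions behind these scores (not quoted from the paper itself): the survival risk ratio of an ICD-9-CM injury code is the observed survival fraction among patients carrying that code, and the multiple-injury and single-worst-injury ICISS variants follow as

        \[
          \mathrm{SRR}_j = \frac{\text{survivors with code } j}{\text{patients with code } j},
          \qquad
          \mathrm{ICISS} = \prod_{j=1}^{n} \mathrm{SRR}_j,
          \qquad
          \mathrm{ICISS}_{\text{worst}} = \min_{j} \mathrm{SRR}_j .
        \]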

  9. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
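
    For orientation, the two competing formulations in their usual form (notation assumed here, not taken from the paper): with measured values x_i and standard uncertainties u_i, the location-scale approach inflates the weighted-least-squares uncertainty by the Birge ratio R_B, while the random effects model introduces an additional between-measurement variance \tau^2:

        \[
          \hat{\mu} = \frac{\sum_i x_i / u_i^2}{\sum_i 1 / u_i^2},
          \qquad
          R_B^2 = \frac{1}{n-1} \sum_i \frac{(x_i - \hat{\mu})^2}{u_i^2},
        \]
        \[
          x_i = \mu + b_i + \varepsilon_i,
          \qquad b_i \sim N(0, \tau^2),
          \quad \varepsilon_i \sim N(0, u_i^2).
        \]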

  10. Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models

    NASA Astrophysics Data System (ADS)

    Shen, Haibo; Zhou, Weican; Zhao, Haikun

    2017-09-01

    Based on the Coupled Model Intercomparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as are the corresponding TC simulations from the downscaling system. Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined: for each factor, the ten best-performing of the 30 CMIP5 models are selected and BMA is conducted on this subset. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model a weight in the ensemble according to its predictive skill. Poorly performing models therefore do not unduly affect the BMA, and the ensemble outcomes are improved. Finally, based upon the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors: sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.
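
    The difference between the two ensemble schemes can be stated compactly (standard Bayesian model averaging identities, paraphrased rather than quoted from the paper): EMA fixes equal weights 1/K, whereas BMA weights each of the K models by its posterior probability given training data D,

        \[
          p(y \mid D) = \sum_{k=1}^{K} w_k \, p(y \mid M_k, D),
          \qquad
          w_k = p(M_k \mid D) \propto p(D \mid M_k)\, p(M_k),
        \]

    so that models with poor predictive skill receive small weights and contribute little to the ensemble environmental factors.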

  11. LAMMPS integrated materials engine (LIME) for efficient automation of particle-based simulations: application to equation of state generation

    NASA Astrophysics Data System (ADS)

    Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.

    2017-07-01

    We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.

  12. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex

    PubMed Central

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities of the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors. PMID:27536235

  13. Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Bozdag, E.

    2014-12-01

    We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function, probed via a multi-scale wavelet analysis, to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model extending to 100 m depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
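
    A minimal sketch of the coarse-to-fine idea, assuming PyWavelets ("pywt") for the discrete wavelet transform; the inversion loop and the names "predict" and "update_model" are hypothetical placeholders, not the authors' code:

        import numpy as np
        import pywt

        def lowpass(trace, wavelet="db4", keep_details=0, total_levels=5):
            """Keep the approximation plus the coarsest 'keep_details' detail bands."""
            coeffs = pywt.wavedec(trace, wavelet, level=total_levels)
            for j in range(1 + keep_details, len(coeffs)):
                coeffs[j] = np.zeros_like(coeffs[j])   # zero out finer scales
            return pywt.waverec(coeffs, wavelet)[: len(trace)]

        # Multiscale loop (schematic): invert coarse data first, add finer scales.
        # for level in range(6):
        #     d_obs = lowpass(observed, keep_details=level)
        #     d_syn = lowpass(predict(model), keep_details=level)
        #     model = update_model(model, d_obs - d_syn)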

  14. Large-Eddy Simulations of Radiatively Driven Convection: Sensitivities to the Representation of Small Scales.

    NASA Astrophysics Data System (ADS)

    Stevens, Bjorn; Moeng, Chin-Hoh; Sullivan, Peter P.

    1999-12-01

    Large-eddy simulations of a smoke cloud are examined with respect to their sensitivity to small scales as manifest in either the grid spacing or the subgrid-scale (SGS) model. Calculations based on a Smagorinsky SGS model are found to be more sensitive to the effective resolution of the simulation than are calculations based on the prognostic turbulent kinetic energy (TKE) SGS model. The difference between calculations based on the two SGS models is attributed to the advective transport, diffusive transport, and/or time-rate-of-change terms in the TKE equation. These terms are found to be leading order in the entrainment zone and allow the SGS TKE to behave in a way that tends to compensate for changes that result in larger or smaller resolved scale entrainment fluxes. This compensating behavior of the SGS TKE model is attributed to the fact that changes that reduce the resolved entrainment flux (viz., values of the eddy viscosity in the upper part of the PBL) simultaneously tend to increase the buoyant production of SGS TKE in the radiatively destabilized portion of the smoke cloud. Increased production of SGS TKE in this region then leads to increased amounts of transported, or fossil, SGS TKE in the entrainment zone itself, which in turn leads to compensating increases in the SGS entrainment fluxes. In the Smagorinsky model, the absence of a direct connection between SGS TKE in the entrainment and radiatively destabilized zones prevents this compensating mechanism from being active, and thus leads to calculations whose entrainment rate sensitivities as a whole reflect the sensitivities of the resolved-scale fluxes to values of upper PBL eddy viscosities.
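
    For reference, the two closures in their common form (standard definitions, not quoted from the paper): the Smagorinsky model ties the eddy viscosity algebraically to the resolved strain rate, whereas the TKE model carries a prognostic SGS energy e whose tendency, advection and diffusion terms supply the compensating transport discussed above,

        \[
          \nu_t^{\mathrm{Smag}} = (C_s \Delta)^2 |\bar{S}|,
          \qquad |\bar{S}| = \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}},
          \qquad
          \nu_t^{\mathrm{TKE}} = C_k \Delta \sqrt{e},
        \]
        \[
          \frac{\partial e}{\partial t} + \bar{u}_j \frac{\partial e}{\partial x_j}
          = P + B - \varepsilon
          + \frac{\partial}{\partial x_j}\!\left( 2 \nu_t \frac{\partial e}{\partial x_j} \right),
        \]

    with P the shear production, B the buoyant production and \varepsilon the SGS dissipation.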

  15. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  16. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  17. Regional climates in the GISS general circulation model: Surface air temperature

    NASA Technical Reports Server (NTRS)

    Hewitson, Bruce

    1994-01-01

    One of the more viable research techniques into global climate change for the purpose of understanding the consequent environmental impacts is based on the use of general circulation models (GCMs). However, GCMs are currently unable to reliably predict the regional climate change resulting from global warming, and it is at the regional scale that predictions are required for understanding human and environmental responses. Regional climates in the extratropics are in large part governed by the synoptic-scale circulation and the feasibility of using this interscale relationship is explored to provide a way of moving to grid cell and sub-grid cell scales in the model. The relationships between the daily circulation systems and surface air temperature for points across the continental United States are first developed in a quantitative form using a multivariate index based on principal components analysis (PCA) of the surface circulation. These relationships are then validated by predicting daily temperature using observed circulation and comparing the predicted values with the observed temperatures. The relationships predict surface temperature accurately over the major portion of the country in winter, and for half the country in summer. These relationships are then applied to the surface synoptic circulation of the Goddard Institute for Space Studies (GISS) GCM control run, and a set of surface grid cell temperatures are generated. These temperatures, based on the larger-scale validated circulation, may now be used with greater confidence at the regional scale. The generated temperatures are compared to those of the model and show that the model has regional errors of up to 10 C in individual grid cells.
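
    A minimal sketch of the statistical chain described above, with synthetic stand-in arrays (the actual index construction and validation are more involved): daily circulation fields are reduced by PCA and local temperature is regressed on the component scores, after which the fitted relationship can be applied to GCM-simulated circulation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        slp_obs = rng.standard_normal((3650, 500))   # observed daily circulation grids
        t_obs = rng.standard_normal(3650)            # observed daily temperature, one cell

        pca = PCA(n_components=10).fit(slp_obs)      # multivariate circulation index
        reg = LinearRegression().fit(pca.transform(slp_obs), t_obs)

        slp_gcm = rng.standard_normal((3650, 500))   # GCM control-run circulation fields
        t_gcm_cell = reg.predict(pca.transform(slp_gcm))  # generated grid-cell temperatures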

  18. Factor Analysis of the HEW National Strategy for Youth Development Model's Community Program Impact Scales.

    ERIC Educational Resources Information Center

    Truckenmiller, James L.

    The former HEW (Health, Education, and Welfare) National Strategy for Youth Development Model proposed a community-based program to promote positive youth development and to prevent delinquency through a sequence of youth needs assessments, needs-targeted programs, and program impact evaluation. HEW Community Program Impact Scales data obtained…

  19. A Teacher-Friendly Instrument in Identifying Learning Styles in the Classroom.

    ERIC Educational Resources Information Center

    Pitts, Joseph I.

    This report describes a reliability and validity study on a learning styles instrument that was developed based on the Dunn, Dunn, & Price model. That model included 104 Likert five-point scale items for investigating 24 scales grouped into five categories considered likely to affect learning. The Learning Style Preference Inventory (LSPI)…

  20. Recursive renormalization group theory based subgrid modeling

    NASA Technical Reports Server (NTRS)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  1. Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…

  2. Using GIS Models to Identify Relative Nitrogen Attenuation by Riparian Buffers in the Coastal Plain of North Carolina

    EPA Science Inventory

    Riparian areas have demonstrated the ability to attenuate nutrients and provide water quality services at the field scale, but services of riparian buffers for downstream users should be assessed at watershed scales. GIS-based riparian models have been developed to connect ripari...

  3. Comparison of Local Scale Measured and Modeled Brightness Temperatures and Snow Parameters from the CLPX 2003 by Means of a Dense Medium Radiative Transfer Theory Model

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Armstrong, Richard; Brodzik, Mary J.; Hardy, Janet

    2004-01-01

    Microwave remote sensing offers distinct advantages for observing the cryosphere. Solar illumination is not required, and spatial and temporal coverage are excellent from polar-orbiting satellites. Passive microwave measurements are sensitive to the two most useful physical quantities for many hydrological applications: physical temperature and water content/state. Sensitivity to the latter is a direct result of the microwave sensitivity to the dielectric properties of natural media, including snow, ice, soil (frozen or thawed), and vegetation. These considerations are factors motivating the development of future cryospheric satellite remote sensing missions, continuing and improving on a 26-year microwave measurement legacy. Perhaps the biggest issues regarding the use of such satellite measurements involve how to relate parameter values at spatial scales as small as a hectare to observations with sensor footprints that may be up to 25 x 25 km. The NASA Cold-land Processes Field Experiment (CLPX) generated a dataset designed to enhance understanding of such scaling issues. CLPX observations were made in February (dry snow) and March (wet snow), 2003 in Colorado, USA, at scales ranging from plot scale to 25 x 25 km satellite footprints. Of interest here are passive microwave observations from ground-based, airborne, and satellite sensors, as well as meteorological and snowpack measurements that will enable studies of the effects of spatial heterogeneity of surface conditions on the observations. Prior to performing such scaling studies, an evaluation of snowpack forward modeling at the plot scale (the least heterogeneous scale) is in order. This is the focus of this paper. Many forward models of snow signatures (brightness temperatures) have been developed over the years. It is now recognized that a dense medium radiative transfer (DMRT) treatment represents a high degree of physical fidelity for snow modeling, yet dense medium models are particularly sensitive to snowpack structural parameters such as grain size, density, and depth, parameters that may vary substantially within a snowpack. Microwave radiometric data and snow pit measurements collected at the Local-Scale Observation Site (LSOS) during the third Intensive Observation Period (IOP3) of the CLPX have been used to test the capabilities of a DMRT model using the Quasi Crystalline Approximation with Coherent Potential (QCA-CP). The radiometric measurements were made by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) system. We evaluate the degree to which a DMRT-based model can accurately reproduce the GBMR-7 brightness temperatures at different frequencies and incidence angles.

  4. Granular activated carbon adsorption of MIB in the presence of dissolved organic matter.

    PubMed

    Summers, R Scott; Kim, Soo Myung; Shimabuku, Kyle; Chae, Seon-Ha; Corwin, Christopher J

    2013-06-15

    Based on the results of over twenty laboratory granular activated carbon (GAC) column runs, models were developed and utilized for the prediction of 2-methylisoborneol (MIB) breakthrough behavior at parts-per-trillion levels and verified with pilot-scale data. The influent MIB concentration was found not to impact the concentration-normalized breakthrough. Increasing the influent background dissolved organic matter (DOM) concentration was found to systematically decrease the GAC adsorption capacity for MIB. A series of empirical models were developed that related the throughput in bed volumes for a range of MIB breakthrough targets to the influent DOM concentration. The proportional diffusivity (PD) designed rapid small-scale column test (RSSCT) could be used directly to scale up MIB breakthrough performance below 15% breakthrough. The empirical model predicting the throughput to 50% breakthrough from the influent DOM concentration served as input to the pore diffusion model (PDM), which then predicted MIB breakthrough performance below 50% breakthrough well. The PDM predictions of throughput to 10% breakthrough closely matched the PD-RSSCT and pilot-scale 10% MIB breakthrough. Copyright © 2013 Elsevier Ltd. All rights reserved.
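
    The similitude rule behind the RSSCT designs mentioned here is, in its usual form (convention assumed, not quoted from the paper): with empty-bed contact time EBCT and GAC particle diameter d for the small column (SC) and large column (LC),

        \[
          \frac{\mathrm{EBCT}_{SC}}{\mathrm{EBCT}_{LC}}
          = \left( \frac{d_{SC}}{d_{LC}} \right)^{2 - X},
        \]

    where X = 0 gives the constant-diffusivity design and X = 1 the proportional-diffusivity (PD) design used in this study, in which the contact time scales linearly with particle diameter.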

  5. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, including trajectories not representable by the analytic reference model initially. To overcome the interference among sub-systems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this reference model, a digital decentralized adaptive tracker is developed using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance in both the transient and the steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
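
    The learning step can be illustrated with a generic first-order ILC update law (a textbook form, not the paper's full decentralized scheme): with reference r(t), output y_k(t) and tracking error e_k(t) = r(t) - y_k(t) at iteration k, the next control input is

        \[
          u_{k+1}(t) = u_k(t) + \Gamma \, e_k(t+1),
        \]

    where here the learning gain \Gamma is the quantity tuned by evolutionary programming to speed up convergence.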

  6. Toward Improved Parameterization of a Meso-Scale Hydrologic Model in a Discontinuous Permafrost, Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.

    2013-12-01

    The sub-arctic environment can be characterized as lying in the zone of discontinuous permafrost. Although the distribution of permafrost is site-specific, it dominates many of the hydrologic and ecologic responses and functions, including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous-dominated and coniferous-dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of the headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, lies within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north-facing slopes and valley bottoms; permafrost-free soils and deciduous vegetation are generally found on south-facing slopes. In this study, hydrologic simulations using fine-scale vegetation and soil parameterizations - based upon slope and aspect analysis at a 50 meter resolution - were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution). Preliminary simulation results show that soil and vegetation parameterizations based upon fine-scale slope/aspect analysis increase the R2 values (0.5 to 0.65 in the high-permafrost (53%) basin; 0.43 to 0.56 in the low-permafrost (2%) basin) relative to parameterizations based on coarse-scale data. These results suggest that fine-resolution parameterizations can improve meso-scale hydrological modeling in this region.

  7. Rapid high-throughput characterisation, classification and selection of recombinant mammalian cell line phenotypes using intact cell MALDI-ToF mass spectrometry fingerprinting and PLS-DA modelling.

    PubMed

    Povey, Jane F; O'Malley, Christopher J; Root, Tracy; Martin, Elaine B; Montague, Gary A; Feary, Marc; Trim, Carol; Lang, Dietmar A; Alldread, Richard; Racher, Andrew J; Smales, C Mark

    2014-08-20

    Despite many advances in the generation of high producing recombinant mammalian cell lines over the last few decades, cell line selection and development is often slowed by the inability to predict a cell line's phenotypic characteristics (e.g. growth or recombinant protein productivity) at larger scale (large volume bioreactors) using data from early cell line construction at small culture scale. Here we describe the development of an intact cell MALDI-ToF mass spectrometry fingerprinting method for mammalian cells early in the cell line construction process whereby the resulting mass spectrometry data are used to predict the phenotype of mammalian cell lines at larger culture scale using a Partial Least Squares Discriminant Analysis (PLS-DA) model. Using MALDI-ToF mass spectrometry, a library of mass spectrometry fingerprints was generated for individual cell lines at the 96 deep well plate stage of cell line development. The growth and productivity of these cell lines were evaluated in a 10L bioreactor model of Lonza's large-scale (up to 20,000L) fed-batch cell culture processes. Using the mass spectrometry information at the 96 deep well plate stage and phenotype information at the 10L bioreactor scale a PLS-DA model was developed to predict the productivity of unknown cell lines at the 10L scale based upon their MALDI-ToF fingerprint at the 96 deep well plate scale. This approach provides the basis for the very early prediction of cell lines' performance in cGMP manufacturing-scale bioreactors and the foundation for methods and models for predicting other mammalian cell phenotypes from rapid, intact-cell mass spectrometry based measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
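
    A minimal PLS-DA sketch, assuming scikit-learn: PLS regression onto one-hot class labels, with the class assigned by the largest predicted response. The array shapes and class structure are illustrative placeholders, not the study's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.random((60, 2000))               # intact-cell MALDI-ToF fingerprints
        y = np.repeat([0, 1, 2], 20)             # phenotype classes at bioreactor scale
        Y = np.eye(3)[y]                         # one-hot encoding for PLS-DA

        pls = PLSRegression(n_components=5).fit(X, Y)
        y_hat = pls.predict(X).argmax(axis=1)    # predicted class per cell line
        print("training accuracy:", (y_hat == y).mean())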

  8. Development of in situ stiffness properties for shuttle booster filament wound case

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1984-01-01

    Subscale tests were extensively used to develop a data base on elastic properties of graphite-epoxy wound pressure vessels. Scaling phenomena were observed to influence biaxial strains. Causes for scaling are proposed and lamina models with scaling factors are presented.

  9. Agent-based modeling of the interaction between CD8+ T cells and Beta cells in type 1 diabetes.

    PubMed

    Ozturk, Mustafa Cagdas; Xu, Qian; Cinar, Ali

    2018-01-01

    We propose an agent-based model for the simulation of the autoimmune response in T1D. The model incorporates cell behavior from various rules derived from the current literature and is implemented on a high-performance computing system, which enables the simulation of a significant portion of the islets in the mouse pancreas. Simulation results indicate that the model is able to capture the trends that emerge during the progression of the autoimmunity. The multi-scale nature of the model enables definition of rules or equations that govern cellular or sub-cellular level phenomena and observation of the outcomes at the tissue scale. It is expected that such a model would facilitate in vivo clinical studies through rapid testing of hypotheses and planning of future experiments by providing insight into disease progression at different scales, some of which may not be obtained easily in clinical studies. Furthermore, the modular structure of the model simplifies tasks such as the addition of new cell types, and the definition or modification of different behaviors of the environment and the cells with ease.

  10. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    PubMed

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  11. Study on the millimeter-wave scale absorber based on the Salisbury screen

    NASA Astrophysics Data System (ADS)

    Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan

    2018-03-01

    To address the design of a millimeter-wave scale absorber, a Salisbury screen absorber is employed and designed on the basis of its reflection loss (RL). By optimizing parameters including the sheet resistance of the surface resistive layer and the permittivity and thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method: a Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Plate models and tri-corner reflector (TCR) models are then constructed according to the designed result, and their scattering properties are simulated with FEKO. Results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. This work has theoretical and practical significance for electromagnetic measurements at large scale ratios.
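
    The quantities being matched can be summarized with the standard Salisbury-screen transmission-line relations (symbols assumed here): a sheet of resistance R on a grounded dielectric of thickness d and wave impedance Z_d = Z_0/\sqrt{\varepsilon_r} presents

        \[
          Z_{\mathrm{in}} = \frac{ j R\, Z_d \tan(\beta d) }{ R + j Z_d \tan(\beta d) },
          \qquad
          \Gamma = \frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0},
          \qquad
          \mathrm{RL} = -20 \log_{10} |\Gamma| ,
        \]

    so at the quarter-wave condition \beta d = \pi/2 the input impedance reduces to R, and reflection vanishes for R = Z_0 \approx 377\ \Omega; the optimization adjusts R, \varepsilon_r and d so that the RL curve reproduces that of the theoretical scale absorber.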

  12. The NASA modern technology rotors program

    NASA Technical Reports Server (NTRS)

    Watts, M. E.; Cross, J. L.

    1986-01-01

    Existing data bases regarding helicopters are based on work conducted on 'old-technology' rotor systems. The Modern Technology Rotors (MTR) Program is to provide extensive data bases on rotor systems using present and emerging technology. The MTR is concerned with modern, four-bladed, rotor systems presently being manufactured or under development. Aspects of MTR philosophy are considered along with instrumentation, the MTR test program, the BV 360 Rotor, and the UH-60 Black Hawk. The program phases include computer modelling, shake test, model-scale test, minimally instrumented flight test, extensively pressure-instrumented-blade flight test, and full-scale wind tunnel test.

  13. COSP - A computer model of cyclic oxidation

    NASA Technical Reports Server (NTRS)

    Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.

    1991-01-01

    A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
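
    A minimal sketch of a uniform-spall cyclic oxidation balance in the spirit of the model described here: parabolic scale growth while hot, with a fixed fraction of the retained scale spalling on each cooldown. Parameter values are illustrative only, and the real COSP bookkeeping (oxide stoichiometry, metal consumption) is more detailed.

        import math

        kp = 0.05    # parabolic rate constant, mg^2 cm^-4 h^-1 (illustrative)
        dt = 1.0     # hot time per cycle, h
        q = 0.02     # fraction of retained scale spalled on each cooldown

        scale, lost = 0.0, 0.0   # retained and cumulative spalled oxide, mg cm^-2
        for cycle in range(500):
            scale = math.sqrt(scale**2 + kp * dt)   # parabolic growth while hot
            spall = q * scale                       # uniform spalling on cooling
            scale -= spall
            lost += spall

        # Net weight change is the oxygen retained in the scale minus the metal
        # carried away in spall; it turns negative once spalling dominates.
        print(f"retained: {scale:.3f} mg/cm^2, spalled: {lost:.3f} mg/cm^2")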

  14. Stream classification of the Apalachicola-Chattahoochee-Flint River System to support modeling of aquatic habitat response to climate change

    USGS Publications Warehouse

    Elliott, Caroline M.; Jacobson, Robert B.; Freeman, Mary C.

    2014-01-01

    A stream classification and associated datasets were developed for the Apalachicola-Chattahoochee-Flint River Basin to support biological modeling of species response to climate change in the southeastern United States. The U.S. Geological Survey and the Department of the Interior’s National Climate Change and Wildlife Science Center established the Southeast Regional Assessment Project (SERAP) which used downscaled general circulation models to develop landscape-scale assessments of climate change and subsequent effects on land cover, ecosystems, and priority species in the southeastern United States. The SERAP aquatic and hydrologic dynamics modeling efforts involve multiscale watershed hydrology, stream-temperature, and fish-occupancy models, which all are based on the same stream network. Models were developed for the Apalachicola-Chattahoochee-Flint River Basin and subbasins in Alabama, Florida, and Georgia, and for the Upper Roanoke River Basin in Virginia. The stream network was used as the spatial scheme through which information was shared across the various models within SERAP. Because these models operate at different scales, coordinated pair versions of the network were delineated, characterized, and parameterized for coarse- and fine-scale hydrologic and biologic modeling. The stream network used for the SERAP aquatic models was extracted from a 30-meter (m) scale digital elevation model (DEM) using standard topographic analysis of flow accumulation. At the finer scale, reaches were delineated to represent lengths of stream channel with fairly homogenous physical characteristics (mean reach length = 350 m). Every reach in the network is designated with geomorphic attributes including upstream drainage basin area, channel gradient, channel width, valley width, Strahler and Shreve stream order, stream power, and measures of stream confinement. The reach network was aggregated from tributary junction to tributary junction to define segments for the benefit of hydrological, soil erosion, and coarser ecological modeling. Reach attributes are summarized for each segment. In six subbasins segments are assigned additional attributes about barriers (usually impoundments) to fish migration and stream isolation. Segments in the six sub-basins are also attributed with percent urban area for the watershed upstream from the stream segment for each decade from 2010–2100 from models of urban growth. On a broader scale, for application in a coarse-scale species-response model, the stream-network information is aggregated and summarized by 256 drainage subbasins (Hydrologic Response Units) used for watershed hydrologic and stream-temperature models. A model of soil erodibility based on the Revised Universal Soil Loss Equation also was developed at this scale to parameterize a model to evaluate stream condition. The reach-scale network was classified using multivariate clustering based on modeled channel width, valley width, and mean reach gradient as variables. The resulting classification consists of a 6-cluster and a 12-cluster classification for every reach in the Apalachicola-Chattahoochee-Flint Basin. We present an example of the utility of the classification that was tested using the occurrence of two species of darters and two species of minnows in the Apalachicola-Chattahoochee-Flint River Basin, the blackbanded darter and Halloween darter, and the bluestripe shiner and blacktail shiner.
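
    A minimal sketch of the reach-classification step, assuming scikit-learn: k-means clustering on standardized geomorphic attributes at k = 6 and k = 12. The input array is a placeholder for the per-reach attribute table.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        # columns: channel width, valley width, mean reach gradient (placeholder data)
        reaches = np.random.rand(10000, 3)
        Z = StandardScaler().fit_transform(reaches)   # put attributes on equal footing

        labels6 = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(Z)
        labels12 = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(Z)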

  15. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM's) inability to resolve sub-grid-scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques, along with a summary of the implications of these results for future aerosol model development.
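
    One simple way to quantify a plume length scale from flight-track data, as a hedged sketch (the study's own scale definitions may differ): compute the autocorrelation of the along-track BC series and take the first lag at which it falls below 1/e.

        import numpy as np

        bc = np.random.rand(2000)        # placeholder along-track BC concentrations
        dx = 1.0                         # sample spacing, km

        x = bc - bc.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        acf /= acf[0]                    # normalized autocorrelation at lags >= 0

        lag = np.argmax(acf < 1.0 / np.e)    # first lag below 1/e
        print("e-folding plume scale:", lag * dx, "km")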

  16. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1987-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  17. A multiple-time-scale turbulence model based on variable partitioning of the turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  18. Experimental Research on the Dense CFB's Riser and the Simulation Based on the EMMS Model

    NASA Astrophysics Data System (ADS)

    Wang, X. Y.; Wang, S. D.; Fan, B. G.; Liao, L. L.; Jiang, F.; Xu, X.; Wu, X. Z.; Xiao, Y. H.

    2010-03-01

    The flow structure in the CFB (circulating fluidized bed) riser has been investigated. Experimental studies were performed in a cold unit with a 270 mm × 270 mm square cross-section and a height of 10 m. Since drag force models based on homogeneous two-phase flow, such as the Gidaspow drag model, cannot depict the heterogeneous structures of the gas-solid flow, the structure-dependent energy-minimization multi-scale (EMMS) model, which accounts for this heterogeneity, was applied in this paper and a revised drag force model based on the EMMS model was proposed. A 2D two-fluid model was used to simulate a bench-scale square cross-section riser of a cold CFB. The typical core-annulus structure and the back-mixing near the wall of the riser were observed, and the assembly and fragmentation processes of clusters were captured. Compared with the Gidaspow drag model, the results obtained with the revised EMMS-based drag model show better consistency with the experimental data. The model can also depict the difference between the two exit configurations. This study once again confirms the key role of the drag force in CFD (computational fluid dynamics) simulation and shows the ability of the revised drag model to describe the gas-solid flow in CFB risers.

  19. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale affected area of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered over several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  20. Data for Room Fire Model Comparisons

    PubMed Central

    Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

    With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form) should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one room tests with individual furniture items to a series of tests conducted in a multiple story hotel equipped with a zoned smoke control system. PMID:28184121

  1. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale affected area of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered over several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  2. Data for Room Fire Model Comparisons.

    PubMed

    Peacock, Richard D; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

    With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form) should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one room tests with individual furniture items to a series of tests conducted in a multiple story hotel equipped with a zoned smoke control system.

  3. Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests

    NASA Technical Reports Server (NTRS)

    Potter, C. S.; Peterson, David L. (Technical Monitor)

    1997-01-01

    Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series driving data therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of first-order importance in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature that will be essential for assessing the effects of anthropogenic disturbance on regional gas budgets and of long-term climate variability on biosphere-atmosphere exchange.

  4. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will extensively sample middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to estimating the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures; the latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as in more theoretical studies.

  5. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out with Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
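
    To make the setup concrete, here is a small Python sketch of Bayesian inference for a binormal-style model with a zero-one covariate and a scale parameter acting on the diseased group. It uses a random-walk Metropolis sampler as a stand-in for the Gibbs/ARS updates that BUGS performs, and all priors and parameter values are illustrative assumptions, not the paper's.

        import numpy as np
        from scipy.stats import norm

        # Illustrative binormal setup: non-diseased scores ~ N(0, 1) and
        # diseased scores ~ N(a, b), so the scale parameter b acts only on
        # the x = 1 (diseased) group, as in the probit-link formulation.
        rng = np.random.default_rng(0)
        a_true, b_true = 1.2, 1.5
        x = np.concatenate([np.zeros(100), np.ones(100)])   # 0 = healthy
        y = np.where(x == 0, rng.normal(0, 1, 200),
                     rng.normal(a_true, b_true, 200))

        def log_post(a, log_b):
            """Log-posterior with Gaussian priors (log-scale for b > 0)."""
            b = np.exp(log_b)
            mu = a * x                      # mean shifts only for diseased cases
            sd = np.where(x == 1, b, 1.0)   # scale parameter on diseased group
            ll = norm.logpdf(y, mu, sd).sum()
            return ll + norm.logpdf(a, 0, 10) + norm.logpdf(log_b, 0, 10)

        # Random-walk Metropolis: a stand-in for BUGS's Gibbs/ARS updates.
        samples, cur = [], np.array([0.0, 0.0])
        cur_lp = log_post(*cur)
        for _ in range(5000):
            prop = cur + rng.normal(0, 0.1, 2)
            lp = log_post(*prop)
            if np.log(rng.uniform()) < lp - cur_lp:
                cur, cur_lp = prop, lp
            samples.append(cur.copy())
        a_hat, log_b_hat = np.mean(samples[1000:], axis=0)   # drop burn-in
        print("posterior means:", a_hat, np.exp(log_b_hat))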

  6. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular-level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but also avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has demonstrated large performance improvements over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  7. Pharmacokinetic-Pharmacodynamic Modeling in Pediatric Drug Development, and the Importance of Standardized Scaling of Clearance.

    PubMed

    Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F

    2018-04-19

    Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
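
    As a concrete illustration of standardized clearance scaling, the Python sketch below implements the widely used allometric-weight-plus-maturation form, CL = CL_adult x (WT/70)^0.75 x MF(PMA). The maturation-function constants shown are typical literature-style values used for illustration and are not the paper's recommended estimates.

        # A minimal sketch of allometric-plus-maturation clearance scaling in
        # children. TM50 and the Hill coefficient below are illustrative
        # placeholder values, not estimates from the paper.
        def scaled_clearance(cl_adult, weight_kg, pma_weeks, tm50=47.7, hill=3.4):
            size = (weight_kg / 70.0) ** 0.75          # fixed allometric exponent
            maturation = pma_weeks**hill / (pma_weeks**hill + tm50**hill)
            return cl_adult * size * maturation

        # Example: a ~6-month-old infant (PMA ~66 weeks, ~7.5 kg) scaled from
        # an adult clearance of 10 L/h.
        print(scaled_clearance(cl_adult=10.0, weight_kg=7.5, pma_weeks=66.0))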

  8. Ocean Acidification Portends Acute Habitat Compression for Atlantic Cod (Gadus morhua) in a Physiologically-informed Metabolic Rate Model

    NASA Astrophysics Data System (ADS)

    Del Raye, G.; Weng, K.

    2011-12-01

    Ocean acidification affects organisms on a biochemical scale, yet its societal impacts manifest from changes that propagate through entire populations. Successful forecasting of the effects of ocean acidification therefore depends on at least two steps: (1) deducing systemic physiology based on subcellular stresses and (2) scaling individual physiology up to ecosystem processes. Predictions that are based on known biological processes (process-based models) may fare better than purely statistical models in both these steps because the latter are less robust to novel environmental conditions. Here we present a process-based model that uses temperature, pO2, and pCO2 to predict maximal aerobic scope in Atlantic cod. Using this model, we show that (i) experimentally-derived physiological parameters are sufficient to capture the response of cod aerobic scope to temperature and oxygen, and (ii) subcellular pH effects can be used to predict the systemic physiological response of cod to an acidified ocean. We predict that acute pH stress (on a scale of hours) could limit the mobility of Atlantic cod during diel vertical migration across a pCO2 gradient, promoting habitat compression. Finally, we use a global sensitivity analysis to identify opportunities for the improvement of model uncertainty as well as some physiological adaptations that could mitigate climate stresses on cod in the future.

  9. Tracing Multi-Scale Climate Change at Low Latitude from Glacier Shrinkage

    NASA Astrophysics Data System (ADS)

    Moelg, T.; Cullen, N. J.; Hardy, D. R.; Kaser, G.

    2009-12-01

    Significant shrinkage of glaciers on top of Africa's highest mountain (Kilimanjaro, 5895 m a.s.l.) has been observed between the late 19th century and the present. Multi-year data from our automatic weather station on the largest remaining slope glacier at 5873 m allow us to force and verify a process-based distributed glacier mass balance model. This generates insights into energy and mass fluxes at the glacier-atmosphere interface, their feedbacks, and how they are linked to atmospheric conditions. By means of numerical atmospheric modeling and global climate model simulations, we explore the linkages of the local climate in Kilimanjaro's summit zone to larger-scale climate dynamics - which suggests a causal connection between Indian Ocean dynamics, mesoscale mountain circulation, and glacier mass balance. Based on this knowledge, the verified mass balance model is used for backward modeling of the steady-state glacier extent observed in the 19th century, which yields the characteristics of local climate change between that time and the present (30-45% less precipitation, 0.1-0.3 hPa less water vapor pressure, 2-4 percentage units less cloud cover at present). Our multi-scale approach provides an important contribution, from a cryospheric viewpoint, to the understanding of how large-scale climate change propagates to the tropical free troposphere. Ongoing work in this context targets the millennium-scale relation between large-scale climate and glacier behavior (by downscaling precipitation), and the possible effects of regional anthropogenic activities (land use change) on glacier mass balance.

  10. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates regimes with different fractal scaling behaviors. A common estimation method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
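
    For orientation, the sketch below computes a first-order DFA fluctuation function F(s) and then locates a single crossover scale by exhaustive two-segment regression in log-log space. This is a much cruder criterion (minimum residual sum of squares, no confidence intervals) than the paper's scaling-identification regression model; the series, scales, and breakpoint rule are all illustrative.

        import numpy as np

        def dfa_fluctuation(x, scales, order=1):
            """DFA fluctuation function F(s): RMS of detrended profile segments."""
            profile = np.cumsum(x - np.mean(x))
            F = []
            for s in scales:
                n_seg = len(profile) // s
                rms = []
                for v in range(n_seg):
                    seg = profile[v * s:(v + 1) * s]
                    t = np.arange(s)
                    fit = np.polyval(np.polyfit(t, seg, order), t)
                    rms.append(np.mean((seg - fit) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        def find_crossover(scales, F):
            """Two-segment log-log fit; pick the breakpoint minimizing SSE."""
            ls, lF = np.log(scales), np.log(F)
            best = (np.inf, None)
            for k in range(2, len(ls) - 2):          # candidate breakpoints
                sse = 0.0
                for seg in (slice(None, k + 1), slice(k, None)):
                    coef = np.polyfit(ls[seg], lF[seg], 1)
                    sse += np.sum((lF[seg] - np.polyval(coef, ls[seg])) ** 2)
                if sse < best[0]:
                    best = (sse, scales[k])
            return best[1]

        rng = np.random.default_rng(1)
        x = rng.normal(size=4096)      # white noise: no genuine crossover
        scales = np.unique(np.logspace(1, 3, 20).astype(int))
        print(find_crossover(scales, dfa_fluctuation(x, scales)))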

  11. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    NASA Astrophysics Data System (ADS)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying the mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity, and water footprints. MAF can be quantified with spatially explicit process-based models, which may be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10^6 km^2 in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope, and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. Performance is reduced in water-scarce regions, and further research should focus on improving this aspect of regression-based global hydrological models.
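
    The following Python sketch mirrors the general form of such a regression: ordinary least squares on log-transformed MAF with catchment area, precipitation, temperature, slope, and elevation as predictors. The data are synthetic placeholders (not the 1,885-catchment set), so the fitted coefficients and R^2 are meaningful only as a demonstration of the procedure.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 500
        area = 10 ** rng.uniform(0.3, 6, n)          # catchment area, km^2
        precip = rng.uniform(200, 3000, n)           # mean annual precip, mm/yr
        temp = rng.uniform(-5, 28, n)                # mean air temperature, deg C
        slope = rng.uniform(0, 30, n)                # average slope, degrees
        elev = rng.uniform(0, 4000, n)               # average elevation, m
        # Synthetic "truth" linking predictors to log-MAF, plus noise.
        log_maf = (0.9 * np.log10(area) + 0.001 * precip - 0.02 * temp
                   + 0.01 * slope - 0.0001 * elev + rng.normal(0, 0.3, n))

        X = np.column_stack([np.ones(n), np.log10(area), precip, temp, slope, elev])
        beta, *_ = np.linalg.lstsq(X, log_maf, rcond=None)   # OLS fit
        pred = X @ beta
        r2 = 1 - np.sum((log_maf - pred) ** 2) / np.sum((log_maf - log_maf.mean()) ** 2)
        print("coefficients:", np.round(beta, 4), "R^2:", round(r2, 3))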

  12. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies included simulation results from the process-based Soil Water Assessment Tool (SWAT) model as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the WVC, ANN, and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
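
    A minimal Python sketch of the idea follows: decompose observed and simulated series with an undecimated à trous transform (B3-spline filter) and compute a Nash-Sutcliffe efficiency at each scale. The filter choice, number of levels, and synthetic series are assumptions for illustration; the paper's MNSC/MNRMSE definitions may weight or aggregate scales differently.

        import numpy as np

        def a_trous(x, levels=4):
            """Undecimated (a trous) decomposition with a B3-spline kernel."""
            h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
            details, approx = [], x.astype(float)
            for j in range(levels):
                kernel = np.zeros(4 * 2**j + 1)
                kernel[::2**j] = h            # insert 2^j - 1 zeros ("holes")
                smooth = np.convolve(approx, kernel, mode="same")
                details.append(approx - smooth)
                approx = smooth
            return details, approx

        def multiscale_nse(obs, sim, levels=4):
            """Nash-Sutcliffe efficiency per wavelet scale (MNSC-style)."""
            d_obs, a_obs = a_trous(obs, levels)
            d_sim, a_sim = a_trous(sim, levels)
            out = []
            for o, s in zip(d_obs + [a_obs], d_sim + [a_sim]):
                out.append(1 - np.sum((o - s) ** 2) / np.sum((o - np.mean(o)) ** 2))
            return out

        rng = np.random.default_rng(3)
        obs = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.3 * rng.normal(size=1024)
        sim = obs + 0.2 * rng.normal(size=1024)   # model with small random error
        print([round(v, 3) for v in multiscale_nse(obs, sim)])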

  13. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    NASA Astrophysics Data System (ADS)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g., the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To solve this challenging problem efficiently, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluid. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
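
    Since the model is built on a realistic equation of state, a short Python sketch of the Peng-Robinson pressure-volume-temperature relation for a pure component is given below; the critical constants are textbook values for CO2, chosen purely for illustration, and a multi-component implementation would add mixing rules on top of this.

        import numpy as np

        # Peng-Robinson equation of state for a pure component (CO2 here).
        R = 8.314462618                         # J/(mol K)
        Tc, Pc, omega = 304.13, 7.3773e6, 0.2239   # critical point, acentric factor

        def pr_pressure(T, v):
            """Pressure (Pa) at temperature T (K) and molar volume v (m^3/mol)."""
            a = 0.45724 * R**2 * Tc**2 / Pc
            b = 0.07780 * R * Tc / Pc
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
            return R * T / (v - b) - a * alpha / (v**2 + 2 * b * v - b**2)

        print(pr_pressure(300.0, 1e-3))   # roughly 2 MPa for gaseous CO2 at 300 K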

  14. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Jisheng; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g., the Peng–Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To solve this challenging problem efficiently, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young–Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young–Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluid. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young–Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.

  15. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions.

    PubMed

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

    Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning the concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating the compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models.

  16. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    PubMed Central

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

    Purpose: Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning the concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods: Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results: All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating the compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion: The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212

  17. A multi-scale Lattice Boltzmann model for simulating solute transport in 3D X-ray micro-tomography images of aggregated porous materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxian; Crawford, John W.; Flavel, Richard J.; Young, Iain M.

    2016-10-01

    The Lattice Boltzmann (LB) model and X-ray computed tomography (CT) have been used increasingly in combination over the past decade to simulate water flow and chemical transport at the pore scale in porous materials. Because of its limited resolution and the hierarchical structure of most natural soils, X-ray CT can only identify pores larger than its resolution and treats all other pores as solid. As a result, the so-called solid phase in X-ray images may in reality be a grey phase containing substantial connected pores capable of conducting fluids and solutes. Although modified LB models have been developed to simulate fluid flow in such media, models for solute transport are relatively limited. In this paper, we propose an LB model for simulating solute transport in binary soil images containing a permeable solid phase. The model is based on the single-relaxation-time approach and uses a modified partial bounce-back method to describe the resistance of the permeable solid phase to chemical transport. We derive the relationship between the diffusion coefficient and the parameter introduced in the partial bounce-back method, and test the model against an analytical solution for the movement of a pulse of tracer. We also validate it against a classical finite volume method for solute diffusion in a simple 2D image, and then apply the model to a soil image acquired using X-ray tomography at a resolution of 30 μm in an attempt to analyse how the ability of the solid phase to diffuse solute at the micron scale affects the behaviour of the solute at the macro scale after volumetric averaging. Based on the simulated results, we briefly discuss the danger of interpreting experimental results using continuum models without fully understanding the pore-scale processes, as well as the potential of using pore-scale modelling and tomography to help improve continuum models.

  18. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    NASA Astrophysics Data System (ADS)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen at large scales while accounting for single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results, and a detailed system understanding were used to generate regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results for excess nitrogen from the test catchments. Hydrological variables such as precipitation and evapotranspiration were also used to predict excess nitrogen in the regression tree models; hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance at the macro scale. With the regression trees, excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows the potential nitrogen input into the streams of the drainage area to be calculated. The results show that the applied methodology was able to transfer the detailed model results from the meso-scale catchments to the entire state of Thuringia at low computing cost without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment within the regionalization area; the regionalized and modeled excess nitrogen agree to 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures on load reduction in the water bodies of Thuringia, to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
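
    As a toy illustration of the regionalization step (not the GUIDE algorithm itself), the Python sketch below fits a regression tree to synthetic catchment attributes and "excess nitrogen" values produced by a stand-in relationship, then applies it to unseen macro-scale cells; all features and numbers are invented.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(4)
        n = 1000
        precip = rng.uniform(500, 1100, n)       # precipitation, mm/yr
        evapo = rng.uniform(350, 650, n)         # evapotranspiration, mm/yr
        fert = rng.uniform(0, 200, n)            # N fertilizer applied, kg/ha
        # Stand-in "meso-scale model output" for training the tree.
        excess_n = 0.3 * fert + 0.05 * (precip - evapo) + rng.normal(0, 5, n)

        X = np.column_stack([precip, evapo, fert])
        tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=20)
        tree.fit(X, excess_n)

        # "Regionalization": predict excess nitrogen for unseen macro-scale cells.
        new_cells = np.array([[800.0, 500.0, 120.0], [600.0, 450.0, 30.0]])
        print(tree.predict(new_cells))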

  19. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  20. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters, which can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted on a steady-state conversion of the original transient model, combined with the PEST sensitivity analysis tool SENSAN and the particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
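
    To make the workflow concrete, here is a stripped-down Monte Carlo sketch in Python: sample log-conductivities for a few hypothetical hydrogeologic units, propagate them to an advective travel time through a layered column, and rank the units with a crude correlation-based sensitivity. The geometry, priors, and sensitivity measure are illustrative stand-ins for the SENSAN/MODPATH workflow, not a reproduction of it.

        import numpy as np

        rng = np.random.default_rng(5)
        n_runs = 10_000
        thickness = np.array([50.0, 120.0, 80.0])   # m, three layered units
        porosity = np.array([0.15, 0.25, 0.10])
        gradient = 0.005                            # head gradient (dimensionless)

        # Log-normal priors on hydraulic conductivity (m/d) for each unit.
        logK = rng.normal(loc=[0.0, -1.0, 0.5], scale=[0.5, 0.8, 0.3],
                          size=(n_runs, 3))
        K = 10 ** logK
        velocity = K * gradient / porosity          # average linear velocity, m/d
        travel_time = (thickness / velocity).sum(axis=1) / 365.25   # years

        print("median travel time (yr):", np.median(travel_time))
        # Crude sensitivity: correlation of each unit's logK with travel time.
        for i in range(3):
            r = np.corrcoef(logK[:, i], travel_time)[0, 1]
            print(f"unit {i}: corr(logK, T) = {r:+.2f}")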

  1. A model of urban scaling laws based on distance dependent interactions

    NASA Astrophysics Data System (ADS)

    Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.; Neto, Camilo Rodrigues

    2017-03-01

    Socio-economic properties of a city grow faster than linearly with its population, appearing in a log-log plot as the so-called superlinear scaling. Conversely, the larger a city, the more efficient its use of infrastructure, leading to sublinear scaling in these variables. In this work, we propose a simple explanation for those scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential that captures the influence of social interaction on economic performance and the benefit of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the larger the interaction range between citizens and amenities, the greater the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We discuss how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are compared with the theoretical approach and are compatible with the empirical evidence.
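
    The scaling exponents discussed here are obtained empirically by log-log regression of an urban indicator Y against population N, Y ≈ Y0 N^beta. The Python sketch below fits beta for one superlinear and one sublinear synthetic indicator and checks whether the two exponents sum to 2, as the model predicts; the data are generated for illustration only.

        import numpy as np

        rng = np.random.default_rng(6)
        pop = 10 ** rng.uniform(4, 7, 300)                        # city populations
        gdp = 2.0 * pop ** 1.15 * np.exp(rng.normal(0, 0.2, 300))    # superlinear
        roads = 0.5 * pop ** 0.85 * np.exp(rng.normal(0, 0.2, 300))  # sublinear

        def scaling_exponent(y, n=pop):
            """Slope of the log-log regression log y = beta log n + c."""
            beta, _ = np.polyfit(np.log(n), np.log(y), 1)
            return beta

        b_socio = scaling_exponent(gdp)
        b_infra = scaling_exponent(roads)
        print(round(b_socio, 3), round(b_infra, 3),
              "sum:", round(b_socio + b_infra, 3))   # model predicts 2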

  2. A model of urban scaling laws based on distance dependent interactions.

    PubMed

    Ribeiro, Fabiano L; Meirelles, Joao; Ferreira, Fernando F; Neto, Camilo Rodrigues

    2017-03-01

    Socio-economic properties of a city grow faster than linearly with its population, appearing in a log-log plot as the so-called superlinear scaling. Conversely, the larger a city, the more efficient its use of infrastructure, leading to sublinear scaling in these variables. In this work, we propose a simple explanation for those scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential that captures the influence of social interaction on economic performance and the benefit of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the larger the interaction range between citizens and amenities, the greater the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We discuss how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are compared with the theoretical approach and are compatible with the empirical evidence.

  3. Simulating the heterogeneity in braided channel belt deposits: 1. A geometric-based methodology and code

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.

    2010-04-01

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.
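
    A toy Python version of the hierarchical idea follows: generate a coarse lattice of stratal-unit labels, refine it to a fine cubic lattice, and draw cell log-permeabilities from unit-specific means and variances, mirroring the code's use of univariate statistics defined only at the lowest level of the hierarchy. All dimensions, unit types, and statistics are invented for the example.

        import numpy as np

        rng = np.random.default_rng(7)
        nx, ny, nz = 64, 64, 32
        # Coarse level: stratal-unit labels (3 hypothetical unit types).
        coarse = rng.integers(0, 3, size=(nx // 8, ny // 8, nz // 8))
        # Refine each coarse cell into an 8x8x8 block of fine lattice cells.
        units = np.kron(coarse, np.ones((8, 8, 8), dtype=int))

        # Mean/variance of ln(k) defined only at the lowest hierarchical level.
        mean_lnk = np.array([-28.0, -30.5, -33.0])   # ln(k [m^2]) per unit type
        std_lnk = np.array([0.8, 1.2, 0.5])
        lnk = rng.normal(mean_lnk[units], std_lnk[units])

        print(units.shape, round(lnk.mean(), 2), round(lnk.std(), 2))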

  4. Simulating the Heterogeneity in Braided Channel Belt Deposits: Part 1. A Geometric-Based Methodology and Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.

  5. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces at different scales can be found while scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a face mosaic image model, consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray-level rules. Experimental results show that the approach is highly robust and fast, with broad application prospects in human-computer interaction, videotelephony, and related areas.
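
    For the highest level of the hierarchy, a hedged Python sketch of threshold-based skin segmentation in HSV plus normalized-rg space is shown below. The threshold ranges are common rule-of-thumb values, not the paper's calibrated ones, and the input image is a random stand-in.

        import numpy as np

        def skin_mask(rgb):
            """rgb: float array (H, W, 3) in [0, 1]. Returns a boolean skin mask."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            v = rgb.max(axis=-1)
            c = v - rgb.min(axis=-1)
            # Hue in degrees (guard against division by zero for gray pixels).
            h = np.zeros_like(v)
            m = c > 1e-12
            rmax = m & (v == r)
            gmax = m & (v == g) & ~rmax
            bmax = m & ~rmax & ~gmax
            h[rmax] = 60 * ((g - b)[rmax] / c[rmax] % 6)
            h[gmax] = 60 * ((b - r)[gmax] / c[gmax] + 2)
            h[bmax] = 60 * ((r - g)[bmax] / c[bmax] + 4)
            s = np.where(v > 0, c / np.maximum(v, 1e-12), 0)
            total = np.maximum(r + g + b, 1e-12)     # normalized color space
            rn, gn = r / total, g / total
            # Rule-of-thumb skin thresholds on hue, saturation, and r/g.
            return (h < 50) & (s > 0.1) & (s < 0.7) & (rn > 0.36) & (gn > 0.28)

        img = np.random.rand(8, 8, 3)                # stand-in for a color image
        print(skin_mask(img).sum(), "candidate skin pixels")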

  6. Mineral scale management. Part II, Fundamental chemistry

    Treesearch

    Alan W. Rudie; Peter W. Hart

    2006-01-01

    The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process.As such, it is accurately modeled using the solubility product equilibrium constant. Although solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...

  7. Fundamental chemistry of precipitation and mineral scale formation

    Treesearch

    Alan W. Rudie; Peter W. Hart

    2005-01-01

    The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately described or modeled using the solubility product equilibrium constant. Although solubility product identifies the primary conditions that need to be met for a scale problem to exist, the acid base equilibria of the scaling anions...

  8. Numerical Study of Microstructural Evolution During Homogenization of Al-Si-Mg-Fe-Mn Alloys

    NASA Astrophysics Data System (ADS)

    Priya, Pikee; Johnson, David R.; Krane, Matthew J. M.

    2016-09-01

    Microstructural evolution during homogenization of Al-Si-Mg-Fe-Mn alloys occurs in two stages at different length scales: during holding at the homogenization temperature (diffusion at the scale of the secondary dendrite arm spacing (SDAS), in micrometers) and during quenching to room temperature (dispersoid precipitation at the nanometer-to-submicron scale). Here, a numerical study estimates the microstructural changes during both stages. A diffusion-based model developed to simulate evolution at the SDAS length scale predicts homogenization times and microstructures matching experiments. That model is coupled with a Kampmann-Wagner numerical (KWN) precipitate nucleation and growth model to study the effects of temperature, composition, as-cast microstructure, and cooling rate during post-homogenization quenching on microstructural evolution. A homogenization schedule of 853 K (580 °C) for 8 hours, followed by cooling at 250 K/h, is suggested to optimize the microstructure for easier extrusion, consisting of minimal α-Al(FeMn)Si, no β-AlFeSi, and Mg2Si dispersoids smaller than 1 μm.

  9. A two-scale Weibull approach to the failure of porous ceramic structures made by robocasting: possibilities and limits

    PubMed Central

    Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.

    2012-01-01

    This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. Weibull theory is used to describe the statistical failure of the scaffolds' constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. While calibration based on rod-scale data leads to over-conservative predictions of scaffold properties (as successive rod failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions over a wide range of rod spacings, i.e., of scaffold porosity, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing the mechanical properties of cellular ceramics. PMID:23439936
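
    For reference, the two-parameter Weibull weakest-link form that underlies such rod-failure statistics can be written P_f = 1 - exp[-(V/V0)(sigma/sigma0)^m]. The short Python sketch below evaluates it and shows the size effect of a larger stressed volume; the modulus and scale values are illustrative, not calibrated to the paper's scaffolds.

        import numpy as np

        def weibull_pf(sigma, m=5.0, sigma0=50.0, v_ratio=1.0):
            """Failure probability for stress sigma (MPa), volume ratio V/V0."""
            return 1.0 - np.exp(-v_ratio * (sigma / sigma0) ** m)

        stresses = np.array([20.0, 40.0, 60.0])        # MPa
        print(weibull_pf(stresses))                    # unit reference volume
        print(weibull_pf(stresses, v_ratio=8.0))       # larger rod: higher P_f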

  10. A Mixed-dimensional Model for Determining the Impact of Permafrost Polygonal Ground Degradation on Arctic Hydrology.

    NASA Astrophysics Data System (ADS)

    Coon, E.; Jan, A.; Painter, S. L.; Moulton, J. D.; Wilson, C. J.

    2017-12-01

    Many permafrost-affected regions in the Arctic exhibit polygonal patterned ground, which stores large amounts of carbon and is vulnerable to climate change as warming temperatures drive melting ice wedges, polygon degradation, and thawing of the underlying carbon-rich soils. Understanding the fate of this carbon is difficult. The system is controlled by complex, nonlinear physics coupling biogeochemistry, thermal hydrology, and geomorphology, and there is a strong spatial scale separation between microtopography (at the scale of an individual polygon) and landscape change (at the scale of many thousands of polygons). Physics-based models have come a long way and are now capable of representing this diverse set of processes, but only on individual polygons or a few polygons. Empirical models have been used to upscale across land types, including ecotypes evolving from low-centered (pristine) polygons to high-centered (degraded) polygons, over large spatial extents, but they are limited in their ability to discern causal process mechanisms. Here we present a novel strategy that uses physics-based models across scales, bringing together multiple capabilities to capture polygon degradation under a warming climate and its impacts on thermal hydrology. We use fine-scale simulations of individual polygons to motivate a mixed-dimensional strategy that couples one-dimensional columns, each representing an individual polygon, through two-dimensional surface flow. A subgrid model is used to incorporate the effects of surface microtopography on surface flow; this model is described and calibrated against fine-scale simulations. Critically, a subsidence model that tracks volume loss in bulk ice wedges is used to alter the subsurface structure and subgrid parameters, enabling inclusion of the feedbacks associated with polygon degradation. This combined strategy results in a model able to capture the key features of polygonal permafrost degradation in simulations across a large spatial extent of polygonal tundra.

  11. Modeling a full-scale primary sedimentation tank using artificial neural networks.

    PubMed

    Gamal El-Din, A; Smith, D W

    2002-05-01

    Modeling the performance of full-scale primary sedimentation tanks has commonly been done using regression-based models, which are empirical relationships derived strictly from observed daily average influent and effluent data. Another approach is a hydraulic efficiency model that uses tracer studies to characterize the performance of model sedimentation tanks based on eddy diffusion. However, using hydraulic efficiency models to predict the dynamic behavior of a full-scale sedimentation tank is very difficult, as such models have been developed in controlled studies of model tanks. In this paper, another type of model, the artificial neural network, is used to predict the dynamic response of a full-scale primary sedimentation tank. The neural model consists of two separate networks: one uses flow and influent total suspended solids data to predict the effluent total suspended solids from the tank, and the other predicts effluent chemical oxygen demand using flow and influent chemical oxygen demand as inputs. An extensive sampling program was conducted to collect a data set for training and validating the networks. A systematic approach was used in the model-building process, which allowed the identification of a parsimonious neural model that is able to learn (and not memorize) from past data and generalize very well to the unseen data used to validate the model. The results seem very promising. The potential of using the model as part of a real-time process control system is also discussed.
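
    A minimal Python sketch of one of the two networks follows, using scikit-learn's MLPRegressor on synthetic (flow, influent TSS) → effluent TSS data; the second network for COD would be built the same way. The network size, data, and toy removal relationship are assumptions, not the paper's calibrated model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)
        n = 800
        flow = rng.uniform(100, 400, n)              # influent flow, L/s
        tss_in = rng.uniform(80, 300, n)             # influent TSS, mg/L
        # Toy "truth": removal efficiency degrades at high hydraulic loading.
        tss_out = tss_in * (0.35 + 0.001 * flow) + rng.normal(0, 5, n)

        X = np.column_stack([flow, tss_in])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,),
                                           max_iter=2000, random_state=0))
        model.fit(X[:600], tss_out[:600])            # train / hold-out split
        print("validation R^2:", round(model.score(X[600:], tss_out[600:]), 3))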

  12. Topographic Controls on Soil Carbon Distribution in Iowa Croplands, USA

    NASA Astrophysics Data System (ADS)

    McCarty, Greg; Li, Xia

    2017-04-01

    Topography is a key factor affecting soil organic carbon (SOC) redistribution (erosion or deposition) because it influences several hydrological indices, including soil moisture dynamics, runoff velocity and acceleration, and flow divergence and convergence. In this study, we examined the relationship between 15 topographic metrics derived from Light Detection and Ranging (Lidar) data and SOC redistribution in agricultural fields. We adopted the fallout 137-cesium (137Cs) technique to estimate SOC redistribution rates across 560 sampling plots in Iowa. Then, using stepwise ordinary least squares regression (SOLSR) and stepwise principal component analysis (SPCA), topography-based SOC models were developed to simulate spatial patterns of SOC content and redistribution. Results suggested that erosion and deposition of topsoil SOC are regulated by topography, with SOC gain in lowland areas and SOC loss in sloping areas. The topographic wetness index (TWI) and slope were the most influential variables controlling SOC content and redistribution. The topography-based models performed well in simulating SOC content and redistribution across two intensively sampled crop sites. SPCA models had slightly lower coefficients of determination and Nash-Sutcliffe efficiency values than SOLSR models at the field scale. However, SPCA significantly outperformed SOLSR in predicting SOC redistribution patterns at the watershed scale. These results suggest that the topography-based SPCA model is more robust for scaling up to the watershed scale, because SPCA models may better represent the landscape and are less subject to overfitting. This work suggests an improved method to sample and characterize landscapes for better prediction of soil property distributions.
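
    Since TWI emerges as the dominant predictor, a small Python sketch of its definition, TWI = ln(a / tan(beta)), is given below on a tiny synthetic DEM. The contributing-area estimate is a deliberately crude rank-based proxy (a real analysis would route flow with D8 or D-infinity), so treat the numbers as illustrative only.

        import numpy as np

        dem = np.array([[10.0, 9.5, 9.0],
                        [ 9.8, 9.2, 8.6],
                        [ 9.6, 8.9, 8.2]])      # toy elevations, m
        cell = 30.0                              # grid resolution, m

        gy, gx = np.gradient(dem, cell)
        slope = np.sqrt(gx**2 + gy**2)           # tan(beta), local slope
        # Crude proxy for contributing area: rank cells by elevation (lower
        # cells accumulate more upslope area than higher cells).
        rank = dem.ravel().argsort().argsort().reshape(dem.shape)
        area = (rank.max() - rank + 1) * cell    # specific catchment area a, m
        twi = np.log(area / np.maximum(slope, 1e-6))
        print(np.round(twi, 2))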

  13. Understanding hydrologic budgets, dynamics in an arid basin and explore spatial scaling properties using Process-based Adaptive Watershed Simulator (PAWS)

    NASA Astrophysics Data System (ADS)

    Fang, K.; Shen, C.; Salve, R.

    2013-12-01

    The Southern California hot desert hosts a fragile ecosystem as well as a range of human economic activities, primarily mining, energy production, and recreation. This inland arid landscape is characterized by occasional intense precipitation events and year-round strong potential evapotranspiration. In this landscape, water, and especially groundwater, is vital for ecosystem function and human use. However, the impact of recent development on the sustainability of groundwater resources in the area has not been thoroughly investigated. We apply an integrated, physically based hydrologic-land surface model, the Process-based Adaptive Watershed Simulator + Community Land Model (PAWS+CLM), to evaluate the sustainability of the groundwater resources in the area and to elucidate the spatio-temporal patterns of hydrologic fluxes and budgets. The modeling results indicate that mountain-front recharge is the essential recharge mechanism for the alluvial aquifer. Although pumping does not exceed annual-average recharge, it is still expected to contribute significantly to groundwater drawdown in a business-as-usual scenario. The impact of groundwater withdrawals on the desert ecosystem is significant: the relative importance of groundwater flow for NPP is much higher than in other ecosystems. We further evaluate the fractal scaling properties of soil moisture in this very arid system and find the relationship to be much more static in time than that found in a humid continental climate; the scaling exponents can be predicted using simple functions of the mean. Therefore, a multi-scale model based on a coarse-resolution surrogate model is expected to perform well in this system. The modeling results are also important for assessing groundwater sustainability and the impact of human activities in the desert environment.

  14. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    NASA Astrophysics Data System (ADS)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired by the neuro/cognitive sciences. Consequently, in this dissertation, we focus on hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and the hybrid nanotechnology CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can exploit the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL-based architectures is 32 to 40 times better than the TPM for a CMOS-based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate hardware implementations of a cortex-scale spiking neural system, an approximate neural equivalent of a BICM-based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate for implementing such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.

  15. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Fields with strong variability over such wide ranges should not be artificially split into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes acting on the resolved flow. This is a pragmatic approach to the problem, not a complete solution. Such models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields, owing to implicit and explicit numerical diffusion as well as physically based subgrid-scale turbulent mixing, resulting in smoother and less intermittent fields than observed. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches equipped with a possible realization of subgrid processes, potentially coupled to the resolved scales over the range of significant scale interactions, provide one alternative; stochastic multifractal models based on the cascade phenomenology of the atmosphere and its governing equations are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior at the resolved scales. This is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust finding is the transition of the multiscaling parameters between stable and unstable cases, which corresponds physically to the transition from stratiform to organized (banded) convective regimes. Multifractal diagnostics of moist processes are therefore fundamentally transient and should provide a physically robust basis for downscaling and subgrid-scale parameterizations of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at scales finer than those resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at the spatial resolutions necessary for effective forecasting of flash floods and earth flows. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction that can be attributed to aliasing.
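
    A minimal Python sketch of cascade-based downscaling is shown below: each step splits every grid cell into 2x2 children and multiplies by independent lognormal weights with unit mean, preserving the coarse field on average while injecting small-scale multifractal variability. The weight distribution and number of steps are illustrative assumptions, not the calibrated transient parameters discussed in the abstract.

        import numpy as np

        def cascade_downscale(field, steps=3, sigma=0.4, seed=9):
            """Multiplicative random cascade: refine 2x per step."""
            rng = np.random.default_rng(seed)
            out = field.astype(float)
            for _ in range(steps):
                out = np.kron(out, np.ones((2, 2)))    # refine grid by 2
                # Lognormal weights with E[w] = 1 keep the mean unchanged.
                w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                                  size=out.shape)
                out *= w
            return out

        coarse_rain = np.array([[2.0, 0.5], [1.0, 4.0]])   # model rain, mm/h
        fine_rain = cascade_downscale(coarse_rain)
        print(fine_rain.shape, coarse_rain.mean(), round(fine_rain.mean(), 3))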

  16. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks tend to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing the carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be effective for modeling complex problems in the IoT domain; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
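
    Purely as a toy illustration of an exploratory agent-based model of this kind (all class names, thresholds and emission factors are hypothetical, not taken from the paper), each device agent below powers down under a company idle-time policy while the model tracks the aggregate carbon footprint:

        import random

        random.seed(1)

        POLICY_IDLE_LIMIT = 30    # hypothetical policy: sleep after 30 idle minutes
        WATTS_ON, WATTS_SLEEP = 60.0, 2.0
        CO2_PER_KWH = 0.4         # illustrative emission factor (kg CO2 per kWh)

        class DeviceAgent:
            def __init__(self):
                self.idle_minutes = 0
                self.asleep = False

            def step(self):
                # Each minute the device is either used (resets idle) or keeps idling.
                if random.random() < 0.1:
                    self.idle_minutes, self.asleep = 0, False
                else:
                    self.idle_minutes += 1
                    if self.idle_minutes >= POLICY_IDLE_LIMIT:
                        self.asleep = True
                return WATTS_SLEEP if self.asleep else WATTS_ON

        agents = [DeviceAgent() for _ in range(100)]
        watt_minutes = sum(a.step() for a in agents for _ in range(480))  # one 8-hour day
        print("footprint: %.2f kg CO2" % (watt_minutes / 60 / 1000 * CO2_PER_KWH))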

  17. Integrative Systems Models of Cardiac Excitation Contraction Coupling

    PubMed Central

    Greenstein, Joseph L.; Winslow, Raimond L.

    2010-01-01

    Excitation-contraction coupling in the cardiac myocyte is mediated by a number of highly integrated mechanisms of intracellular Ca2+ transport. The complexity and integrative nature of heart cell electrophysiology and Ca2+ cycling has led to an evolution of computational models that have played a crucial role in shaping our understanding of heart function. An important emerging theme in systems biology is that the detailed nature of local signaling events, such as those that occur in the cardiac dyad, has important consequences at higher biological scales. Multi-scale modeling techniques have revealed many mechanistic links between micro-scale events, such as Ca2+ binding to a channel protein, and macro-scale phenomena, such as excitation-contraction coupling gain. Here we review experimentally based multi-scale computational models of excitation-contraction coupling and the insights that have been gained through their application. PMID:21212390

  18. Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing

    DTIC Science & Technology

    2014-09-30

    [Report fragments; only partial text recoverable.] The project implements observation-based dissipation and input source functions, based on advanced understanding of the physics of air-sea interactions, wave breaking and swell attenuation, in wave-forecast models. Related refereed publication: Ghantous, M., and A.V. Babanin, 2014: scale influence of the Great Barrier Reef matrix on wave attenuation, Coral Reefs.

  19. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata-based multi-cellular modeling: how to introduce differential equation-based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
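
    A minimal sketch of the asynchronous scheme (hypothetical per-cell dynamics, not the paper's modeling language): each cell keeps its own ODE state and next-update time in a priority queue, so stiff cells are stepped often while quiescent cells take large steps:

        import heapq

        # Each cell integrates dx/dt = -k*x with its own rate k (hypothetical dynamics).
        cells = {0: (5.0, 1.0), 1: (0.1, 1.0), 2: (1.0, 1.0)}  # id -> (k, x)
        queue = [(0.0, cid) for cid in cells]                  # (next update time, cell id)
        heapq.heapify(queue)
        T_END = 10.0

        while queue:
            t, cid = heapq.heappop(queue)
            if t >= T_END:
                break                          # earliest pending update is past the horizon
            k, x = cells[cid]
            dt = min(0.2 / k, T_END - t)       # adaptive step ~ inverse of local stiffness
            x += dt * (-k * x)                 # explicit Euler update for this cell only
            cells[cid] = (k, x)
            heapq.heappush(queue, (t + dt, cid))

        print(cells)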

  20. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. We introduce a comprehensive architecture for this visualization problem, one that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
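
    To give the flavor of power scaled coordinates as described above (a hedged sketch, not the authors' implementation; the base k and the exact encoding are assumptions for illustration), a point is stored as (x, y, z, s) with a unit-scale mantissa and an exponent s that absorbs the magnitude:

        import math

        K = 10.0  # hypothetical scaling base

        def psc(x, y, z):
            """Encode a 3-D point as power scaled coordinates (x, y, z, s)."""
            r = math.sqrt(x * x + y * y + z * z)
            if r == 0.0:
                return (0.0, 0.0, 0.0, 0.0)
            s = math.floor(math.log(r, K))     # exponent absorbs the magnitude
            f = K ** (-s)
            return (x * f, y * f, z * f, float(s))

        def psc_to_cartesian(p):
            x, y, z, s = p
            return (x * K ** s, y * K ** s, z * K ** s)

        earth_sun = psc(1.496e11, 0.0, 0.0)    # 1 AU in meters
        print(earth_sun)                        # mantissa near 1.5, exponent 11
        print(psc_to_cartesian(earth_sun))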

  1. An individual-based model of skipjack tuna (Katsuwonus pelamis) movement in the tropical Pacific ocean

    NASA Astrophysics Data System (ADS)

    Scutt Phillips, Joe; Sen Gupta, Alex; Senina, Inna; van Sebille, Erik; Lange, Michael; Lehodey, Patrick; Hampton, John; Nicol, Simon

    2018-05-01

    The distribution of marine species is often modeled using Eulerian approaches, in which changes to population density or abundance are calculated at fixed locations in space. Conversely, Lagrangian, or individual-based, models simulate the movement of individual particles moving in continuous space, with broader-scale patterns such as distribution being an emergent property of many, potentially adaptive, individuals. These models offer advantages in examining dynamics across spatiotemporal scales and making comparisons with observations from individual-scale data. Here, we introduce and describe such a model, the Individual-based Kinesis, Advection and Movement of Ocean ANimAls model (Ikamoana), which we use to replicate the movement processes of an existing Eulerian model for marine predators (the Spatial Ecosystem and Population Dynamics Model, SEAPODYM). Ikamoana simulates the movement of individuals or groups of animals by physical ocean currents, habitat-dependent stochastic movements (kinesis), and taxis movements representing active searching behaviours. Applying our model to Pacific skipjack tuna (Katsuwonus pelamis), we show that it accurately replicates the evolution of the density distribution simulated by SEAPODYM, with low time-mean error and a spatial correlation of density that exceeds 0.96 at all times. We demonstrate how the Lagrangian approach permits easy tracking of individuals' trajectories for examining connectivity between different regions, and show how the model can provide independent estimates of transfer rates between commonly used assessment regions. In particular, we find that retention rates in most assessment regions are considerably smaller (up to a factor of 2) than those estimated by the primary assessment model for this skipjack population. Moreover, these rates are sensitive to ocean state (e.g. El Niño vs La Niña), so assuming fixed transfer rates between regions may lead to spurious stock estimates. A novel feature of the Lagrangian approach is that individual schools can be tracked through time, and we demonstrate that movement between two assessment regions at broad temporal scales includes extended transits through other regions at finer scales. Finally, we discuss the utility of this modeling framework for the management of marine reserves, designing effective monitoring programmes, and exploring hypotheses regarding the behaviour of hard-to-observe oceanic animals.
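
    A schematic of the Lagrangian update (illustrative only; the habitat field, parameter names and kernels are hypothetical stand-ins, not Ikamoana's): each particle is advected by currents, takes a habitat-dependent random kinesis step, and drifts up the habitat gradient (taxis):

        import numpy as np

        rng = np.random.default_rng(42)

        def habitat(x, y):
            # Hypothetical smooth habitat index peaking at (5, 5).
            return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 10.0)

        def step(x, y, u, v, dt=1.0, d0=0.5, taxis=2.0, eps=1e-3):
            h = habitat(x, y)
            sig = np.sqrt(2.0 * d0 * (1.0 - h) * dt)   # more diffusion in poor habitat
            gx = (habitat(x + eps, y) - habitat(x - eps, y)) / (2 * eps)
            gy = (habitat(x, y + eps) - habitat(x, y - eps)) / (2 * eps)
            x_new = x + (u + taxis * gx) * dt + sig * rng.standard_normal(x.shape)
            y_new = y + (v + taxis * gy) * dt + sig * rng.standard_normal(y.shape)
            return x_new, y_new

        x = rng.uniform(0, 10, 1000)
        y = rng.uniform(0, 10, 1000)
        for _ in range(100):
            x, y = step(x, y, u=0.05, v=0.0)   # weak uniform eastward current
        print(x.mean(), y.mean())               # particles aggregate near the habitat peak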

  2. A new degree-2190 (10 km resolution) gravity field model for Antarctica developed from GRACE, GOCE and Bedmap2 data

    NASA Astrophysics Data System (ADS)

    Hirt, Christian; Rexer, Moritz; Scheinert, Mirko; Pail, Roland; Claessens, Sten; Holmes, Simon

    2016-02-01

    The current high-degree global geopotential models EGM2008 and EIGEN-6C4 resolve gravity field structures to ~10 km spatial scales over most parts of the Earth's surface. However, a notable exception is continental Antarctica, where the gravity information in these and other recent models is based on satellite gravimetry observations only, and is thus limited to about ~80-120 km spatial scales. Here, we present a new degree-2190 global gravity model (GGM) that for the first time improves the spatial resolution of the gravity field over the whole of continental Antarctica to ~10 km spatial scales. The new model, called SatGravRET2014, is a combination of recent Gravity Recovery and Climate Experiment (GRACE) and Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite gravimetry with gravitational signals derived from the 2013 Bedmap2 topography/ice thickness/bedrock model via gravity forward modelling in ellipsoidal approximation. Bedmap2 is a significantly improved description of the topographic mass distribution over the Antarctic region based on a multitude of topographic surveys, and a well-suited source for modelling short-scale gravity signals, as we show in our study. We describe the development of SatGravRET2014, which entirely relies on spherical harmonic modelling techniques. Details are provided on the least-squares combination procedures and on the conversion of topography to implied gravitational potential. The main outcome of our work is the SatGravRET2014 spherical harmonic series expansion to degree 2190, and derived high-resolution grids of 3D-synthesized gravity and quasigeoid effects over the whole of Antarctica. For validation, six data sets from the IAG Subcommission 2.4f "Gravity and Geoid in Antarctica" (AntGG) database were used, comprising a total of 1,092,981 airborne gravimetric observations. All subsets consistently show that the Bedmap2-based short-scale gravity modelling considerably improves the agreement over satellite-only data (improvement rates ranging between 9 and 75%, with standard deviations of residuals between SatGravRET2014 and AntGG gravity ranging between 8 and 25 mGal). For comparison purposes, a degree-2190 GGM was generated based on the year-2001 Bedmap1 (using the ETOPO1 topography) instead of the 2013 Bedmap2 topography product. Comparison of both GGMs against AntGG consistently reveals a closer fit over all test areas when Bedmap2 is used. This experiment provides evidence for clear improvements of Bedmap2 topographic information over Bedmap1 at spatial scales of ~80-10 km, obtained from independent gravity data used as a validation tool. As a general conclusion, our modelling effort fills, in approximation, some gaps in short-scale gravity knowledge over Antarctica and demonstrates the value of the Bedmap2 topography data for short-scale gravity refinement in GGMs. SatGravRET2014 can be used, e.g., as a reference model for future gravity modelling efforts over Antarctica, or as a foundation for a combination with the AntGG data set to obtain further improved gravity information.

  3. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

    Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on the Weather Research and Forecasting model (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVAR does not optimize the meteorological initial condition; instead it solves for optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm that solves for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factors) calculated by the tangent linear and adjoint models agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model-generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations to provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
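
    The cost function minimized here has the standard variational form J(x) = (x-x_b)^T B^{-1} (x-x_b)/2 + (Hx-y)^T R^{-1} (Hx-y)/2. The toy below (hypothetical matrices, with a linear observation operator standing in for the transport model; nothing WRF-specific) recovers flux scaling factors, with the gradient supplied the way an adjoint model would:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        truth = np.array([1.0, 1.0, 1.0])            # true flux scaling factors
        H = rng.uniform(0.5, 2.0, size=(20, 3))      # stand-in for tangent linear transport
        y = H @ truth + rng.normal(0, 0.05, 20)      # pseudo-observations
        lb = np.array([1.5, 1.5, 1.5])               # first guess: 50% overestimation
        B_inv = np.eye(3) / 0.5**2                   # inverse background error covariance
        R_inv = np.eye(20) / 0.05**2                 # inverse observation error covariance

        def J(lam):
            d_b, d_o = lam - lb, H @ lam - y
            return 0.5 * d_b @ B_inv @ d_b + 0.5 * d_o @ R_inv @ d_o

        def gradJ(lam):                              # the role played by the adjoint model
            return B_inv @ (lam - lb) + H.T @ R_inv @ (H @ lam - y)

        res = minimize(J, lb, jac=gradJ, method="L-BFGS-B")
        print(res.x)                                 # close to the true factors [1, 1, 1]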

  4. A geometric scaling model for assessing the impact of aneurysm size ratio on hemodynamic characteristics

    PubMed Central

    2014-01-01

    Background Intracranial aneurysm (IA) size has been shown to affect hemodynamics and can be applied to the prediction of IA rupture risk. Although the relationship between aspect ratio and hemodynamic parameters has been investigated using real patients and virtual models, few studies have focused on longitudinal experiments of IAs based on patient-specific aneurysm models. We performed longitudinal simulation experiments of IAs by developing a series of scaled models. Methods In this work, a novel scaling approach is proposed to create IA models with different aneurysm size ratios (ASRs), defined as IA height divided by average neck diameter, from a patient-specific aneurysm model, and the relationship between the ASR and hemodynamics is explored based on a simulated longitudinal experiment. Wall shear stress, flow patterns and vessel wall displacement were computed from these models. Pearson correlation analysis was performed to elucidate the relationship between the ASR and wall shear stress. The correlation between the ASR and flow velocity was also computed and analyzed. Results The experimental results showed a significant increase in the IA area exposed to low WSS once the ASR exceeded 0.7, and the flow became slower and blood entered the aneurysm less readily as the ASR increased. Meanwhile, the results also indicated that average blood flow velocity and WSS had strongly negative correlations with the ASR (r = −0.938 and −0.925, respectively). A narrower impingement region and a more concentrated inflow jet appeared as the ASR increased, and large local deformation at the aneurysm apex could be found when ASR > 1.7 or 0.7 < ASR < 1.0. Conclusion Hemodynamic characteristics varied with the ASR. Moreover, building a series of patient-specific scaled aneurysm models with the proposed IA scaling algorithm is helpful for further exploring the relationship between morphology and hemodynamics in longitudinal simulations. PMID:24528952

  5. A Liver-centric Multiscale Modeling Framework for Xenobiotics ...

    EPA Pesticide Factsheets

    We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study focuses on developing a multi-scale computational model to characterize both phase I and phase II metabolism of acetaminophen, by bridging Physiologically Based Pharmacokinetic (PBPK) modeling at the whole-body level, cell movement and blood flow at the tissue level, and cell signaling and drug metabolism at the sub-cellular level. To validate the model, we estimated our model parameters by fitting serum concentrations of acetaminophen and its glucuronide and sulfate metabolites to experiments, and carried out sensitivity analysis on 35 parameters selected from three modules. This multiscale model bridges the CompuCell3D tool used by the Virtual Tissue project with the httk tool developed by the Rapid Exposure and Dosimetry project.

  6. [Integrated Peer Teaching of Communication and Clinical Skills: How to Train Student Tutors?].

    PubMed

    Ringel, Nadja; Bürmann, Barbara Maatouk; Fellmer-Drueg, Erika; Roos, Marco; Herzog, Wolfgang; Nikendei, Christoph; Wischmann, Tewes; Weiss, Carmen; Eicher, Christiane; Engeser, Peter; Schultz, Jobst-Hendrik; Jünger, Jana

    2015-08-01

    This paper describes the theory-based development of a standardized training model for peer tutors. The aim is to qualify tutors to teach communication skills, integrated with practical clinical skills, to medical students in the pre-clinical curriculum. As a result, students are encouraged to form a basic understanding of the biopsychosocial model of disease early in their studies. The training model's design is based on the Kern model for curriculum development as adapted by McLean et al., who outlined the following steps: planning, implementation, and evaluation/feedback. Our focus is on development, review of feasibility, and evaluation as measured by the subjectively perceived effectiveness of the implemented training model. Two target groups were considered: the peer tutors and the student tutees. In 2009, a 3-step training model consisting of 12 units was developed, based on the theory of patient-centered communication and the biopsychosocial model. The training was rated very positively on a 5-point Likert scale by all tutors at two points in time: t1 (directly after training) and t2 (after conducting 2 tutorials) (t1: M=1.67; SD=±0.86; t2: M=1.75; SD=±0.71). On a 6-point Likert scale, the tutees also evaluated their communication and clinical skills as being significantly better after completing the 10 tutorials (t2: scale for interaction and communication: M=4.81; SD: 1.09; scale for clinical examination: M=4.99; SD: 0.85) than before the tutorials (t0: scale for communication and interaction: M=3.18; SD=1.15; scale for clinical examination: M=2.88; SD: 1.09). By implementing a standardized tutor training model, one can qualify peer tutors to teach communication skills integrated with practical clinical skills during the pre-clinical phase. Practice teaching of the curricular material via role playing, tutorial simulation and an extensive feedback session, along with the definition of clinical standards for recording case histories and performing examinations, have proven to be core elements of effective tutor training. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Discovery of a diamond-based photonic crystal structure in beetle scales.

    PubMed

    Galusha, Jeremy W; Richey, Lauren R; Gardner, John S; Cha, Jennifer N; Bartl, Michael H

    2008-05-01

    We investigated the photonic crystal structure inside iridescent scales of the weevil Lamprocyphus augustus. By combining a high-resolution structure analysis technique based on sequential focused ion beam milling and scanning electron microscopy imaging with theoretical modeling and photonic band-structure calculations, we discovered a natural three-dimensional photonic structure with a diamond-based crystal lattice operating at visible wavelengths. Moreover, we found that within individual scales, the diamond-based structure is assembled in the form of differently oriented single-crystalline micrometer-sized pixels with only selected lattice planes facing the scales' top surface. A comparison of results obtained from optical microreflectance measurements with photonic band-structure calculations reveals that it is this sophisticated microassembly of the diamond-based crystal lattice that lends Lamprocyphus augustus its macroscopically near angle-independent green coloration.

  8. Universal scaling laws in metro area election results

    PubMed Central

    Szállási, Zoltán; Vattay, Gábor

    2018-01-01

    We explain the anomaly of election results between large cities and rural areas in terms of urban scaling in the 1948–2016 US elections and in the 2016 EU referendum of the UK. The scaling curves are universal and depend on only a single parameter; one of the parties always shows superlinear scaling and drives the process, while the sublinear exponent of the other party is merely a consequence of probability conservation. Based on the recently developed model of urban scaling, we give a microscopic model of voter behavior in which we replace the diversity that characterizes humans in creative aspects with social diversity and tolerance. The model can also predict new political developments such as the fragmentation of the left and the immigration paradox. PMID:29470518
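
    Urban scaling posits Y = Y_0 N^beta for an urban indicator Y and population N, with beta > 1 superlinear. The snippet below (synthetic data, not the paper's) estimates beta by ordinary least squares in log-log space:

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic metro areas: population N and votes Y for a hypothetical party,
        # generated with a superlinear exponent beta = 1.1 plus lognormal noise.
        N = rng.lognormal(mean=11, sigma=1.2, size=500)
        Y = 0.05 * N**1.1 * rng.lognormal(sigma=0.15, size=500)

        beta, log_y0 = np.polyfit(np.log(N), np.log(Y), 1)
        print(f"estimated beta = {beta:.3f}")   # ~1.1; > 1 indicates superlinear scaling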

  9. A process-based inventory model for landfill CH4 emissions inclusive of seasonal soil microclimate and CH4 oxidation

    USDA-ARS?s Scientific Manuscript database

    We have developed and field-validated an annual inventory model for California landfill CH4 emissions that incorporates both site-specific soil properties and soil microclimate modeling coupled to 0.5° scale global climatic models. Based on 1-D diffusion, CALMIM (California Landfill Methane Inventor...

  10. Development and testing of a physically based model of streambank erosion for coupling with a basin-scale hydrologic model SWAT

    USDA-ARS?s Scientific Manuscript database

    A comprehensive streambank erosion model based on excess shear stress has been developed and incorporated into the hydrological model Soil and Water Assessment Tool (SWAT). It takes into account processes such as weathering, vegetative cover, and channel meanders to adjust critical and effective str...
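
    Excess shear stress formulations typically take the form epsilon = k_d (tau - tau_c) when the boundary shear stress tau exceeds a critical value tau_c. A generic illustration (coefficients are hypothetical, not SWAT's calibrated values):

        def erosion_rate(tau, tau_c=5.0, k_d=1e-6):
            """Lateral streambank erosion rate (m/s) from an excess shear stress law.

            tau   -- effective boundary shear stress (Pa)
            tau_c -- critical shear stress of the bank material (Pa), illustrative
            k_d   -- erodibility coefficient (m^3/(N*s)), illustrative
            """
            return k_d * max(tau - tau_c, 0.0)

        # Bank retreat over a 6-hour high-flow event at tau = 12 Pa:
        print(erosion_rate(12.0) * 6 * 3600, "m")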

  11. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-04

    [Report fragments; only partial text recoverable.] The modeling approach is based on (i) a seamless integration of multibody dynamics and discrete element method (DEM) solvers, and (ii) [...] to ensure that the vehicle follows a desired path. The soil is modeled with DEM using a general cohesive material model.

  12. Estimating Vegetation Rainfall Interception Using Remote Sensing Observations at Very High Resolution

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Zhao, P.; Hong, Y.; Fan, W.; Yan, B.; Xie, H.

    2017-12-01

    As an important component of evapotranspiration, vegetation rainfall interception is the proportion of gross rainfall that is intercepted, stored and subsequently evaporated from all parts of vegetation during or following rainfall. Accurately quantifying vegetation rainfall interception at high resolution is critical for rainfall-runoff modeling and flood forecasting, and is also essential for understanding its further impact on local, regional, and even global water cycle dynamics. In this study, the Remote Sensing-based Gash model (RS-Gash model) is developed from a modified Gash model for interception loss estimation using remote sensing observations at the regional scale, and has been applied and validated in the upper reach of the Heihe River Basin of China for different types of vegetation. To eliminate the scale error and the effect of mixed pixels, the RS-Gash model is applied at a fine scale of 30 m with the high-resolution vegetation area index retrieved using the unified model of the bidirectional reflectance distribution function (BRDF-U) for the vegetation canopy. Field validation shows that the RMSE and R2 of the interception ratio are 3.7% and 0.9, respectively, indicating the model's strong stability and reliability at fine scale. The temporal variation of vegetation rainfall interception loss and its relationship with precipitation are further investigated. In summary, the RS-Gash model has demonstrated its effectiveness and reliability in estimating vegetation rainfall interception. When compared to the coarse-resolution results, the application of this model at 30-m resolution is necessary to resolve the scaling issues shown in this study.

  13. Reconstruction of 24 Penicillium genome-scale metabolic models shows diversity based on their secondary metabolism.

    PubMed

    Prigent, Sylvain; Nielsen, Jens Christian; Frisvad, Jens Christian; Nielsen, Jens

    2018-06-05

    Modelling of metabolism at the genome scale has proved to be an efficient method for explaining observed phenotypic traits in living organisms. Further, it can be used as a means of predicting the effect of genetic modifications, e.g., for the development of microbial cell factories. With the increasing amount of genome sequencing data available, a need exists to accurately and efficiently generate genome-scale metabolic models (GEMs) of non-model organisms, for which data is sparse. In this study, we present an automatic reconstruction approach applied to 24 Penicillium species, which have potential for production of pharmaceutical secondary metabolites or are used in the manufacture of food products such as cheeses. The models were based on the MetaCyc database and a previously published Penicillium GEM, and gave rise to comprehensive genome-scale metabolic descriptions. The models showed that while central carbon metabolism is highly conserved, secondary metabolic pathways represent the main diversity among the species. The automatic reconstruction approach presented in this study can be applied to generate GEMs of other understudied organisms, and the developed GEMs are a useful resource for the study of Penicillium metabolism, for example with the scope of developing novel cell factories.
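
    Workflows with GEMs of this kind are commonly run through constraint-based toolkits such as cobrapy; the sketch below uses standard cobrapy calls with a hypothetical file name to load a model, run flux balance analysis, and screen single-gene knockouts:

        import cobra
        from cobra.flux_analysis import single_gene_deletion

        # Load a genome-scale metabolic model from SBML (file name is hypothetical).
        model = cobra.io.read_sbml_model("penicillium_species_gem.xml")

        # Flux balance analysis: maximize the model's biomass objective.
        solution = model.optimize()
        print("predicted growth rate:", solution.objective_value)

        # Predict the growth effect of every single-gene knockout.
        knockouts = single_gene_deletion(model)
        print(knockouts.sort_values("growth").head())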

  14. Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method

    NASA Technical Reports Server (NTRS)

    Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.

    2014-01-01

    A multi-scale computational model for determining the plastic behavior of two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural-scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with a 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. Large computational savings, at the expense of some accuracy in the components of the local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural-scale details of the field quantities.

  15. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    PubMed Central

    Wind, Stefanie A.

    2016-01-01

    Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916

  16. Incorporation of Solar-Induced Chlorophyll Fluorescence into the Breathing Earth System Simulator (BESS)

    NASA Astrophysics Data System (ADS)

    Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.

    2017-12-01

    Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool for remotely estimating terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to depend on plant functional type. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results have relied on SCOPE, a complex ecophysiological model with an explicit description of canopy layers and a large number of parameters that may not be easily and reliably obtained at large scales. Here, we report on our efforts to incorporate SIF into a two-big-leaf (sun and shade) process-based model that is suitable for obtaining its inputs entirely from satellite products. We examine whether the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate whether incorporation of the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.

  17. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model for vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series; generation of an intermediate function by combining the two Fourier series for a large scale and a small scale; and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the order n at which the Fourier expansion is truncated. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
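
    A minimal sketch of the approach (assuming closed boundaries sampled with the same number of points; not the authors' code): treat boundary vertices as complex numbers, take FFTs, linearly blend the two coefficient spectra, truncate to n low-frequency terms, and invert:

        import numpy as np

        def morph(boundary_a, boundary_b, t, n_keep=20):
            """Blend two closed boundaries given as complex point arrays of equal length.

            t      -- morphing parameter: 0 -> boundary_a, 1 -> boundary_b
            n_keep -- number of low-frequency Fourier terms retained
            """
            fa, fb = np.fft.fft(boundary_a), np.fft.fft(boundary_b)
            f = (1 - t) * fa + t * fb            # blend the two spectra
            keep = np.zeros_like(f)
            keep[:n_keep] = f[:n_keep]           # low positive frequencies
            keep[-n_keep:] = f[-n_keep:]         # matching negative frequencies
            return np.fft.ifft(keep)

        theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        shape_a = np.exp(1j * theta)                                   # circle
        shape_b = np.exp(1j * theta) * (1 + 0.3 * np.cos(4 * theta))   # lobed variant
        mid = morph(shape_a, shape_b, t=0.5)                           # intermediate shape
        print(mid[:3])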

  18. The Development and Validation of the Online Shopping Addiction Scale.

    PubMed

    Zhao, Haiyan; Tian, Wei; Xin, Tao

    2017-01-01

    We report the development and validation of a scale to measure online shopping addiction. Inspired by previous theories and research on behavioral addiction, we drew on Griffiths's widely accepted six-component model of addiction and constructed an 18-item scale, with each component measured by three items. The results of exploratory factor analysis, based on Sample 1 (999 college students), and confirmatory factor analysis, based on Sample 2 (854 college students), showed that Griffiths's substantive six-factor structure underlay the online shopping addiction scale. Cronbach's alpha suggested that the resulting scale was highly reliable. Concurrent validity, based on Sample 3 (328 college students), was also satisfactory, as indicated by correlations between the scale and measures of similar constructs. Finally, the scale predicted self-perceived online shopping addiction to a relatively high degree. The present 18-item scale is a solid theory-based instrument to empirically measure online shopping addiction and can be used for understanding the phenomenon among young adults.

  19. The Development and Validation of the Online Shopping Addiction Scale

    PubMed Central

    Zhao, Haiyan; Tian, Wei; Xin, Tao

    2017-01-01

    We report the development and validation of a scale to measure online shopping addiction. Inspired by previous theories and research on behavioral addiction, we drew on Griffiths's widely accepted six-component model of addiction and constructed an 18-item scale, with each component measured by three items. The results of exploratory factor analysis, based on Sample 1 (999 college students), and confirmatory factor analysis, based on Sample 2 (854 college students), showed that Griffiths's substantive six-factor structure underlay the online shopping addiction scale. Cronbach's alpha suggested that the resulting scale was highly reliable. Concurrent validity, based on Sample 3 (328 college students), was also satisfactory, as indicated by correlations between the scale and measures of similar constructs. Finally, the scale predicted self-perceived online shopping addiction to a relatively high degree. The present 18-item scale is a solid theory-based instrument to empirically measure online shopping addiction and can be used for understanding the phenomenon among young adults. PMID:28559864

  20. Systems metabolic engineering: genome-scale models and beyond.

    PubMed

    Blazeck, John; Alper, Hal

    2010-07-01

    The advent of high-throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches, based on the data collected with high-throughput technologies, to identify gene targets and optimize phenotypic properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded to novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems.

  1. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of topography and denudation rates within the deglaciated Washington Cascades to understand the controls on, and time scales of, landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes river incision, hillslope diffusion, and stochastic landslides. Surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of landslide generation. The model parameters for river incision and stochastic landslides are calibrated against the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be. The probability distributions of these model parameters, calculated with a Bayesian inversion scheme, show ranges comparable to those from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of, or longer than, glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.
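
    Physically based slope stability components of this kind are commonly built on an infinite-slope factor of safety; the snippet below is a generic, hedged version (the standard infinite-slope formula with illustrative parameters, not the authors' calibrated model):

        import math

        def factor_of_safety(slope_deg, z=2.0, c=2000.0, phi_deg=35.0,
                             gamma=18000.0, gamma_w=9810.0, m=0.5):
            """Infinite-slope factor of safety (FS < 1 implies failure).

            slope_deg -- hillslope angle (degrees)
            z         -- soil depth (m);    c     -- cohesion (Pa), illustrative
            phi_deg   -- friction angle;    gamma -- soil unit weight (N/m^3)
            gamma_w   -- water unit weight; m     -- saturated fraction of the soil column
            """
            b, phi = math.radians(slope_deg), math.radians(phi_deg)
            resisting = c + (gamma - m * gamma_w) * z * math.cos(b) ** 2 * math.tan(phi)
            driving = gamma * z * math.sin(b) * math.cos(b)
            return resisting / driving

        for s in (20, 30, 40):   # steep deglaciated slopes approach and cross FS = 1
            print(s, round(factor_of_safety(s), 2))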

  2. A Chimpanzee (Pan troglodytes) Model of Triarchic Psychopathy Constructs: Development and Initial Validation

    PubMed Central

    Latzman, Robert D.; Drislane, Laura E.; Hecht, Lisa K.; Brislin, Sarah J.; Patrick, Christopher J.; Lilienfeld, Scott O.; Freeman, Hani J.; Schapiro, Steven J.; Hopkins, William D.

    2015-01-01

    The current work sought to operationalize constructs of the triarchic model of psychopathy in chimpanzees (Pan troglodytes), a species well-suited for investigations of basic biobehavioral dispositions relevant to psychopathology. Across three studies, we generated validity evidence for scale measures of the triarchic model constructs in a large sample (N=238) of socially-housed chimpanzees. Using a consensus-based rating approach, we first identified candidate items for the chimpanzee triarchic (CHMP-Tri) scales from an existing primate personality instrument and refined these into scales. In Study 2, we collected data for these scales from human informants (N=301), and examined their convergent and divergent relations with scales from another triarchic inventory developed for human use. In Study 3, we undertook validation work examining associations between CHMP-Tri scales and task measures of approach-avoidance behavior (N=73) and ability to delay gratification (N=55). Current findings provide support for a chimpanzee model of core dispositions relevant to psychopathy and other forms of psychopathology. PMID:26779396

  3. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.

  4. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters, reducing their impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with NCEP Climate Forecast System Reanalysis outputs were made at regional climate model resolutions of about 40 and 25 km.

  5. Internal Fluid Dynamics and Frequency Scaling of Sweeping Jet Fluidic Oscillators

    NASA Astrophysics Data System (ADS)

    Seo, Jung Hee; Salazar, Erik; Mittal, Rajat

    2017-11-01

    Sweeping jet fluidic oscillators (SJFOs) are devices that produce a spatially oscillating jet solely through intrinsic flow instability mechanisms, without any moving parts. Recently, SJFOs have emerged as effective actuators for flow control, but the internal fluid dynamics that drives the oscillatory flow mechanism is not yet fully understood. In the current study, the internal fluid dynamics of a fluidic oscillator with feedback channels has been investigated by employing incompressible flow simulations. The study focuses on the oscillation mechanisms and scaling laws that underpin the jet oscillation. Based on the simulation results, simple phenomenological models that connect the jet deflection to the feedback flow are developed. Several geometric modifications are considered in order to explore the characteristic length scales and phase relationships associated with the jet oscillation and to assess the proposed phenomenological model. A scaling law for the jet oscillation frequency is proposed based on the detailed analysis. This research is supported by AFOSR Grant FA9550-14-1-0289, monitored by Dr. Douglas Smith.

  6. Sybil: efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).

  7. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set positions toward the original positions, i.e., the node positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale.
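
    A toy version of the spring relaxation (illustrative constants; the published algorithm adds the three patches described above): anchors stay fixed while a blind node is pulled by virtual springs whose rest lengths are the measured distances to its neighbors:

        import numpy as np

        rng = np.random.default_rng(7)
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        true_pos = np.array([3.0, 6.0])
        # Measured ranges to the anchors, with hypothetical ranging noise.
        dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

        pos = rng.uniform(0, 10, 2)          # random initial guess for the blind node
        for _ in range(200):
            diff = pos - anchors             # vectors from each anchor to the node
            cur = np.linalg.norm(diff, axis=1)
            # Each spring pulls with force proportional to (current - rest) length.
            force = -((cur - dists) / cur)[:, None] * diff
            pos = pos + 0.1 * force.sum(axis=0)

        print(pos)                           # converges near the true position (3, 6)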

  8. The Importance of Uncertainty and Sensitivity Analysis in Process-based Models of Carbon and Nitrogen Cycling in Terrestrial Ecosystems with Particular Emphasis on Forest Ecosystems — Selected Papers from a Workshop Organized by the International Society for Ecological Modelling (ISEM) at the Third Biennal Meeting of the International Environmental Modelling and Software Society (IEMSS) in Burlington, Vermont, USA, August 9-13, 2006

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Liu, Jinxun; Ascough, James C.; Gordon, Andrew M.

    2008-01-01

    Many process-based models of carbon (C) and nitrogen (N) cycles have been developed for terrestrial ecosystems, including forest ecosystems. They address many basic issues of ecosystems structure and functioning, such as the role of internal feedback in ecosystem dynamics. The critical factor in these phenomena is scale, as these processes operate at scales from the minute (e.g. particulate pollution impacts on trees and other organisms) to the global (e.g. climate change). Research efforts remain important to improve the capability of such models to better represent the dynamics of terrestrial ecosystems, including the C, nutrient, (e.g. N) and water cycles. Existing models are sufficiently well advanced to help decision makers develop sustainable management policies and planning of terrestrial ecosystems, as they make realistic predictions when used appropriately. However, decision makers must be aware of their limitations by having the opportunity to evaluate the uncertainty associated with process-based models (Smith and Heath, 2001 and Allen et al., 2004). The variation in scale of issues currently being addressed by modelling efforts makes the evaluation of uncertainty a daunting task.

  9. Electrical conductivity of nanocomposites based on carbon nanotubes: a 3D multiscale modeling approach

    NASA Astrophysics Data System (ADS)

    Grabowski, Krzysztof; Zbyrad, Paulina; Staszewski, Wieslaw J.; Uhl, Tadeusz; Wiatr, Kazimierz; Packo, Pawel

    2016-04-01

    Remarkable electrical properties of carbon nanotubes (CNTs) have led to increased interest in studying CNT-based devices. Much current research is devoted to using carbon nanomaterials in the construction of sensory elements. One of the most common applications is the development of high-performance, large-scale sensors. Due to the remarkable conductivity of CNTs, such devices exhibit very high sensitivity. However, there are no sufficient tools for studying and designing such sensors. The main objective of this paper is to develop and validate a multiscale numerical model for a carbon nanotube-based sensor. The device utilises the change of electrical conductivity of a nanocomposite material under applied deformation. The nanocomposite consists of a number of CNTs dispersed in a polymer matrix. The paper analyzes the impact of the spatial distribution of carbon nanotubes in the polymer matrix on the electrical conductivity of the sensor. A key element is also to examine the impact of strain on electric charge flow in such anisotropic composite structures. In the following work a multiscale electro-mechanical model for CNT-based nanocomposites is proposed. The model comprises two length scales, namely the meso-scale and the macro-scale, for the mechanical and electrical domains. The approach allows for evaluation of the macro-scale mechanical response of a strain sensor. Electrical properties of the polymeric material with given CNT fractions were derived considering the electrical properties of the CNTs, their contacts, and the tunnelling effect.

  10. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing details and limited performance of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this framework a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, and colorizing gray fusion images and infrared images.

  11. A priori testing of subgrid-scale models for large-eddy simulation of the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Juneja, Anurag; Brasseur, James G.

    1996-11-01

    Subgrid-scale models are generally developed assuming homogeneous isotropic turbulence with the filter cutoff lying in the inertial range. In the surface layer and capping inversion regions of the atmospheric boundary layer, the turbulence is strongly anisotropic and, in general, influenced by both buoyancy and shear. Furthermore, the integral scale motions are under-resolved in these regions. Herein we perform direct numerical simulations of shear and buoyancy-generated homogeneous anisotropic turbulence to compute and analyze the actual subgrid-resolved-scale (SGS-RS) dynamics as the filter cutoff moves into the energy-containing scales. These are compared with the SGS-RS dynamics predicted by Smagorinsky-based models with a focus on motivating improved closures. We find that, in general, the underlying assumption of such models, that the anisotropic part of the subgrid stress tensor be aligned with the resolved strain rate tensor, is a poor approximation. Similarly, we find poor alignment between the actual and predicted stress divergence, and find low correlations between the actual and modeled subgrid-scale contribution to the pressure and pressure gradient. Details will be given in the talk.
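
    In an a priori test of a Smagorinsky-type closure, the modeled stress tau_ij = -2 (C_s Delta)^2 |S| S_ij is compared point-by-point with the stress extracted from filtered DNS fields. The fragment below (synthetic velocity gradients standing in for DNS data) computes the tensor alignment correlation used in such comparisons:

        import numpy as np

        rng = np.random.default_rng(5)

        def smagorinsky_stress(grad_u, cs_delta=0.1):
            """Deviatoric Smagorinsky stress from a 3x3 velocity gradient tensor."""
            s = 0.5 * (grad_u + grad_u.T)              # resolved strain rate tensor
            s -= np.trace(s) / 3.0 * np.eye(3)         # deviatoric part
            s_mag = np.sqrt(2.0 * np.sum(s * s))
            return -2.0 * cs_delta**2 * s_mag * s

        def alignment(a, b):
            """Tensor inner-product correlation; 1 means perfectly aligned."""
            return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))

        corrs = []
        for _ in range(1000):
            g = rng.standard_normal((3, 3))
            tau_model = smagorinsky_stress(g)
            # Stand-in for the "actual" SGS stress: model stress plus a random
            # deviatoric part, mimicking imperfect alignment in anisotropic flow.
            noise = rng.standard_normal((3, 3))
            noise = 0.5 * (noise + noise.T) - np.trace(noise) / 3.0 * np.eye(3)
            tau_actual = tau_model + 0.05 * noise
            corrs.append(alignment(tau_actual, tau_model))
        print(np.mean(corrs))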

  12. Pore scale Assessment of Heat and Mass transfer in Porous Medium Using Phase Field Method with Application to Soil Borehole Thermal Storage (SBTES) Systems

    NASA Astrophysics Data System (ADS)

    Moradi, A.

    2015-12-01

    To properly model soil thermal performance in unsaturated porous media, for applications such as SBTES systems, knowledge of both soil hydraulic and thermal properties and how they change in space and time is needed. Knowledge gained from pore-scale to macroscopic-scale studies can help us better understand these systems and contribute to the state of knowledge, which can then be translated to engineering applications in the field (i.e. implementation of SBTES systems at the field scale). One important thermal property that varies with soil water content, the effective thermal conductivity, is oftentimes included in numerical models through empirical relationships and simplified mathematical formulations developed from experimental data obtained at either small laboratory or field scales. These models assume that there is local thermodynamic equilibrium between the air and water phases for a representative elementary volume. However, this assumption may not always be valid at the pore scale, calling the validity of current modeling approaches into question. The purpose of this work is to evaluate the validity of the local thermodynamic equilibrium assumption as related to the effective thermal conductivity at the pore scale. A numerical model based on the coupled Cahn-Hilliard and heat transfer equations was developed to solve for liquid flow and heat transfer through variably saturated porous media. In this model, the evolution of phases and the interfaces between phases are related to a functional form of the total free energy of the system. A unique solution for the system is obtained by solving the Navier-Stokes equation through free energy minimization. Preliminary results demonstrate a correlation between soil temperature / degree of saturation and equivalent thermal conductivity / heat flux. Results also confirm the correlation between the magnitude of the pressure differential and the equilibration time for multiphase flow to reach steady-state conditions. Based on these results, the equivalent time for steady-state heat transfer is much larger than the equivalent time for steady-state multiphase flow for a given pressure differential. Moreover, the wetting-phase flow, and consequently the heat transfer, appear to be sensitive to the contact angle and porosity of the domain.

  13. ScaleNet: a literature-based model of scale insect biology and systematics

    PubMed Central

    García Morales, Mayrolin; Denno, Barbara D.; Miller, Douglass R.; Miller, Gary L.; Ben-Dov, Yair; Hardy, Nate B.

    2016-01-01

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis and plant-insect interactions. ScaleNet was launched in 1995 to provide insect identifiers, pest managers, insect systematists, evolutionary biologists and ecologists efficient access to information about scale insect biological diversity. It provides comprehensive information on scale insects taken directly from the primary literature. Currently, it draws from 23 477 articles and describes the systematics and biology of 8194 valid species. For 20 years, ScaleNet ran on the same software platform. That platform is no longer viable. Here, we present a new, open-source implementation of ScaleNet. We have normalized the data model, begun the process of correcting invalid data, upgraded the user interface, and added online administrative tools. These improvements make ScaleNet easier to use and maintain and make the ScaleNet data more accurate and extendable. Database URL: http://scalenet.info PMID:26861659

  14. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At the continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ and evaluation based on traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations are captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At the global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: the measures are largely insensitive to the climatic scenarios, land use and atmospheric gas concentrations used to drive the models, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
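
    The "time causal" quantifiers referred to here are typically built on the Bandt-Pompe ordinal-pattern distribution. A minimal sketch of the normalized permutation entropy (a generic implementation, not the authors' code):

    ```python
    import math
    from collections import Counter

    def permutation_entropy(x, order=4, delay=1):
        """Normalized Bandt-Pompe permutation entropy of a 1-D sequence.

        Ordinal patterns of length `order` are extracted with time `delay`;
        their relative frequencies form a distribution whose Shannon entropy
        is normalized by log(order!), so the result lies in [0, 1].
        """
        counts = Counter()
        n = len(x) - (order - 1) * delay
        for i in range(n):
            window = tuple(x[i + j * delay] for j in range(order))
            # the ordinal pattern is the permutation that sorts the window
            counts[tuple(sorted(range(order), key=window.__getitem__))] += 1
        probs = [c / n for c in counts.values()]
        entropy = -sum(p * math.log(p) for p in probs)
        return entropy / math.log(math.factorial(order))
    ```

    The statistical complexity used alongside this entropy combines it with the Jensen-Shannon distance to the uniform pattern distribution, which is what separates models with similar error metrics but different dynamics.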

  15. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At the continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ and evaluation based on traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations are captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At the global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: the measures are largely insensitive to the climatic scenarios, land use and atmospheric gas concentrations used to drive the models, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  16. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At the continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ and evaluation based on traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations are captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At the global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: the measures are largely insensitive to the climatic scenarios, land use and atmospheric gas concentrations used to drive the models, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187

  17. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.

    PubMed

    Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-05-12

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. Pattern formation permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation; consequently, larger-scale models that neglect this subgrid-scale variability are biased. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
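
    A toy illustration of the optimization step (all functional forms are invented for this sketch and are not taken from the paper): choose the vegetated fraction of a two-box grid cell so that the productive, entropy-producing water flux is maximal.

    ```python
    from scipy.optimize import minimize_scalar

    # Two-box toy: a fraction gamma of the rain on the bare box runs onto the
    # vegetated box; vegetation first meets a maintenance demand s0, then
    # transpires at most t_max per unit vegetated area.
    P, gamma, s0, t_max = 0.8, 0.7, 0.5, 1.0

    def transpiration(f):
        supply = P * (f + gamma * (1.0 - f)) / f          # water per vegetated area
        return f * min(max(supply - s0, 0.0), t_max)      # cell-total productive flux

    # MEP surrogate: maximize the flux that produces entropy.
    res = minimize_scalar(lambda f: -transpiration(f), bounds=(1e-3, 1.0),
                          method="bounded")
    print(f"optimal vegetated fraction ~ {res.x:.2f}")    # interior optimum, not 0 or 1
    ```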

  18. Low frequency full waveform seismic inversion within a tree based Bayesian framework

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe

    2018-01-01

    Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree-based Bayesian inversion scheme that attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low-frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples start from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree-based approach, together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography, provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.
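
    To illustrate why a wavelet-domain parameterization keeps the sampling low-dimensional (an illustrative sketch using PyWavelets, not the authors' code): a blocky velocity profile is captured almost exactly by a handful of Haar coefficients, and trans-D birth/death moves simply add or remove such coefficients.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    true_v = 1500.0 + 800.0 * (np.arange(256) >= 128)       # step velocity model, m/s
    arr, slices = pywt.coeffs_to_array(pywt.wavedec(true_v, "haar", level=6))

    k = 10                                                  # coefficients kept ("sampled model")
    sparse = np.zeros_like(arr)
    largest = np.argsort(np.abs(arr))[-k:]
    sparse[largest] = arr[largest]

    approx_v = pywt.waverec(
        pywt.array_to_coeffs(sparse, slices, output_format="wavedec"), "haar")
    print(float(np.abs(approx_v - true_v).max()))           # ~0: 10 of 256 coefficients suffice
    ```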

  19. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    PubMed

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological-scale implementations of neuron-based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in place of the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large-scale systems. Two versions of the network, utilizing the Izhikevich and Hodgkin-Huxley models, are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quadcore 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
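
    For reference, the Izhikevich model named above is a two-variable system that costs only a few operations per neuron per time step, which is what makes it map so well onto data-parallel GPUs. A generic sketch with the standard published parameters (not code from the paper):

    ```python
    import numpy as np

    def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        """One Euler step of the Izhikevich model for a whole population
        (v, u, I are equal-length arrays; defaults give regular spiking)."""
        v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        fired = v >= 30.0              # spike threshold, mV
        v[fired] = c                   # membrane reset
        u[fired] += d                  # recovery-variable jump
        return v, u, fired

    v = np.full(1000, -65.0)           # 1000 neurons driven by noisy input
    u = 0.2 * v
    for _ in range(100):
        v, u, fired = izhikevich_step(v, u, I=5.0 + np.random.randn(1000))
    ```

    On a GPU the same update runs as one thread per neuron, which is why the arithmetic-heavy Hodgkin-Huxley variant shows the larger speedup above.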

  20. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tails. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools: they decrease model complexity by aggregating and transferring parameters to coarser scales, drastically reducing computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, which are matched with the MRMT model. When predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches the corresponding fine-scale reference solution reasonably well. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model; beyond a certain complexity, however, the MRMT model ceases to add accuracy. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through the flow alignment. Thus, the block-scale transverse dispersivities remain of similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. The flow-aligned blocks also minimize numerical dispersion when solving the large-scale transport problem.
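
    For context, a standard multi-rate mass transfer formulation (generic form, not the specific calibration used in this work) couples the mobile concentration $c_m$ to a set of immobile zones $c_j$ through first-order exchange:

    $$\frac{\partial c_m}{\partial t} + \sum_j \beta_j\,\frac{\partial c_j}{\partial t} = \nabla\cdot\left(\mathbf{D}\,\nabla c_m\right) - \mathbf{v}\cdot\nabla c_m, \qquad \frac{\partial c_j}{\partial t} = \alpha_j\,(c_m - c_j),$$

    where $\beta_j$ are capacity ratios and $\alpha_j$ exchange rates. Fitting $\{\alpha_j, \beta_j\}$ to the block-wise temporal moments of particle arrival times is what carries the sub-block non-Fickian behaviour to the coarse scale.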

  1. Measurement Equivalence of Teachers' Sense of Efficacy Scale Using Latent Growth Methods

    ERIC Educational Resources Information Center

    Basokçu, T. Oguz; Ögretmen, T.

    2016-01-01

    This study applies latent growth modeling, a structural equation modeling technique, to real data. The Teachers' Sense of Efficacy Scale (TSES), which was previously adapted into Turkish, was administered three times, at different time intervals, to 200 preservice teachers to collect the study data. Measurement equivalence…

  2. Stress analysis of 27% scale model of AH-64 main rotor hub

    NASA Technical Reports Server (NTRS)

    Hodges, R. V.

    1985-01-01

    Stress analysis of a 27% scale model of the AH-64 main rotor hub was performed. Component loads and stresses were calculated from blade root loads and motions. The static and fatigue analyses indicate positive margins of safety in all components checked. Using the format developed here, the hub can be stress-checked for future applications.

  3. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained promising results. A scale-adaptive CT method based on multi-feature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which additionally gives a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model used to construct a naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
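
    A sketch of the conservative classifier update at the heart of CT-style trackers (generic form; variable names are ours): each compressive feature gets Gaussian class-conditionals whose moments are blended with a learning rate, so a single noisy frame cannot overwrite the appearance model.

    ```python
    import numpy as np

    def update_gaussian_params(mu, sigma, samples, lam=0.85):
        """Conservatively update per-feature Gaussian parameters from the
        compressive features of new positive (or negative) samples.

        mu, sigma: (n_features,) current class-conditional moments
        samples:   (n_samples, n_features) features extracted this frame
        lam:       learning rate; larger lam = more conservative update
        """
        mu_new = samples.mean(axis=0)
        sig_new = samples.std(axis=0)
        sigma = np.sqrt(lam * sigma**2 + (1 - lam) * sig_new**2
                        + lam * (1 - lam) * (mu - mu_new)**2)   # blend variances first
        mu = lam * mu + (1 - lam) * mu_new                      # then blend means
        return mu, sigma
    ```

    Classification then sums the per-feature log-likelihood ratios of the positive and negative Gaussians and picks the candidate window with the maximal score.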

  4. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models generally perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage over topographic index classes. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and the shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in three river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to calibrate. The TRG algorithm also avoided equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
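
    A sketch of the quantity everything here is built on (the generic TOPMODEL definition; the gridded inputs are assumed to come from a separate flow-routing step): the topographic index ln(a / tan beta), which TRG bins into classes that each receive a share of the cell-average storage.

    ```python
    import numpy as np

    def topographic_index(upslope_area_m2, slope_rad, cell_size=90.0):
        """TOPMODEL topographic index ln(a / tan(beta)) per grid cell, with
        a the specific upslope area (contributing area per unit contour
        length) and beta the local slope."""
        a = upslope_area_m2 / cell_size
        return np.log(a / np.clip(np.tan(slope_rad), 1e-6, None))

    def class_storage(ti, n_classes=20, s=1.0):
        """Distribute storage capacity over topographic-index classes:
        capacity proportional to the index range, scaled by the single
        parameter s (our reading of the TRG idea, for illustration only)."""
        edges = np.linspace(ti.min(), ti.max(), n_classes + 1)
        mid = 0.5 * (edges[:-1] + edges[1:])
        return s * (ti.max() - mid)    # high-index (wet) classes saturate first
    ```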

  5. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2010-09-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models generally perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage over topographic index classes. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and the shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in three river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to calibrate. The TRG algorithm also avoided equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.

  6. Propulsion mechanisms for Leidenfrost solids on ratchets.

    PubMed

    Baier, Tobias; Dupeux, Guillaume; Herbert, Stefan; Hardt, Steffen; Quéré, David

    2013-02-01

    We propose a model for the propulsion of Leidenfrost solids on ratchets based on viscous drag due to the flow of evaporating vapor. The model assumes pressure-driven flow described by the Navier-Stokes equations and is mainly studied in lubrication approximation. A scaling expression is derived for the dependence of the propulsive force on geometric parameters of the ratchet surface and properties of the sublimating solid. We show that the model results as well as the scaling law compare favorably with experiments and are able to reproduce the experimentally observed scaling with the size of the solid.
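
    For context, the lubrication approximation mentioned above reduces the vapor flow beneath the solid to a Poiseuille balance (the standard thin-film result, not the paper's final scaling law): for a film of local thickness $h$ and vapor viscosity $\eta_v$, the volume flux per unit width is

    $$q = -\frac{h^{3}}{12\,\eta_v}\,\frac{\partial p}{\partial x},$$

    so the asymmetric ratchet teeth rectify the evaporation-fed, pressure-driven flow, and the viscous stress this flow exerts on the solid supplies the propulsive force.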

  7. Integrated Modeling for Watershed Ecosystem Services Assessment and Forecasting

    EPA Science Inventory

    Regional scale watershed management decisions must be informed by the science-based relationship between anthropogenic activities on the landscape and the change in ecosystem structure, function, and services that occur as a result. We applied process-based models that represent...

  8. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    NASA Astrophysics Data System (ADS)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first developed the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM rather than simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois agrees well (R2 = 0.79, p < 0.01) with observations over the years 1984-2001. The modeled monthly stream flow for most of the 12 major US rivers examined is consistent (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficients > 0.52) with observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly stream flow in winter and early spring by incorporating the effects of solar radiation on snowmelt. Overall, this study demonstrates the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH should be a useful tool for studying the effects of climate and land cover change on land surface hydrology at large spatial scales.
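
    A minimal sketch of the kind of grid-cell water balance such a model solves (a toy bucket scheme for illustration; LH's actual process formulations are richer):

    ```python
    def bucket_step(S, P, PET, S_max=150.0):
        """One daily water-balance step for a single grid cell.

        S: soil-water storage (mm); P: precipitation (mm);
        PET: potential evapotranspiration (mm).
        Returns updated storage, actual ET and surface runoff.
        """
        ET = PET * min(1.0, S / S_max)     # moisture-limited evapotranspiration
        S = S + P - ET
        R = max(0.0, S - S_max)            # saturation excess becomes runoff
        return S - R, ET, R
    ```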

  9. On-lattice agent-based simulation of populations of cells within the open-source Chaste framework.

    PubMed

    Figueredo, Grazziela P; Joshi, Tanvi V; Osborne, James M; Byrne, Helen M; Owen, Markus R

    2013-04-06

    Over the years, agent-based models have been developed that combine cell division and reinforced random walks of cells on a regular lattice, reaction-diffusion equations for nutrients and growth factors, and ordinary differential equations for the subcellular networks regulating the cell cycle. When linked to a vascular layer, this multi-scale model framework has been applied to tumour growth and therapy. Here, we report on the creation of an agent-based multi-scale environment amalgamating the characteristics of these models within a Virtual Physiological Human (VPH) Exemplar Project. This project enables reuse, integration, expansion and sharing of the model and relevant data. The agent-based and reaction-diffusion parts of the multi-scale model have been implemented and are available for download as part of the latest public release of Chaste (Cancer, Heart and Soft Tissue Environment; http://www.cs.ox.ac.uk/chaste/), part of the VPH Toolkit (http://toolkit.vph-noe.eu/). The environment's functionality is verified against the original models, in addition to extra validation of all aspects of the code. In this work, we present the details of the implementation of the agent-based environment, including the system description, the conceptual model, the development of the simulation model and the processes of verification and validation of the simulation results. We explore the potential use of the environment by presenting exemplar applications of the 'what if' scenarios that can easily be studied in the environment. These examples relate to tumour growth, cellular competition for resources and tumour responses to hypoxia (low oxygen levels). We conclude by summarizing the future steps for the expansion of the current system.
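
    An illustrative on-lattice rule set of the kind such environments combine (a toy sketch, not the Chaste API): cells occupy lattice sites and divide into a free neighbouring site with a nutrient-dependent probability.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 50
    occupied = np.zeros((L, L), dtype=bool)
    occupied[L // 2, L // 2] = True                 # seed cell
    nutrient = np.ones((L, L))                      # uniform toy nutrient field

    def neighbours(i, j):
        return [((i + di) % L, (j + dj) % L)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for step in range(200):
        for i, j in zip(*np.nonzero(occupied)):     # cells present at step start
            free = [n for n in neighbours(i, j) if not occupied[n]]
            if free and rng.random() < 0.1 * nutrient[i, j]:
                occupied[free[rng.integers(len(free))]] = True   # division
    print(occupied.sum(), "cells after 200 steps")
    ```

    In the full framework the nutrient field would itself evolve by reaction-diffusion, and intracellular ODEs would gate the division probability.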

  10. A semiparametric graphical modelling approach for large-scale equity selection.

    PubMed

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
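
    A sketch of the rank-based pipeline (illustrative only; tuning, positive-definite projection and stability selection are omitted): estimate Kendall's tau, map it to a latent Gaussian correlation via the elliptical-copula identity sin(pi/2 * tau), then sparsify with a graphical lasso.

    ```python
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.covariance import graphical_lasso

    def rank_based_dependence_graph(returns, alpha=0.1):
        """returns: (n_days, n_stocks) array. The sparse precision matrix
        marks near-independent stock pairs for the selection step."""
        n = returns.shape[1]
        corr = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                tau = kendalltau(returns[:, i], returns[:, j])[0]
                corr[i, j] = corr[j, i] = np.sin(0.5 * np.pi * tau)  # copula map
        # in practice corr may need projecting to the nearest positive-
        # definite matrix before the graphical lasso will converge
        _, precision = graphical_lasso(corr, alpha=alpha)
        return precision
    ```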

  11. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter Andrew

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  12. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  13. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP), which will last four years (2012-2015). In terms of hydrological modelling, the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  14. Observing and modeling dynamics in terrestrial gross primary productivity and phenology from remote sensing: An assessment using in-situ measurements

    NASA Astrophysics Data System (ADS)

    Verma, Manish K.

    Terrestrial gross primary productivity (GPP) is the largest and most variable component of the carbon cycle and is strongly influenced by phenology. Realistic characterization of spatio-temporal variation in GPP and phenology is therefore crucial for understanding dynamics in the global carbon cycle. In the last two decades, remote sensing has become a widely used tool for this purpose. However, no study has comprehensively examined how well remote sensing models capture spatio-temporal patterns in GPP, and validation of remote sensing-based phenology models is limited. Using in-situ data from 144 eddy covariance towers located in all major biomes, I assessed the ability of 10 remote sensing-based methods to capture spatio-temporal variation in GPP at annual and seasonal scales. The models are based on different hypotheses regarding ecophysiological controls on GPP and span a range of structural and computational complexity. The results lead to four main conclusions: (i) at the annual time scale, models were more successful at capturing spatial variability than temporal variability; (ii) at the seasonal scale, models were more successful at capturing average seasonal variability than interannual variability; (iii) simpler models performed as well as or better than complex models; and (iv) the models best at explaining seasonal variability in GPP were different from those best able to explain variability in annual-scale GPP. Seasonal phenology of vegetation follows bounded growth and decay, and is widely modeled using growth functions. However, the specific form of the growth function affects how phenological dynamics are represented in ecosystem and remote sensing-based models. To examine this, four different growth functions (the logistic, Gompertz, Mirror-Gompertz and Richards functions) were assessed using remotely sensed and in-situ data collected at several deciduous forest sites. All of the growth functions provided good statistical representations of in-situ and remote sensing time series. However, the Richards function captured observed asymmetric dynamics that were not captured by the other functions. The timing of key phenophase transitions derived using the Richards function therefore agreed best with observations. This suggests that ecosystem models and remote-sensing algorithms would benefit from using the Richards function to represent phenological dynamics.
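
    For reference, the growth functions compared in the thesis take these standard forms (amplitude $A$, rate $k$, inflection time $t_0$, shape parameter $\nu$; the Mirror-Gompertz is the Gompertz reflected about its inflection point):

    $$\text{logistic:}\;\; y(t) = \frac{A}{1 + e^{-k(t - t_0)}}, \qquad \text{Gompertz:}\;\; y(t) = A\,e^{-e^{-k(t - t_0)}}, \qquad \text{Richards:}\;\; y(t) = \frac{A}{\left(1 + \nu\,e^{-k(t - t_0)}\right)^{1/\nu}}.$$

    The Richards function nests the other two ($\nu = 1$ recovers the logistic; $\nu \to 0$ approaches the Gompertz), and its free shape parameter is what lets it capture the asymmetric green-up and senescence noted above.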

  15. Length-scale dependent mechanical properties of Al-Cu eutectic alloy: Molecular dynamics based model and its experimental verification

    NASA Astrophysics Data System (ADS)

    Tiwary, C. S.; Chakraborty, S.; Mahapatra, D. R.; Chattopadhyay, K.

    2014-05-01

    This paper attempts to gain an understanding of the effect of lamellar length scale on the mechanical properties of a two-phase metal-intermetallic eutectic structure. We first develop a molecular dynamics model of the in-situ grown eutectic interface, followed by a model of the deformation of the Al-Al2Cu lamellar eutectic. Leveraging the insights obtained from the simulations into the behaviour of dislocations at different length scales of the eutectic, we present and explain experimental results on Al-Al2Cu eutectics with different lamellar spacings. The physics behind the mechanism is further quantified with the help of an atomic-level energy model at different length scales and strains. An atomic-level partitioning of the energy between the lamellae and the interface regions reveals that energy accumulates in the lamella cores due to dislocations irrespective of the length scale, whereas dislocation energy accumulates at the interfaces when the length scale is small; the trend reverses when the length scale grows beyond a critical size of about 80 nm.

  16. A New Framework for Cumulus Parametrization - A CPT in action

    NASA Astrophysics Data System (ADS)

    Jakob, C.; Peters, K.; Protat, A.; Kumar, V.

    2016-12-01

    The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales if we are to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at a prediction of the area fraction of deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and the variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. The presentation will highlight how close collaboration of the observational, theoretical and model development communities, in the spirit of the climate process teams, can lead to significant progress on long-standing issues in climate modelling while preserving the freedom of individual groups to pursue their specific implementation of an agreed framework.

  17. Downscaling Ocean Conditions: Initial Results using a Quasigeostrophic and Realistic Ocean Model

    NASA Astrophysics Data System (ADS)

    Katavouta, Anna; Thompson, Keith

    2014-05-01

    Previous theoretical work (Henshaw et al., 2003) has shown that the small-scale modes of variability of solutions of the unforced, incompressible Navier-Stokes equation, and of Burgers' equation, can be reconstructed with surprisingly high accuracy from the time history of a few of the large-scale modes. Motivated by this theoretical work, we first describe a straightforward method for assimilating information on the large scales in order to recover the small-scale oceanic variability. The method is based on nudging in specific wavebands and frequencies and is similar to the so-called spectral nudging method that has been used successfully for atmospheric downscaling with limited-area models (e.g. von Storch et al., 2000). The validity of the method is tested using a quasigeostrophic model configured to simulate a double ocean gyre separated by an unstable mid-ocean jet. It is shown that important features of the ocean circulation, including the position of the meandering mid-ocean jet and associated pinch-off eddies, can indeed be recovered from the time history of a small number of large-scale modes. The benefit of assimilating additional time series of observations from a limited number of locations, which alone are too sparse to significantly improve the recovery of the small scales using traditional assimilation techniques, is also demonstrated using several twin experiments. The final part of the study outlines the application of the approach using a realistic high-resolution (1/36 degree) model, based on the NEMO (Nucleus for European Modelling of the Ocean) modeling framework, configured for the Scotian Shelf off the east coast of Canada. The large-scale conditions used in this application are obtained from the HYCOM (HYbrid Coordinate Ocean Model) + NCODA (Navy Coupled Ocean Data Assimilation) global 1/12 degree analysis product. Henshaw, W., Kreiss, H.-O., Ystrom, J., 2003. Numerical experiments on the interaction between the larger- and the small-scale motion of the Navier-Stokes equations. Multiscale Modeling and Simulation 1, 119-149. von Storch, H., Langenberg, H., Feser, F., 2000. A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review 128, 3664-3673.
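
    A sketch of the waveband nudging step (illustrative; the grid, cutoff wavenumber and relaxation coefficient are placeholders): only Fourier modes below the cutoff are relaxed toward the large-scale driving field, leaving the model free to develop its own small scales.

    ```python
    import numpy as np

    def spectral_nudge(model_field, driving_field, k_cut=0.05, alpha=0.1, dt=1.0):
        """Relax the large-scale Fourier modes (|k| <= k_cut, in cycles per
        grid interval) of a 2-D model field toward a driving field."""
        F_m = np.fft.fft2(model_field)
        F_d = np.fft.fft2(driving_field)
        kx = np.fft.fftfreq(model_field.shape[0])[:, None]
        ky = np.fft.fftfreq(model_field.shape[1])[None, :]
        large_scale = np.sqrt(kx**2 + ky**2) <= k_cut
        F_m[large_scale] += alpha * dt * (F_d[large_scale] - F_m[large_scale])
        return np.real(np.fft.ifft2(F_m))
    ```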

  18. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or mesoscopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation can be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyperelastic-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale crystal plasticity model.

  19. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    PubMed

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technique to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule for combining wavelet scales based on the sensitivity of each scale, and selected the appropriate combination of wavelet scales using sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors obtained with the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.
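
    A sketch of the scale-screening step (illustrative; the sampling rate, scale set and wavelet are placeholders, and a plain linear correlation stands in for the paper's nonlinear sensitivity measure):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def score_scales(semg, force, scales=np.arange(1, 11), wavelet="morl", fs=1000.0):
        """Correlation between each CWT scale's energy envelope of the SEMG
        and the measured handgrip force (both sampled at fs)."""
        coefs, _ = pywt.cwt(semg, scales, wavelet, sampling_period=1.0 / fs)
        energy = coefs**2                                  # (n_scales, n_samples)
        return np.array([np.corrcoef(e, force)[0, 1] for e in energy])

    # the highest-scoring scales would then be combined, per muscle, as in
    # the paper's sequence combination analysis (SCA)
    ```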

  20. Modeling greenhouse gas emissions (CO2, N2O, CH4) from managed arable soils with a fully coupled hydrology-biogeochemical modeling system simulating water and nutrient transport and associated carbon and nitrogen cycling at catchment scale

    NASA Astrophysics Data System (ADS)

    Klatt, Steffen; Haas, Edwin; Kraus, David; Kiese, Ralf; Butterbach-Bahl, Klaus; Kraft, Philipp; Plesca, Ina; Breuer, Lutz; Zhu, Bo; Zhou, Minghua; Zhang, Wei; Zheng, Xunhua; Wlotzka, Martin; Heuveline, Vincent

    2014-05-01

    The use of mineral nitrogen fertilizer sustains global food production and therefore human livelihoods. The rise in world population will put pressure on the global agricultural system to increase its productivity, most likely leading to an intensification of mineral nitrogen fertilizer use. The fate of excess nitrogen and its distribution within landscapes is manifold. Process knowledge at the site scale has grown rapidly in recent years, and models have been developed to simulate carbon and nitrogen cycling in managed ecosystems at that scale. Despite first regional studies, carbon and nitrogen cycling at the landscape or catchment scale is not fully understood. In this study we present a newly developed modelling approach that couples the fully distributed hydrology model CMF (Catchment Modelling Framework) to the process-based regional ecosystem model LandscapeDNDC for the investigation of hydrological processes and carbon and nitrogen transport and cycling, with a focus on nutrient displacement and resulting greenhouse gas emissions in a small catchment at the Yanting Agro-ecological Experimental Station of Purple Soil, Sichuan province, China. The catchment hosts cypress forests on the outer regions, arable fields on the sloping croplands cultivated with wheat-maize rotations, and paddy rice fields in the lowland. The catchment consists of 300 polygons vertically stratified into 10 soil layers. Ecosystem states (soil water content and nutrients) and fluxes (evapotranspiration) are exchanged between the models at a high temporal resolution (hourly to daily), forming a three-dimensional model application. Water flux and nutrient transport in the soil are modelled using a 3-D Richards/Darcy approach for subsurface fluxes, with a kinematic wave approach for surface runoff; evapotranspiration is based on Penman-Monteith. Biogeochemical processes are modelled by LandscapeDNDC, including soil microclimate, plant growth and biomass allocation, organic matter mineralisation, nitrification, denitrification, chemodenitrification and methanogenesis, producing and consuming soil-based greenhouse gases. We will present first validation results of the coupled model for simulating soil-based greenhouse gas emissions as well as nitrate discharge from the Yanting catchment, together with the effects of different management practices (fertilization rates and timings, tilling, residue management) on the redistribution of the N surplus within the catchment, which causes biomass productivity gradients and different levels of indirect N2O emissions along topographic gradients.
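
    A sketch of the coupling pattern described above (class and method names are hypothetical stand-ins, not the CMF or LandscapeDNDC APIs): the two models advance in lock-step, exchanging states and fluxes at each sub-daily step.

    ```python
    # Hypothetical co-simulation loop; Hydrology and Biogeochemistry stand in
    # for CMF and LandscapeDNDC, whose real interfaces differ.
    def run_coupled(hydro, biogeo, polygons, n_hours):
        for _ in range(n_hours):
            hydro.advance(dt_hours=1)                   # 3-D Richards/Darcy + surface runoff
            for p in polygons:
                biogeo.set_soil_state(p, hydro.soil_state(p))    # water, nitrate per layer
            biogeo.advance(dt_hours=1)                  # C/N cycling, plant growth, GHG fluxes
            for p in polygons:
                hydro.set_sink(p, biogeo.evapotranspiration(p))  # ET feeds back on soil water
    ```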
