Science.gov

Sample records for sheds-wood model incorporation

  1. Using Models that Incorporate Uncertainty

    ERIC Educational Resources Information Center

    Caulkins, Jonathan P.

    2002-01-01

    In this article, the author discusses the use in policy analysis of models that incorporate uncertainty. He believes that all models should consider incorporating uncertainty, but that at the same time it is important to understand that sampling variability is not usually the dominant driver of uncertainty in policy analyses. He also argues that…

  2. STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION MODEL FOR THE WOOD PRESERVATIVE SCENARIO (SHEDS-WOOD), VERSION 2 MODEL SAS CODE

    EPA Science Inventory

    Concerns have been raised regarding the safety of young children contacting arsenic and chromium residues while playing on and around Chromated Copper Arsenate (CCA) treated wood playground structures and decks. Although CCA registrants voluntarily canceled treated wood for resi...

  3. Incorporating opponent models into adversary search

    SciTech Connect

    Carmel, D.; Markovitch, S.

    1996-12-31

    This work presents a generalized theoretical framework that allows incorporation of opponent models into adversary search. We present the M* algorithm, a generalization of minimax that uses an arbitrary opponent model to simulate the opponent's search. The opponent model is a recursive structure consisting of the opponent's evaluation function and its model of the player. We demonstrate experimentally the potential benefit of using an opponent model. Pruning in M* is impossible in the general case. We prove a sufficient condition for pruning and present the αβ* algorithm, which returns the M* value of a tree while searching only necessary branches.
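
The recursive model structure described above lends itself to a compact sketch. The following Python rendering is illustrative, not the paper's exact formulation: positions are nested lists with numeric leaves, a model is a stack of leaf-evaluation functions (the mover's own, its model of the opponent, and so on), and as a simplifying fallback the deepest modelled player is assumed to model its opponent as itself.

```python
def m_star(pos, models):
    """M* value of `pos` for the player to move, plus the chosen child.

    `pos` is either a leaf payoff or a list of child positions.  `models`
    is a non-empty list of leaf-evaluation functions: models[0] is the
    mover's own evaluation, models[1] the mover's model of the opponent,
    models[2] the opponent's model of the mover, and so on.  When the list
    runs out, the deepest modelled player models its opponent as itself.
    """
    f = models[0]
    rest = models[1:] or models        # self-similar fallback at the bottom
    if not isinstance(pos, list):
        return f(pos), None
    best_val, best_child = float("-inf"), None
    for child in pos:
        if not isinstance(child, list):
            val = f(child)             # the game ends after this move
        else:
            # simulate the opponent's choice in `child` with our model of it
            _, reply = m_star(child, rest)
            # then value the predicted resulting position with our own stack
            val, _ = m_star(reply, models)
        if val > best_val:
            best_val, best_child = val, child
    return best_val, best_child
```

With a zero-sum model stack (the opponent minimizes the mover's evaluation) this reduces to minimax; a non-adversarial opponent model can change the chosen move, which is the benefit the paper demonstrates experimentally.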

  4. Incorporating interfacial phenomena in solidification models

    NASA Technical Reports Server (NTRS)

    Beckermann, Christoph; Wang, Chao Yang

    1994-01-01

    A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.

  5. Incorporation of RAM techniques into simulation modeling

    NASA Astrophysics Data System (ADS)

    Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.

    1995-01-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures that have an impact on design requirements are collected. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
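
As a rough illustration of the task-based approach, the sketch below walks a mission's task sequence and injects failures and repairs by Monte Carlo. The task names, failure probabilities, and repair times are invented for illustration; the actual AFAS/FARV model is far more detailed, with hierarchical subnetworks and branching logic.

```python
import random

def simulate_mission(tasks, p_fail, repair_time, n_runs=1000, seed=1):
    """Monte Carlo sketch of a task-based RAM simulation.  Each run walks
    the task sequence; a task can fail with probability p_fail[name],
    adding repair_time[name] hours of downtime before the mission resumes.
    Returns (mean mission duration, operational availability)."""
    rng = random.Random(seed)
    total, downtime = 0.0, 0.0
    for _ in range(n_runs):
        for name, duration in tasks:
            total += duration
            if rng.random() < p_fail[name]:
                total += repair_time[name]
                downtime += repair_time[name]
    return total / n_runs, 1.0 - downtime / total
```

Availability here is simply uptime over total mission time; sensitivity studies amount to re-running with altered failure or repair parameters.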

  6. Incorporating neurophysiological concepts in mathematical thermoregulation models

    NASA Astrophysics Data System (ADS)

    Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.

    2014-01-01

    Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist. However, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified as the root-mean-squared residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core, and skin temperature recordings of young males during three transient thermal challenges (one for development and two for validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF with an accuracy of RMSR < 0.27. Simulated skin temperatures (Tskin) were within 0.37 °C of the measured mean skin temperature. This study shows that (1) thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control can be captured in a mathematical model, and (2) human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach to modelling thermoregulation is preferable to engineering approaches because it is more in line with the underlying physiology.
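
The validation metric used here, the root-mean-squared residual between simulated and measured series, has a one-line implementation:

```python
import math

def rmsr(simulated, measured):
    """Root-mean-squared residual between a simulated and a measured
    series: sqrt(mean of squared pointwise residuals)."""
    residuals = [s - m for s, m in zip(simulated, measured)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

A perfect simulation gives RMSR = 0; the 0.37 °C figure above is the same statistic computed over mean skin temperature.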

  7. Incorporation of salinity in Water Availability Modeling

    NASA Astrophysics Data System (ADS)

    Wurbs, Ralph A.; Lee, Chihun

    2011-10-01

    Natural salt pollution from geologic formations in the upper watersheds of several large river basins in the Southwestern United States severely constrains the use of otherwise available major water supply sources. The Water Rights Analysis Package modeling system has been routinely applied in Texas since the late 1990s in regional and statewide planning studies and administration of the state's water rights permit system, but without consideration of water quality. The modeling system was recently expanded to incorporate salinity considerations in assessments of river/reservoir system capabilities for supplying water for environmental, municipal, agricultural, and industrial needs. Salinity loads and concentrations are tracked through systems of river reaches and reservoirs to develop concentration frequency statistics that augment flow frequency and water supply reliability metrics at pertinent locations for alternative water management strategies. Flexible generalized capabilities are developed for using limited observed salinity data to model highly variable concentrations imposed upon complex river regulation infrastructure and institutional water allocation/management practices.
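
A minimal sketch of the load-tracking idea (not the Water Rights Analysis Package itself): salt load and water volume are mass-balanced through a chain of fully mixed reaches or reservoirs, with each release carrying salt at the mixed concentration. The units, the full-mixing assumption, and the dictionary layout are illustrative.

```python
def route_salinity(reaches, inflow, inflow_conc):
    """Track a salt load through a chain of fully mixed reaches/reservoirs.

    Each reach is a dict with current "volume", stored salt "load", and a
    "release" volume.  Concentration = load / volume; releases carry salt
    at the reach's mixed concentration and become the next reach's inflow.
    Returns the mixed concentration computed in each reach."""
    conc_out = []
    vol, load = inflow, inflow * inflow_conc
    for r in reaches:
        r["volume"] += vol
        r["load"] += load
        conc = r["load"] / r["volume"]          # fully mixed assumption
        release = min(r["release"], r["volume"])
        r["volume"] -= release
        r["load"] -= release * conc
        vol, load = release, release * conc
        conc_out.append(conc)
    return conc_out
```

Running this over many hydrologic sequences is what yields the concentration frequency statistics described above.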

  8. Incorporating process variability into stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2015-11-15

    Process variability in pollutant build-up and wash-off generates inherent uncertainty that affects the outcomes of stormwater quality models. Poor characterisation of process variability constrains the accurate accounting of the uncertainty associated with pollutant processes. This acts as a significant limitation to effective decision making in relation to stormwater pollution mitigation. This study developed three theoretical scenarios based on research findings that variations in particle size fractions <150 μm and >150 μm during pollutant build-up and wash-off primarily determine the variability associated with these processes. These scenarios, which combine pollutant build-up and wash-off processes that take place on a continuous timeline, are able to explain process variability under different field conditions. Given the variability characteristics of a specific build-up or wash-off event, the theoretical scenarios help to infer the variability characteristics of the associated pollutant process that follows. Mathematical formulation of the theoretical scenarios enables the incorporation of variability characteristics of pollutant build-up and wash-off processes in stormwater quality models. The research outcomes will contribute to the quantitative assessment of uncertainty as an integral part of the interpretation of stormwater quality modelling outcomes. PMID:26179783

  9. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  10. SAI (SYSTEMS APPLICATIONS, INCORPORATED) URBAN AIRSHED MODEL

    EPA Science Inventory

    The magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air quality simulation model that is well suited for predicting the spatial and temporal distribution of photoch...

  11. A Financial Market Model Incorporating Herd Behaviour

    PubMed Central

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents’ accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents’ accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the

  12. A Financial Market Model Incorporating Herd Behaviour.

    PubMed

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. 
Finally, output from the model is compared to both the distribution of historical stock returns and the market
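
The pulse-coupled cascade mechanism can be sketched as follows. This is a deliberately simplified illustration, not the paper's model: a single information pulse is injected, an agent whose accumulated state reaches its threshold trades (fires), resets, and passes pulses to the other agents independently with the network coupling probability.

```python
import random

def cascade_size(states, threshold, p_couple, rng):
    """Size of one informational cascade on a fully connected
    pulse-coupled network (illustrative simplification).

    `states` holds each agent's accumulated information.  A pulse hits a
    random agent; any agent whose state reaches `threshold` fires (trades),
    resets, and sends a pulse to every other agent independently with
    probability p_couple.  Returns the number of agents that fired."""
    n = len(states)
    queue = [rng.randrange(n)]          # initial pulse hits a random agent
    fired = set()
    while queue:
        i = queue.pop()
        if i in fired:
            continue                    # already traded in this cascade
        states[i] += 1
        if states[i] >= threshold:
            fired.add(i)
            states[i] = 0               # reset after trading
            for j in range(n):
                if j != i and j not in fired and rng.random() < p_couple:
                    queue.append(j)
    return len(fired)
```

Collecting cascade sizes over many injected pulses gives the cascade-size distribution whose asymptotics (power law at the critical coupling, negative-binomial mixture approximation) the paper analyses.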

  13. Incorporating model uncertainty into spatial predictions

    SciTech Connect

    Handcock, M.S.

    1996-12-31

    We consider a modeling approach for spatially distributed data. We are concerned with aspects of statistical inference for Gaussian random fields when the ultimate objective is to predict the value of the random field at unobserved locations. However, the exact statistical model is seldom known beforehand and is usually estimated from the very same data relative to which the predictions are made. Our objective is to assess the effect of the model being estimated, rather than known, on the prediction and the associated prediction uncertainty. We describe a method for achieving this objective: in essence, we consider the best linear unbiased prediction procedure based on the estimated model within a Bayesian framework. These ideas are implemented for spring temperature over a region of the northern United States, based on the stations in the United States historical climatological network reported by Karl, Williams, Quinlan & Boden.

  14. Incorporating evolutionary processes into population viability models.

    PubMed

    Pierson, Jennifer C; Beissinger, Steven R; Bragg, Jason G; Coates, David J; Oostermeijer, J Gerard B; Sunnucks, Paul; Schumaker, Nathan H; Trotter, Meredith V; Young, Andrew G

    2015-06-01

    We examined how ecological and evolutionary (eco-evo) processes in population dynamics could be better integrated into population viability analysis (PVA). Complementary advances in computation and population genomics can be combined into an eco-evo PVA to offer powerful new approaches to understand the influence of evolutionary processes on population persistence. We developed the mechanistic basis of an eco-evo PVA using individual-based models with individual-level genotype tracking and dynamic genotype-phenotype mapping to model emergent population-level effects, such as local adaptation and genetic rescue. We then outline how genomics can allow or improve parameter estimation for PVA models by providing genotypic information at large numbers of loci for neutral and functional genome regions. As climate change and other threatening processes increase in rate and scale, eco-evo PVAs will become essential research tools to evaluate the effects of adaptive potential, evolutionary rescue, and locally adapted traits on persistence. PMID:25494697

  15. Incorporating 3-dimensional models in online articles

    PubMed Central

    Cevidanes, Lucia H. S.; Ruellas, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-01-01

    Introduction The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions When submitting manuscripts, authors can

  16. Incorporating RTI in a Hybrid Model of Reading Disability.

    PubMed

    Spencer, Mercedes; Wagner, Richard K; Schatschneider, Christopher; Quinn, Jamie; Lopez, Danielle; Petscher, Yaacov

    2014-08-01

    The present study seeks to evaluate a hybrid model of identification that incorporates response-to-intervention (RTI) as one of the key symptoms of reading disability. The one-year stability of alternative operational definitions of reading disability was examined in a large-scale sample of students who were followed longitudinally from first to second grade. The results confirmed previous findings of limited stability for single-criterion operational definitions of reading disability. However, substantially greater stability was obtained for a hybrid model of reading disability that incorporates RTI with other common symptoms of reading disability. PMID:25422531

  17. Incorporating parametric uncertainty into population viability analysis models

    USGS Publications Warehouse

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
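
The two-loop structure described above can be sketched with a generic log-growth model. This is illustrative only, not the piping plover model: the parameter draw sits in the outer replication loop, while the environmental draw sits in the inner time-step loop.

```python
import math
import random

def project_abundance(n0, mean_r, r_se, env_sd,
                      n_reps=1000, n_years=20, seed=7):
    """Two-loop population projection sketch.

    Parametric uncertainty: the growth rate r is drawn once per replicate
    (outer loop) from its sampling distribution N(mean_r, r_se).
    Temporal variance: environmental noise N(r, env_sd) is drawn every
    year (inner loop).  Returns the final abundance of each replicate."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_reps):
        r = rng.gauss(mean_r, r_se)              # one draw per replicate
        n = n0
        for _ in range(n_years):
            n *= math.exp(rng.gauss(r, env_sd))  # one draw per year
        finals.append(n)
    return finals
```

Extinction risk is then the fraction of replicates whose final abundance falls below a quasi-extinction threshold; setting r_se = 0 collapses the outer draw and, as the paper shows for the piping plover, typically understates that risk.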

  18. Incorporating transient storage in conjunctive stream-aquifer modeling

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chang; Medina, Miguel A.

    2003-09-01

    There has been growing interest in incorporating the transient storage effect into modeling solute transport in streams. In particular, for a smaller mountain stream where flow is fast and the flow field is irregular (a favorable environment to induce dead zones along the stream), long tails are normally observed in the stream tracer data, and adding transient storage terms in the advection-dispersion transport equation can result in more accurate simulation. While previous studies on transient storage modeling account for temporary, localized exchange between the stream and the shallow groundwater in the hyporheic zone, larger-scale exchange with the groundwater in the underlying aquifer has rarely been included or properly coupled to surface water modeling. In this paper, we complement previous modeling efforts by incorporating the transient storage concept in a conjunctive stream-aquifer model. Three well-documented and widely used USGS models have been coupled to form the core of this conjunctive model: MODFLOW handles the groundwater flow in the aquifer; DAFLOW accurately computes unsteady streamflow by means of the diffusive wave routing technique, as well as stream-aquifer exchange simulated as streambed leakage; and MOC3D computes solute transport in the groundwater zone. In addition, an explicit finite difference package was developed to incorporate the one-dimensional transient storage equations for solute transport in streams. The quadratic upstream interpolation (QUICK) algorithm is employed to improve the accuracy of spatial differencing. An adaptive stepsize control algorithm for the Runge-Kutta method is incorporated to increase overall model efficiency. Results show that the conjunctive stream-aquifer model with transient storage handles the bank storage effect well under a flood event.
When it is applied over a stream network, the results also show that the stream-aquifer interaction acts as a strong source or sink along the stream and is too
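
The one-dimensional transient storage equations referred to above couple in-channel advection-dispersion to first-order exchange with a storage zone. The sketch below is a minimal explicit time step in the standard OTIS/Bencala-Walters form, with upwind advection and end cells held fixed; it illustrates the equations, not the QUICK/adaptive Runge-Kutta scheme the paper implements.

```python
def ts_step(c, cs, dt, dx, u, d, alpha, area_ratio):
    """One explicit finite-difference step of the 1-D transient storage
    equations: channel concentration c obeys advection-dispersion plus
    alpha*(cs - c) exchange; storage concentration cs relaxes toward c at
    rate alpha*(A/As).  Upwind advection; boundary cells held fixed.
    area_ratio is A/As (channel to storage-zone cross-sectional area)."""
    n = len(c)
    new_c = c[:]
    for i in range(1, n - 1):
        adv = -u * (c[i] - c[i - 1]) / dx
        disp = d * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        exch = alpha * (cs[i] - c[i])
        new_c[i] = c[i] + dt * (adv + disp + exch)
    new_cs = [s + dt * alpha * area_ratio * (ci - s) for ci, s in zip(c, cs)]
    return new_c, new_cs
```

The slow release from the storage zone (the alpha terms) is what produces the long tails seen in mountain-stream tracer data.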

  19. Incorporating RTI in a Hybrid Model of Reading Disability

    ERIC Educational Resources Information Center

    Spencer, Mercedes; Wagner, Richard K.; Schatschneider, Christopher; Quinn, Jamie M.; Lopez, Danielle; Petscher, Yaacov

    2014-01-01

    The present study seeks to evaluate a hybrid model of identification that incorporates response to instruction and intervention (RTI) as one of the key symptoms of reading disability. The 1-year stability of alternative operational definitions of reading disability was examined in a large-scale sample of students who were followed longitudinally…

  20. Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    SciTech Connect

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the relevant subcultures, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag talk she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood that their constituents will engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  1. How to incorporate generic refraction models into multistatic tracking algorithms

    NASA Astrophysics Data System (ADS)

    Crouse, D. F.

    The vast majority of literature published on target tracking ignores the effects of atmospheric refraction. When refraction is considered, the solutions are generally tailored to a simple exponential atmospheric refraction model. This paper discusses how arbitrary refraction models can be incorporated into tracking algorithms. Attention is paid to multistatic tracking problems, where uncorrected refractive effects can worsen track accuracy and consistency in centralized tracking algorithms, and can lead to difficulties in track-to-track association in distributed tracking filters. Monostatic and bistatic track initialization using refraction-corrupted measurements is discussed. The results are demonstrated using an exponential refractive model, though an arbitrary refraction profile can be substituted.

  2. Incorporating Field Intelligence Into Conceptual Rainfall-runoff Models

    NASA Astrophysics Data System (ADS)

    Vache, K.; McDonnell, J.; McGuire, K.

    2003-12-01

    A major challenge in the hydrological sciences is to incorporate observed physical processes into general hydrological models with minimal data requirements and limited model complexity. One approach is to move away from discharge-based calibration schemes, which often assume model structures to be correct, and allow field observations to inform and test new model structures. The use of this knowledge will contribute to (1) the development of an expanded set of variables to verify hydrological model performance and reflect overall watershed function, and (2) the provision of useful information regarding the development of model structures and landscape discretizations. We identify a set of three variables that focus on the composition of stream water, using artificial hydrograph separations to provide estimates of the time source (e.g., event vs. pre-event) and the geographic source (e.g., hillslope vs. riparian) of streamflow, and explicitly accounting for mass transfer to provide estimates of residence time. In addition to these variables, we present a set of methods and data designed to incorporate experimental understanding directly into the model structure and catchment discretization. These ideas are illustrated through application at the H.J. Andrews Experimental Forest's Lookout Creek watershed in the western Cascades of Oregon.
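
The time-source separation mentioned above rests on the standard two-component conservative-tracer mass balance; a minimal implementation:

```python
def event_water_fraction(c_stream, c_event, c_pre_event):
    """Two-component time-source hydrograph separation.

    From the conservative tracer mass balance
        c_stream = f * c_event + (1 - f) * c_pre_event,
    solve for f, the fraction of streamflow that is event water.
    Tracer values might be, e.g., delta-18O per mil."""
    return (c_stream - c_pre_event) / (c_event - c_pre_event)
```

The same mixing equation with hillslope and riparian end members gives the geographic-source separation; the end members must differ measurably for the result to be meaningful.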

  3. Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC

    NASA Astrophysics Data System (ADS)

    Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina

    2015-04-01

    Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of the diazotrophic growth is thereby based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered as the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.

  4. Methods improvements incorporated into the SAPHIRE ASP models

    SciTech Connect

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-04-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  5. Importance of incorporating agriculture in conceptual rainfall-runoff models

    NASA Astrophysics Data System (ADS)

    de Boer-Euser, Tanja; Hrachowitz, Markus; Winsemius, Hessel; Savenije, Hubert

    2016-04-01

    Incorporating spatially variable information is a frequently discussed option to increase the performance of (semi-)distributed conceptual rainfall-runoff models. One of the methods to do this is to use this spatially variable information to delineate Hydrological Response Units (HRUs) within a catchment. In large parts of Europe the original forested land cover has been replaced by agricultural land cover. This change in land cover probably affects the dominant runoff processes in the area, for example by increasing the Hortonian overland flow component, especially on the flatter and higher elevated parts of the catchment. A change in runoff processes implies a change in HRUs as well. A previous version of our model distinguished wetlands (areas close to the stream) from the remainder of the catchment. However, this configuration was not able to reproduce all fast runoff processes, both in summer and in winter. Therefore, this study tests whether the reproduction of fast runoff processes can be improved by incorporating an HRU which explicitly accounts for the effect of agriculture. A case study is carried out in the Ourthe catchment in Belgium. For this case study the relevance of different process conceptualisations is tested stepwise. Among the conceptualisations are Hortonian overland flow in summer and winter, reduced infiltration capacity due to a partly frozen soil, and the relative effect of rainfall and snow melt on this frozen soil. The results show that the named processes can make a large difference on an event basis, especially Hortonian overland flow in summer and the combination of rainfall and snow melt on (partly) frozen soil in winter. However, the differences diminish when the modelled period of several years is evaluated with standard metrics such as the Nash-Sutcliffe efficiency. These results emphasise on the one hand the importance of incorporating the effects of agriculture in conceptual models, and on the other hand the importance of more event
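    The frozen-soil effect discussed above can be sketched as a simple infiltration-excess (Hortonian) rule. The linear frost reduction and all numbers below are illustrative assumptions, not the paper's calibrated conceptualisation:

```python
def fast_runoff(rain_mm, infiltration_capacity_mm, frozen_fraction=0.0):
    """Infiltration-excess (Hortonian) overland flow for one time step.

    A partly frozen soil is represented by linearly reducing the
    infiltration capacity (an illustrative assumption); rainfall in
    excess of the reduced capacity becomes fast runoff.
    """
    effective_capacity = infiltration_capacity_mm * (1.0 - frozen_fraction)
    return max(0.0, rain_mm - effective_capacity)

# the same event produces no fast runoff on unfrozen soil,
# but does on a half-frozen soil
runoff_unfrozen = fast_runoff(10.0, 15.0)
runoff_frozen = fast_runoff(10.0, 15.0, frozen_fraction=0.5)
```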

  6. Incorporation of Hysteresis Effects into Magnetic Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Lee, J. Y.; Lee, S. J.; Melikhov, Y.; Jiles, D. C.; Garton, M.; Lopez, R.; Brasche, L.

    2004-02-01

    Hysteresis effects have usually been ignored in magnetic modeling because their multi-valued nature makes them difficult to incorporate into numerical calculations such as those based on finite elements. A linear approximation of magnetic permeability, or a nonlinear B-H curve formed by connecting the tips of the hysteresis loops, has been widely used in magnetic modeling for these types of calculations. We have employed the Jiles-Atherton (J-A) hysteresis model to develop a finite element method algorithm incorporating hysteresis effects. The J-A model is well suited to numerical analysis such as finite element modeling because of its small number of degrees of freedom and its simple equation form. A finite element method algorithm for hysteretic materials has been developed for estimating the volume and distribution of retained magnetic particles around a defect site. The volume of retained magnetic particles was found to depend not only on the existing current source strength but also on the remaining magnetization of a hysteretic material. The detailed algorithm and simulation results are presented.
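    For readers unfamiliar with the J-A model, a minimal sketch follows. It Euler-integrates a simplified form of the J-A equation (the alpha term in the usual denominator is dropped for numerical robustness), with illustrative, not fitted, parameter values:

```python
import math

def langevin(x):
    # anhysteretic Langevin function, with a small-argument series fallback
    return x / 3.0 if abs(x) < 1e-4 else 1.0 / math.tanh(x) - 1.0 / x

def ja_major_loop(H_max=5000.0, n=4000, Ms=1.6e6, a=1100.0,
                  alpha=1.6e-3, k=400.0):
    """Trace one major hysteresis loop M(H) from a simplified
    Jiles-Atherton equation, dM/dH = (Man - M) / (k * delta), where
    Man = Ms * L((H + alpha*M)/a) is the anhysteretic magnetization and
    delta = sign(dH). Parameters are illustrative, not a fit to any
    real material."""
    up = [H_max * i / n for i in range(-n, n + 1)]   # -H_max -> +H_max
    H_path = up + up[::-1]                           # ... and back down
    M, curve = -0.9 * Ms, []
    for H_prev, H in zip(H_path, H_path[1:]):
        dH = H - H_prev
        delta = 1.0 if dH > 0 else -1.0
        Man = Ms * langevin((H + alpha * M) / a)
        # clamp the unphysical region just after a field reversal
        dMdH = 0.0 if (Man - M) * delta < 0 else (Man - M) / (k * delta)
        M += dMdH * dH
        curve.append((H, M))
    return curve

curve = ja_major_loop()
half = len(curve) // 2
M_up0 = min(curve[:half], key=lambda p: abs(p[0]))[1]  # M at H ~ 0, ascending
M_dn0 = min(curve[half:], key=lambda p: abs(p[0]))[1]  # M at H ~ 0, descending
```

The loop is open at H = 0 (nonzero remanence of opposite sign on the two branches), which is the hysteretic behavior a single-valued B-H curve cannot represent.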

  7. Stochastic Human Exposure and Dose Simulation Model for Wood Preservatives

    EPA Science Inventory

    SHEDS-Wood (Stochastic Human Exposure and Dose Simulation Model for Wood Preservatives) is a physically-based stochastic model that was developed to quantify exposure and dose of children to wood preservatives on treated playsets and residential decks. Probabilistic inputs are co...

  8. USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES

    EPA Science Inventory

    A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...
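    The probabilistic structure of such a model can be illustrated with a minimal Monte Carlo sketch. Every distribution and parameter value below is hypothetical, chosen only to show the approach; none of it reproduces the actual SHEDS-Wood inputs or SAS code:

```python
import random

def simulate_doses(n=20000, seed=1):
    """Monte Carlo sketch of a SHEDS-style exposure calculation: dermal
    dose of a child contacting treated-wood surface residue. All
    distributions and bounds are hypothetical placeholders."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n):
        residue = rng.lognormvariate(0.0, 0.8)            # dislodgeable residue, ug/cm2
        transfer = rng.uniform(0.05, 0.30)                # residue-to-skin transfer fraction
        area = max(50.0, rng.normalvariate(350.0, 80.0))  # contacted skin area, cm2
        events = rng.randint(1, 5)                        # contact events per day
        weight = max(8.0, rng.normalvariate(15.0, 3.0))   # body weight, kg
        doses.append(residue * transfer * area * events / weight)  # ug/kg/day
    doses.sort()
    return doses

doses = simulate_doses()
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
```

Reporting percentiles of the simulated dose distribution, rather than a single point estimate, is the essential output of this kind of probabilistic assessment.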

  9. Cirrus cloud model parameterizations: Incorporating realistic ice particle generation

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Dodd, G. C.; Starr, David OC.

    1990-01-01

    Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both through the introduction of new particles and through the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze droplets, cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, based on detailed microphysical model findings, for incorporation into improved cirrus growth models. One example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an example of this approach, the results of cloud microphysical simulations are presented, showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth models.

  10. Geomagnetic field models incorporating physical constraints on the secular variation

    NASA Technical Reports Server (NTRS)

    Constable, Catherine; Parker, Robert L.

    1993-01-01

    This proposal has been concerned with methods for constructing geomagnetic field models that incorporate physical constraints on the secular variation. The principal goal accomplished is the development of flexible algorithms designed to test whether the frozen flux approximation is adequate to describe the available geomagnetic data and their secular variation throughout this century. These have been applied to geomagnetic data from both the early and middle parts of this century and convincingly demonstrate that there is no need to invoke violations of the frozen flux hypothesis in order to satisfy the available geomagnetic data.

  11. Incorporating Statistical Topic Models in the Retrieval of Healthcare Documents

    PubMed Central

    Caballero, Karla; Akella, Ram

    2015-01-01

    Patients often search for information on the web about treatments and diseases after they are discharged from the hospital. However, searching for medical information on the web poses challenges due to related terms and synonymy for the same disease and treatment. In this paper, we present a method that combines Statistical Topic Models, Language Models and Natural Language Processing to retrieve healthcare related documents. In addition, we test whether the incorporation of terms extracted from the patient’s discharge summary improves the retrieval performance. We show that the proposed framework outperformed the winner of the CLEF eHealth 2013 retrieval challenge by 68% in the MAP measure (0.5226 vs 0.3108) and by 13% in NDCG (0.5202 vs 0.3637). Compared with standard language models, we obtain an improvement of 92% in MAP (0.2666) and 45% in NDCG (0.3637). PMID:26306280
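    The core idea of interpolating a document language model with a topic-model term distribution can be sketched as follows. This is a generic LM+topic combination on toy data, far simpler than the paper's full framework, and all corpus/topic values are invented:

```python
from collections import Counter

def lm_prob(term, doc, collection, mu=10.0):
    """Dirichlet-smoothed unigram language model P(term | doc)."""
    p_coll = collection[term] / sum(collection.values())
    return (Counter(doc)[term] + mu * p_coll) / (len(doc) + mu)

def score(query, doc, collection, doc_topics, topic_terms, lam=0.7):
    """Query likelihood with the document LM interpolated against a
    topic model: P(t|d) = lam*P_LM(t|d) + (1-lam)*sum_z P(t|z)P(z|d)."""
    s = 1.0
    for t in query:
        p_topic = sum(w * topic_terms[z].get(t, 1e-6)
                      for z, w in doc_topics.items())
        s *= lam * lm_prob(t, doc, collection) + (1 - lam) * p_topic
    return s

# toy corpus: d2 never mentions "insulin" and belongs to another topic
d1 = ["insulin", "dose", "diabetes", "patient"]
d2 = ["heart", "surgery", "recovery", "patient"]
collection = Counter(d1 + d2)
topic_terms = {0: {"insulin": 0.3, "diabetes": 0.3, "dose": 0.2, "patient": 0.2},
               1: {"heart": 0.4, "surgery": 0.3, "recovery": 0.2, "patient": 0.1}}
s1 = score(["insulin"], d1, collection, {0: 0.9, 1: 0.1}, topic_terms)
s2 = score(["insulin"], d2, collection, {0: 0.1, 1: 0.9}, topic_terms)
```

The topic term lets a document score well on related vocabulary it shares a topic with, which is the synonymy problem the abstract describes.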

  12. Incorporating groundwater-surface water interaction into river management models.

    PubMed

    Valerio, Allison; Rajaram, Harihar; Zagona, Edith

    2010-01-01

    Accurate representation of groundwater-surface water interactions is critical to modeling low river flows in the semi-arid southwestern United States. Although a number of groundwater-surface water models exist, they are seldom integrated with river operation/management models. A link between the object-oriented river and reservoir operations model, RiverWare, and the groundwater model, MODFLOW, was developed to incorporate groundwater-surface water interaction processes, such as river seepage/gains, riparian evapotranspiration, and irrigation return flows, into a rule-based water allocations model. An explicit approach is used in which the two models run in tandem, exchanging data once in each computational time step. Because the MODFLOW grid is typically at a finer resolution than RiverWare objects, the linked model employs spatial interpolation and summation for compatible communication of exchanged variables. The performance of the linked model is illustrated through two applications in the Middle Rio Grande Basin in New Mexico where overappropriation impacts endangered species habitats. In one application, the linked model results are compared with historical data; the other illustrates use of the linked model for determining management strategies needed to attain an in-stream flow target. The flows predicted by the linked model at gauge locations are reasonably accurate except during a few very low flow periods when discrepancies may be attributable to stream gaging uncertainties or inaccurate documentation of diversions. The linked model accounted for complex diversions, releases, groundwater pumpage, irrigation return flows, and seepage between the groundwater system and canals/drains to achieve a schedule of releases that satisfied the in-stream target flow. PMID:20412319
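    The explicit tandem-coupling scheme can be illustrated with a two-box sketch: a one-reach "river model" and a one-cell "aquifer model" exchange seepage once per time step. All coefficients are hypothetical, and the RiverWare/MODFLOW grid interpolation and operating rules are omitted:

```python
def run_coupled(steps=600, dt=1.0):
    """Explicit tandem coupling: each step, the river model computes
    seepage from the current stage and head, then both models advance.
    Seepage q = C * (stage - head); positive means the river loses water.
    (All coefficients are hypothetical placeholders.)"""
    stage, head = 10.0, 6.0        # river stage and aquifer head [m]
    C = 0.05                       # riverbed conductance [1/day]
    inflow, k_out = 1.0, 0.08      # upstream inflow, stage-discharge coefficient
    pumping = 0.1                  # groundwater withdrawal
    seepage = 0.0
    for _ in range(steps):
        seepage = C * (stage - head)                      # exchanged once per step
        stage += dt * (inflow - k_out * stage - seepage)  # river mass balance
        head += dt * (seepage - pumping)                  # aquifer mass balance
    return stage, head, seepage

stage, head, seepage = run_coupled()
```

At equilibrium the seepage balances the pumping (here 0.1), illustrating how river operations and groundwater stresses become mutually consistent through the per-step data exchange.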

  13. SAI (Systems Applications, Incorporated) Urban Airshed Model. Model

    SciTech Connect

    Schere, K.L.

    1985-06-01

    This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest included in the UAM simulations are O3, NO, NO2, and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program, which accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system level 38R5A. Memory requirement is 80K.

  14. A mathematical model for incorporating biofeedback into human postural control

    PubMed Central

    2013-01-01

    Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model’s degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional torque to the postural
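    The paper's central modeling choice, biofeedback as an additional torque added to the postural controller torque, can be sketched with a single-link inverted pendulum. The gains, anthropometrics and dead-zone display logic below are illustrative assumptions, not the fitted two-link model:

```python
import math

def simulate_stance(feedback_gain=0.0, dead_zone_deg=0.5,
                    theta0_deg=2.0, T=15.0, dt=0.001):
    """Single-link inverted-pendulum stance model with a PD postural
    controller; biofeedback is modeled as an ADDITIONAL torque that
    switches on whenever sway leaves the display dead zone.
    (All parameter values are illustrative.)"""
    m, l, g = 70.0, 0.9, 9.81            # body mass [kg], COM height [m]
    I = m * l * l                        # point-mass moment of inertia
    Kp, Kd = 1200.0, 300.0               # PD gains [N m/rad, N m s/rad]
    theta = math.radians(theta0_deg)
    omega, peak = 0.0, 0.0
    for _ in range(int(T / dt)):
        fb = 0.0
        if abs(math.degrees(theta)) > dead_zone_deg:   # display active
            fb = feedback_gain * math.copysign(1.0, theta)
        torque = Kp * theta + Kd * omega + fb          # controller + biofeedback
        domega = (m * g * l * theta - torque) / I      # small-angle dynamics
        omega += domega * dt
        theta += omega * dt
        peak = max(peak, abs(math.degrees(theta)))
    return math.degrees(theta), peak

final_no_fb, peak_no_fb = simulate_stance(feedback_gain=0.0)
final_fb, peak_fb = simulate_stance(feedback_gain=20.0)
```

The point of the structure is that the feedback term is scalable and independent of how many links or sensory channels the underlying postural model has, matching the paper's first objective.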

  15. Active shape models incorporating isolated landmarks for medical image annotation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Stieltjes, Bram; Maier-Hein, Klaus H.

    2014-03-01

    Apart from their robustness in anatomic surface segmentation, purely surface based 3D Active Shape Models lack the ability to automatically detect and annotate non-surface key points of interest. However, annotation of anatomic landmarks is desirable, as it yields additional anatomic and functional information. Moreover, landmark detection might help to further improve accuracy during ASM segmentation. We present an extension of surface-based 3D Active Shape Models incorporating isolated non-surface landmarks. Positions of isolated and surface landmarks are modeled conjoint within a point distribution model (PDM). Isolated landmark appearance is described by a set of haar-like features, supporting local landmark detection on the PDM estimates using a kNN-Classifier. Landmark detection was evaluated in a leave-one-out cross validation on a reference dataset comprising 45 CT volumes of the human liver after shape space projection. Depending on the anatomical landmark to be detected, our experiments have shown in about 1/4 up to more than 1/2 of all test cases a significant improvement in detection accuracy compared to the position estimates delivered by the PDM. Our results encourage further research with regard to the combination of shape priors and machine learning for landmark detection within the Active Shape Model framework.

  16. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations which usually involve EVA has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific for certain camera/lens combinations and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.

  17. Incorporation of multiple cloud layers for ultraviolet radiation modeling studies

    NASA Technical Reports Server (NTRS)

    Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.

    1994-01-01

    Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure by approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and a multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of biological effects of UV exposure increase, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
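    A crude way to see how multiple layers combine is a random-overlap product of per-layer effective transmittances. This is only a stand-in for the multiple-scattering radiative transfer in the study; the formula and values are illustrative:

```python
def surface_uv_fraction(layers):
    """Random-overlap sketch for multiple cloud layers: each layer has a
    fractional sky cover cf and a cloud transmittance t, so its effective
    transmittance is (1 - cf * (1 - t)); independent layers multiply.
    (Illustrative only; real multiple-scattering effects are ignored.)"""
    frac = 1.0
    for cf, t in layers:
        frac *= 1.0 - cf * (1.0 - t)
    return frac

clear_sky = surface_uv_fraction([])                       # no clouds
overcast = surface_uv_fraction([(1.0, 0.4)])              # one full layer
two_layer = surface_uv_fraction([(0.5, 0.4), (0.3, 0.6)]) # partial layers
```

Even this toy version shows why a single-layer treatment can misestimate the surface UV budget when several partial layers are present.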

  18. Tantalum strength model incorporating temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  19. Incorporating Plant Phenology Dynamics in a Biophysical Canopy Model

    NASA Technical Reports Server (NTRS)

    Barata, Raquel A.; Drewry, Darren

    2012-01-01

    The Multi-Layer Canopy Model (MLCan) is a vegetation model created to capture plant responses to environmental change. The model vertically resolves carbon uptake, water vapor and energy exchange at each canopy level by coupling photosynthesis, stomatal conductance and leaf energy balance. The model is forced by incoming shortwave and longwave radiation, as well as near-surface meteorological conditions. The original formulation of MLCan utilized canopy structural traits derived from observations. This project aims to incorporate a plant phenology scheme within MLCan allowing these structural traits to vary dynamically. In the plant phenology scheme implemented here, plant growth is dependent on environmental conditions such as air temperature and soil moisture. The scheme includes functionality that models plant germination, growth, and senescence. These growth stages dictate the variation in six different vegetative carbon pools: storage, leaves, stem, coarse roots, fine roots, and reproductive. The magnitudes of these carbon pools determine land surface parameters such as leaf area index, canopy height, rooting depth and root water uptake capacity. Coupling this phenology scheme with MLCan allows for a more flexible representation of the structure and function of vegetation as it responds to changing environmental conditions.

  20. Incorporation of chemical kinetic models into process control

    SciTech Connect

    Herget, C.J.; Frazer, J.W.

    1981-07-08

    An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor.
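    The "rate surface plus constrained optimization" step can be sketched as follows. The surface is a made-up stand-in for the empirically characterized one, and a brute-force grid search replaces the paper's nonlinear programming solver:

```python
def rate_surface(a, b):
    """Hypothetical reaction-rate surface over the feasible range of two
    reactant concentrations (a stand-in for the experimentally
    characterized surface)."""
    return 4.0 * a * b / (1.0 + a + 2.0 * b)   # toy saturating form

def cheapest_feed(target_rate=1.0, cost_a=1.0, cost_b=3.0, n=200):
    """Constrained optimization by exhaustive grid search: the cheapest
    reactant combination whose predicted rate meets the specified yield.
    (A grid suffices at this toy scale.)"""
    best = None
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            a, b = 5.0 * i / n, 5.0 * j / n
            if rate_surface(a, b) >= target_rate:
                cost = cost_a * a + cost_b * b
                if best is None or cost < best[0]:
                    best = (cost, a, b)
    return best

cost, a_opt, b_opt = cheapest_feed()
```

The resulting (a_opt, b_opt) plays the role of the nominal reactant concentrations around which the feedback controller (with its Smith predictor for the transport delay) then regulates.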

  1. A dengue model incorporating saturation incidence and human migration

    NASA Astrophysics Data System (ADS)

    Gakkhar, S.; Mishra, A.

    2015-03-01

    In this paper, a non-linear model has been proposed to investigate the effects of human migration on dengue dynamics. Human migration has been considered between two patches having different dengue strains. Due to migration, secondary infection is possible. Further, the secondary infection is considered in patch-2 only, as strain-2 in patch-2 is considered to be more severe than strain-1 in patch-1. A saturation incidence rate has been considered to incorporate behavioural changes towards the epidemic in the human population. The basic reproduction number has been computed. Four equilibrium states have been found and analyzed. Increasing the saturation rate decreases the threshold, thereby enhancing the stability of the disease-free state in both patches. Control on migration may lead to a change in the infection level of the patches.
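    The saturation-incidence ingredient can be isolated in a single-patch SIR sketch; the two-patch migration and two-strain structure of the paper's model are omitted, and parameter values are illustrative:

```python
def sir_saturated(beta=0.4, gamma=0.1, alpha=5.0, days=300, dt=0.1):
    """Single-patch SIR with saturated incidence beta*S*I/(1 + alpha*I):
    the alpha term damps transmission as prevalence (and hence protective
    behaviour) rises. Forward-Euler integration; illustrative values."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        incidence = beta * S * I / (1.0 + alpha * I)
        dS, dI, dR = -incidence, incidence - gamma * I, gamma * I
        S += dS * dt
        I += dI * dt
        R += dR * dt
    return S, I, R

S, I, R = sir_saturated()
```

Raising alpha lowers the effective force of infection at high prevalence, which is the mechanism behind the stabilizing effect on the disease-free state noted in the abstract.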

  2. Incorporating Functional Gene Quantification into Traditional Decomposition Models

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Zhou, J.; Yin, H.; Wu, L.; Tiedje, J. M.; Schuur, E. A. G.; Konstantinidis, K.; Luo, Y.

    2014-12-01

    Incorporating new genetic quantification measurements into traditional substrate pool models represents a substantial challenge. These decomposition models are built around the idea that substrate availability, together with environmental drivers, limits carbon dioxide respiration rates. In this paradigm, microbial communities optimally adapt to a given substrate and environment on much shorter time scales than the carbon flux of interest. By characterizing the relative shift in biomass of these microbial communities, we informed previously poorly constrained parameters in traditional decomposition models. In this study we coupled a 9-month laboratory incubation study with quantitative gene measurements, traditional CO2 flux measurements, and initial soil organic carbon quantification. GeoChip 5.0 was used to quantify the functional genes associated with carbon cycling at 2 weeks, 3 months and 9 months. We then combined the genes which 'collapsed' over the experiment and assumed that this tracked the relative change in the biomass associated with the 'fast' pool. We further assumed that this biomass was proportional to the 'fast' SOC pool and thus were able to constrain the relative change in the fast SOC pool in our 3-pool decomposition model. We found that the biomass quantification described above, combined with traditional CO2 flux and SOC measurements, improves the transfer coefficient estimation in traditional decomposition models. Transfer coefficients are very difficult to characterize using traditional CO2 flux measurements, so DNA quantification provides new and significant information about the system. Over a 100-year simulation, these new biologically informed parameters resulted in an additional 10% of SOC loss over the traditionally informed parameters.
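    The 3-pool structure whose transfer coefficient the gene data help constrain can be sketched as a first-order pool model. Pool fractions, rate constants and the transfer value below are illustrative, not the study's calibrated parameters:

```python
def three_pool(soc0=100.0, fractions=(0.02, 0.28, 0.70),
               k=(0.5, 0.05, 0.001), transfer=0.2, years=100, dt=0.1):
    """First-order three-pool (fast/slow/passive) decomposition sketch.
    Each pool decays at rate k_i; the transfer coefficient routes part
    of the decomposed fast C to the slow pool instead of CO2 -- this is
    the parameter the gene quantification helps constrain.
    (All values illustrative.)"""
    pools = [soc0 * f for f in fractions]
    co2 = 0.0
    for _ in range(int(years / dt)):
        dec = [ki * p * dt for ki, p in zip(k, pools)]  # decomposed C this step
        pools = [p - d for p, d in zip(pools, dec)]
        pools[1] += transfer * dec[0]                   # fast -> slow transfer
        co2 += (1.0 - transfer) * dec[0] + dec[1] + dec[2]
    return pools, co2

pools, co2 = three_pool()
```

Because many (transfer, k) combinations yield similar cumulative CO2, the transfer coefficient is weakly identified from flux data alone, which is why the independent biomass constraint matters.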

  3. Incorporation of Helium Demixing in Interior Structure Models of Saturn

    NASA Astrophysics Data System (ADS)

    Tian, Bob; Stanley, Sabine; Valencia, Diana

    2015-04-01

    Experiments and ab initio calculations of hydrogen-helium mixtures predict a phase separation at pressure-temperature conditions relevant to Saturn's interior. At depths where this occurs, droplets of helium form out of the mixture and sink towards the deep interior, where they re-mix, thereby depleting helium above the layer over time while enriching the concentration below it. In dynamo modelling, the axisymmetric nature of Saturn's magnetic field is so far best explained by the inclusion of a stably stratified layer just below the depth at which hydrogen metallizes (approximately 0.65 RS). Stable stratification at that depth could occur if the compositional gradients produced by the helium rain process described above are great enough to suppress convection in the de-mixing layers. Thus, we first developed a range of interior structure models consistent with available constraints on the gravity field and atmospheric composition. The hydrogen-helium de-mixing curve was then incorporated in calculations for some of these models to assess its feasibility in compositionally stratifying the top of the dynamo source region. We found that when helium rain is taken into account, a stably stratified layer approximately 0.1-0.15 RS in thickness can exist atop the dynamo source region, consistent with the thicknesses needed in dynamo models to axisymmetrize the observable magnetic field. Furthermore, inertial gravity waves could be excited in such thick stably stratified regions. These may be detectable by asteroseismology techniques, or by analysis of the wave modes' gravitational interaction with Saturn's ring particles. Thus, profiles of sound speed and Brunt-Vaisala frequency were also calculated for all of the interior structure models studied, for comparison with possible seismic studies in the future.

  4. Digital terrain model generalization incorporating scale, semantic and cognitive constraints

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Papadogiorgaki, Maria

    2014-05-01

    Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts in a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in their cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information; that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self-Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains more of a black-box procedure, not capable of coping with possible particularities and semantics of the application at hand. E.g., for coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased" (stratified) to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final
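    The SOM machinery that the study extends can be sketched at toy scale: a plain SOM that generalizes a small synthetic DTM to a coarse codebook. None of the semantic or cognitive extensions described above are included, and all sizes and schedules are illustrative:

```python
import math
import random

def train_som(data, m, n, iters=3000, seed=2):
    """Minimal self-organizing map: learn an m x n codebook summarizing
    `data` (a list of equal-length feature tuples). This is the generic
    SOM the abstract builds on, not the authors' extended variant."""
    rng = random.Random(seed)
    dim = len(data[0])
    codebook = [list(rng.choice(data)) for _ in range(m * n)]
    grid = [(i // n, i % n) for i in range(m * n)]
    for t in range(iters):
        lr = 0.5 * (1.0 - t / iters)                     # decaying learning rate
        sigma = max(0.6, (m / 2.0) * (1.0 - t / iters))  # shrinking neighbourhood
        x = rng.choice(data)
        bmu = min(range(m * n),                          # best-matching unit
                  key=lambda c: sum((codebook[c][d] - x[d]) ** 2
                                    for d in range(dim)))
        bi, bj = grid[bmu]
        for c, (gi, gj) in enumerate(grid):
            d2 = (gi - bi) ** 2 + (gj - bj) ** 2
            h = lr * math.exp(-d2 / (2.0 * sigma * sigma))
            for d in range(dim):
                codebook[c][d] += h * (x[d] - codebook[c][d])
    return codebook

# toy DTM: (x, y, z) samples of a ridge along the diagonal y = x
terrain = [(x / 10.0, y / 10.0, 1.0 - abs(x - y) / 10.0)
           for x in range(11) for y in range(11)]
codebook = train_som(terrain, 4, 4)   # 121 cells generalized to 16 nodes
```

The study's extension amounts to biasing this update strategy with scale, semantic and topologic information rather than treating all samples uniformly.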

  5. 4-D Subduction Models Incorporating an Upper Plate

    NASA Astrophysics Data System (ADS)

    Stegman, D.; Capitanio, F. A.; Moresi, L.; Mueller, D.; Clark, S.

    2007-12-01

    Thus far, relatively simplistic models of free subduction have been employed in which the trench and plate kinematics are emergent features completely driven by the negative buoyancy of the slab. This has allowed us to build a fundamental understanding of subduction processes such as the kinematics of subduction zones, the strength of slabs, and mantle flow-plate coupling. Additionally, these efforts have helped develop appreciable insight into subduction processes when considering the energetics of subduction, in particular how energy is dissipated in various parts of the system, such as generating mantle flow and bending the plate. We are now in a position to build upon this knowledge and shift our focus towards the dynamic controls of deformation in the upper plate (vertical motions, extension, shortening, and dynamic topography). Here, the state of stress in the overriding plate is the product of the delicate balance of large tectonic forces in a highly-coupled system, and must therefore include all components of the system: the subducting plate, the overriding plate, and the underlying mantle flow which couples everything together. We will present some initial results of fully dynamic 3-D models of free subduction which incorporate an overriding plate, and systematically investigate how variations in the style and strength of subduction are expressed by the tectonics of the overriding plate. Deformation is driven in the overriding plate by the forces generated from the subducting plate and the type of boundary condition on the non-subducting side of the overriding plate (either fixed or free). Ultimately, these new models will help to address a range of issues: how the overriding plate influences the plate and trench kinematics; the formation and evolution of back-arc basins; the variation of tractions on the base of the overriding plate; the nature of forces which drive plates; and the dynamic controls on seismic coupling at the plate boundary.

  6. Incorporating spatial correlations into multispecies mean-field models

    NASA Astrophysics Data System (ADS)

    Markham, Deborah C.; Simpson, Matthew J.; Maini, Philip K.; Gaffney, Eamonn A.; Baker, Ruth E.

    2013-11-01

    In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modeling interactions between such species, we often make use of the mean-field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean-field approximation is only used in appropriate settings. In circumstances where the mean-field approximation is unsuitable, we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper, we provide a method that overcomes many of the failures of the mean-field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multispecies case and show results specific to a two-species problem. We compare averaged discrete results to both the mean-field approximation and our improved method, which incorporates spatial correlations. We note that the mean-field approximation fails dramatically in some cases, predicting very different behavior from that seen upon averaging multiple realizations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behavior in all cases, thus providing a more reliable modeling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
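The mean-field approximation the abstract starts from can be written down in a few lines. The sketch below integrates the mean-field occupancy ODE for a single-species, volume-excluding birth-death process on a lattice; the rate names and values are illustrative, not taken from the paper, and the paper's contribution (the correlation PDEs that correct this approximation) is not shown.

```python
# Mean-field ODE for lattice occupancy C(t), neglecting spatial
# correlations: dC/dt = P_birth*C*(1-C) - P_death*C.
# (Illustrative sketch; rate names are not the paper's notation.)

def mean_field_occupancy(P_birth, P_death, C0, dt=0.01, T=200.0):
    """Forward-Euler integration of the mean-field occupancy ODE."""
    C = C0
    t = 0.0
    while t < T:
        C += dt * (P_birth * C * (1.0 - C) - P_death * C)
        t += dt
    return C

# The predicted steady state is C* = 1 - P_death/P_birth (when positive);
# the paper shows averaged lattice simulations can deviate from this.
C_final = mean_field_occupancy(P_birth=1.0, P_death=0.25, C0=0.05)
print(round(C_final, 3))  # -> 0.75
```

Comparing this deterministic prediction against averaged realizations of the discrete lattice process is exactly the test in which the mean-field approximation can fail and the correlation-aware method succeeds.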

  7. Incorporating seepage processes into a streambank stability model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Seepage processes are usually neglected in bank stability analyses although they can become a prominent failure mechanism under certain field conditions. This study incorporated the effects of seepage (i.e., seepage gradient forces and seepage erosion undercutting) into the Bank Stability and Toe Er...

  8. A PROBABILISTIC EXPOSURE ASSESSMENT FOR CHILDREN WHO CONTACT CCA-TREATED PLAYSETS AND DECKS USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL FOR THE WOOD PRESERVATIVE EXPOSURE SCENARIO

    EPA Science Inventory

    The U.S. Environmental Protection Agency has conducted a probabilistic exposure and dose assessment on the arsenic (As) and chromium (Cr) components of Chromated Copper Arsenate (CCA) using the Stochastic Human Exposure and Dose Simulation model for wood preservatives (SHEDS-Wood...

  9. Implementing the Standards: Incorporating Mathematical Modeling into the Curriculum.

    ERIC Educational Resources Information Center

    Swetz, Frank

    1991-01-01

    Following a brief historical review of the mechanism of mathematical modeling, examples are included that associate a mathematical model with given data (changes in sea level) and that model a real-life situation (process of parallel parking). Also provided is the rationale for the curricular implementation of mathematical modeling. (JJK)

  10. A Measurement Model for Likert Responses that Incorporates Response Time

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo-Seva, Urbano

    2007-01-01

    This article describes a model for response times that is proposed as a supplement to the usual factor-analytic model for responses to graded or more continuous typical-response items. The use of the proposed model together with the factor model provides additional information about the respondent and can potentially increase the accuracy of the…

  11. A new nonlinear Muskingum flood routing model incorporating lateral flow

    NASA Astrophysics Data System (ADS)

    Karahan, Halil; Gurarslan, Gurhan; Geem, Zong Woo

    2015-06-01

    A new nonlinear Muskingum flood routing model that takes the contribution from lateral flow into consideration was developed in the present study. The cuckoo search algorithm, a novel and robust metaheuristic, was used in the calibration and verification of the model parameters. The success and dependability of the proposed model were tested on five different sets of synthetic and real flood data. In each test case, the proposed model yielded better solutions than alternative models taken from the literature, indicating that it is well suited for use in flood routing problems.
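A minimal sketch of nonlinear Muskingum routing with a lateral-flow factor is shown below. The storage relation S = K[x·I + (1-x)·O]^m and the treatment of lateral flow as a multiplicative factor (1+beta) on inflow follow common formulations; the parameter values and hydrograph are invented for illustration, and no cuckoo-search calibration is shown.

```python
# Nonlinear Muskingum routing with lateral inflow modelled as beta*I.
# Storage: S = K*[x*I_tot + (1-x)*O]**m with I_tot = (1+beta)*I.
# (Illustrative sketch; parameters are NOT the paper's calibrated values.)

def route(inflow, K, x, m, beta, dt=1.0):
    outflow = [inflow[0]]
    S = K * inflow[0] ** m               # approximate initial steady state
    for I in inflow[1:]:
        I_tot = (1.0 + beta) * I
        # invert the storage relation for the outflow
        O = ((S / K) ** (1.0 / m) - x * I_tot) / (1.0 - x)
        O = max(O, 0.0)
        S += dt * (I_tot - O)            # continuity: dS/dt = I_tot - O
        outflow.append(O)
    return outflow

hydrograph = [22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32]
Q = route(hydrograph, K=0.8, x=0.2, m=1.6, beta=0.05)
```

In a calibration setting, K, x, m and beta would be the decision variables of the optimizer, scored against an observed outflow series.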

  12. Incorporating Temperature-driven Seasonal Variation in Survival, Growth, and Reproduction Models for Small Fish

    EPA Science Inventory

    Seasonal variation in survival and reproduction can be a large source of prediction uncertainty in models used for conservation and management. A seasonally varying matrix population model is developed that incorporates temperature-driven differences in mortality and reproduction...

  13. Incorporating uncertainty into high-resolution groundwater supply models

    USGS Publications Warehouse

    Rahman, A.; Hartono, S.; Carlson, D.; Willson, C.S.

    2003-01-01

    Groundwater modeling is a useful tool for evaluating whether an aquifer system is capable of supporting groundwater withdrawals over long periods of time and what effect, if any, such activity will have on the regional flow dynamics as well as on specific public water, agricultural and industrial supplies. An overview is given of an ongoing groundwater modeling study of the Chicot Aquifer in southwestern Louisiana, where a low-resolution groundwater model is being used to study the regional flow in the Chicot Aquifer and to provide boundary conditions for higher-resolution inset models created using telescopic mesh refinement (TMR).

  14. Incorporation of the planetary boundary layer in atmospheric models

    NASA Technical Reports Server (NTRS)

    Moeng, Chin-Hoh; Wyngaard, John; Pielke, Roger; Krueger, Steve

    1993-01-01

    The topics discussed include the following: perspectives on planetary boundary layer (PBL) measurements; current problems of PBL parameterization in mesoscale models; and convective cloud-PBL interactions.

  15. Progressive evaluation of incorporating information into a model building process

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Gupta, Hoshin; Savenije, Huub

    2014-05-01

    Catchments are open systems, meaning that the exact boundary conditions of the real system can never be fully determined in space and time. Models are therefore essential tools for capturing system behaviour spatially and extrapolating it temporally for prediction. In recent years, conceptual models have been the centre of attention, rather than so-called physically based models, which are often over-parameterized and encounter difficulties in up-scaling small-scale processes. Conceptual models, however, depend heavily on calibration, as one or more of their parameter values typically cannot be measured physically at the catchment scale. It is generally understood that increasing the complexity of a conceptual model, in order to better represent the heterogeneity of hydrological processes, typically makes parameter identification more difficult. However, the amount of information contributed by each model element (the control volumes, or so-called buckets; the interconnecting fluxes; the parameterizations, i.e. constitutive functions; and, finally, the parameter values) remains largely unquantified. Each of these components carries information on the transformation of forcing (precipitation) into runoff, yet the effect of each component, both singly and in combination, is not well understood. In this study we follow hierarchical steps of model building. First, the model structure is assembled from its building blocks (control volumes) and the fluxes interconnecting them; at this level, the effect of adding each control volume, and of the model architecture (the arrangement of control volumes and fluxes), can be evaluated. Second, the parameterization of the model is evaluated; for example, the effect of a specific stage-discharge relation for a control volume can be explored. Finally, in the last step of model building, the information gained from the parameter values is quantified. At each development level, the value of the added information is thus assessed.

  16. A quantum model of exaptation: incorporating potentiality into evolutionary theory.

    PubMed

    Gabora, Liane; Scott, Eric O; Kauffman, Stuart

    2013-09-01

    The phenomenon of preadaptation, or exaptation (wherein a trait that originally evolved to solve one problem is co-opted to solve a new problem) presents a formidable challenge to efforts to describe biological phenomena using a classical (Kolmogorovian) mathematical framework. We develop a quantum framework for exaptation with examples from both biological and cultural evolution. The state of a trait is written as a linear superposition of a set of basis states, or possible forms the trait could evolve into, in a complex Hilbert space. These basis states are represented by mutually orthogonal unit vectors, each weighted by an amplitude term. The choice of possible forms (basis states) depends on the adaptive function of interest (e.g., ability to metabolize lactose or thermoregulate), which plays the role of the observable. Observables are represented by self-adjoint operators on the Hilbert space. The possible forms (basis states) corresponding to this adaptive function (observable) are called eigenstates. The framework incorporates key features of exaptation: potentiality, contextuality, nonseparability, and emergence of new features. However, since it requires that one enumerate all possible contexts, its predictive value is limited, consistent with the assertion that there exists no biological equivalent to "laws of motion" by which we can predict the evolution of the biosphere. PMID:23567156
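The mechanics of the framework described above (a normalized superposition over basis states, with observation probabilities given by the Born rule) can be illustrated in a few lines. The three "possible forms" and their amplitudes below are purely invented for illustration.

```python
# A trait state as a normalized vector of complex amplitudes over
# basis states (possible forms); the probability of observing form i
# under a given adaptive-function "observable" is |a_i|^2 (Born rule).
# The amplitudes here are invented for illustration.
import math

def born_probabilities(amplitudes):
    """Normalize complex amplitudes and return |a_i|^2 probabilities."""
    norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
    return [abs(a / norm) ** 2 for a in amplitudes]

probs = born_probabilities([1 + 0j, 1j, -1 + 1j])  # three possible forms
print([round(p, 3) for p in probs])  # -> [0.25, 0.25, 0.5]
```

Changing the adaptive function of interest amounts to changing the basis (the observable), which reweights these probabilities — the contextuality the abstract emphasizes.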

  17. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    ERIC Educational Resources Information Center

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  18. A Model for Library Book Circulations Incorporating Loan Periods.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.; Fenton, Michael R.

    1994-01-01

    Proposes and explains a modification of the mixed Poisson model for library circulations which takes into account the periods when a book is out on loan and therefore unavailable for borrowing. Highlights include frequency of circulation distributions; negative binomial distribution; and examples of the model at two universities. (Contains 34…

  19. Incorporating model uncertainty into attribution of observed temperature change

    NASA Astrophysics Data System (ADS)

    Huntingford, Chris; Stott, Peter A.; Allen, Myles R.; Lambert, F. Hugo

    2006-03-01

    Optimal detection analyses have been used to determine the causes of past global warming, leading to the conclusion by the Third Assessment Report of the IPCC that "most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations". To date, however, these analyses have not taken full account of uncertainty in the modelled patterns of climate response due to differences in basic model formulation. To address this current "perfect model" assumption, we extend the optimal detection method to include, simultaneously, output from more than one GCM by introducing inter-model variance as an extra uncertainty. Applying the new analysis to three climate models, we find that the effects of both anthropogenic and natural factors are detected. We find that greenhouse gas forcing would very likely have resulted in greater warming than observed during the past half century if there had not been an offsetting cooling from aerosols and other forcings.

  20. A transient stochastic weather generator incorporating climate model uncertainty

    NASA Astrophysics Data System (ADS)

    Glenis, Vassilis; Pinamonti, Valentina; Hall, Jim W.; Kilsby, Chris G.

    2015-11-01

    Stochastic weather generators (WGs), which provide long synthetic time series of weather variables such as rainfall and potential evapotranspiration (PET), have found widespread use in water resources modelling. When conditioned upon the changes in climatic statistics (change factors, CFs) predicted by climate models, WGs provide a useful tool for climate impacts assessment and adaption planning. The latest climate modelling exercises have involved large numbers of global and regional climate models integrations, designed to explore the implications of uncertainties in the climate model formulation and parameter settings: so called 'perturbed physics ensembles' (PPEs). In this paper we show how these climate model uncertainties can be propagated through to impact studies by testing multiple vectors of CFs, each vector derived from a different sample from a PPE. We combine this with a new methodology to parameterise the projected time-evolution of CFs. We demonstrate how, when conditioned upon these time-dependent CFs, an existing, well validated and widely used WG can be used to generate non-stationary simulations of future climate that are consistent with probabilistic outputs from the Met Office Hadley Centre's Perturbed Physics Ensemble. The WG enables extensive sampling of natural variability and climate model uncertainty, providing the basis for development of robust water resources management strategies in the context of a non-stationary climate.
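The change-factor conditioning described above can be sketched simply: each sampled CF vector rescales the baseline climate statistics, and a time-evolution rule interpolates a CF between no change today and its full value at the horizon year. All numbers below (baseline rainfall, CF values, years) are invented for illustration; real CFs would come from a perturbed-physics ensemble sample.

```python
# Applying a sampled change-factor (CF) vector to baseline monthly
# statistics, with a simple linear time-evolution of the CFs.
# (Illustrative values only; not from the Hadley Centre PPE.)

def apply_change_factors(baseline_monthly, cf_vector):
    """Multiplicative CFs applied to baseline monthly mean rainfall."""
    return [m * cf for m, cf in zip(baseline_monthly, cf_vector)]

def cf_at_year(cf_horizon, year, base_year=2015, horizon=2080):
    """Linear evolution from 1.0 (no change) at base_year to the
    sampled CF value at the horizon year."""
    frac = (year - base_year) / float(horizon - base_year)
    return 1.0 + frac * (cf_horizon - 1.0)

baseline = [86, 65, 70, 58, 55, 60, 62, 67, 64, 82, 88, 90]   # mm/month
cfs_2080 = [1.10, 1.08, 1.02, 0.97, 0.92, 0.88,
            0.85, 0.87, 0.95, 1.03, 1.07, 1.12]
cfs_2050 = [cf_at_year(cf, 2050) for cf in cfs_2080]
future = apply_change_factors(baseline, cfs_2050)
```

Repeating this for many CF vectors drawn from the ensemble, and feeding each scaled climate into the weather generator, is what propagates climate-model uncertainty through to the impact study.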

  1. Incorporating tissue absorption and scattering in rapid ultrasound beam modeling

    NASA Astrophysics Data System (ADS)

    Christensen, Douglas; Almquist, Scott

    2013-02-01

    We have developed a new approach for modeling the propagation of an ultrasound beam in inhomogeneous tissues such as encountered with high-intensity focused ultrasound (HIFU) for treatment of various diseases. This method, called the hybrid angular spectrum (HAS) approach, alternates propagation steps between the space and the spatial frequency domains throughout the inhomogeneous regions of the body; the use of spatial Fourier transforms makes this technique considerably faster than other modeling approaches (about 10 sec for a 141 x 141 x 121 model). In HIFU thermal treatments, the acoustic absorption property of the tissues is of prime importance since it leads to temperature rise and the achievement of desired thermal dose at the treatment site. We have recently added to the HAS method the capability of independently modeling tissue absorption and scattering, the two components of acoustic attenuation. These additions improve the predictive value of the beam modeling and more accurately describes the thermal conditions expected during a therapeutic ultrasound exposure. Two approaches to explicitly model scattering were developed: one for scattering sizes smaller than a voxel, and one when the scattering scale is several voxels wide. Some anatomically realistic examples that demonstrate the importance of independently modeling absorption and scattering are given, including propagation through the human skull for noninvasive brain therapy and in the human breast for treatment of breast lesions.
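The frequency-domain half of the alternation described above is the classical angular spectrum step, which the sketch below implements for a single homogeneous slab. Grid size, medium, and frequency are illustrative, and the HAS method's space-domain corrections for inhomogeneous tissue (and the absorption/scattering split) are not shown.

```python
# One homogeneous angular-spectrum propagation step: FFT the field,
# advance each plane-wave component by exp(i*kz*dz), inverse FFT.
# (Illustrative sketch of the frequency-domain step only.)
import numpy as np

def angular_spectrum_step(p, dx, dz, k):
    """Propagate a 2-D complex pressure plane p a distance dz in a
    homogeneous medium with wavenumber k."""
    ny, nx = p.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # complex sqrt makes evanescent components decay rather than propagate
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(p) * np.exp(1j * kz * dz))

k = 2.0 * np.pi * 1.0e6 / 1500.0        # 1 MHz beam in water (c = 1500 m/s)
p0 = np.ones((64, 64), dtype=complex)   # normally incident plane wave
p1 = angular_spectrum_step(p0, dx=1.0e-4, dz=1.0e-3, k=k)
# a uniform plane wave just accrues phase exp(1j*k*dz) with amplitude 1
```

Because each step is two FFTs and a pointwise multiply, this is what makes the approach so much faster than finite-difference propagation on the same grid.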

  2. Incorporating Uncoupled Stress Effects into FEHM Modeling of HDR Reservoirs

    SciTech Connect

    Birdsell, Stephen A.

    1988-07-01

    Thermal and pressure-induced stress effects are extremely important aspects of modeling HDR reservoirs because these effects will control the transient behavior of reservoir flow impedance, water loss and flow distribution. Uncoupled stress effects will be added to the existing three-dimensional Finite Element Heat and Mass Transfer (FEHM) model (Birdsell, 1988) in order to more realistically simulate HDR reservoirs. Stress effects will be uncoupled in the new model since a fully-coupled code will not be available for some time.

  3. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  4. Viral dynamics model with CTL immune response incorporating antiretroviral therapy.

    PubMed

    Wang, Yan; Zhou, Yicang; Brauer, Fred; Heffernan, Jane M

    2013-10-01

    We present two HIV models that include the CTL immune response, antiretroviral therapy and a full logistic growth term for uninfected CD4+ T-cells. The difference between the two models lies in the inclusion or omission of a loss term in the free virus equation. We obtain critical conditions for the existence of one, two or three steady states, and analyze the stability of these steady states. Through numerical simulation we find substantial differences in the reproduction numbers and the behaviour at the infected steady state between the two models, for certain parameter sets. We explore the effect of varying the combination drug efficacy on model behaviour, and the possibility of reconstituting the CTL immune response through antiretroviral therapy. Furthermore, we employ Latin hypercube sampling to investigate the existence of multiple infected equilibria. PMID:22930342
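The structural difference between the two models (inclusion or omission of the loss term in the free-virus equation) can be made concrete with a stripped-down simulation. This sketch uses logistic CD4+ T-cell growth but omits the CTL compartment and therapy efficacy; all parameter values are illustrative, not the paper's.

```python
# Minimal target-cell HIV model with logistic T-cell growth; the
# loss_term flag toggles the -k*V*T term in the free-virus equation,
# which is the structural difference between the two models.
# (Illustrative parameters; no CTL compartment or drug efficacy shown.)

def simulate(loss_term, dt=0.001, days=200):
    # per-day rates: growth r, capacity Tmax, infection k, infected-cell
    # death delta, burst size N, virion clearance c
    r, Tmax, k, delta, N, c = 0.5, 1500.0, 2e-4, 0.3, 100.0, 3.0
    T, I, V = 1000.0, 0.0, 1e-3
    for _ in range(int(days / dt)):
        dT = r * T * (1.0 - (T + I) / Tmax) - k * V * T
        dI = k * V * T - delta * I
        dV = N * delta * I - c * V - (k * V * T if loss_term else 0.0)
        T += dt * dT
        I += dt * dI
        V += dt * dV
    return T, I, V

T_loss, _, V_loss = simulate(loss_term=True)
T_noloss, _, V_noloss = simulate(loss_term=False)
```

With the loss term, effective virion clearance becomes c + k*T, which lowers the reproduction number — the kind of between-model difference the paper quantifies.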

  5. NEXT-GENERATION NUMERICAL MODELING: INCORPORATING ELASTICITY, ANISOTROPY AND ATTENUATION

    SciTech Connect

    S. LARSEN; ET AL

    2001-03-01

    A new effort has been initiated between the Department of Energy (DOE) and the Society of Exploration Geophysicists (SEG) to investigate what features the next generation of numerical seismic models should contain that will best address current technical problems encountered during exploration in increasingly complex geologies. This collaborative work is focused on designing and building these new models, generating synthetic seismic data through simulated surveys of various geometries, and using these data to test and validate new and improved seismic imaging algorithms. The new models will be both 2- and 3-dimensional and will include complex velocity structures as well as anisotropy and attenuation. Considerable attention is being focused on multi-component acoustic and elastic effects because it is now widely recognized that converted phases could play a vital role in improving the quality of seismic images. An existing, validated 3-D elastic modeling code is being used to generate the synthetic data. Preliminary elastic modeling results using this code are presented here along with a description of the proposed new models that will be built and tested.

  6. Stochastic modelling of landfill leachate and biogas production incorporating waste heterogeneity. Model formulation and uncertainty analysis.

    PubMed

    Zacharof, A I; Butler, A P

    2004-01-01

    A mathematical model simulating the hydrological and biochemical processes occurring in landfilled waste is presented and demonstrated. The model combines biochemical and hydrological models into an integrated representation of the landfill environment. Waste decomposition is modelled using traditional biochemical waste decomposition pathways combined with a simplified methodology for representing the rate of decomposition. Water flow through the waste is represented using a statistical velocity model capable of representing the effects of waste heterogeneity on leachate flow through the waste. Given the limitations in data capture from landfill sites, significant emphasis is placed on improving parameter identification and reducing parameter requirements. A sensitivity analysis is performed, highlighting the model's response to changes in input variables. A model test run is also presented, demonstrating the model capabilities. A parameter perturbation model sensitivity analysis was also performed. This has been able to show that although the model is sensitive to certain key parameters, its overall intuitive response provides a good basis for making reasonable predictions of the future state of the landfill system. Finally, due to the high uncertainty associated with landfill data, a tool for handling input data uncertainty is incorporated in the model's structure. It is concluded that the model can be used as a reasonable tool for modelling landfill processes and that further work should be undertaken to assess the model's performance. PMID:15120429

  7. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  8. UV communications channel modeling incorporating multiple scattering interactions.

    PubMed

    Drost, Robert J; Moore, Terrence J; Sadler, Brian M

    2011-04-01

    In large part because of advancements in the design and fabrication of UV LEDs, photodetectors, and filters, significant research interest has recently been focused on non-line-of-sight UV communication systems. This research in, for example, system design and performance prediction, can be greatly aided by accurate channel models that allow for the reproducibility of results, thus facilitating the fair and consistent comparison of different communication approaches. In this paper, we provide a comprehensive derivation of a multiple-scattering Monte Carlo UV channel model, addressing weaknesses in previous treatments. The resulting model can be used to study the contribution of different orders of scattering to the path loss and impulse response functions associated with general UV communication system geometries. Simulation results are provided that demonstrate the benefit of this approach. PMID:21478967
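The flavor of such a multiple-scattering Monte Carlo channel model can be conveyed with a toy version: photons take exponentially distributed free paths, scatter, and are counted if they pass near the receiver. This sketch uses isotropic scattering and a simple detection sphere rather than the paper's phase functions and receiver field-of-view geometry; all parameter values are invented.

```python
# Toy Monte Carlo estimate of the received fraction in a scattering
# channel: exponential free paths, isotropic re-direction at each
# event, survival probability = albedo. (Illustrative only; a real UV
# NLOS model uses phase functions and receiver aperture geometry.)
import math, random

def mc_received_fraction(n_photons=20000, mfp=2.0, albedo=0.9,
                         receiver=(0.0, 0.0, 4.0), r_det=1.5,
                         max_events=25, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_photons):
        x = y = z = 0.0
        for _ in range(max_events):
            u = 2.0 * rng.random() - 1.0          # isotropic direction
            phi = 2.0 * math.pi * rng.random()
            s = math.sqrt(1.0 - u * u)
            step = -mfp * math.log(1.0 - rng.random())  # free path length
            x += step * s * math.cos(phi)
            y += step * s * math.sin(phi)
            z += step * u
            dx, dy, dz = x - receiver[0], y - receiver[1], z - receiver[2]
            if dx * dx + dy * dy + dz * dz <= r_det * r_det:
                hits += 1                          # photon reaches receiver
                break
            if rng.random() > albedo:              # photon absorbed
                break
    return hits / n_photons

frac = mc_received_fraction()
```

Tallying hits by the number of scattering events before arrival, instead of a single total, is what lets such a model separate the contributions of different scattering orders to path loss and impulse response.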

  9. Bayesian statistical modeling of disinfection byproduct (DBP) bromine incorporation in the ICR database.

    PubMed

    Francis, Royce A; Vanbriesen, Jeanne M; Small, Mitchell J

    2010-02-15

    Statistical models are developed for bromine incorporation in the trihalomethane (THM), trihaloacetic acids (THAA), dihaloacetic acid (DHAA), and dihaloacetonitrile (DHAN) subclasses of disinfection byproducts (DBPs) using distribution system samples from plants applying only free chlorine as a primary or residual disinfectant in the Information Collection Rule (ICR) database. The objective of this study is to characterize the effect of water quality conditions before, during, and post-treatment on distribution system bromine incorporation into DBP mixtures. Bayesian Markov Chain Monte Carlo (MCMC) methods are used to model individual DBP concentrations and estimate the coefficients of the linear models used to predict the bromine incorporation fraction for distribution system DBP mixtures in each of the four priority DBP classes. The bromine incorporation models achieve good agreement with the data. The most important predictors of bromine incorporation fraction across DBP classes are alkalinity, specific UV absorption (SUVA), and the bromide to total organic carbon ratio (Br:TOC) at the first point of chlorine addition. Free chlorine residual in the distribution system, distribution system residence time, distribution system pH, turbidity, and temperature only slightly influence bromine incorporation. The bromide to applied chlorine (Br:Cl) ratio is not a significant predictor of the bromine incorporation fraction (BIF) in any of the four classes studied. These results indicate that removal of natural organic matter and the location of chlorine addition are important treatment decisions that have substantial implications for bromine incorporation into disinfection byproduct in drinking waters. PMID:20095529
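The quantity being modelled, the bromine incorporation fraction, has a standard definition for the THM class: the molar-weighted count of Br atoms divided by the three halogen sites per molecule. The sketch below computes it directly; the species concentrations in the example are invented.

```python
# Bromine incorporation fraction (BIF) for the trihalomethane class:
# molar-weighted Br atoms over the 3 halogen sites per THM.
# (Standard definition; example concentrations are invented.)

def thm_bif(chcl3, chbrcl2, chbr2cl, chbr3):
    """Inputs are molar concentrations of the four THM species,
    ordered from zero to three bromine substitutions."""
    total = chcl3 + chbrcl2 + chbr2cl + chbr3
    br_atoms = 0 * chcl3 + 1 * chbrcl2 + 2 * chbr2cl + 3 * chbr3
    return br_atoms / (3.0 * total)

print(thm_bif(1.0, 0.0, 0.0, 0.0))  # all chloroform -> 0.0
print(thm_bif(0.0, 0.0, 0.0, 1.0))  # all bromoform  -> 1.0
```

In the paper's framework, this fraction is the response variable whose linear-model coefficients (on alkalinity, SUVA, Br:TOC, and so on) are estimated by MCMC.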

  10. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...

  11. Incorporating Satellite Time-Series Data into Modeling

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer this drawback, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitation of long-term satellite data sets is also discussed.

  12. The incorporation of geomorphic information in storage-zone models.

    NASA Astrophysics Data System (ADS)

    Boufadel, M. C.; Gabriel, M.

    2001-12-01

    Three stream-tracer studies were conducted in a 190-m reach of an urban stream in Philadelphia to investigate the interactions between the main channel and transverse storage zones. Sodium chloride was used as a conservative tracer and was monitored at two downstream locations using electric conductivity measurements. The experiments were simulated using the advection-dispersion equation with additional terms that account for the transverse exchange. The fit of the model to the data was good when all the parameters were assumed to be sub-reach-averaged. When measurements of the cross sectional area at various downstream distances were introduced into the model, the remaining reach-averaged parameters had to take extreme values to achieve agreement with the experimental breakthrough curve. This indicates that additional but incomplete geomorphic information does not necessarily improve the understanding of a particular stream system. The variation of the parameters with scale was also explored.

  13. The incorporation of geomorphic information in storage-zone models

    NASA Astrophysics Data System (ADS)

    Boufadel, M.

    2003-04-01

    Three stream-tracer studies were conducted in a 190-m reach of an urban stream in Philadelphia to investigate the interactions between the main channel and transverse storage zones. Sodium chloride was used as a conservative tracer and was monitored at two downstream locations using electric conductivity measurements. The experiments were simulated using the advection-dispersion equation with additional terms that account for the transverse exchange. The fit of the model to the data was good when all the parameters were assumed to be sub-reach-averaged. When measurements of the cross sectional area at various downstream distances were introduced into the model, the remaining reach-averaged parameters had to take extreme values to achieve agreement with the experimental breakthrough curve. This indicates that additional but incomplete geomorphic information does not necessarily improve the understanding of a particular stream system. The variation of the parameters with scale was also explored.

  14. Incorporating Spatial Models in Visual Field Test Procedures

    PubMed Central

    Rubinstein, Nikki J.; McKendrick, Allison M.; Turpin, Andrew

    2016-01-01

    Purpose To introduce a perimetric algorithm (Spatially Weighted Likelihoods in Zippy Estimation by Sequential Testing [ZEST] [SWeLZ]) that uses spatial information on every presentation to alter visual field (VF) estimates, to reduce test times without affecting output precision and accuracy. Methods SWeLZ is a maximum likelihood Bayesian procedure, which updates probability mass functions at VF locations using a spatial model. Spatial models were created from empirical data, computational models, nearest neighbor, random relationships, and interconnecting all locations. SWeLZ was compared to an implementation of the ZEST algorithm for perimetry using computer simulations on 163 glaucomatous and 233 normal VFs (Humphrey Field Analyzer 24-2). Output measures included number of presentations and visual sensitivity estimates. Results There was no significant difference in accuracy or precision of SWeLZ for the different spatial models relative to ZEST, either when collated across whole fields or when split by input sensitivity. Inspection of VF maps showed that SWeLZ was able to detect localized VF loss. SWeLZ was faster than ZEST for normal VFs: median number of presentations reduced by 20% to 38%. The number of presentations was equivalent for SWeLZ and ZEST when simulated on glaucomatous VFs. Conclusions SWeLZ has the potential to reduce VF test times in people with normal VFs, without detriment to output precision and accuracy in glaucomatous VFs. Translational Relevance SWeLZ is a novel perimetric algorithm. Simulations show that SWeLZ can reduce the number of test presentations for people with normal VFs. Since many patients have normal fields, this has the potential for significant time savings in clinical settings. PMID:26981329
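The Bayesian core shared by ZEST-style procedures is a simple per-presentation update: multiply the probability mass function over candidate sensitivities by the response likelihood and renormalize. The sketch below shows that step with a flat prior and a step-function likelihood, both simplifications; SWeLZ's contribution (reweighting neighbouring locations' PMFs via a spatial model) is not shown.

```python
# One ZEST-style Bayesian update of a visual-field location's PMF.
# (Flat prior and step-function likelihood are illustrative
# simplifications; SWeLZ additionally updates neighbouring locations.)

def update_pmf(pmf, likelihood):
    """Multiply the prior PMF by the response likelihood, renormalize."""
    posterior = [p * l for p, l in zip(pmf, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

domain = list(range(0, 41))               # candidate sensitivities (dB)
pmf = [1.0 / len(domain)] * len(domain)   # flat prior
# hypothetical "seen" likelihood for a 20 dB stimulus: locations with
# higher true sensitivity are far more likely to report seeing it
likelihood_seen = [0.05 if s < 20 else 0.95 for s in domain]
pmf = update_pmf(pmf, likelihood_seen)
estimate = sum(s * p for s, p in zip(domain, pmf))  # posterior mean
```

Because each presentation sharpens the PMF, a spatial model that also nudges neighbouring PMFs extracts more information per presentation, which is how SWeLZ shortens tests on normal fields.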

  15. Incorporating organic soil into a global climate model

    NASA Astrophysics Data System (ADS)

    Lawrence, David M.; Slater, Andrew G.

    2008-02-01

    Organic matter significantly alters a soil’s thermal and hydraulic properties but is not typically included in land-surface schemes used in global climate models. This omission has consequences for ground thermal and moisture regimes, particularly in the high-latitudes where soil carbon content is generally high. Global soil carbon data is used to build a geographically distributed, profiled soil carbon density dataset for the Community Land Model (CLM). CLM parameterizations for soil thermal and hydraulic properties are modified to accommodate both mineral and organic soil matter. Offline simulations including organic soil are characterized by cooler annual mean soil temperatures (up to ˜2.5°C cooler for regions of high soil carbon content). Cooling is strong in summer due to modulation of early and mid-summer soil heat flux. Winter temperatures are slightly warmer as organic soils do not cool as efficiently during fall and winter. High porosity and hydraulic conductivity of organic soil leads to a wetter soil column but with comparatively low surface layer saturation levels and correspondingly low soil evaporation. When CLM is coupled to the Community Atmosphere Model, the reduced latent heat flux drives deeper boundary layers, associated reductions in low cloud fraction, and warmer summer air temperatures in the Arctic. Lastly, the insulative properties of organic soil reduce interannual soil temperature variability, but only marginally. This result suggests that, although the mean soil temperature cooling will delay the simulated date at which frozen soil begins to thaw, organic matter may provide only limited insulation from surface warming.
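The core idea of blending mineral and organic soil properties by carbon content can be sketched with a simple weighting; note this linear form and the conductivity values below are illustrative stand-ins, since CLM's actual parameterization is more detailed.

```python
# Blending mineral and organic soil thermal properties by the
# organic-matter fraction f_org. (Illustrative linear weighting and
# values; CLM's actual scheme is more detailed.)

def soil_thermal_conductivity(f_org, k_mineral=2.5, k_organic=0.25):
    """Dry thermal conductivity (W/m/K) as a weighted combination."""
    assert 0.0 <= f_org <= 1.0
    return (1.0 - f_org) * k_mineral + f_org * k_organic

# A carbon-rich high-latitude soil conducts far less heat, consistent
# with the cooler summer soil temperatures the abstract reports.
print(round(soil_thermal_conductivity(0.0), 3))  # -> 2.5 (pure mineral)
print(round(soil_thermal_conductivity(0.8), 3))  # -> 0.7
```

An analogous weighting of porosity and hydraulic conductivity gives the wetter, more conductive column that drives the surface-flux changes described above.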

  16. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective, the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus, modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  18. Incorporating tracer-tracee differences into models to improve accuracy

    SciTech Connect

    Schoeller, D.A. )

    1991-05-01

    The ideal tracer for metabolic studies is one that behaves exactly like the tracee. Compounds labeled with isotopes come the closest to this ideal because they are chemically identical to the tracee except for the substitution of a stable isotope or radioisotope at one or more positions. Even this substitution, however, can introduce a difference in metabolism that may be quantitatively important with regard to the development of the mathematical model used to interpret the kinetic data. The doubly labeled water method for the measurement of carbon dioxide production and hence energy expenditure in free-living subjects is a good example of how differences between the metabolism of the tracers and the tracee can influence the accuracy of the carbon dioxide production rate determined from the kinetic data.
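
The kinetic idea can be shown with the simplest form of the doubly labeled water calculation (fractionation and isotope-dilution-space corrections, which embody exactly the tracer-tracee differences at issue here, are omitted; the numbers are illustrative):

```python
def co2_production(N, k_oxygen, k_hydrogen):
    """Lifson-type doubly labeled water estimate: the O-18 label leaves
    the body in both water and CO2, while the deuterium label leaves only
    in water, so the difference in elimination rates isolates CO2. The
    factor 1/2 converts oxygen turnover to CO2 (each CO2 carries two
    oxygen atoms)."""
    return 0.5 * N * (k_oxygen - k_hydrogen)   # mol CO2 per day

# Illustrative numbers: 2000 mol body water, kO = 0.12/d, kH = 0.10/d
rate = co2_production(2000.0, 0.12, 0.10)      # about 20 mol CO2 per day
```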

  19. Incorporating affective bias in models of human decision making

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.

    1991-01-01

    Research on human decision making has traditionally focused on how people actually make decisions, how good their decisions are, and how their decisions can be improved. Recent research suggests that this model is inadequate. Affective as well as cognitive components drive the way information about relevant outcomes and events is perceived, integrated, and used in the decision making process. The affective components include how the individual frames outcomes as good or bad, whether the individual anticipates regret in a decision situation, the affective mood state of the individual, and the psychological stress level anticipated or experienced in the decision situation. A focus of the current work has been to propose empirical studies that will attempt to examine in more detail the relationships between the latter two critical affective influences (mood state and stress) on decision making behavior.

  20. Incorporating flood event analyses and catchment structures into model development

    NASA Astrophysics Data System (ADS)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterise this variance consists of comparisons between flood events in a catchment and between flood responses of several sub-basins in such an event. In analytical frameworks the impact of space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and the distribution of catchment characteristics in their spatial variability and interaction with the space-time variability of rainfall provides additional knowledge about hydrological processes at the basin scale. For this purpose a new procedure to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes) was developed. It was applied to a study of flood responses at a set of nested catchments in a river basin in eastern Germany. In this study the highest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  1. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...

  2. Joint modelling of longitudinal and survival data: incorporating delayed entry and an assessment of model misspecification.

    PubMed

    Crowther, Michael J; Andersson, Therese M-L; Lambert, Paul C; Abrams, Keith R; Humphreys, Keith

    2016-03-30

    A now common goal in medical research is to investigate the inter-relationships between a repeatedly measured biomarker, measured with error, and the time to an event of interest. This form of question can be tackled with a joint longitudinal-survival model, with the most common approach combining a longitudinal mixed effects model with a proportional hazards survival model, where the models are linked through shared random effects. In this article, we look at incorporating delayed entry (left truncation), which has received relatively little attention. The extension to delayed entry requires a second set of numerical integration, beyond that required in a standard joint model. We therefore implement two sets of fully adaptive Gauss-Hermite quadrature with nested Gauss-Kronrod quadrature (to allow time-dependent association structures), conducted simultaneously, to evaluate the likelihood. We evaluate fully adaptive quadrature compared with previously proposed non-adaptive quadrature through a simulation study, showing substantial improvements, both in terms of minimising bias and reducing computation time. We further investigate, through simulation, the consequences of misspecifying the longitudinal trajectory and its impact on estimates of association. Our scenarios showed the current value association structure to be very robust, compared with the rate of change that we found to be highly sensitive showing that assuming a simpler trend when the truth is more complex can lead to substantial bias. With emphasis on flexible parametric approaches, we generalise previous models by proposing the use of polynomials or splines to capture the longitudinal trend and restricted cubic splines to model the baseline log hazard function. The methods are illustrated on a dataset of breast cancer patients, modelling mammographic density jointly with survival, where we show how to incorporate density measurements prior to the at-risk period, to make use of all the available
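
The building block of such a likelihood evaluation, integrating over a normally distributed random effect with Gauss-Hermite quadrature, can be sketched as follows (non-adaptive form; the adaptive variant recentres and rescales the nodes at each subject's posterior mode):

```python
import numpy as np

def gauss_hermite_expect(f, mu=0.0, sigma=1.0, n=15):
    """E[f(X)] for X ~ N(mu, sigma^2) by n-point Gauss-Hermite
    quadrature. In a joint model this is how the marginal likelihood is
    integrated over each subject's shared random effect."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    x = mu + np.sqrt(2.0) * sigma * nodes          # change of variables
    return float((weights * f(x)).sum() / np.sqrt(np.pi))

# Sanity check: the second moment of a standard normal is 1.
val = gauss_hermite_expect(lambda x: x ** 2)
```

Delayed entry adds a second integral of this form for the survival probability at study entry, which is why the authors evaluate two nested quadratures simultaneously.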

  3. Incorporating swarm data into plasma models and plasma surface interactions

    NASA Astrophysics Data System (ADS)

    Makabe, Toshiaki

    2009-10-01

    Since the mid-1980s, modeling of non-equilibrium plasmas in a collisional region driven at radio frequency has been developed at pressure greater than ˜Pa. The collisional plasma has distinct characteristics induced by a quantum property of each feed gas molecule through collisions with electrons or heavy particles. That is, there exists a proper function caused by chemically active radicals, negative ions, and radiation based on a molecular quantum structure through short-range interactions mainly with electrons. This differs from high-density, collisionless plasma controlled by the long-range Coulomb interaction. The quantum property, in the form of the collision cross section, is the first essential, accessed through swarm parameters, for investigating the collisional plasma structure and predicting its function. This structure and function, of course, appear under a self-organized spatiotemporal distribution of electrons and positive ions subject to electromagnetic theory, i.e., bulk-plasma and ion-sheath. In a plasma interacting with a surface, the flux, energy and angle of particles incident on a surface are basic quantities. It will be helpful to learn the limits of the swarm data in a quasi-equilibrium situation and to find a way out of the difficulty when predicting the collisional plasma, its function, and related surface processes. In this talk we will discuss some of these experiences in the case of space and time varying radiofrequency plasma and the micro/nano-surface processes. This work is partly supported by the Global-COE program at Keio University, granted by MEXT Japan.

  4. Incorporating the life course model into MCH nutrition leadership education and training programs.

    PubMed

    Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S

    2013-01-01

    Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development. PMID:22350632

  5. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor

    USGS Publications Warehouse

    Reed, Sasha C.; Yang, Xiaojuan; Thornton, Peter E.

    2015-01-01

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate–carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate–carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward.

  6. Incorporation of prior information on parameters into nonlinear regression groundwater flow models 2. Applications.

    USGS Publications Warehouse

    Cooley, R.L.

    1983-01-01

    Investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. -from Author

  7. A new car-following model with the consideration of incorporating timid and aggressive driving behaviors

    NASA Astrophysics Data System (ADS)

    Peng, Guanghan; He, Hongdi; Lu, Wei-Zhen

    2016-01-01

    In this paper, a new car-following model is proposed that incorporates timid and aggressive driving behaviors on a single lane. The linear stability condition including the timid and aggressive behavior term is obtained. Numerical simulation indicates that the new car-following model can estimate the proper delay time of car motion and the kinematic wave speed at jam density when the timid and aggressive behaviors are considered. The results also show that aggressive behavior can improve traffic flow while timid behavior deteriorates traffic stability, which means that aggressive behavior is preferable since the aggressive driver responds rapidly to variations in the velocity of the leading car. Snapshots of the velocities also show that the new model can closely approximate a wide moving jam.
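
A generic optimal-velocity-style sketch of such a model, with a single behaviour parameter standing in for the timid/aggressive term (this is not the exact formulation of the paper; all constants are illustrative):

```python
import math

def optimal_velocity(headway, v_max=30.0, hc=25.0):
    """Bando-type optimal velocity function (a standard choice)."""
    return 0.5 * v_max * (math.tanh(headway - hc) + math.tanh(hc))

def acceleration(v, headway, dv, kappa=0.5, p=0.3, lam=0.4):
    """Follower acceleration: relax toward the optimal velocity for the
    current headway, plus a relative-velocity term weighted by a
    behaviour parameter p (larger p = more aggressive, faster response
    to the leader; p < 0 would mimic a timid driver)."""
    return kappa * (optimal_velocity(headway) - v) + p * lam * dv

acc = acceleration(v=20.0, headway=25.0, dv=2.0)  # decelerating toward OV
```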

  8. A Bayesian Model for Pooling Gene Expression Studies That Incorporates Co-Regulation Information

    PubMed Central

    Conlon, Erin M.; Postier, Bradley L.; Methé, Barbara A.; Nevin, Kelly P.; Lovley, Derek R.

    2012-01-01

    Current Bayesian microarray models that pool multiple studies assume gene expression is independent of other genes. However, in prokaryotic organisms, genes are arranged in units that are co-regulated (called operons). Here, we introduce a new Bayesian model for pooling gene expression studies that incorporates operon information into the model. Our Bayesian model borrows information from other genes within the same operon to improve estimation of gene expression. The model produces the gene-specific posterior probability of differential expression, which is the basis for inference. We found in simulations and in biological studies that incorporating co-regulation information improves upon the independence model. We assume that each study contains two experimental conditions: a treatment and control. We note that there exist environmental conditions for which genes that are supposed to be transcribed together lose their operon structure, and that our model is best carried out for known operon structures. PMID:23284902
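
The borrowing-strength idea can be illustrated with a toy conjugate-normal shrinkage of each gene's effect toward its operon mean (far simpler than the paper's full hierarchical model; the variance values are assumed):

```python
import numpy as np

def operon_shrunk_means(y, operon_ids, tau2=1.0, sigma2=1.0):
    """Shrink each gene's observed effect toward its operon mean, with
    the weight on the gene's own data set by the ratio of between-gene
    (tau2) to measurement (sigma2) variance -- a conjugate-normal toy
    version of borrowing strength within operons."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    w = tau2 / (tau2 + sigma2)                # weight on the gene's own data
    for op in set(operon_ids):
        idx = [i for i, o in enumerate(operon_ids) if o == op]
        out[idx] = w * y[idx] + (1.0 - w) * y[idx].mean()
    return out

y = [2.0, 1.0, 0.0, -3.0]                     # observed log-fold changes
shrunk = operon_shrunk_means(y, ["opA", "opA", "opA", "opB"])
```

Genes in a shared operon ("opA") are pulled toward their common mean, while a singleton operon ("opB") is left at its own estimate.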

  9. A vector auto-regressive model for onshore and offshore wind synthesis incorporating meteorological model information

    NASA Astrophysics Data System (ADS)

    Hill, D.; Bell, K. R. W.; McMillan, D.; Infield, D.

    2014-05-01

    Wind power production in the electricity portfolio is growing to meet ambitious targets set, for example by the EU, to reduce greenhouse gas emissions by 20% by 2020. Huge investments are now being made in new offshore wind farms around UK coastal waters that will have a major impact on the GB electrical supply. Syntheses of the UK wind field that capture the inherent structure and correlations between different locations, including offshore sites, are required. Here, Vector Auto-Regressive (VAR) models are presented and extended in a novel way to incorporate offshore time series from a pan-European meteorological model called COSMO with onshore wind speeds from the MIDAS dataset provided by the British Atmospheric Data Centre. Forecasting ability onshore is shown to be improved with the inclusion of the offshore sites, with improvements of up to 25% in RMS error at 6 h ahead. In addition, the VAR model is used to synthesise time series of wind at each offshore site, which are then used to estimate wind farm capacity factors at the sites in question. These are then compared with estimates of capacity factors derived from the work of Hawkins et al. (2011). A good degree of agreement is established, indicating that this synthesis tool should be useful in power system impact studies.
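
The core VAR fitting-and-forecasting step can be sketched on synthetic data (a toy two-site series, not the MIDAS/COSMO data of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-site wind anomaly series from a known VAR(1) process,
# x_t = A x_{t-1} + noise, then recover A by ordinary least squares --
# the same fitting step a VAR-based wind synthesis uses.
A = np.array([[0.7, 0.2],
              [0.1, 0.6]])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=2)

X, Y = x[:-1], x[1:]                             # regress x_t on x_{t-1}
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # OLS estimate of A
forecast = A_hat @ x[-1]                         # one-step-ahead forecast
```

The off-diagonal entries of the fitted matrix are what let information at one site (e.g. offshore) improve forecasts at another (onshore).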

  10. Incorporating a Gaussian model at the catheter tip for improved registration of preoperative surface models

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Holmes, D. R., III; Packer, D. L.; Robb, R. A.

    2011-03-01

    Atrial fibrillation is a common cardiac arrhythmia in which aberrant electrical activity causes the atria to quiver, resulting in irregular beating of the heart. Catheter ablation therapy is becoming increasingly popular for treating atrial fibrillation; in this procedure an electrophysiologist guides a catheter into the left atrium and creates radiofrequency lesions to stop the arrhythmia. Typical visualization tools include bi-plane fluoroscopy, 2-D ultrasound, and electroanatomic maps; however, recently there has been increased interest in incorporating preoperative surface models into the procedure. Typical strategies for registration include landmark-based and surface-based methods. Drawbacks of these approaches include difficulty in accurately locating corresponding landmark pairs and the time required to sample surface points with a catheter. In this paper, we describe a new approach which models the catheter tip as a Gaussian kernel and eliminates the need to collect surface points by instead using the stream of continuously tracked catheter points. We demonstrate the feasibility of this technique with a left atrial phantom model and compare the results with a standard surface-based approach.
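
The kernel idea can be sketched as a match score between streamed catheter points and a surface point cloud (an illustration of the concept, not the paper's exact cost function; a registration would maximise this score over rigid transforms):

```python
import numpy as np

def gaussian_match_score(catheter_pts, surface_pts, sigma=2.0):
    """Score how well tracked catheter points fit a surface point cloud
    when the tip is modelled as an isotropic Gaussian kernel of width
    sigma (mm): each point contributes the kernel value at its nearest
    surface point, so points streaming along the wall score highly."""
    d2 = ((catheter_pts[:, None, :] - surface_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)                   # squared distance to surface
    return float(np.exp(-nearest / (2 * sigma ** 2)).mean())

surface = np.column_stack([np.linspace(0, 10, 50), np.zeros(50), np.zeros(50)])
near = surface[::5] + np.array([0.0, 0.5, 0.0])  # stream close to the wall
far = surface[::5] + np.array([0.0, 8.0, 0.0])   # stream far from the wall
score_near = gaussian_match_score(near, surface)
score_far = gaussian_match_score(far, surface)
```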

  11. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-01

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
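
A generic kink-pair-type flow stress with a pressure-dependent shear-modulus scaling can be sketched as follows (the functional form is a standard thermally activated one and all constants are illustrative, not the calibrated tantalum parameters of the paper):

```python
import math

def shear_modulus(P, mu0=69.0, dmu_dP=1.4):
    """Shear modulus (GPa) with an assumed linear pressure dependence."""
    return mu0 + dmu_dP * P

def flow_stress(T, rate, P, sigma_p=1.0, dH0=1.0, rate0=1e8, p=0.5, q=1.5):
    """Kink-pair-type thermally activated flow stress (generic form):
    the Peierls term falls with temperature, rises with strain rate,
    and the whole stress scales with the pressure-dependent shear
    modulus."""
    kB = 8.617e-5                                  # Boltzmann constant, eV/K
    g = kB * T / dH0 * math.log(rate0 / rate)      # normalised activation energy
    thermal = (1.0 - g ** (1.0 / q)) ** (1.0 / p) if g < 1.0 else 0.0
    return sigma_p * thermal * shear_modulus(P) / shear_modulus(0.0)

cold = flow_stress(T=300.0, rate=1e-3, P=0.0)
hot = flow_stress(T=900.0, rate=1e-3, P=0.0)      # fully thermally activated
pressed = flow_stress(T=300.0, rate=1e-3, P=50.0) # stronger under pressure
```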

  12. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.

  13. The Forced Choice Dilemma: A Model Incorporating Idiocentric/Allocentric Cultural Orientation

    ERIC Educational Resources Information Center

    Jung, Jae Yup; McCormick, John; Gross, Miraca U. M.

    2012-01-01

    This study developed and tested a new model of the forced choice dilemma (i.e., the belief held by some intellectually gifted students that they must choose between academic achievement and peer acceptance) that incorporates individual-level cultural orientation variables (i.e., vertical allocentrism and vertical idiocentrism). A survey that had…

  14. SPARC Groups: A Model for Incorporating Spiritual Psychoeducation into Group Work

    ERIC Educational Resources Information Center

    Christmas, Christopher; Van Horn, Stacy M.

    2012-01-01

    The use of spirituality as a resource for clients within the counseling field is growing; however, the primary focus has been on individual therapy. The purpose of this article is to provide counseling practitioners, administrators, and researchers with an approach for incorporating spiritual psychoeducation into group work. The proposed model can…

  15. Controllability and Optimal Harvesting of a Prey-Predator Model Incorporating a Prey Refuge

    ERIC Educational Resources Information Center

    Kar, Tapan Kumar

    2006-01-01

    This paper deals with a prey-predator model incorporating a prey refuge and harvesting of the predator species. A mathematical analysis shows that prey refuge plays a crucial role for the survival of the species and that the harvesting effort on the predator may be used as a control to prevent the cyclic behaviour of the system. The optimal…
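
A model in this spirit can be integrated numerically; the sketch below uses a constant-number refuge m (only x - m prey are exposed to predation) and harvesting effort E on the predator (a generic form with illustrative parameters, not necessarily the paper's exact system):

```python
def step(x, y, dt=0.01, r=1.0, K=10.0, a=0.5, m=2.0, c=0.3, d=0.4, E=0.1):
    """One Euler step: logistic prey x with a refuge protecting m prey,
    predator y with conversion efficiency c, death rate d, and
    harvesting effort E."""
    exposed = max(x - m, 0.0)
    dx = r * x * (1.0 - x / K) - a * exposed * y
    dy = c * a * exposed * y - d * y - E * y
    return x + dt * dx, y + dt * dy

x, y = 5.0, 2.0
for _ in range(10_000):        # integrate to t = 100
    x, y = step(x, y)
```

Raising E shifts the predator's break-even prey density upward, which is the lever the paper analyses for suppressing cyclic behaviour.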

  16. Incorporating Eco-Evolutionary Processes into Population Models: Design and Applications

    EPA Science Inventory

    Eco-evolutionary population models are powerful new tools for exploring how evolutionary processes influence plant and animal population dynamics and vice versa. The need to manage for climate change and other dynamic disturbance regimes is creating a demand for the incorporation of...

  17. Incorporating 4MAT Model in Distance Instructional Material--An Innovative Design

    ERIC Educational Resources Information Center

    Nikolaou, Alexandra; Koutsouba, Maria

    2012-01-01

    In an attempt to improve the effectiveness of distance learning, the present study aims to introduce an innovative way of creating and designing distance learning instructional material incorporating Bernice McCarthy's 4MAT Model based on learning styles. According to McCarthy's theory, all students can learn effectively in a cycle of learning…

  18. Nine challenges in incorporating the dynamics of behaviour in infectious diseases models.

    PubMed

    Funk, Sebastian; Bansal, Shweta; Bauch, Chris T; Eames, Ken T D; Edmunds, W John; Galvani, Alison P; Klepac, Petra

    2015-03-01

    Traditionally, the spread of infectious diseases in human populations has been modelled with static parameters. These parameters, however, can change when individuals change their behaviour. If these changes are themselves influenced by the disease dynamics, there is scope for mechanistic models of behaviour to improve our understanding of this interaction. Here, we present challenges in modelling changes in behaviour relating to disease dynamics, specifically: how to incorporate behavioural changes in models of infectious disease dynamics, how to inform measurement of relevant behaviour to parameterise such models, and how to determine the impact of behavioural changes on observed disease dynamics. PMID:25843377
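
One simple way to couple behaviour to disease dynamics, a prevalence-dependent reduction of the transmission rate, can be sketched in an SIR model (one of many possible forms; the paper surveys the modelling challenges rather than prescribing this choice, and the parameters are illustrative):

```python
def sir_behaviour_step(S, I, R, dt=0.1, beta0=0.5, gamma=0.2, alpha=20.0):
    """One Euler step of an SIR model whose transmission rate falls as
    prevalence rises: beta = beta0 / (1 + alpha * I). High prevalence
    triggers protective behaviour, which damps transmission."""
    beta = beta0 / (1.0 + alpha * I)
    new_inf = beta * S * I
    return (S - dt * new_inf,
            I + dt * (new_inf - gamma * I),
            R + dt * gamma * I)

S, I, R = 0.99, 0.01, 0.0
peak = I
for _ in range(2000):                  # integrate to t = 200
    S, I, R = sir_behaviour_step(S, I, R)
    peak = max(peak, I)                # behaviour feedback lowers this peak
```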

  19. Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor.

    PubMed

    Reed, Sasha C; Yang, Xiaojuan; Thornton, Peter E

    2015-10-01

    Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate-carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate-carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward. PMID:26115197

  20. Incorporating preferential flow into a 3D model of a forested headwater catchment

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Pfister, Laurent; Klaus, Julian

    2016-04-01

    Preferential flow plays an important role for water flow and solute transport. The inclusion of preferential flow, for example with dual porosity or dual permeability approaches, is a common feature in transport simulations at the plot scale. But at hillslope and catchment scales, incorporation of macropore and fracture flow into distributed hydrologic 3D models is rare, often due to limited data availability for model parameterisation. In this study, we incorporated preferential flow into an existing 3D integrated surface subsurface hydrologic model (HydroGeoSphere) of a headwater region (6 ha) of the forested Weierbach catchment in western Luxembourg. Our model philosophy was a strong link between measured data and the model setup. The model setup we used previously had been parameterised and validated based on various field data. But existing macropores and fractures had not been considered in this initial model setup. The multi-criteria validation revealed a good model performance but also suggested potential for further improvement by incorporating preferential flow as additional process. In order to pursue the data driven model philosophy for the implementation of preferential flow, we analysed the results of plot scale bromide sprinkling and infiltration experiments carried out in the vicinity of the Weierbach catchment. Three 1 sqm plots were sprinkled for one hour and excavated one day later for bromide depth profile sampling. We simulated these sprinkling experiments at the soil column scale, using the parameterisation of the base headwater model extended by a second permeability domain. Representing the bromide depth profiles was successful without changing this initial parameterisation. Moreover, to explain the variability between the three bromide depth profiles it was sufficient to adapt the dual permeability properties, indicating the spatial heterogeneity of preferential flow. Subsequently, we incorporated the dual permeability simulation in the

  1. Effect of incorporation of uncertainty in PCB bioaccumulation factors on modeled receptor doses

    SciTech Connect

    Welsh, C.; Duncan, J.; Purucker, S.; Richardson, N.; Redfearn, A.

    1995-12-31

    Bioaccumulation factors (BAFs) are regularly employed in ecological risk assessments to model contaminant transfer through ecological food chains. The authors compiled data on bioaccumulation of PCBs in plants, invertebrates, birds, and mammals from published literature and used these data to develop regression equations relating soil or food concentrations to bioaccumulation. They then used Latin Hypercube simulation techniques and simple food chain models to incorporate uncertainty in the BAF regressions into the derivation of exposure dose estimates for selected wildlife receptors. The authors present their preliminary results in this paper. Dose estimates ranged over several orders of magnitude for herbivorous, insectivorous, and carnivorous receptors. These results suggest incorporating the uncertainty in BAF values into food chain exposure models could provide risk assessors and risk managers with information on the probability of a given outcome that can be used in interpreting the potential risks at hazardous waste sites.
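
The stratified-sampling step can be sketched as a small Latin hypercube sampler propagating uncertainty in a hypothetical log-log BAF regression into a dose distribution (the regression coefficients, concentration range, and intake factor are all made-up illustrations, not values from the study):

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """n Latin hypercube samples in [0, 1)^dims: one point per
    equal-probability stratum in each dimension, shuffled independently."""
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    for j in range(dims):
        rng.shuffle(u[:, j])                  # permute strata per dimension
    return u

rng = np.random.default_rng(1)
u = latin_hypercube(100, 2, rng)

# Hypothetical log-log BAF regression, log10(BAF) = a + b*log10(Csoil) + e,
# propagated into a dose distribution (all constants assumed):
a, b = -0.5, 0.9
e = (u[:, 0] - 0.5) * 0.6                     # stratified residual, +/- 0.3
csoil = 10.0 ** (2.0 * u[:, 1])               # soil PCB conc., 1-100 mg/kg
baf = 10.0 ** (a + b * np.log10(csoil) + e)
dose = baf * csoil * 0.05                     # intake scaled by assumed factor
```

Percentiles of `dose` then give the probability-of-outcome information the authors describe for risk managers.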

  2. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    NASA Astrophysics Data System (ADS)

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; Hanks, Byron W.; Foulk, James W.; Battaile, Corbett C.

    2016-05-01

    The mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  3. Incorporation of the electrode electrolyte interface into finite-element models of metal microelectrodes

    NASA Astrophysics Data System (ADS)

    Cantrell, Donald R.; Inayat, Samsoon; Taflove, Allen; Ruoff, Rodney S.; Troy, John B.

    2008-03-01

    An accurate description of the electrode-electrolyte interfacial impedance is critical to the development of computational models of neural recording and stimulation that aim to improve understanding of neuro-electric interfaces and to expedite electrode design. This work examines the effect that the electrode-electrolyte interfacial impedance has upon the solutions generated from time-harmonic finite-element models of cone- and disk-shaped platinum microelectrodes submerged in physiological saline. A thin-layer approximation is utilized to incorporate a platinum-saline interfacial impedance into the finite-element models. This approximation is easy to implement and is not computationally costly. Using an iterative nonlinear solver, solutions were obtained for systems in which the electrode was driven at ac potentials with amplitudes from 10 mV to 500 mV and frequencies from 100 Hz to 100 kHz. The results of these simulations indicate that, under certain conditions, incorporation of the interface may strongly affect the solutions obtained. This effect, however, is dependent upon the amplitude of the driving potential and, to a lesser extent, its frequency. The solutions are most strongly affected at low amplitudes where the impedance of the interface is large. Here, the current density distribution that is calculated from models incorporating the interface is much more uniform than the current density distribution generated by models that neglect the interface. At higher potential amplitudes, however, the impedance of the interface decreases, and its effect on the solutions obtained is attenuated.
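The frequency dependence described above is already visible in the simplest lumped description of the boundary: a charge-transfer resistance in parallel with a double-layer capacitance. The values below are assumed round numbers, not the paper's platinum-saline fit, and the amplitude dependence (the interfacial impedance dropping at large driving potentials) is a nonlinearity this fixed-parameter sketch does not capture:

```python
import numpy as np

# Illustrative lumped interface: charge-transfer resistance Rct in parallel with
# double-layer capacitance Cdl. Values are assumed, not fitted to platinum.
Rct = 1e6      # ohm
Cdl = 1e-8     # farad

def z_interface(f):
    """Complex impedance of the parallel RC interface at frequency f (Hz)."""
    return Rct / (1 + 1j * 2 * np.pi * f * Rct * Cdl)

for f in (100.0, 1e3, 1e4, 1e5):
    print(f"{f:8.0f} Hz  |Z| = {abs(z_interface(f)):.3e} ohm")
```

The interfacial impedance is largest at low frequency, where neglecting it distorts the computed current density distribution the most, and rolls off as frequency rises, consistent with the attenuation reported in the abstract.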

  4. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

DOE PAGES Beta

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; Hanks, Byron W.; Foulk, James W.; Battaile, Corbett C.

    2016-04-25

Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  5. In silico investigation of the short QT syndrome, using human ventricle models incorporating electromechanical coupling

    PubMed Central

    Adeniran, Ismail; Hancox, Jules C.; Zhang, Henggui

    2013-01-01

Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarization, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favorable to re-entry. Potential electromechanical consequences of the SQTS are less well understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and Results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on [Ca2+]i transients, sarcomere length shortening and contractile force at the single cell level were evaluated with and without the consideration of stretch-activated channel current (Isac). Without Isac, at a stimulation frequency of 1 Hz, the SQTS mutations produced dramatic reductions in the amplitude of [Ca2+]i transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was a considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modeling functional effects of SQT 1 and 3 mutations on cardiac electro-mechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole. PMID

  6. An overlapping-feature-based phonological model incorporating linguistic constraints: applications to speech recognition.

    PubMed

    Sun, Jiping; Deng, Li

    2002-02-01

    Modeling phonological units of speech is a critical issue in speech recognition. In this paper, our recent development of an overlapping-feature-based phonological model that represents long-span contextual dependency in speech acoustics is reported. In this model, high-level linguistic constraints are incorporated in automatic construction of the patterns of feature-overlapping and of the hidden Markov model (HMM) states induced by such patterns. The main linguistic information explored includes word and phrase boundaries, morpheme, syllable, syllable constituent categories, and word stress. A consistent computational framework developed for the construction of the feature-based model and the major components of the model are described. Experimental results on the use of the overlapping-feature model in an HMM-based system for speech recognition show improvements over the conventional triphone-based phonological model. PMID:11863165

  7. An overlapping-feature-based phonological model incorporating linguistic constraints: Applications to speech recognition

    NASA Astrophysics Data System (ADS)

    Sun, Jiping; Deng, Li

    2002-02-01

    Modeling phonological units of speech is a critical issue in speech recognition. In this paper, our recent development of an overlapping-feature-based phonological model that represents long-span contextual dependency in speech acoustics is reported. In this model, high-level linguistic constraints are incorporated in automatic construction of the patterns of feature-overlapping and of the hidden Markov model (HMM) states induced by such patterns. The main linguistic information explored includes word and phrase boundaries, morpheme, syllable, syllable constituent categories, and word stress. A consistent computational framework developed for the construction of the feature-based model and the major components of the model are described. Experimental results on the use of the overlapping-feature model in an HMM-based system for speech recognition show improvements over the conventional triphone-based phonological model.

  8. A Cochlear Partition Model Incorporating Realistic Electrical and Mechanical Parameters for Outer Hair Cells

    NASA Astrophysics Data System (ADS)

    Nam, Jong-Hoon; Fettiplace, Robert

    2011-11-01

The organ of Corti (OC) is believed to optimize the force transmission from the outer hair cell (OHC) to the basilar membrane and inner hair cell. Recent studies showed that the OC has complex modes of deformation. In an effort to understand the consequences of the OC deformation, we developed a fully deformable 3D finite element model of the OC. It incorporates the hair bundle's mechano-transduction and the OHC electrical circuit. Geometric information was taken from the gerbil cochlea at locations with 18 and 0.7 kHz characteristic frequencies. Cochlear partitions several hundred micrometers long were simulated. The model describes the signature 3D structural arrangement in the OC, especially the tilt of the OHC and Deiters cell processes. Transduction channel kinetics contributed to the system's mechanics through the hair bundle. The OHC electrical model incorporated the transduction channel conductance, nonlinear capacitance and piezoelectric properties. It also incorporated recent data on the voltage-dependent potassium conductance and membrane time constant. With the model we simulated (1) the limiting frequencies of mechano-transduction and OHC somatic motility and (2) the OC transient response to impulse stimuli.

  9. Dynamic modelling of a double-pendulum gantry crane system incorporating payload

    SciTech Connect

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-20

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.

  10. Dynamic Modelling of a Double-Pendulum Gantry Crane System Incorporating Payload

    NASA Astrophysics Data System (ADS)

    Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.

    2011-06-01

    The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
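As a sanity check on closed-form Lagrangian derivations of this kind, the fixed-pivot point-mass double pendulum (the hook-payload subsystem with a stationary trolley) is a useful special case: its standard equations of motion can be integrated numerically and verified against energy conservation. Masses and lengths below are assumed, not taken from either record:

```python
import numpy as np

# Point-mass double pendulum with a fixed pivot: a minimal stand-in for the
# hook + payload subsystem when the trolley is stationary. Parameters assumed.
m1, m2 = 1.0, 5.0     # hook and payload masses, kg
l1, l2 = 2.0, 1.0     # cable and rigging lengths, m
g = 9.81

def deriv(s):
    """Standard Lagrangian accelerations for state s = [th1, th2, w1, w2]."""
    t1, t2, w1, w2 = s
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * np.sin(t1)
          - m2 * g * np.sin(t1 - 2 * t2)
          - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
          + g * (m1 + m2) * np.cos(t1)
          + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return np.array([w1, w2, a1, a2])

def energy(s):
    """Total mechanical energy; conserved for the undamped, unforced system."""
    t1, t2, w1, w2 = s
    ke = 0.5 * m1 * (l1 * w1)**2 + 0.5 * m2 * ((l1 * w1)**2 + (l2 * w2)**2
         + 2 * l1 * l2 * w1 * w2 * np.cos(t1 - t2))
    pe = -(m1 + m2) * g * l1 * np.cos(t1) - m2 * g * l2 * np.cos(t2)
    return ke + pe

s = np.array([0.2, 0.0, 0.0, 0.0])   # small initial hook sway, rad
dt, e0 = 1e-3, energy(s)
for _ in range(5000):                # 5 s of motion, classic RK4
    k1 = deriv(s); k2 = deriv(s + dt/2*k1)
    k3 = deriv(s + dt/2*k2); k4 = deriv(s + dt*k3)
    s += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
print(f"energy drift over 5 s: {abs(energy(s) - e0):.2e} J")
```

With a conservative system, any appreciable energy drift signals an error in the derived accelerations; the double-pendulum effect shows up as two interacting sway frequencies in the hook angle, which is what complicates anti-sway control.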

  11. Finite element analysis of structural engineering problems using a viscoplastic model incorporating two back stresses

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Halford, Gary R.

    1993-01-01

    The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.

  12. Incorporating Mobility in Growth Modeling for Multilevel and Longitudinal Item Response Data.

    PubMed

    Choi, In-Hee; Wilson, Mark

    2016-01-01

    Multilevel data often cannot be represented by the strict form of hierarchy typically assumed in multilevel modeling. A common example is the case in which subjects change their group membership in longitudinal studies (e.g., students transfer schools; employees transition between different departments). In this study, cross-classified and multiple membership models for multilevel and longitudinal item response data (CCMM-MLIRD) are developed to incorporate such mobility, focusing on students' school change in large-scale longitudinal studies. Furthermore, we investigate the effect of incorrectly modeling school membership in the analysis of multilevel and longitudinal item response data. Two types of school mobility are described, and corresponding models are specified. Results of the simulation studies suggested that appropriate modeling of the two types of school mobility using the CCMM-MLIRD yielded good recovery of the parameters and improvement over models that did not incorporate mobility properly. In addition, the consequences of incorrectly modeling the school effects on the variance estimates of the random effects and the standard errors of the fixed effects depended upon mobility patterns and model specifications. Two sets of large-scale longitudinal data are analyzed to illustrate applications of the CCMM-MLIRD for each type of school mobility. PMID:26881961

  13. Going beyond the unitary curve: incorporating richer cognition into agent-based water resources models

    NASA Astrophysics Data System (ADS)

    Kock, B. E.

    2008-12-01

The increased availability and understanding of agent-based modeling technology and techniques provide a unique opportunity for water resources modelers, allowing them to go beyond traditional behavioral approaches from neoclassical economics and to add rich cognition to social-hydrological models. Agent-based models provide for an individual focus and for easier, more realistic incorporation of learning, memory and other mechanisms for increased cognitive sophistication. We are in an age of global change impacting complex water resources systems, and social responses are increasingly recognized as fundamentally adaptive and emergent. In consideration of this, water resources models and modelers need to better address social dynamics in a manner beyond the capabilities of neoclassical economic theory and practice. However, going beyond the unitary curve requires unique levels of engagement with stakeholders, both to elicit the richer knowledge necessary for structuring and parameterizing agent-based models and to ensure that such models are appropriately used. With the aim of encouraging epistemological and methodological convergence in the agent-based modeling of water resources, we have developed a water resources-specific cognitive model and an associated collaborative modeling process. Our cognitive model emphasizes efficiency in architecture and operation, and capacity to adapt to different application contexts. We describe a current application of this cognitive model and modeling process in the Arkansas Basin of Colorado. In particular, we highlight the potential benefits of, and challenges to, using more sophisticated cognitive models in agent-based water resources models.

  14. Gold Incorporated Mesoporous Silica Thin Film Model Surface as a Robust SERS and Catalytically Active Substrate.

    PubMed

    Sunil Sekhar, Anandakumari Chandrasekharan; Vinod, Chathakudath Prabhakaran

    2016-01-01

Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin-coating technique is applied here to make homogeneous thin films. Surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts, and as robust SERS-active substrates for model catalysis studies, is tested. Compared to the bare silica film, our gold-incorporated silica, GSM-23F, gave an enhancement factor of 10³ for RhB with a 633 nm laser source. The reduction of p-nitrophenol with sodium borohydride on our thin films shows a decrease in the peak intensity corresponding to the -NO₂ group as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies.

  15. A new technique for the incorporation of seafloor topography in electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Baba, Kiyoshi; Seama, Nobukazu

    2002-08-01

    We describe a new technique for incorporating seafloor topography in electromagnetic modelling. It is based on a transformation of the topographic relief into a change in electrical conductivity and magnetic permeability within a flat seafloor. Since the technique allows us to model arbitrary topographic changes without extra grid cells and any restriction by vertical discretization, we can model very precise topographic changes easily without an extra burden in terms of computer memory or calculation time. The reliability and stability of the technique are tested by comparing the magnetotelluric responses from two synthetic seafloor topography models with three different approaches; these incorporate the topographic change in terms of (1) the change in conductance, using the thin-sheet approximation; (2) a series of rectangular block-like steps; and (3) triangular finite elements. The technique is easily applied to any modelling method including 3D modelling, allowing us to model complex structure in the Earth while taking full account of the 3D seafloor topography.

  16. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
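The preprocessing stage described above can be sketched as a per-channel leaky integrator that tracks mean sound level (with its own time constant per frequency channel), whose output is subtracted from the spectrogram and half-wave rectified before a standard LN stage. The time constants and the input below are illustrative, not the fitted values:

```python
import numpy as np

def ic_adaptation(spec, taus, dt=0.005):
    """IC-style front end: per-channel high-pass filtering of a spectrogram
    (subtract a running mean with channel-specific time constant), then
    half-wave rectification. spec: (channels, frames); taus: seconds."""
    out = np.zeros_like(spec)
    mean = spec[:, 0].copy()                 # running estimate of mean level
    alpha = dt / np.asarray(taus)            # leaky-integrator update weights
    for t in range(spec.shape[1]):
        mean += alpha * (spec[:, t] - mean)  # exponential moving average
        out[:, t] = np.maximum(spec[:, t] - mean, 0.0)  # high-pass + rectify
    return out

# A step in level: constant quiet background, then a louder sound at frame 100.
spec = np.full((2, 300), 1.0)
spec[:, 100:] = 3.0
resp = ic_adaptation(spec, taus=[0.1, 0.4])  # fast and slow channels (assumed)
print(resp[:, 99], resp[:, 100], resp[:, 299])
```

On a step change in level the fast channel responds and re-adapts quickly while the slow channel is still responding hundreds of milliseconds later, mirroring the frequency-dependent adaptation the model attributes to the inferior colliculus. Since the adaptation stage has fixed time constants, it adds no free parameters to the downstream LN fit.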

  17. A Condensed Disaggregation Model for Incorporating Parameter Uncertainty Into Monthly Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Stedinger, Jery R.; Pei, Daniel; Cohn, Timothy A.

    1985-05-01

    A condensed version of the Valencia-Schaake disaggregation model is developed which describes the distribution of monthly streamflow sequences using a set of coupled univariate regression models rather than a multivariate time series formulation. The condensed model has fewer parameters and is convenient for generating flow sequences which incorporate the intrinsic variability of streamflows and the uncertainty in the parameters of the annual and monthly streamflow models. The impact of parameter uncertainty on derived relationships between reservoir capacity and reservoir performance statistics is illustrated using required reservoir capacity (calculated with the sequent peak algorithm), system reliability, and the average total shortfall. Modeled sequences describe flows in the Rappahannock River in Virginia and the Boise River in Idaho. For high-reliability systems the results show that streamflow generation procedures which ignore model parameter uncertainty can grossly underestimate reservoir system failure rates and the severity of likely shortages, even if based on a 50-year record.
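The condensed idea, replacing a full multivariate disaggregation with one univariate regression per month, each conditioned on the annual flow and the preceding month, can be sketched on synthetic data. The regression form and all coefficients below are illustrative; the paper's exact conditioning variables and transformations may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic condensed disaggregation: one univariate regression per month,
# q_j = a_j + b_j * annual + c_j * q_{j-1} + e_j, instead of a multivariate
# time series model. "True" coefficients are invented; the paper additionally
# propagates parameter uncertainty, which this sketch omits.
years, months = 500, 12
annual = rng.normal(10.0, 1.0, years)        # annual flows (arbitrary units)
q = np.zeros((years, months))
for j in range(months):
    prev = q[:, j - 1] if j > 0 else np.zeros(years)
    q[:, j] = 0.2 + 0.08 * annual + 0.3 * prev + rng.normal(0.0, 0.05, years)

# Fit the coupled univariate regressions: q_j ~ [1, annual, q_{j-1}]
coef = []
for j in range(months):
    prev = q[:, j - 1] if j > 0 else np.zeros(years)
    X = np.column_stack([np.ones(years), annual, prev])
    beta, *_ = np.linalg.lstsq(X, q[:, j], rcond=None)
    coef.append(beta)

print(np.round(coef[3], 3))   # month 4: intercept, annual and lag-one coefficients
```

In the full model, uncertainty in these fitted parameters is then sampled alongside the innovation noise when generating synthetic monthly sequences, which is what drives the higher estimated failure rates for high-reliability reservoir systems.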

  18. Dose convolution filter: Incorporating spatial dose information into tissue response modeling

    SciTech Connect

    Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay

    2010-03-15

Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with a single parameter σ. Tissue response is calculated by an existing NTCP model with the DCF-applied dose distribution as input. The authors determined σ for rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogeneous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
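Functionally, the DCF is a normalized Gaussian convolution applied to the physical dose distribution before the NTCP model is evaluated. A 1D sketch using the paper's fitted σ = 2.6 mm for rat spinal cord (the grid spacing and dose profiles are invented):

```python
import numpy as np

def dcf(dose, sigma_mm, dx_mm=0.5):
    """Dose convolution filter: convolve a 1D dose profile with a unit-area
    Gaussian of width sigma, mimicking cell migration / bystander smoothing."""
    n = int(4 * sigma_mm / dx_mm)            # truncate kernel at ~4 sigma
    x = np.arange(-n, n + 1) * dx_mm
    kernel = np.exp(-x**2 / (2 * sigma_mm**2))
    kernel /= kernel.sum()                   # unit area: uniform dose unchanged
    return np.convolve(dose, kernel, mode="same")

dx = 0.5                                     # grid spacing, mm (assumed)
profile = np.zeros(400)                      # 200 mm of tissue
profile[180:220] = 60.0                      # a 20 mm wide, 60 Gy field
filtered = dcf(profile, sigma_mm=2.6, dx_mm=dx)
print(filtered[200], filtered.max())         # central dose barely changed
```

A broad uniform field passes through nearly unchanged at its center, while a beam narrow relative to σ has its peak strongly diluted; that asymmetry is how the model predicts unchanged response in large homogeneous fields but increased tolerance of small or GRID-like nonuniform fields.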

  19. A new Bernoulli-Euler beam model incorporating microstructure and surface energy effects

    NASA Astrophysics Data System (ADS)

    Gao, X.-L.; Mahmoud, F. F.

    2014-04-01

    A new Bernoulli-Euler beam model is developed using a modified couple stress theory and a surface elasticity theory. A variational formulation based on the principle of minimum total potential energy is employed, which leads to the simultaneous determination of the equilibrium equation and complete boundary conditions for a Bernoulli-Euler beam. The new model contains a material length scale parameter accounting for the microstructure effect in the bulk of the beam and three surface elasticity constants describing the mechanical behavior of the beam surface layer. The inclusion of these additional material constants enables the new model to capture the microstructure- and surface energy-dependent size effect. In addition, Poisson's effect is incorporated in the current model, unlike existing beam models. The new beam model includes the models considering only the microstructure dependence or the surface energy effect as special cases. The current model reduces to the classical Bernoulli-Euler beam model when the microstructure dependence, surface energy, and Poisson's effect are all suppressed. To demonstrate the new model, a cantilever beam problem is solved by directly applying the general formulas derived. Numerical results reveal that the beam deflection predicted by the new model is smaller than that by the classical beam model. Also, it is found that the difference between the deflections predicted by the two models is very significant when the beam thickness is small but is diminishing with the increase of the beam thickness.
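The stiffening mechanism can be seen in the couple-stress part alone (surface-energy and Poisson terms omitted): in modified-couple-stress Bernoulli-Euler models of this family, the bending rigidity EI is augmented by a term in the material length scale ℓ, so predicted deflections are smaller than classical ones. The equations below are a schematic of that earlier, microstructure-only result, not the full model of this paper:

```latex
% Static bending with the couple-stress (microstructure) correction only;
% q(x) is the distributed load, G the shear modulus, A the cross-section area,
% \ell the material length scale parameter.
(EI + GA\ell^{2})\,\frac{\mathrm{d}^{4} w}{\mathrm{d} x^{4}} = q(x),
\qquad
w_{\mathrm{tip}} = \frac{PL^{3}}{3\,(EI + GA\ell^{2})} \;<\; \frac{PL^{3}}{3EI}
\quad \text{(end-loaded cantilever)}.
```

Since I scales with Ah² for a given section shape, the correction behaves like (ℓ/h)², which is why the predicted size effect is significant for thin beams and diminishes as the beam thickness grows, consistent with the numerical results quoted in the abstract.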

  20. Velocity and Density Models Incorporating the Cascadia Subduction Zone for 3D Earthquake Ground Motion Simulations

    USGS Publications Warehouse

    Stephenson, William J.

    2007-01-01

INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper-mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.

  1. Transient Thermohydraulic Heat Pipe Modeling: Incorporating THROHPUT into the CAESAR Environment

    NASA Astrophysics Data System (ADS)

    Hall, Michael L.

    2003-01-01

    The THROHPUT code, which models transient thermohydraulic heat pipe behavior, is being incorporated into the CAESAR computational physics development environment. The CAESAR environment provides many beneficial features for enhanced model development, including levelized design, unit testing, Design by Contract™ (Meyer, 1997), and literate programming (Knuth, 1992), in a parallel, object-based manner. The original THROHPUT code was developed as a doctoral thesis research code; the current emphasis is on making a robust, verifiable, documented, component-based production package. Results from the original code are included.

  2. Incorporating many-body effects into modeling of semiconductor lasers and amplifiers

    SciTech Connect

    Ning, C.Z.; Moloney, J.V.; Indik, R.A.

    1997-06-01

Major many-body effects that are important for semiconductor laser modeling are summarized. The authors adopt a bottom-up approach to incorporate these many-body effects into a model for semiconductor lasers and amplifiers. The optical susceptibility function (χ) computed from the semiconductor Bloch equations (SBEs) is approximated by a single Lorentzian, or a superposition of a few Lorentzians, in the frequency domain. Their approach leads to a set of effective Bloch equations (EBEs). The authors compare this approach with the full microscopic SBEs for the case of pulse propagation. Good agreement between the two is obtained for pulse widths longer than tens of picoseconds.
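The bottom-up reduction can be illustrated for the simplest case: approximating χ(ω) by a single Lorentzian line is equivalent to replacing the microscopic polarization dynamics with one first-order ODE, the minimal effective Bloch equation. The normalization and parameters below are invented for illustration:

```python
import numpy as np

# A single Lorentzian susceptibility in the frequency domain,
#   chi(w) = A / (1 + 1j*(w - w0)/gamma),
# corresponds in the time domain (rotating frame) to the first-order ODE
#   dP/dt = (1j*w0 - gamma)*P + gamma*A*E(t).
# Parameters are normalized and illustrative, not fitted from the SBEs.
gamma, w0, A = 1.0, 0.0, 2.0
w = 0.5                                   # drive detuning from line center
dt, steps = 1e-3, 40000                   # long enough for transients to decay
P = 0.0 + 0.0j
for k in range(steps):
    E = np.exp(1j * w * k * dt)           # monochromatic driving field
    P += dt * ((1j * w0 - gamma) * P + gamma * A * E)

chi_num = P * np.exp(-1j * w * steps * dt)        # steady-state ratio P/E
chi_ana = A / (1 + 1j * (w - w0) / gamma)         # Lorentzian prediction
print(abs(chi_num - chi_ana))
```

A superposition of a few Lorentzians simply adds one such first-order equation per line, which is the structure of the effective Bloch equations and why they are far cheaper to propagate than the full microscopic SBEs.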

  3. Incorporating Micro-Mechanics Based Damage Models into Earthquake Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Bhat, H.; Rosakis, A.; Sammis, C. G.

    2012-12-01

    The micromechanical damage mechanics formulated by Ashby and Sammis, 1990 and generalized by Deshpande and Evans 2008 has been extended to allow for a more generalized stress state and to incorporate an experimentally motivated new crack growth (damage evolution) law that is valid over a wide range of loading rates. This law is sensitive to both the crack tip stress field and its time derivative. Incorporating this feature produces additional strain-rate sensitivity in the constitutive response. The model is also experimentally verified by predicting the failure strength of Dionysus-Pentelicon marble over a wide range of strain rates. Model parameters determined from quasi-static experiments were used to predict the failure strength at higher loading rates. Agreement with experimental results was excellent. After this verification step the constitutive law was incorporated into a Finite Element Code focused on simulating dynamic earthquake ruptures with specific focus on the ends of the fault (fault tip process zone) and the resulting strong ground motion radiation was studied.

  4. Incorporating epistasis interaction of genetic susceptibility single nucleotide polymorphisms in a lung cancer risk prediction model.

    PubMed

    Marcus, Michael W; Raji, Olaide Y; Duffy, Stephen W; Young, Robert P; Hopkins, Raewyn J; Field, John K

    2016-07-01

    Incorporation of genetic variants such as single nucleotide polymorphisms (SNPs) into risk prediction models may account for a substantial fraction of attributable disease risk. We used genetic data from 2385 subjects recruited into the Liverpool Lung Project (LLP) between 2000 and 2008, comprising 20 SNPs independently validated in a candidate-gene discovery study. Multifactor dimensionality reduction (MDR) and random forest (RF) were used to explore evidence of epistasis among the 20 replicated SNPs. Multivariable logistic regression was used to identify risk predictors for lung cancer in two versions of the LLP risk model: the epidemiological model and an extended model including the SNPs. Both models were internally validated using the bootstrap method, and model performance was assessed using the area under the curve (AUC) and the net reclassification improvement (NRI). Using MDR and RF, the overall best classifier of lung cancer status was the combination of SNPs rs1799732 (DRD2), rs5744256 (IL-18) and rs2306022 (ITGA11), with a training accuracy of 0.6592, a testing accuracy of 0.6572, and a cross-validation consistency of 10/10 with permutation testing P<0.0001. The apparent AUC of the epidemiological model was 0.75 (95% CI 0.73-0.77). When the epistatic data were incorporated in the extended model, the AUC increased to 0.81 (95% CI 0.79-0.83), an 8% increase in AUC (DeLong's test P=2.2e-16) and a 17.5% improvement by NRI. After correction for optimism, the AUC was 0.73 for the epidemiological model and 0.79 for the extended model. Our results showed a modest improvement in lung cancer risk prediction when the SNP epistasis factor was added. PMID:27121382
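The two performance metrics used in this study, AUC and NRI, are straightforward to compute from paired risk scores. The sketch below uses synthetic scores (not the LLP data), a Mann-Whitney estimate of the AUC, and the category-free form of the NRI; all effect sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

def continuous_nri(p_old, p_new, labels):
    """Category-free net reclassification improvement."""
    up = np.sign(p_new - p_old)  # +1 if the new model raises the score
    return up[labels == 1].mean() - up[labels == 0].mean()

# Synthetic cohort: a baseline risk score plus an extra (SNP-like) signal.
n = 2000
labels = rng.integers(0, 2, n)
base = rng.normal(labels * 0.8, 1.0)      # "epidemiological" score
extra = rng.normal(labels * 1.0, 1.0)     # added genetic signal
extended = base + extra                   # "extended" model score

auc_base, auc_ext = auc(base, labels), auc(extended, labels)
nri = continuous_nri(base, extended, labels)
```

As in the paper, an informative added predictor shows up as both a higher AUC and a positive NRI.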

  6. Simulations of chlorophyll fluorescence incorporated into the Community Land Model version 4.

    PubMed

    Lee, Jung-Eun; Berry, Joseph A; van der Tol, Christiaan; Yang, Xi; Guanter, Luis; Damm, Alexander; Baker, Ian; Frankenberg, Christian

    2015-09-01

    Several studies have shown that satellite retrievals of solar-induced chlorophyll fluorescence (SIF) provide useful information on terrestrial photosynthesis or gross primary production (GPP). Here, we have incorporated equations coupling SIF to photosynthesis in a land surface model, the National Center for Atmospheric Research Community Land Model version 4 (NCAR CLM4), and have demonstrated its use as a diagnostic tool for evaluating the calculation of photosynthesis, a key process in a land surface model that strongly influences the carbon, water, and energy cycles. By comparing forward simulations of SIF, essentially as a byproduct of photosynthesis, in CLM4 with observations of actual SIF, it is possible to check whether the model is accurately representing photosynthesis and the processes coupled to it. We provide some background on how SIF is coupled to photosynthesis, describe how SIF was incorporated into CLM4, and demonstrate that the simulated relationship between SIF and GPP is reasonable when compared with satellite (Greenhouse gases Observing SATellite; GOSAT) and in situ flux-tower measurements. CLM4 overestimates SIF in tropical forests, and we show that this error can be corrected by adjusting the maximum carboxylation rate (Vmax) specified for tropical forests in CLM4. Our study confirms that SIF has the potential to improve photosynthesis simulation and thereby can play a critical role in improving land surface and carbon cycle models. PMID:25881891

  7. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    NASA Astrophysics Data System (ADS)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  8. Incorporation of an evaporative cooling scheme into a dynamic model of orographic precipitation

    NASA Technical Reports Server (NTRS)

    Barros, Ana Paula; Lettenmaier, Dennis P.

    1994-01-01

    A simple evaporative cooling scheme was incorporated into a dynamic model to estimate orographic precipitation in mountainous regions. The orographic precipitation model is based on the transport of atmospheric moisture and the quantification of precipitable water across a 3D representation of the terrain from the surface up to 250 hPa. Advective wind fields are computed independently and boundary conditions are extracted from radiosonde data. Precipitation rates are obtained through calibration of a spatially distributed precipitation efficiency parameter. The model was applied to the central Sierra Nevada. Results show a gain of the order of 20% in threat-score coefficients designed to measure the forecast ability of the model. Accuracy gains are largest at high elevations and during intense storms associated with warm air masses.

  9. Modelling long-term deformation of granular soils incorporating the concept of fractional calculus

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Xiao, Yang; Zheng, Changjie; Hanif, Khairul Fikry

    2016-02-01

    Many constitutive models exist to characterise the cyclic behaviour of granular soils but can simulate deformations for only a limited number of cycles. Fractional derivatives have been regarded as one potential instrument for modelling memory-dependent phenomena. In this paper, the physical connection between the fractional derivative order and the fractal dimension of granular soils is investigated in detail. Then a modified elasto-plastic constitutive model is proposed for evaluating the long-term deformation of granular soils under cyclic loading by incorporating the concept of fractional calculus. To describe the flow direction of granular soils under cyclic loading, a cyclic flow potential considering particle breakage is used. Test results for several types of granular soils are used to validate the model performance.
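For readers unfamiliar with fractional derivatives, the Grünwald-Letnikov discretization is one common numerical scheme, and it makes the memory dependence explicit: the derivative at each step is a weighted sum over the entire loading history. This is a generic sketch, not the paper's constitutive implementation.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), by recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h):
    """Order-alpha fractional derivative of a sampled signal f with step h."""
    w = gl_weights(alpha, len(f))
    # Each output sample sums weighted contributions from the whole history.
    out = np.array([np.dot(w[: k + 1], f[k::-1]) for k in range(len(f))])
    return out / h ** alpha

h = 0.01
t = np.arange(0.0, 1.0, h)
# Order 1 recovers the ordinary backward difference: d/dt (t) = 1.
d1 = gl_derivative(t, 1.0, h)
# Order 1/2 interpolates between function and first derivative;
# the exact half-derivative of f(t) = t is 2*sqrt(t/pi).
d_half = gl_derivative(t, 0.5, h)
```

For integer orders the weights truncate and locality is recovered; for fractional orders every past state contributes, which is exactly the memory effect the paper exploits for long-term cyclic deformation.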

  10. Modeling water scarcity over south Asia: Incorporating crop growth and irrigation models into the Variable Infiltration Capacity (VIC) model

    NASA Astrophysics Data System (ADS)

    Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.

    2013-04-01

    Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulates the natural hydrological cycle and neglects the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (Indus, Ganges, and Brahmaputra basin and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7) with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation is because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.

  11. Incorporating grazing into an eco-hydrologic model: Simulating coupled human and natural systems in rangelands

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Liu, M.; Tague, C.; Choate, J. S.; Evans, R. D.; Johnson, K. A.; Adam, J. C.

    2013-12-01

    Rangelands provide an opportunity to investigate the coupled feedbacks between human activities and natural ecosystems. These areas comprise at least one-third of the Earth's surface and provide ecological support for birds, insects, wildlife and agricultural animals, including grazing lands for livestock. Capturing the interactions among water, carbon, and nitrogen cycles within the context of regional-scale patterns of climate and management is important to understand interactions, responses, and feedbacks between rangeland systems and humans, as well as to provide relevant information to stakeholders and policymakers. The overarching objective of this research is to understand the full consequences, intended and unintended, of human activities and climate over time in rangelands by incorporating dynamics related to rangeland management into an eco-hydrologic model that also incorporates biogeochemical and soil processes. Here we evaluate our model over ungrazed and grazed sites for different rangeland ecosystems. The Regional Hydro-ecologic Simulation System (RHESSys) is a process-based, watershed-scale model that couples water with carbon and nitrogen cycles. Climate, soil, vegetation, and management effects within the watershed are represented in a nested landscape hierarchy to account for heterogeneity and the lateral movement of water and nutrients. We incorporated a daily time series of plant biomass loss from rangeland to represent grazing. The TRY Plant Trait Database was used to parameterize genera of shrubs and grasses in different rangeland types, such as tallgrass prairie, Intermountain West cold desert, and shortgrass steppe. In addition, other model parameters captured the reallocation of carbon and nutrients after grass defoliation. Initial simulations were conducted at the Curlew Valley site in northern Utah, a former International Geosphere-Biosphere Programme Desert Biome site. We found that grasses were most sensitive to model parameters affecting

  12. Incorporating Sediment Compaction Into a Gravitationally Self-consistent Model for Global Sea-level Change

    NASA Astrophysics Data System (ADS)

    Ferrier, K.; Mitrovica, J. X.

    2015-12-01

    In sedimentary deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. The deposition of sediment and incorporation of water into the sedimentary pore space reduces sea level by increasing the elevation of the seafloor, which reduces the thickness of sea water above the bed. In a similar manner, the compaction of sediment and purging of water out of the sedimentary pore space increases sea level by reducing the elevation of the seafloor, which increases the thickness of sea water above the bed. Here we show how one can incorporate the effects of sediment deposition and compaction into the global, gravitationally self-consistent sea-level model of Dalca et al. (2013). Incorporating sediment compaction requires accounting for only one additional quantity that had not been accounted for in Dalca et al. (2013): the mean porosity in the sediment column. We provide a general analytic framework for global sea-level changes including sediment deposition and compaction, and we demonstrate how sea level responds to deposition and compaction under one simple parameterization for compaction. The compaction of sediment generates changes in sea level only by changing the elevation of the seafloor. That is, sediment compaction does not affect the mass load on the crust, and therefore does not generate perturbations in crustal elevation or the gravity field that would further perturb sea level. These results have implications for understanding sedimentary effects on sea-level changes and thus for disentangling the various drivers of sea-level change. References: Dalca, A.V., Ferrier, K.L., Mitrovica, J.X., Perron, J.T., Milne, G.A., Creveling, J.R., 2013. On postglacial sea level - III. Incorporating sediment redistribution. Geophysical Journal International, doi: 10.1093/gji/ggt089.

  13. Incorporating solar radiation into the litter moisture model in the Canadian Forest Fire Danger Rating System

    NASA Astrophysics Data System (ADS)

    Wotton, Mike; Gibos, Kelsy

    2010-05-01

    The Canadian Forest Fire Danger Rating System (CFFDRS) is used throughout Canada, and in a number of countries around the world, for estimating fire potential in wildland fuels. The standard fuel moisture models in the CFFDRS are representative of moisture in closed-canopy jack pine or lodgepole pine stands. These models assume full canopy closure and therefore do not account for the influence of solar radiation; thus they cannot readily be adapted to more open environments. Recent research has seen the adaptation of the CFFDRS's hourly Fine Fuel Moisture Code (FFMC) model (which represents litter moisture) to open grasslands, through the incorporation of an explicit solar radiation term. The current study describes the more recent extension of this modelling effort to forested stands. The development and structure of this new model are described, and its outputs, along with outputs from the existing FFMC model, are compared with field observations. Results show that the new model tracks the diurnal variation in actual litter moisture content more accurately than the existing model for diurnal calculation of the FFMC in the CFFDRS. Practical examples of the application of this system for operational estimation of litter moisture are provided for stands of varying densities and types.

  14. A variational size-dependent model for electrostatically actuated NEMS incorporating nonlinearities and Casimir force

    NASA Astrophysics Data System (ADS)

    Liang, Binbin; Zhang, Long; Wang, Binglei; Zhou, Shenjie

    2015-07-01

    A size-dependent model for electrostatically actuated Nano-Electro-Mechanical Systems (NEMS) incorporating nonlinearities and the Casimir force is presented by using a variational method. The governing equation and boundary conditions are derived with the help of strain gradient elasticity theory and the Hamilton principle. The generalized differential quadrature (GDQ) method is employed to solve the problem numerically. The pull-in instability with the Casimir force included is then studied. The results reveal that the Casimir force, a spontaneous force between the two electrodes, reduces the external voltage required for pull-in. With the Casimir force incorporated, the pull-in instability can occur with no applied voltage when the beam dimensions are at the nanoscale. The minimum gap and detachment length can be calculated from the present model for different beam sizes, which is important for NEMS design. Finally, a discussion of the size effect induced by the strain gradient terms reveals that the present model is more accurate, since the size effect plays an important role when the beam is at the nanoscale.

  15. Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms

    NASA Astrophysics Data System (ADS)

    Tat Nguyen, Thang; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E.; Lee, Choonsik; Chung, Beom Sun

    2015-11-01

    The dose coefficients for the eye lens reported in ICRP 2010 Publication 116 were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with the ICRP-116 data. The results showed generally good agreement between the newly calculated lens dose coefficients and the values of ICRP 2010 Publication 116. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.

  16. Three-dimensional numerical modelling of gas discharges at atmospheric pressure incorporating photoionization phenomena

    NASA Astrophysics Data System (ADS)

    Papageorgiou, L.; Metaxas, A. C.; Georghiou, G. E.

    2011-02-01

    A three-dimensional (3D) numerical model for the characterization of gas discharges in air at atmospheric pressure incorporating photoionization through the solution of the Helmholtz equation is presented. Initially, comparisons with a two-dimensional (2D) axi-symmetric model are performed in order to assess the validity of the model. Subsequently several discharge instabilities (plasma spots and low pressure inhomogeneities) are considered in order to study their effect on streamer branching and off-axis propagation. Depending on the magnitude and position of the plasma spot, deformations and off-axis propagation of the main discharge channel were obtained. No tendency for branching in small (of the order of 0.1 cm) overvolted discharge gaps was observed.

  17. Wideband Power Amplifier Modeling Incorporating Carrier Frequency Dependent AM/AM and AM/PM Characteristics

    NASA Astrophysics Data System (ADS)

    Tkacenko, A.

    2013-05-01

    In this article, we present a complex baseband model for a wideband power amplifier that incorporates carrier frequency dependent amplitude modulation (AM) and phase modulation (PM) (i.e., AM/AM and AM/PM) characteristics in the design process. The structure used to implement the amplifier model is a Wiener system which accounts for memory effects caused by the frequency selective nature of the amplifier, in addition to the nonlinearities caused by gain compression and saturation. By utilizing piecewise polynomial nonlinearities in the structure, it is shown how to construct the Wiener model to exactly accommodate all given AM/AM and AM/PM measurement constraints. Simulation results using data from a 50 W 32-way Ka-band solid-state power amplifier (SSPA) are provided, highlighting the differences in degradation incurred for a wideband input signal as compared with a narrowband input.
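A Wiener structure of the kind described, a linear filter followed by a static AM/AM and AM/PM nonlinearity acting on the complex baseband envelope, can be sketched as follows. The FIR taps and Saleh-style nonlinearities are illustrative placeholders, not the article's fitted SSPA model.

```python
import numpy as np

# Linear front end: a short FIR models the amplifier's frequency selectivity.
fir = np.array([0.9, 0.15, -0.05])

def am_am(r, g0=10.0, rsat=1.0):
    """Saleh-style gain compression: linear for small r, saturating for large r."""
    return g0 * r / (1.0 + (r / rsat) ** 2)

def am_pm(r, k=0.3, rsat=1.0):
    """Amplitude-dependent phase rotation (radians)."""
    return k * r ** 2 / (1.0 + (r / rsat) ** 2)

def wiener_pa(x):
    """Wiener model: FIR filter, then static AM/AM + AM/PM on the envelope."""
    v = np.convolve(x, fir)[: len(x)]
    r, phase = np.abs(v), np.angle(v)
    return am_am(r) * np.exp(1j * (phase + am_pm(r)))

# Small-signal tones see near-linear gain; large-signal tones are compressed.
n = np.arange(256)
x_small = 1e-3 * np.exp(1j * 2 * np.pi * 0.01 * n)
x_large = 5.0 * np.exp(1j * 2 * np.pi * 0.01 * n)
gain_small = np.abs(wiener_pa(x_small)).max() / np.abs(x_small).max()
gain_large = np.abs(wiener_pa(x_large)).max() / np.abs(x_large).max()
```

Carrier-frequency dependence enters through the FIR response; in the article's approach the static curves are measurement-constrained piecewise polynomials rather than the closed forms used here.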

  18. Incorporating temporal EHR data in predictive models for risk stratification of renal function deterioration.

    PubMed

    Singh, Anima; Nadkarni, Girish; Gottesman, Omri; Ellis, Stephen B; Bottinger, Erwin P; Guttag, John V

    2015-02-01

    Predictive models built using temporal data in electronic health records (EHRs) can potentially play a major role in improving management of chronic diseases. However, these data present a multitude of technical challenges, including irregular sampling of data and varying length of available patient history. In this paper, we describe and evaluate three different approaches that use machine learning to build predictive models using temporal EHR data of a patient. The first approach is a commonly used non-temporal approach that aggregates values of the predictors in the patient's medical history. The other two approaches exploit the temporal dynamics of the data. The two temporal approaches vary in how they model temporal information and handle missing data. Using data from the EHR of Mount Sinai Medical Center, we learned and evaluated the models in the context of predicting loss of estimated glomerular filtration rate (eGFR), the most common assessment of kidney function. Our results show that incorporating temporal information in patient's medical history can lead to better prediction of loss of kidney function. They also demonstrate that exactly how this information is incorporated is important. In particular, our results demonstrate that the relative importance of different predictors varies over time, and that using multi-task learning to account for this is an appropriate way to robustly capture the temporal dynamics in EHR data. Using a case study, we also demonstrate how the multi-task learning based model can yield predictive models with better performance for identifying patients at high risk of short-term loss of kidney function. PMID:25460205

  20. A Direct Method for Incorporating Experimental Data into Multiscale Coarse-Grained Models.

    PubMed

    Dannenhoffer-Lafage, Thomas; White, Andrew D; Voth, Gregory A

    2016-05-10

    To extract meaningful data from molecular simulations, it is necessary to incorporate new experimental observations as they become available. Recently, a new method was developed for incorporating experimental observations into molecular simulations, called experiment directed simulation (EDS), which utilizes a maximum entropy argument to bias an existing model to agree with experimental observations while changing the original model by a minimal amount. However, there is no discussion in the literature of whether or not the minimal bias systematically and generally improves the model by creating agreement with the experiment. In this work, we show that the relative entropy of the biased system with respect to an ideal target is always reduced by the application of a minimal bias, such as the one utilized by EDS. Using all-atom simulations that have been biased with EDS, one can then easily and rapidly improve a bottom-up multiscale coarse-grained (MS-CG) model without the need for a time-consuming reparametrization of the underlying atomistic force field. Furthermore, the improvement given by the many-body interactions introduced by the EDS bias can be maintained after being projected down to effective two-body MS-CG interactions. The result of this analysis is a new paradigm in coarse-grained modeling and simulation in which the "bottom-up" and "top-down" approaches are combined within a single, rigorous formalism based on statistical mechanics. The utility of building the resulting EDS-MS-CG models is demonstrated on two molecular systems: liquid methanol and ethylene carbonate. PMID:27045328
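The maximum-entropy idea behind EDS can be demonstrated on stored samples by reweighting: a bias linear in the target observable, with its coupling tuned so the reweighted average matches the experimental value, perturbs the original distribution minimally in the relative-entropy sense. A sketch with a Gaussian stand-in for the simulation data, not the EDS-MS-CG workflow itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observable f sampled from an unbiased "simulation" (a Gaussian stand-in).
f = rng.normal(0.0, 1.0, 50_000)
target = 0.4   # "experimental" value the biased model should reproduce

def reweighted_mean(alpha):
    """<f> under the minimally biased distribution p(x) * exp(-alpha * f(x))."""
    w = np.exp(-alpha * (f - f.mean()))   # shift the exponent for stability
    return np.average(f, weights=w)

# The reweighted mean decreases monotonically in alpha (its derivative is
# minus the variance), so bisect for the coupling matching the target.
lo, hi = -10.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if reweighted_mean(mid) > target:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
```

For this Gaussian toy problem the exact coupling is alpha = -target; in EDS the analogous linear bias is applied on the fly during the simulation rather than by post-hoc reweighting.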

  1. A nonlinear biphasic fiber-reinforced porohyperviscoelastic model of articular cartilage incorporating fiber reorientation and dispersion.

    PubMed

    Seifzadeh, A; Wang, J; Oguamanam, D C D; Papini, M

    2011-08-01

    A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%. PMID:21950897

  2. Dynamic modeling of the outlet of a pulsatile pump incorporating a flow-dependent resistance.

    PubMed

    Huang, Huan; Yang, Ming; Wu, Shunjie; Liao, Huogen

    2013-08-01

    Outlet tube models incorporating a linearly flow-dependent resistance are widely used in pulsatile and rotary pump studies. The resistance is made up of a flow-proportional term and a constant term. Previous studies often focused on the steady state properties of the model. In this paper, a dynamic modeling procedure was presented. Model parameters were estimated by an unscented Kalman filter (UKF). The subspace model identification (SMI) algorithm was proposed to initialize the UKF. Model order and structure were also validated by SMI. A mock circulatory loop driven by a pneumatic pulsatile pump was developed to produce pulsatile pressure and flow. Hydraulic parameters of the outlet tube were adjusted manually by a clamp. Seven groups of steady state experiments were carried out to calibrate the flow-dependent resistance as reference values. Dynamic estimation results showed that the inertance estimates are insensitive to model structures. If the constant term was ignored, estimation errors for the flow-proportional term were limited within 16% of the reference values. Compared with the constant resistance, a time-varying one improves model accuracy in terms of root mean square error. The maximum improvement is up to 35%. However, including the constant term in the time-varying resistance will lead to serious estimation errors. PMID:23253954
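The outlet model described, a pressure drop with a linearly flow-dependent resistance plus an inertance term, is linear in its parameters once written as dP = (k*Q + R0)*Q + L*dQ/dt, so on noise-free data they can be recovered by ordinary least squares. This sketch uses synthetic pulsatile data and plain least squares, not the paper's UKF/SMI procedure.

```python
import numpy as np

# True outlet parameters (synthetic): dP = (k*Q + R0)*Q + L*dQ/dt
k_true, R0_true, L_true = 0.5, 1.2, 0.05

t = np.linspace(0.0, 1.0, 1001)
Q = 3.0 + 2.0 * np.sin(2 * np.pi * 2 * t)   # pulsatile flow, always positive
dQ = np.gradient(Q, t)
dP = (k_true * Q + R0_true) * Q + L_true * dQ

# Regressors for linear least squares: [Q^2, Q, dQ/dt] -> [k, R0, L]
A = np.column_stack([Q ** 2, Q, dQ])
(k_est, R0_est, L_est), *_ = np.linalg.lstsq(A, dP, rcond=None)
```

With measurement noise the estimates become approximate, which is where a recursive estimator such as the UKF used in the paper earns its keep.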

  3. Rate-Dependent Embedded Discontinuity Approach Incorporating Heterogeneity for Numerical Modeling of Rock Fracture

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2015-07-01

In this paper, the embedded discontinuity approach is applied in finite element modeling of rock in compression and tension. To this end, a rate-dependent constitutive model based on a (strong) embedded displacement discontinuity model is developed to describe the mode I, mode II and mixed mode fracture of rock. The constitutive model describes the bulk material as linear elastic until reaching the elastic limit. Beyond the elastic limit, the rate-dependent exponential softening law governs the evolution of the displacement jump. Rock heterogeneity is incorporated in the present approach by random description of the mineral texture of rock. Moreover, the initial microcrack population always present in natural rocks is accounted for as randomly-oriented embedded discontinuities. In the numerical examples, the model properties are extensively studied in uniaxial compression. The effects of loading rate and confining pressure are also tested in the 2D (plane strain) numerical simulations. These simulations demonstrate that the model captures the salient features of rock in confined compression and uniaxial tension. The developed method has the computational efficiency of continuum plasticity models. However, it also has the advantage, over these models, of accounting for the orientation of introduced microcracks. This feature is crucial with respect to the fracture behavior of rock in compression as shown in this paper.
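The rate-independent core of an exponential softening law of this kind can be illustrated as below; `ft` (tensile strength) and `gf` (fracture energy) are hypothetical values, not the calibrated parameters from the paper, and the rate dependence and mixed-mode coupling are omitted.

```python
import math

# Illustrative exponential softening law: the cohesive traction decays
# exponentially with the displacement jump, dissipating the fracture
# energy gf.  Values of ft and gf are hypothetical.

def traction(jump, ft=10e6, gf=50.0):
    """Cohesive traction (Pa) as a function of the displacement jump (m)."""
    return ft * math.exp(-(ft / gf) * jump)

# Traction equals the tensile strength at zero jump, then softens.
print(traction(0.0))   # 10000000.0
```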

  4. Incorporation of mantle effects in lithospheric stress modeling: the Eurasian plate

    NASA Astrophysics Data System (ADS)

    Ruckstuhl, K.; Wortel, M. J. R.; Govers, R.; Meijer, P.

    2009-04-01

The intraplate stress field is the result of forces acting on the lithosphere and as such contains valuable information on the dynamics of plate tectonics. Studies modeling the intraplate stress field have followed two different approaches, with the emphasis either on the lithosphere itself or the underlying convecting mantle. For most tectonic plates on Earth one or both methods have been quite successful in reproducing the large scale stress field. The Eurasian plate, however, has remained a challenge. A probable cause is that due to the complexity of the plate successful models require both an active mantle and well defined boundary forces. We therefore construct a model for the Eurasian plate in which we combine both modeling approaches by incorporating the effects of an active mantle in a model based on a lithospheric approach, where boundary forces are modeled explicitly. The assumption that the whole plate is in dynamical equilibrium allows for imposing a torque balance on the plate, which provides extra constraints on the forces that cannot be calculated a priori. Mantle interaction is modeled as a shear at the base of the plate obtained from global mantle flow models from the literature. A first order approximation of the increased excess pressure of the anomalous ridge near the Iceland hotspot is incorporated. Results are evaluated by comparison with World Stress Map data. Direct incorporation of the sublithospheric stresses from mantle flow modeling in our force model is not possible, due to a discrepancy in the magnitude of the integrated mantle shear and lithospheric forces of around one order of magnitude, prohibiting balance of the torques. This magnitude discrepancy is a well known fundamental problem in geodynamics and we choose to close the gap between the two different approaches by scaling down the absolute magnitude of the sublithospheric stresses. 
Becker and O'Connell (G3,2,2001) showed that various mantle flow models show a considerable spread in

  5. Advanced Methods for Incorporating Solar Energy Technologies into Electric Sector Capacity-Expansion Models: Literature Review and Analysis

    SciTech Connect

    Sullivan, P.; Eurek, K.; Margolis, R.

    2014-07-01

    Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.

  6. Incorporating disease and population structure into models of SIR disease in contact networks.

    PubMed

    Miller, Joel C; Volz, Erik M

    2013-01-01

    We consider the recently introduced edge-based compartmental models (EBCM) for the spread of susceptible-infected-recovered (SIR) diseases in networks. These models differ from standard infectious disease models by focusing on the status of a random partner in the population, rather than a random individual. This change in focus leads to simple analytic models for the spread of SIR diseases in random networks with heterogeneous degree. In this paper we extend this approach to handle deviations of the disease or population from the simplistic assumptions of earlier work. We allow the population to have structure due to effects such as demographic features or multiple types of risk behavior. We allow the disease to have more complicated natural history. Although we introduce these modifications in the static network context, it is straightforward to incorporate them into dynamic network models. We also consider serosorting, which requires using dynamic network models. The basic methods we use to derive these generalizations are widely applicable, and so it is straightforward to introduce many other generalizations not considered here. Our goal is twofold: to provide a number of examples generalizing the EBCM method for various different population or disease structures and to provide insight into how to derive such a model under new sets of assumptions. PMID:23990880
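A minimal numerical sketch of the baseline EBCM for SIR (before the paper's generalizations) might look like the following, assuming a configuration-model network with a Poisson degree distribution; all parameter values are illustrative.

```python
import math

# Minimal sketch of the baseline edge-based compartmental model (EBCM)
# for SIR on a configuration-model network with Poisson degrees of mean
# lam.  theta = probability that a random partner has not yet
# transmitted; S = psi(theta), where psi is the degree-distribution PGF.
# eps seeds a small initial infection; parameters are illustrative.

def ebcm_sir(beta=0.3, gamma=0.1, lam=5.0, eps=1e-3, dt=0.01, t_max=100.0):
    theta = 1.0 - eps
    r = 0.0
    s = math.exp(lam * (theta - 1.0))        # psi(theta) for Poisson
    for _ in range(int(t_max / dt)):         # explicit Euler integration
        i = 1.0 - s - r
        # For Poisson degrees, psi'(theta)/psi'(1) = exp(lam*(theta-1))
        dtheta = (-beta * theta
                  + beta * math.exp(lam * (theta - 1.0))
                  + gamma * (1.0 - theta))
        theta += dt * dtheta
        r += dt * gamma * i
        s = math.exp(lam * (theta - 1.0))
    return s, 1.0 - s - r, r

s_inf, i_inf, r_inf = ebcm_sir()
print(round(r_inf, 3))   # final epidemic size (large for these parameters)
```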

  7. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
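The core idea, maximizing the probability that the outcome exceeds a threshold of acceptability rather than the expected outcome, can be illustrated numerically for the normal case. The two-action setup and its means and standard deviations below are hypothetical; the paper instead derives analytical expressions for the optimal allocation.

```python
import math

# Hedged sketch: with normally distributed returns per management
# action, choose the budget fraction x that maximizes
# P(outcome > threshold).  Means/SDs and the two-action setup are
# illustrative, not taken from the paper.

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_above(x, threshold, mu=(1.0, 0.6), sd=(0.8, 0.2)):
    """Allocate fraction x to action 1, 1-x to action 2 (independent)."""
    mean = x * mu[0] + (1 - x) * mu[1]
    var = (x * sd[0]) ** 2 + ((1 - x) * sd[1]) ** 2
    return 1.0 - phi((threshold - mean) / math.sqrt(var))

# Low aspirations favour diversification (interior optimum); high
# aspirations push the whole budget to the high-mean, high-variance action.
best_low = max(range(101), key=lambda k: prob_above(k / 100, 0.3)) / 100
best_high = max(range(101), key=lambda k: prob_above(k / 100, 1.5)) / 100
print(best_low, best_high)
```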

  8. A methodology for incorporating geomechanically-based fault damage zones models into reservoir simulation

    NASA Astrophysics Data System (ADS)

    Paul, Pijush Kanti

In the fault damage zone modeling study for a field in the Timor Sea, I present a methodology to incorporate geomechanically-based fault damage zones into reservoir simulation. In the studied field, production history suggests that the mismatch between actual production and model prediction is due to preferential fluid flow through the damage zones associated with the reservoir scale faults, which is not included in the baseline petrophysical model. I analyzed well data to estimate stress heterogeneity and fracture distributions in the reservoir. Image logs show that stress orientations are homogeneous at the field scale with a strike-slip/normal faulting stress regime and maximum horizontal stress oriented in the NE-SW direction. Observed fracture zones in wells are mostly associated with well scale faults and bed boundaries. These zones do not show any anomalies in production logs or well test data, because most of the fractures are not optimally oriented to the present day stress state, and matrix permeability is high enough to mask any small anomalies from the fracture zones. However, I found that fracture density increases towards the reservoir scale faults, indicating high fracture density zones or damage zones close to these faults, which is consistent with the preferred flow direction indicated by interference and tracer tests conducted between the wells. It is well known from geologic studies that there is a concentration of secondary fractures and faults in a damage zone adjacent to larger faults. Because there is usually inadequate data to incorporate damage zone fractures and faults into reservoir simulation models, in this study I utilized the principles of dynamic rupture propagation from earthquake seismology to predict the nature of fractured/damage zones associated with reservoir scale faults. The implemented workflow can be used to more routinely incorporate damage zones into reservoir simulation models. 
Applying this methodology to a real reservoir utilizing

  9. Incorporating and Compensating Cerebrospinal Fluid in Surface-Based Forward Models of Magneto- and Electroencephalography

    PubMed Central

    Stenroos, Matti; Nummenmaa, Aapo

    2016-01-01

    MEG/EEG source imaging is usually done using a three-shell (3-S) or a simpler head model. Such models omit cerebrospinal fluid (CSF) that strongly affects the volume currents. We present a four-compartment (4-C) boundary-element (BEM) model that incorporates the CSF and is computationally efficient and straightforward to build using freely available software. We propose a way for compensating the omission of CSF by decreasing the skull conductivity of the 3-S model, and study the robustness of the 4-C and 3-S models to errors in skull conductivity. We generated dense boundary meshes using MRI datasets and automated SimNIBS pipeline. Then, we built a dense 4-C reference model using Galerkin BEM, and 4-C and 3-S test models using coarser meshes and both Galerkin and collocation BEMs. We compared field topographies of cortical sources, applying various skull conductivities and fitting conductivities that minimized the relative error in 4-C and 3-S models. When the CSF was left out from the EEG model, our compensated, unbiased approach improved the accuracy of the 3-S model considerably compared to the conventional approach, where CSF is neglected without any compensation (mean relative error < 20% vs. > 40%). The error due to the omission of CSF was of the same order in MEG and compensated EEG. EEG has, however, large overall error due to uncertain skull conductivity. Our results show that a realistic 4-C MEG/EEG model can be implemented using standard tools and basic BEM, without excessive workload or computational burden. If the CSF is omitted, compensated skull conductivity should be used in EEG. PMID:27472278

  11. Incorporating Social Anxiety Into a Model of College Problem Drinking: Replication and Extension

    PubMed Central

    Ham, Lindsay S.; Hope, Debra A.

    2009-01-01

    Although research has found an association between social anxiety and alcohol use in noncollege samples, results have been mixed for college samples. College students face many novel social situations in which they may drink to reduce social anxiety. In the current study, the authors tested a model of college problem drinking, incorporating social anxiety and related psychosocial variables among 228 undergraduate volunteers. According to structural equation modeling (SEM) results, social anxiety was unrelated to alcohol use and was negatively related to drinking consequences. Perceived drinking norms mediated the social anxiety–alcohol use relation and was the variable most strongly associated with problem drinking. College students appear to be unique with respect to drinking and social anxiety. Although the notion of social anxiety alone as a risk factor for problem drinking was unsupported, additional research is necessary to determine whether there is a subset of socially anxious students who have high drinking norms and are in need of intervention. PMID:16938075

  12. Runoff Modelling of the Khumbu Glacier, Nepal: Incorporating Debris Cover and Retreat Dynamics.

    NASA Astrophysics Data System (ADS)

    Douglas, James; Huss, Matthias; Jones, Julie; Swift, Darrel; Salerno, Franco

    2016-04-01

Detailed studies on the future evolution and runoff of glaciers in high mountain Asia are scarce considering the region is so reliant on this essential water source. This study adapts a model well-proven in the European Alps, the Glacier Evolution and Runoff Model (GERM), to simulate the behaviour of the Khumbu glacier, Nepal. GERM calculates glacier mass balance and runoff using a distributed temperature index model which has been modified such that the unique dynamics of debris covered glaciers, namely stagnation, thinning, and melt-inhibiting debris surfaces, are incorporated. Debris thickness is derived from both remote sensing and model based approaches allowing a suite of experiments to be conducted using various levels of debris cover. The model is driven by CORDEX-South Asia regional climate model (RCM) simulations, bias corrected using a quantile mapping technique based on in-situ data from the Pyramid meteorological station. Here, results are presented showing the retreat of the Khumbu glacier and the corresponding changes for annual and seasonal discharge until 2100, using varying melt parameters and debris thicknesses to assess the impact of debris cover on glacier evolution and runoff.
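The melt-inhibiting effect of debris in a temperature-index scheme can be sketched roughly as below; the degree-day factor and the hyperbolic (Ostrem-curve-like) reduction are illustrative assumptions, not GERM's actual parameterization.

```python
# Illustrative degree-day melt with a debris-cover reduction factor, in
# the spirit of a modified temperature-index model.  The degree-day
# factor ddf_ice and the threshold d_star are hypothetical values.

def daily_melt(temp_c, debris_m, ddf_ice=7.0, d_star=0.05):
    """Melt (mm w.e./day): degree-day melt scaled down by debris thickness.

    Debris thinner than d_star is treated as neutral; thicker debris
    suppresses melt hyperbolically (Ostrem-curve-like assumption).
    """
    positive_deg = max(temp_c, 0.0)
    reduction = 1.0 if debris_m <= d_star else d_star / debris_m
    return ddf_ice * positive_deg * reduction

print(round(daily_melt(5.0, 0.0), 2))   # clean ice: 35.0 mm w.e.
print(round(daily_melt(5.0, 0.5), 2))   # thick debris: 3.5 mm w.e.
```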

  13. Incorporation of 3D Shortwave Radiative Effects within the Weather Research and Forecasting Model

    SciTech Connect

    O'Hirok, W.; Ricchiazzi, P.; Gautier, C.

    2005-03-18

A principal goal of the Atmospheric Radiation Measurement (ARM) Program is to understand the 3D cloud-radiation problem from scales ranging from the local to the size of global climate model (GCM) grid squares. For climate models using typical cloud overlap schemes, 3D radiative effects are minimal for all but the most complicated cloud fields. However, with the introduction of "superparameterization" methods, where sub-grid cloud processes are accounted for by embedding high resolution 2D cloud system resolving models within a GCM grid cell, the impact of 3D radiative effects on the local scale becomes increasingly relevant (Randall et al. 2003). In a recent study, we examined this issue by comparing the heating rates produced from a 3D and 1D shortwave radiative transfer model for a variety of radar derived cloud fields (O'Hirok and Gautier 2005). As demonstrated in Figure 1, the heating rate differences for a large convective field can be significant where 3D effects produce areas of intense local heating. This finding, however, does not address the more important question of whether 3D radiative effects can alter the dynamics and structure of a cloud field. To investigate that issue we have incorporated a 3D radiative transfer algorithm into the Weather Research and Forecasting (WRF) model. Here, we present very preliminary findings of a comparison between cloud fields generated from a high resolution non-hydrostatic mesoscale numerical weather model using 1D and 3D radiative transfer codes.

  14. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. To include queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities, to incorporate waiting lines and queues in the decision-analytic modeling example. PMID:20345550
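A toy discrete event simulation in the tutorial's spirit, a single treatment resource with exponential interarrival and service times (an M/M/1 queue), shows how waiting times emerge from limited capacity; the rates are illustrative, not the tutorial's stent-placement example.

```python
import random

# Toy discrete-event sketch: patients arrive at a single treatment
# resource and accumulate waiting time whenever the server is busy.
# With lambda = 0.8 and mu = 1.0, M/M/1 theory gives a mean queue wait
# Wq = rho/(mu - lambda) = 4.0, which the simulation should approach.

def mm1_waits(arrival_rate=0.8, service_rate=1.0, n=10000, seed=42):
    rng = random.Random(seed)
    t, server_free_at, waits = 0.0, 0.0, []
    for _ in range(n):
        t += rng.expovariate(arrival_rate)      # next arrival event
        start = max(t, server_free_at)          # wait if server is busy
        waits.append(start - t)
        server_free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

print(round(mm1_waits(), 2))   # close to the theoretical Wq = 4.0
```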

  15. An agent-based model of stock markets incorporating momentum investors

    NASA Astrophysics Data System (ADS)

    Wei, J. R.; Huang, J. P.; Hui, P. M.

    2013-06-01

    It has been widely accepted that there exist investors who adopt momentum strategies in real stock markets. Understanding the momentum behavior is of both academic and practical importance. For this purpose, we propose and study a simple agent-based model of trading incorporating momentum investors and random investors. The random investors trade randomly all the time. The momentum investors could be idle, buying or selling, and they decide on their action by implementing an action threshold that assesses the most recent price movement. The model is able to reproduce some of the stylized facts observed in real markets, including the fat-tails in returns, weak long-term correlation and scaling behavior in the kurtosis of returns. An analytic treatment of the model relates the model parameters to several quantities that can be extracted from real data sets. To illustrate how the model can be applied, we show that real market data can be used to constrain the model parameters, which in turn provide information on the behavior of momentum investors in different markets.
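A stripped-down sketch of such a model (not the authors' exact formulation): random traders buy or sell with equal probability every step, while momentum traders act only when the latest return exceeds an action threshold, trading with the trend; all parameter values are illustrative.

```python
import random

# Minimal agent-based market sketch with random traders and
# threshold-triggered momentum traders.  Price moves with the excess
# demand via a linear impact coefficient.  Parameters are illustrative.

def simulate(n_random=100, n_momentum=50, threshold=0.002,
             steps=2000, impact=0.0005, seed=1):
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(steps):
        ret = ((prices[-1] - prices[-2]) / prices[-2]
               if len(prices) > 1 else 0.0)
        demand = sum(rng.choice((-1, 1)) for _ in range(n_random))
        if abs(ret) > threshold:                # momentum agents act
            demand += n_momentum * (1 if ret > 0 else -1)
        prices.append(prices[-1] * (1.0 + impact * demand))
    return prices

prices = simulate()
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
print(len(prices), min(prices) > 0)
```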

  16. The dilemma of disappearing diatoms: Incorporating diatom dissolution data into palaeoenvironmental modelling and reconstruction

    NASA Astrophysics Data System (ADS)

    Ryves, David B.; Battarbee, Richard W.; Fritz, Sherilyn C.

    2009-01-01

    Taphonomic issues pose fundamental challenges for Quaternary scientists to recover environmental signals from biological proxies and make accurate inferences of past environments. The problem of microfossil preservation, specifically diatom dissolution, remains an important, but often overlooked, source of error in both qualitative and quantitative reconstructions of key variables from fossil samples, especially those using relative abundance data. A first step to tackling this complex issue is establishing an objective method of assessing preservation (here, diatom dissolution) that can be applied by different analysts and incorporated into routine counting strategies. Here, we establish a methodology for assessment of diatom dissolution under standard light microscopy (LM) illustrated with morphological criteria for a range of major diatom valve shapes. Dissolution data can be applied to numerical models (transfer functions) from contemporary samples, and to fossil material to aid interpretation of stratigraphic profiles and taphonomic pathways of individual taxa. Using a surface sediment diatom-salinity training set from the Northern Great Plains (NGP) as an example, we explore a variety of approaches to include dissolution data in salinity inference models indirectly and directly. Results show that dissolution data can improve models, with apparent dissolution-adjusted error (RMSE) up to 15% lower than their unadjusted counterparts. Internal validation suggests improvements are more modest, with bootstrapped prediction errors (RMSEP) up to 10% lower. When tested on a short core from Devils Lake, North Dakota, which has a historical record of salinity, dissolution-adjusted models infer higher values compared to unadjusted models during peak salinity of the 1930s-1940s Dust Bowl but nonetheless significantly underestimate peak values. Site-specific factors at Devils Lake associated with effects of lake level change on taphonomy (preservation and re

  17. Progressive evaluation of incorporating information into a model building process: from scratch to FLEX-TOPO

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Hrachowitz, M.; Fenicia, F.; Gao, H.; Gupta, H. V.; Savenije, H.

    2014-12-01

Although different strategies have demonstrated that incorporation of expert and a priori knowledge can help to improve the realism of models, no systematic strategy has been presented in the literature for constraining the model parameters to be consistent with the (sometimes) patchy understanding of a modeler regarding how the real system might work. Part of the difficulty in doing this is that expert knowledge may not always consist of explicitly quantifiable relationships between physical system characteristics and model parameters; rather, it may consist of conceptual understanding about consistency relationships that must exist between various model parameters or behavioral relationships that must exist among model state variables and/or fluxes. Apart from the aforementioned constraints, a unified strategy for measurement of information content in hierarchical model building seems lacking. First, the model structure is built from its building blocks (control volumes or state variables) as well as interconnecting fluxes (formation of control volumes and fluxes). Second, parameterizations of the model are designed; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. At the final stage the parameter values are quantified. In each step, based on the assumptions made, more and more information is added to the model. In this study we try to construct (based on this hierarchical model building scheme) and constrain the parameters of different conceptual models built on landscape units classified according to their hydrological functions, based on our logical considerations and general lessons from previous studies across the globe, for a Luxembourgish catchment. Based on the results, including our basic understanding of how a system may work into hydrological models appears to be a powerful tool to achieve higher model realism as it leads to models with higher performance. Progressive measurement of performance and uncertainty

  18. Progressive evaluation of incorporating information into a model building process: from scratch to FLEX-TOPO

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Gupta, Hoshin; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Savenije, Hubert

    2015-04-01

Although different strategies have demonstrated that incorporation of expert and a priori knowledge can help to improve the realism of models, no systematic strategy has been presented in the literature for constraining the model parameters to be consistent with the (sometimes) patchy understanding of a modeler regarding how the real system might work. Part of the difficulty in doing this is that expert knowledge may not always consist of explicitly quantifiable relationships between physical system characteristics and model parameters; rather, it may consist of conceptual understanding about consistency relationships that must exist between various model parameters or behavioral relationships that must exist among model state variables and/or fluxes. Apart from the aforementioned constraints, a unified strategy for measurement of information content in hierarchical model building seems lacking. First, the model structure is built from its building blocks (control volumes or state variables) as well as interconnecting fluxes (formation of control volumes and fluxes). Second, parameterizations of the model are designed; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. At the final stage the parameter values are quantified. In each step, based on the assumptions made, more and more information is added to the model. In this study we try to construct (based on this hierarchical model building scheme) and constrain the parameters of different conceptual models built on landscape units classified according to their hydrological functions, based on our logical considerations and general lessons from previous studies across the globe, for a Luxembourgish catchment. Based on the results, including our basic understanding of how a system may work into hydrological models appears to be a powerful tool to achieve higher model realism as it leads to models with higher performance. Progressive measurement of performance and uncertainty

  19. Incorporating advanced combustion models to study power density in diesel engines

    NASA Astrophysics Data System (ADS)

    Lee, Daniel Michael

A new combustion model is presented that can be used to simulate the diesel combustion process. This combustion process is broken into three phases: low temperature ignition kinetics, premixed burn and high temperature diffusion burn. The low temperature ignition kinetics are modeled using the Shell model. For combustion limited by diffusion, a probability density function (PDF) combustion model is utilized. In this model, the turbulent reacting flow is assumed to be an ensemble of locally laminar flamelets. With this methodology, species mass fractions obtained from the solution of laminar flamelet equations can be conditioned to generate a flamelet library. For kinetically limited (premixed) combustion, an Arrhenius rate is used. To transition between the premixed and diffusion burning modes, a transport equation for premixed fuel was implemented. The ratio of fuel in a computational cell that is premixed is used to determine the contribution of each combustion mode. Results show that this combustion model accurately simulates the diesel combustion process. Furthermore, the simulated results are in agreement with the recent conceptual picture of diesel combustion based upon experimental observations. Large eddy simulation (LES) models for momentum exchange and scalar flux were incorporated into the KIVA solver. In this formulation, the turbulent viscosity, μt, is determined as a function of the sub-grid turbulent kinetic energy, which is in turn determined from a one-equation model. The formulation for the scalar transfer coefficient, μs, is similar to that of the turbulent viscosity, yet is made to be consistent with scalar transport. Test cases were run verifying that both momentum and scalar flux can be accurately predicted using LES. Once verified, these LES models were used to simulate the diesel combustion process for a Caterpillar 3400 series engine. Results for the engine simulations were in good agreement with experimental data.
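The one-equation sub-grid closure mentioned above can be sketched as follows: the turbulent viscosity is built from the sub-grid kinetic energy and the filter width, μt = Cν ρ Δ √k_sgs. The constant Cν = 0.067 is a commonly quoted value for k-equation SGS models, used here illustratively rather than as the value in this work.

```python
# Sketch of a one-equation sub-grid closure: mu_t is formed from the
# resolved filter width and the modeled sub-grid kinetic energy.
# C_nu = 0.067 is a commonly quoted constant, used illustratively.

def turbulent_viscosity(rho, delta, k_sgs, c_nu=0.067):
    """Sub-grid turbulent viscosity (Pa*s): c_nu * rho * delta * sqrt(k_sgs)."""
    return c_nu * rho * delta * k_sgs ** 0.5

# e.g. air-like density, 1 mm filter width, k_sgs = 4 m^2/s^2
print(round(turbulent_viscosity(1.2, 1e-3, 4.0), 8))  # 0.0001608
```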

  20. Incorporating social groups' responses in a descriptive model for second- and higher-order impact identification

    SciTech Connect

    Sutheerawatthana, Pitch; Minato, Takayuki

    2010-02-15

The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).

  1. Statistical integration of tracking and vessel survey data to incorporate life history differences in habitat models.

    PubMed

    Yamamoto, Takashi; Watanuki, Yutaka; Hazen, Elliott L; Nishizawa, Bungo; Sasaki, Hiroko; Takahashi, Akinori

    2015-12-01

    Habitat use is often examined at a species or population level, but patterns likely differ within a species, as a function of the sex, breeding colony, and current breeding status of individuals. Hence, within-species differences should be considered in habitat models when analyzing and predicting species distributions, such as predicted responses to expected climate change scenarios. Also, species' distribution data obtained by different methods (vessel-survey and individual tracking) are often analyzed separately rather than integrated to improve predictions. Here, we fit generalized additive models for Streaked Shearwaters (Calonectris leucomelas) using tracking data from two different breeding colonies in the Northwestern Pacific and visual observer data collected during a research cruise off the coast of western Japan. The tracking-based models showed differences among patterns of relative density distribution as a function of life history category (colony, sex, and breeding conditions). The integrated tracking-based and vessel-based bird count model incorporated ecological states rather than predicting a single surface for the entire species. This study highlights both the importance of including ecological and life history data and integrating multiple data types (tag-based tracking and vessel count) when examining species-environment relationships, ultimately advancing the capabilities of species distribution models. PMID:26910963

  2. Incorporation of a Chemical Kinetics Model for Composition B in a Parallel Finite-Element Algorithm

    NASA Astrophysics Data System (ADS)

    Kallman, Elizabeth; Pauler, Denise

    2009-06-01

    A thermal degradation model for Composition B (Comp B) explosive is being evaluated for incorporation into a finite-element algorithm [1]. The RDX component of Comp B dominates the thermal degradation since its decomposition process occurs at lower temperatures than TNT. The model assumes that solid and liquid RDX decompose by the same mechanisms, but along different reaction pathways [2, 3]. A steady-state approximation is applied to the gaseous intermediates and is compared to the full transient analysis for the entire reaction scheme. The parallel finite-element algorithm is used to predict the pressure increase on the interior of the metal casing of confined Comp B due to the production of gases during thermal decomposition. References: [1] E. M. Kallman, "Scalable Cluster-Based Galerkin Analysis for Kinetics Models of Energetic Materials," SIAM CSE, March 2-6, 2009. [2] D. K. Zerkle, "Composition B Decomposition and Ignition Model," 13th International Detonation Symposium, July 23-28, 2006. [3] J. M. Zucker, A. J. Barra, D. K. Zerkle, M. J. Kaneshige and P. M. Dickson, "Thermal Decomposition Models for High Explosive Compositions," 14th APS Topical Conference on Shock Compression of Condensed Matter, July 31-August 5, 2005.

  3. A transient electrochemical model incorporating the Donnan effect for all-vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Lei, Y.; Zhang, B. W.; Bai, B. F.; Zhao, T. S.

    2015-12-01

    In a typical all-vanadium redox flow battery (VRFB), the ion exchange membrane is directly exposed to the bulk electrolyte. Consequently, the Donnan effect occurs at the membrane/electrolyte (M/E) interfaces, which is critical for modeling of ion transport through the membrane and the prediction of cell performance. However, unrealistic assumptions in previous VRFB models, such as electroneutrality and discontinuities of ionic potential and ion concentrations at the M/E interfaces, lead to simulated results inconsistent with the theoretical analysis of ion adsorption in the membrane. To address this issue, this work proposes a continuous Donnan-effect model using the Poisson equation coupled with the Nernst-Planck equation to describe variable distributions at the M/E interfaces. A one-dimensional transient VRFB model incorporating the Donnan effect is developed. It is demonstrated that the present model enables (i) a more realistic simulation of continuous distributions of ion concentrations and ionic potential throughout the membrane and (ii) a more comprehensive estimate of the effect of the fixed charge concentration on species crossover across the membrane and cell performance.
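    The interfacial condition at issue can be illustrated with a simple Donnan-equilibrium calculation for a 1:1 electrolyte against a membrane carrying fixed negative charge. The concentrations below are hypothetical, and the abstract's full model solves the coupled Poisson-Nernst-Planck equations rather than this equilibrium limit:

```python
import math

F, R, T = 96485.0, 8.314, 298.15   # Faraday (C/mol), gas constant (J/(mol K)), K

def donnan_potential(c_bulk, c_fixed):
    """Donnan potential (V) at a membrane/electrolyte interface for a 1:1
    electrolyte. Membrane-side concentrations follow
    c_m = c_s * exp(-z*F*phi/(R*T)); phi is found by bisection on the
    dimensionless potential u = F*phi/(R*T) so that the mobile ions plus the
    fixed negative charge are electroneutral inside the membrane."""
    def charge(u):
        return c_bulk * math.exp(-u) - c_bulk * math.exp(u) - c_fixed
    lo, hi = -50.0, 50.0           # charge(u) is strictly decreasing in u
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if charge(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * R * T / F

# Hypothetical values (mol/m^3): ~1 M electrolyte, Nafion-like fixed charge
phi_donnan = donnan_potential(c_bulk=1000.0, c_fixed=1200.0)
```

    For these numbers the closed-form solution is phi = -(RT/F)*asinh(c_fixed/(2*c_bulk)); the negative sign reflects cation enrichment in the negatively charged membrane.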

  4. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
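    The backward recursion underlying such an SDP can be sketched with a toy reservoir whose inflow depends on a three-state climate regime. The storage grid, regime-conditional inflows, transition matrix and deficit penalty below are all invented for illustration; a full implementation would also condition on antecedent inflow and infer the hidden regime from climate indices:

```python
import itertools

STORAGE = range(0, 6)                        # discretised storage levels
CLIMATES = ("dry", "normal", "wet")
INFLOW = {"dry": 0, "normal": 1, "wet": 3}   # assumed inflow per regime
P = {"dry":    {"dry": 0.6, "normal": 0.3, "wet": 0.1},
     "normal": {"dry": 0.2, "normal": 0.6, "wet": 0.2},
     "wet":    {"dry": 0.1, "normal": 0.3, "wet": 0.6}}
DEMAND, S_MAX = 2, 5

def benefit(release):
    return -max(DEMAND - release, 0) ** 2    # quadratic supply-deficit penalty

def sdp(horizon=50):
    """Backward recursion over (storage, climate) states; returns the
    first-period release policy."""
    V = {(s, c): 0.0 for s in STORAGE for c in CLIMATES}
    policy = {}
    for _ in range(horizon):
        V_new = {}
        for s, c in itertools.product(STORAGE, CLIMATES):
            best = None
            for r in range(0, s + INFLOW[c] + 1):       # feasible releases
                s_next = min(s + INFLOW[c] - r, S_MAX)  # spill above S_MAX
                val = benefit(r) + sum(P[c][c2] * V[(s_next, c2)]
                                       for c2 in CLIMATES)
                if best is None or val > best:
                    best, policy[(s, c)] = val, r
            V_new[(s, c)] = best
        V = V_new
    return policy

pol = sdp()
```

    Conditioning the value function on the climate state is what lets the policy hedge releases ahead of a likely dry spell.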

  5. Current plate velocities relative to the hotspots incorporating the NUVEL-1 global plate motion model

    SciTech Connect

    Gripp, A.E.; Gordon, R.G.

    1990-07-01

    NUVEL-1 is a new global model of current relative plate velocities which differ significantly from those of prior models. Here the authors incorporate NUVEL-1 into HS2-NUVEL1, a new global model of plate velocities relative to the hotspots. HS2-NUVEL1 was determined from the hotspot data and errors used by Minster and Jordan (1978) to determine AM1-2, which is their model of plate velocities relative to the hotspots. AM1-2 is consistent with Minster and Jordan's relative plate velocity model RM2. Here the authors compare HS2-NUVEL1 with AM1-2 and examine how their differences relate to differences between NUVEL-1 and RM2. HS2-NUVEL1 plate velocities relative to the hotspots are mainly similar to those of AM1-2. Minor differences between the two models include the following: (1) in HS2-NUVEL1 the speed of the partly continental, apparently non-subducting Indian plate is greater than that of the purely oceanic, subducting Nazca plate; (2) in places the direction of motion of the African, Antarctic, Arabian, Australian, Caribbean, Cocos, Eurasian, North American, and South American plates differs between models by more than 10°; (3) in places the speed of the Australian, Caribbean, Cocos, Indian, and Nazca plates differs between models by more than 8 mm/yr. Although 27 of the 30 RM2 Euler vectors differ with 95% confidence from those of NUVEL-1, only the AM1-2 Arabia-hotspot and India-hotspot Euler vectors differ with 95% confidence from those of HS2-NUVEL1. Thus, substituting NUVEL-1 for RM2 in the inversion for plate velocities relative to the hotspots changes few Euler vectors significantly, presumably because the uncertainty in the velocity of a plate relative to the hotspots is much greater than the uncertainty in its velocity relative to other plates.

  6. INCORPORATING SINGLE NUCLEOTIDE POLYMORPHISMS INTO THE LYMAN MODEL TO IMPROVE PREDICTION OF RADIATION PNEUMONITIS

    PubMed Central

    Tucker, Susan L.; Li, Minghuan; Xu, Ting; Gomez, Daniel; Yuan, Xianglin; Yu, Jinming; Liu, Zhensheng; Yin, Ming; Guan, Xiaoxiang; Wang, Li-E; Wei, Qingyi; Mohan, Radhe; Vinogradskiy, Yevgeniy; Martel, Mary; Liao, Zhongxing

    2012-01-01

    Purpose To determine whether single nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor beta, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiotherapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (Grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate (MV) analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the MV model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk versus <10% risk of RP when exposed to high MLDs. PMID:22541966
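    The LKB dose-response with dose-modifying factors can be sketched as follows. The TD50, m and factor values are illustrative placeholders, not the study's fitted estimates:

```python
import math

def ntcp_lkb(mld, td50=30.0, m=0.4, dose_modifying_factors=()):
    """Lyman (LKB) normal-tissue complication probability as a function of
    mean lung dose (Gy): NTCP = Phi((MLD - TD50) / (m * TD50)), with Phi the
    standard normal CDF. Each dose-modifying factor divides the effective
    TD50, shifting the dose-response curve toward higher risk; td50, m and
    the example factors are assumed values for illustration."""
    for dmf in dose_modifying_factors:
        td50 /= dmf
    t = (mld - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

risk_baseline = ntcp_lkb(20.0)                                    # no risk factors
risk_carrier = ntcp_lkb(20.0, dose_modifying_factors=(1.3, 1.2))  # two risk SNPs
```

    Stacking a few modest factors multiplies into a large shift of the curve, which is how a handful of SNPs can separate high-risk from low-risk cohorts at the same MLD.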

  7. Incorporating Single-nucleotide Polymorphisms Into the Lyman Model to Improve Prediction of Radiation Pneumonitis

    SciTech Connect

    Tucker, Susan L.; Li Minghuan; Xu Ting; Gomez, Daniel; Yuan Xianglin; Yu Jinming; Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi; Mohan, Radhe; Vinogradskiy, Yevgeniy; Martel, Mary; Liao Zhongxing

    2013-01-01

    Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.

  8. A Neural Population Model Incorporating Dopaminergic Neurotransmission during Complex Voluntary Behaviors

    PubMed Central

    Simonyan, Kristina

    2014-01-01

    Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number of extensions of

  9. Petroacoustic Modelling of Heterolithic Sandstone Reservoirs: A Novel Approach to Gassmann Modelling Incorporating Sedimentological Constraints and NMR Porosity data

    NASA Astrophysics Data System (ADS)

    Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.

    2012-12-01

    Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas - Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
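    Gassmann's (1951) equation itself is compact enough to sketch directly. The moduli below are illustrative round numbers (quartz-like mineral, brine vs gas pore fill), not values from the study:

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's (1951) fluid substitution:
    K_sat = K_dry + (1 - K_dry/K_min)^2 /
            (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2).
    Assumes a pore space in full pressure communication with movable fluid,
    the assumption discussed above."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative moduli (GPa) and porosity
k_brine = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=2.8, phi=0.25)
k_gas = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=0.1, phi=0.25)
```

    The sensitivity of K_sat to K_dry in this formula is why poorly constrained drained-frame moduli, as in heterolithic sandstones, propagate directly into fluid-substitution errors.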

  10. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat
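    The interval-and-inequality rejection step can be sketched with a toy filter: each candidate run is summarised by its outlet discharge and groundwater levels at two landscape units, and is kept only if it lies inside the discharge interval and satisfies the perceptual inequality. All values, variable names and thresholds are hypothetical:

```python
candidates = [
    {"id": 1, "discharge": 2.1, "gw_lowland": 0.9, "gw_upland": 0.4},
    {"id": 2, "discharge": 2.0, "gw_lowland": 0.3, "gw_upland": 0.7},  # wrong spatial pattern
    {"id": 3, "discharge": 3.5, "gw_lowland": 0.8, "gw_upland": 0.5},  # discharge outside interval
]

def behavioural(run, q_interval=(1.8, 2.4)):
    """Keep a run only if (i) outlet discharge falls in the acceptability
    interval and (ii) the perceptual inequality holds (lowland peat wetter
    than upland peat)."""
    lo, hi = q_interval
    return lo <= run["discharge"] <= hi and run["gw_lowland"] > run["gw_upland"]

kept = [run["id"] for run in candidates if behavioural(run)]
```

    Intersecting many such constraints is what carves out the hyper-volume of behavioural models described above.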

  11. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    SciTech Connect

    Li, Zhen; Bian, Xin; Karniadakis, George Em; Li, Xiantao

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
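    The Markovian vs non-Markovian contrast can be illustrated on a deterministic relaxation problem (fluctuating forces omitted): the generalized-Langevin friction is a convolution with a memory kernel, while the Markovian limit replaces it with instantaneous drag of equal integrated strength. The exponential kernel and all parameters below are invented for illustration, not extracted from the MD simulations:

```python
import math

def decay_markovian(gamma=1.0, dt=0.01, steps=200):
    """Markovian limit: dv/dt = -gamma * v (explicit Euler)."""
    traj = [1.0]
    for _ in range(steps):
        traj.append(traj[-1] - gamma * traj[-1] * dt)
    return traj

def decay_gle(gamma=1.0, tau=0.5, dt=0.01, steps=200):
    """Non-Markovian: dv/dt = -integral_0^t K(t-s) v(s) ds with
    K(t) = (gamma/tau) * exp(-t/tau), so that int_0^inf K dt = gamma."""
    hist = [1.0]
    for n in range(steps):
        t_n = n * dt
        conv = sum((gamma / tau) * math.exp(-(t_n - i * dt) / tau) * hist[i] * dt
                   for i in range(n + 1))
        hist.append(hist[-1] - conv * dt)
    return hist

vm = decay_markovian()
vg = decay_gle()
```

    At short times the non-Markovian trajectory decays more slowly because the convolution friction has not yet built up, mirroring the short-time response differences the abstract attributes to NM-DPD.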

  12. Quantification of sequential chlorinated ethene degradation by use of a reactive transport model incorporating isotope fractionation.

    PubMed

    Van Breukelen, Boris M; Hunkeler, Daniel; Volkering, Frank

    2005-06-01

    Compound-specific isotope analysis (CSIA) enables quantification of biodegradation by use of the Rayleigh equation. The Rayleigh equation fails, however, to describe the sequential degradation of chlorinated aliphatic hydrocarbons (CAHs) involving various intermediates that are controlled by simultaneous degradation and production. This paper shows how isotope fractionation during sequential degradation can be simulated in a 1D reactive transport code (PHREEQC-2). ¹²C and ¹³C isotopes of each CAH were simulated as separate species, and the ratio of the rate constants of the heavy to light isotope equaled the kinetic isotope fractionation factor for each degradation step. The developed multistep isotope fractionation reactive transport model (IF-RTM) adequately simulated reductive dechlorination of tetrachloroethene (PCE) to ethene in a microcosm experiment. Transport scenarios were performed to evaluate the effect of sorption and of different degradation rate constant ratios among CAH species on the downgradient isotope evolution. The power of the model to quantify degradation is illustrated for situations where mixed sources degrade and for situations where daughter products are removed by oxidative processes. Finally, the model was used to interpret the occurrence of reductive dechlorination at a field site. The developed methodology can easily be incorporated in 3D solute transport models to enable quantification of sequential CAH degradation in the field by CSIA. PMID:15984799
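    The bookkeeping the paper describes, light and heavy isotope pools of each compound as separate species with rate constants in the ratio of the fractionation factor, can be sketched for a short dechlorination chain. Rate constants, fractionation factors and the time grid are illustrative only:

```python
# Sequential first-order dechlorination PCE -> TCE -> cDCE, with the 12C and
# 13C pools of each compound tracked as separate species; the heavy-isotope
# rate constant is alpha * k, so the rate-constant ratio equals the kinetic
# fractionation factor for each step.
k = {"PCE": 0.05, "TCE": 0.03}           # first-order rates (1/day), assumed
alpha = {"PCE": 0.995, "TCE": 0.990}     # kinetic fractionation factors, assumed

c12 = {"PCE": 1.0, "TCE": 0.0, "cDCE": 0.0}
c13 = {"PCE": 0.011, "TCE": 0.0, "cDCE": 0.0}   # ~natural 13C/12C ratio
r0 = c13["PCE"] / c12["PCE"]

dt, steps = 0.01, 5000                   # 50 days of explicit Euler stepping
for _ in range(steps):
    for parent, child in (("PCE", "TCE"), ("TCE", "cDCE")):
        d12 = k[parent] * c12[parent] * dt
        d13 = alpha[parent] * k[parent] * c13[parent] * dt
        c12[parent] -= d12; c12[child] += d12
        c13[parent] -= d13; c13[child] += d13

f = c12["PCE"] / 1.0                     # remaining fraction of the parent
ratio = (c13["PCE"] / c12["PCE"]) / r0   # isotope-ratio enrichment R/R0
```

    For the parent compound this reproduces the Rayleigh relation R/R0 = f^(alpha - 1); the value of the reactive-transport formulation is that the same bookkeeping remains valid for the intermediates, for which the closed-form Rayleigh equation fails.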

  13. Incorporating seismic phase correlations into a probabilistic model of global-scale seismology

    NASA Astrophysics Data System (ADS)

    Arora, Nimar

    2013-04-01

    We present a probabilistic model of seismic phases whereby the attributes of the body-wave phases are correlated to those of the first-arriving P phase. This model has been incorporated into NET-VISA (Network processing Vertically Integrated Seismic Analysis), a probabilistic generative model of seismic events, their transmission, and detection on a global seismic network. In the earlier version of NET-VISA, seismic phases were assumed to be independent of each other. Although this did not, for the most part, affect the quality of the inferred seismic bulletin, it did result in a few instances of anomalous phase association, for example, an S phase with a smaller slowness than the corresponding P phase. We demonstrate that the phase attributes are indeed highly correlated; for example, the uncertainty in the S phase travel time is significantly reduced given the P phase travel time. Our new model exploits these correlations to produce better calibrated probabilities for the events, as well as fewer anomalous associations.
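    The variance reduction from conditioning on the P phase can be illustrated with a bivariate-Gaussian sketch. The residual model and all numbers below are hypothetical, not NET-VISA's fitted distributions:

```python
import math

def conditional_s_residual(p_residual, sigma_p, sigma_s, rho):
    """If the S- and P-phase travel-time residuals are modelled as jointly
    Gaussian with correlation rho, conditioning on the observed P residual
    gives mean rho*(sigma_s/sigma_p)*p_residual and shrinks the S
    uncertainty to sigma_s * sqrt(1 - rho**2)."""
    mean = rho * (sigma_s / sigma_p) * p_residual
    sd = sigma_s * math.sqrt(1.0 - rho ** 2)
    return mean, sd

mean_s, sd_s = conditional_s_residual(p_residual=1.5, sigma_p=1.0,
                                      sigma_s=2.0, rho=0.8)
```

    With rho = 0.8 the conditional standard deviation is 60% of the marginal one, which is the kind of tightening of the S travel-time distribution the abstract reports.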

  14. Incorporation of parametric factors into multilinear receptor model studies of Atlanta aerosol

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Hopke, Philip K.; Paatero, Pentti; Edgerton, Eric S.

    In prior work with simulated data, ancillary variables including time-resolved wind data were utilized in a multilinear model to successfully reduce rotational ambiguity and increase the number of resolved sources. In this study, time-resolved wind and other data were incorporated into a model for the analysis of real measurement data. Twenty-four hour integrated PM2.5 (particulate matter ⩽2.5 μm in aerodynamic diameter) compositional data were measured in Atlanta, GA between August 1998 and August 2000 (662 samples). A two-stage model that utilized 22 elemental species, two wind variables, and three time variables was used for this analysis. The model identified nine sources: sulfate-rich secondary aerosol I (54%), gasoline exhaust (15%), diesel exhaust (11%), nitrate-rich secondary aerosol (9%), metal processing (3%), wood smoke (3%), airborne soil (2%), sulfate-rich secondary aerosol II (2%), and the mixture of a cement kiln with a carbon-rich source (0.9%). The results of this study indicate that utilizing time-resolved wind measurements helps to separate diesel exhaust from gasoline vehicle exhaust. For most of the sources, well-defined directional profiles, seasonal trends, and weekend effects were obtained.

  15. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism.

    PubMed

    Li, Zhen; Bian, Xin; Li, Xiantao; Karniadakis, George Em

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model. PMID:26723613

  16. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bian, Xin; Li, Xiantao; Karniadakis, George Em

    2015-12-01

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.

  17. Lifetime growth in wild meerkats: incorporating life history and environmental factors into a standard growth model.

    PubMed

    English, Sinéad; Bateman, Andrew W; Clutton-Brock, Tim H

    2012-05-01

    Lifetime records of changes in individual size or mass in wild animals are scarce and, as such, few studies have attempted to model variation in these traits across the lifespan or to assess the factors that affect them. However, quantifying lifetime growth is essential for understanding trade-offs between growth and other life history parameters, such as reproductive performance or survival. Here, we used model selection based on information theory to measure changes in body mass over the lifespan of wild meerkats, and compared the relative fits of several standard growth models (monomolecular, von Bertalanffy, Gompertz, logistic and Richards). We found that meerkats exhibit monomolecular growth, with the best model incorporating separate growth rates before and after nutritional independence, as well as effects of season and total rainfall in the previous nine months. Our study demonstrates how simple growth curves may be improved by considering life history and environmental factors, which may be particularly relevant when quantifying growth patterns in wild populations. PMID:22108854
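    The best-fitting structure described above, monomolecular growth with separate rates before and after nutritional independence, can be sketched as a piecewise curve matched continuously at the switch point. All parameter values are illustrative, not the fitted meerkat estimates:

```python
import math

def monomolecular(t, a, m0, k):
    """Monomolecular growth toward asymptote a: m(t) = a - (a - m0)*exp(-k*t)."""
    return a - (a - m0) * math.exp(-k * t)

def mass_piecewise(t, a, m0, k_dep, k_indep, t_indep):
    """Separate growth rates before and after nutritional independence,
    matched continuously at t_indep (days)."""
    if t <= t_indep:
        return monomolecular(t, a, m0, k_dep)
    m_switch = monomolecular(t_indep, a, m0, k_dep)
    return monomolecular(t - t_indep, a, m_switch, k_indep)

mass_30d = mass_piecewise(30, a=730.0, m0=30.0,
                          k_dep=0.004, k_indep=0.002, t_indep=90)
mass_adult = mass_piecewise(2000, a=730.0, m0=30.0,
                            k_dep=0.004, k_indep=0.002, t_indep=90)
```

    Environmental covariates such as season and prior rainfall would enter such a model as modifiers of the rate k or the asymptote a.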

  18. Incorporating a Full-Physics Meteorological Model into an Applied Atmospheric Dispersion Modeling System

    SciTech Connect

    Berg, Larry K.; Allwine, K Jerry; Rutz, Frederick C.

    2004-08-23

    A new modeling system has been developed to provide a non-meteorologist with tools to predict air pollution transport in regions of complex terrain. This system couples the Penn State/NCAR Mesoscale Model 5 (MM5) with Earth Tech’s CALMET-CALPUFF system using a unique Graphical User Interface (GUI) developed at Pacific Northwest National Laboratory. This system is most useful in data-sparse regions, where there are limited observations to initialize the CALMET model. The user is able to define the domain of interest, provide details about the source term, and enter a surface weather observation through the GUI. The system then generates initial conditions and time-constant boundary conditions for use by MM5. MM5 is run and the results are piped to CALPUFF for the dispersion calculations. Contour plots of pollutant concentration are prepared for the user. The primary advantages of the system are the streamlined application of MM5 and CALMET, limited data requirements, and the ability to run the coupled system on a desktop or laptop computer. In comparison with data collected as part of a field campaign, the new modeling system shows promise that a full-physics mesoscale model can be used in an applied modeling system to effectively simulate locally thermally-driven winds with minimal observations as input. An unexpected outcome of this research was how well CALMET represented the locally thermally-driven flows.

  19. A non-classical Kirchhoff plate model incorporating microstructure, surface energy and foundation effects

    NASA Astrophysics Data System (ADS)

    Gao, X.-L.; Zhang, G. Y.

    2016-03-01

    A new non-classical Kirchhoff plate model is developed using a modified couple stress theory, a surface elasticity theory and a two-parameter elastic foundation model. A variational formulation based on Hamilton's principle is employed, which leads to the simultaneous determination of the equations of motion and the complete boundary conditions and provides a unified treatment of the microstructure, surface energy and foundation effects. The new plate model contains a material length scale parameter to account for the microstructure effect, three surface elastic constants to describe the surface energy effect, and two foundation moduli to represent the foundation effect. The current non-classical plate model reduces to its classical elasticity-based counterpart when the microstructure, surface energy and foundation effects are all suppressed. In addition, the newly developed plate model includes the models considering the microstructure dependence or the surface energy effect or the foundation influence alone as special cases and recovers the Bernoulli-Euler beam model incorporating the microstructure, surface energy and foundation effects. To illustrate the new model, the static bending and free vibration problems of a simply supported rectangular plate are analytically solved by directly applying the general formulas derived. For the static bending problem, the numerical results reveal that the deflection of the simply supported plate with or without the elastic foundation predicted by the current model is smaller than that predicted by the classical model. Also, it is observed that the difference in the deflection predicted by the new and classical plate models is very large when the plate thickness is sufficiently small, but it is diminishing with the increase of the plate thickness. For the free vibration problem, it is found that the natural frequency predicted by the new plate model with or without the elastic foundation is higher than that predicted by the

  20. Improving consumption rate estimates by incorporating wild activity into a bioenergetics model.

    PubMed

    Brodie, Stephanie; Taylor, Matthew D; Smith, James A; Suthers, Iain M; Gray, Charles A; Payne, Nicholas L

    2016-04-01

    Consumption is the basis of metabolic and trophic ecology and is used to assess an animal's trophic impact. The contribution of activity to an animal's energy budget is an important parameter when estimating consumption, yet activity is usually measured in captive animals. Developments in telemetry have allowed the energetic costs of activity to be measured for wild animals; however, wild activity is seldom incorporated into estimates of consumption rates. We calculated the consumption rate of a free-ranging marine predator (yellowtail kingfish, Seriola lalandi) by integrating the energetic cost of free-ranging activity into a bioenergetics model. Accelerometry transmitters were used in conjunction with laboratory respirometry trials to estimate kingfish active metabolic rate in the wild. These field-derived consumption rate estimates were compared with those estimated by two traditional bioenergetics methods. The first method derived routine swimming speed from fish morphology as an index of activity (a "morphometric" method), and the second considered activity as a fixed proportion of standard metabolic rate (a "physiological" method). The mean consumption rate for free-ranging kingfish measured by accelerometry was 152 J·g(-1)·day(-1), which lay between the estimates from the morphometric method (μ = 134 J·g(-1)·day(-1)) and the physiological method (μ = 181 J·g(-1)·day(-1)). Incorporating field-derived activity values resulted in the smallest variance in log-normally distributed consumption rates (σ = 0.31), compared with the morphometric (σ = 0.57) and physiological (σ = 0.78) methods. Incorporating field-derived activity into bioenergetics models probably provided more realistic estimates of consumption rate compared with the traditional methods, which may further our understanding of trophic interactions that underpin ecosystem-based fisheries management. The general methods used to estimate active metabolic rates of free-ranging fish
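
The contrast between the activity treatments can be sketched with a generic energy-balance calculation (not the authors' model): consumption must cover metabolism, activity, and growth, scaled by an assimilation efficiency. All numbers below are illustrative.

```python
# Generic bioenergetics balance (hedged sketch, not the paper's model):
#   C = (standard metabolism + activity cost + growth) / assimilation efficiency
def consumption_rate(smr, activity_cost, growth, assimilation_efficiency=0.7):
    """All energy terms in J·g^-1·day^-1; parameter values are illustrative."""
    return (smr + activity_cost + growth) / assimilation_efficiency

# "Physiological" style: activity taken as a fixed proportion of SMR
c_physiological = consumption_rate(smr=60.0, activity_cost=0.5 * 60.0, growth=20.0)
# "Field" style: activity cost derived from accelerometry (hypothetical value)
c_field = consumption_rate(smr=60.0, activity_cost=25.0, growth=20.0)
```

The point of the paper is that substituting a measured field activity cost for an assumed one changes both the mean and, more importantly, the variance of the resulting consumption estimate.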

  1. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    SciTech Connect

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
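
The effect of adsorbate lateral interactions on event rates can be illustrated with a toy KMC loop (a minimal 1D sketch, not Zacros): the desorption barrier of each adsorbate is shifted by a pairwise nearest-neighbor interaction, and events are selected with the standard Gillespie/BKL procedure. All energies and parameters are made up for illustration.

```python
import math
import random

# Minimal 1D lattice KMC sketch: desorption rate depends on the number of
# occupied nearest neighbors via a pairwise repulsive interaction eps.
random.seed(1)
N, kT = 50, 0.05              # lattice sites, thermal energy (arbitrary units)
nu, E0, eps = 1.0, 0.3, 0.1   # attempt frequency, bare barrier, NN repulsion
lattice = [random.random() < 0.5 for _ in range(N)]  # random initial coverage

def rate(i):
    if not lattice[i]:
        return 0.0
    nn = lattice[(i - 1) % N] + lattice[(i + 1) % N]
    # repulsive neighbors destabilize the adsorbate -> lower effective barrier
    return nu * math.exp(-(E0 - eps * nn) / kT)

t = 0.0
for _ in range(100):
    rates = [rate(i) for i in range(N)]
    R = sum(rates)
    if R == 0.0:
        break                                 # lattice empty, nothing left to do
    # pick an event with probability proportional to its rate, advance the clock
    x, acc = random.random() * R, 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= x:
            lattice[i] = False                # execute the desorption event
            break
    t += -math.log(random.random()) / R
```

A cluster-expansion Hamiltonian of the kind the abstract describes generalizes the single `eps * nn` term to long-range pairs and many-body clusters, which is exactly what makes the rate updates expensive and motivates the OpenMP parallelization.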

  2. Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma

    SciTech Connect

    Ruggieri, Ruggero; Stavreva, Nadejda; Naccarato, Stefania; Stavrev, Pavel

    2012-08-01

    Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.

  3. An integrative model of the cardiac ventricular myocyte incorporating local control of Ca2+ release.

    PubMed Central

    Greenstein, Joseph L; Winslow, Raimond L

    2002-01-01

    The local control theory of excitation-contraction (EC) coupling in cardiac muscle asserts that L-type Ca(2+) current tightly controls Ca(2+) release from the sarcoplasmic reticulum (SR) via local interaction of closely apposed L-type Ca(2+) channels (LCCs) and ryanodine receptors (RyRs). These local interactions give rise to smoothly graded Ca(2+)-induced Ca(2+) release (CICR), which exhibits high gain. In this study we present a biophysically detailed model of the normal canine ventricular myocyte that conforms to local control theory. The model formulation incorporates details of microscopic EC coupling properties in the form of Ca(2+) release units (CaRUs) in which individual sarcolemmal LCCs interact in a stochastic manner with nearby RyRs in localized regions where junctional SR membrane and transverse-tubular membrane are in close proximity. The CaRUs are embedded within and interact with the global systems of the myocyte describing ionic and membrane pump/exchanger currents, SR Ca(2+) uptake, and time-varying cytosolic ion concentrations to form a model of the cardiac action potential (AP). The model can reproduce both the detailed properties of EC coupling, such as variable gain and graded SR Ca(2+) release, and whole-cell phenomena, such as modulation of AP duration by SR Ca(2+) release. Simulations indicate that the local control paradigm predicts stable APs when the L-type Ca(2+) current is adjusted in accord with the balance between voltage- and Ca(2+)-dependent inactivation processes as measured experimentally, a scenario where common pool models become unstable. The local control myocyte model provides a means for studying the interrelationship between microscopic and macroscopic behaviors in a manner that would not be possible in experiments. PMID:12496068

  4. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    SciTech Connect

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir

    2014-11-01

    Passive systems, structures and components (SSCs) degrade over their operating life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, so the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving large numbers of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation

  5. Incorporating Student Mobility in Achievement Growth Modeling: A Cross-Classified Multiple Membership Growth Curve Model

    ERIC Educational Resources Information Center

    Grady, Matthew W.; Beretvas, S. Natasha

    2010-01-01

    Multiple membership random effects models (MMREMs) have been developed for use in situations where individuals are members of multiple higher level organizational units. Despite their availability and the frequency with which multiple membership structures are encountered, no studies have extended the MMREM approach to hierarchical growth curve…

  6. Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics

    SciTech Connect

    Candy, J V; Chambers, D H; Breitfeller, E F; Guidry, B L; Verbeke, J M; Axelrod, M A; Sale, K E; Meyer, A M

    2010-03-02

    The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when the materials are concealed by various types of shielding that significantly complicate the transport physics. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well on data obtained from a controlled experiment.

  7. Evolutionary Models of Super-Earths and Mini-Neptunes Incorporating Cooling and Mass Loss

    NASA Astrophysics Data System (ADS)

    Howe, Alex R.; Burrows, Adam

    2015-08-01

    We construct models of the structural evolution of super-Earth- and mini-Neptune-type exoplanets with H2-He envelopes, incorporating radiative cooling and XUV-driven mass loss. We conduct a parameter study of these models, focusing on initial mass, radius, and envelope mass fractions, as well as orbital distance, metallicity, and the specific prescription for mass loss. From these calculations, we investigate how the observed masses and radii of exoplanets today relate to the distribution of their initial conditions. Orbital distance and the initial envelope mass fraction are the most important factors determining planetary evolution, particularly radius evolution. Initial mass also becomes important below a “turnoff mass,” which varies with orbital distance, with mass-radius curves being approximately flat for higher masses. Initial radius is the least important parameter we study, with very little difference between the hot start and cold start limits after an age of 100 Myr. Model sets with no mass loss fail to produce results consistent with observations, but a plausible range of mass-loss scenarios is allowed. In addition, we present scenarios for the formation of the Kepler-11 planets. Our best fit to observations of Kepler-11b and Kepler-11c involves formation beyond the snow line, after which they moved inward, circularized, and underwent a reduced degree of mass loss.

  8. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  9. Incorporating spatially explicit crown light competition into a model of canopy transpiration

    NASA Astrophysics Data System (ADS)

    Loranty, M. M.; Mackay, D. S.; Roberts, D. E.; Ewers, B. E.; Kruger, E. L.; Traver, E.

    2006-12-01

    Stomatal conductance parameterized in a transpiration model has been shown to vary spatially for aspen (Populus tremuloides) and alder (Alnus incana) growing along a moisture gradient. We hypothesized that competition for light within the canopy would explain some of this variation. Sap flux data were collected over 10 days in 2004 and 30 days in 2005 at a 1.5 ha site near the WLEF AmeriFlux tower in the Chequamegon National Forest near Park Falls, Wisconsin. We used inverse modeling with the Terrestrial Regional Ecosystem Exchange Simulator (TREES) to estimate values of GSref for individual trees. Competition data for individual aspen sampled for sap flux were collected in August 2006. The number, height, DBH, and location of all competitors within 5 meters of each flux tree were recorded. Preliminary geostatistical analysis indicates that the number of competitor trees varies spatially for aspen. We hypothesize that the height and species-specific crown characteristics of competitor trees will have a spatially variable effect on transpiration via light attenuation. Furthermore, a simple light competition term will be able to incorporate this variability into the TREES transpiration model.

  10. A Model for Incorporating Chemical Reactions in Mesoscale Modeling of Laser Ablation of Polymers

    NASA Astrophysics Data System (ADS)

    Garrison, Barbara J.; Yingling, Yaroslava G.

    2004-03-01

    We have developed a methodology for including effects of chemical reactions in coarse-grained computer simulations such as those that use the united atom or bead and spring approximations. The new coarse-grained chemical reaction model (CGCRM) adopts the philosophy of kinetic Monte Carlo approaches and includes a probabilistic element to predicting when reactions occur, thus obviating the need for a chemically correct interaction potential. The CGCRM uses known chemical reactions along with their probabilities and exothermicities for a specific material in order to assess the effect of chemical reactions on a physical process of interest. The reaction event in the simulation is implemented by removing the reactant molecules from the simulation and replacing them with product molecules. The position of the product molecules is carefully adjusted to make sure that the total energy change of the system corresponds to the reaction exothermicity. The CGCR model was initially implemented in simulations of laser irradiation at fluences such that there is ablation or massive removal of material. The initial reaction is photon cleavage of a chemical bond thus creating two radicals that can undergo subsequent abstraction and radical-radical recombination reactions. The talk will discuss application of the model to photoablation of PMMA. Y. G. Yingling, L. V. Zhigilei and B. J. Garrison, J. Photochemistry and Photobiology A: Chemistry, 145, 173-181 (2001); Y. G. Yingling and B. J. Garrison, Chem. Phys. Lett., 364, 237-243 (2002).
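
The probabilistic reaction element of the CGCRM can be illustrated with a toy sketch: each photon absorption cleaves a bond with some quantum yield, replacing the reactant with two radical fragments. The species names, yield, and population below are hypothetical, not the PMMA parameters from the paper.

```python
import random

# Hedged sketch of a CGCRM-style probabilistic reaction step: no reactive
# force field is needed; a reaction fires with a prescribed probability and
# reactants are swapped for products. Values are illustrative.
random.seed(0)
quantum_yield = 0.3                # hypothetical bond-cleavage probability
molecules = ["monomer"] * 1000     # coarse-grained beads that absorbed a photon

products = []
for m in molecules:
    if random.random() < quantum_yield:
        # reaction event: remove the reactant, insert two radical fragments
        products.extend(["radical_A", "radical_B"])
    else:
        products.append(m)

n_cleaved = products.count("radical_A")
```

In the full model, the positions of the inserted product molecules are then adjusted so the system's energy change matches the reaction exothermicity, which is the step that couples the chemistry back to the ablation dynamics.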

  11. MIST: An Open Source Environmental Modelling Programming Language Incorporating Easy to Use Data Parallelism.

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2014-05-01

    Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or more generally whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3x3 local neighbourhood needed to implement an averaging image filter can be simply accessed from within a simple loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and separately to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP. However, parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
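
The loop-to-whole-array translation MIST performs has a familiar analogue in array languages. The sketch below shows, in NumPy rather than MIST, how the 3x3 averaging filter mentioned in the abstract can be expressed as a handful of shifted whole-array additions instead of an explicit per-pixel loop.

```python
import numpy as np

# Whole-array analogue of a 3x3 neighbourhood mean filter: nine shifted copies
# of the padded image are summed, then divided by 9. No per-pixel loop.
def mean_filter_3x3(img):
    padded = np.pad(img, 1, mode="edge")   # replicate edges at the border
    acc = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += padded[1 + di : 1 + di + img.shape[0],
                          1 + dj : 1 + dj + img.shape[1]]
    return acc / 9.0

img = np.arange(16, dtype=float).reshape(4, 4)
smoothed = mean_filter_3x3(img)
```

MIST's contribution, as described above, is to let the programmer keep the per-cell loop notation while the interpreter performs this kind of translation automatically.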

  12. A land use regression model incorporating data on industrial point source pollution.

    PubMed

    Chen, Li; Wang, Yuming; Li, Peiwu; Ji, Yaqin; Kong, Shaofei; Li, Zhiyong; Bai, Zhipeng

    2012-01-01

    Advancing the understanding of the spatial aspects of air pollution in the city regional environment is an area where improved methods can be of great benefit to exposure assessment and policy support. We created land use regression (LUR) models for SO2, NO2 and PM10 for Tianjin, China. Traffic volumes, road networks, land use data, population density, meteorological conditions, physical conditions and satellite-derived greenness, brightness and wetness were used for predicting SO2, NO2 and PM10 concentrations. We incorporated data on industrial point sources to improve LUR model performance. In order to consider the impact of different sources, we calculated the PSIndex, LSIndex and area of different land use types (agricultural land, industrial land, commercial land, residential land, green space and water area) within different buffer radii (1 to 20 km). This method compensates for the standard LUR model's lack of explicit consideration of source impacts. Remote sensing-derived variables were significantly correlated with gaseous pollutant concentrations such as SO2 and NO2. R2 values of the multiple linear regression equations for SO2, NO2 and PM10 were 0.78, 0.89 and 0.84, respectively, and the RMSE values were 0.32, 0.18 and 0.21, respectively. Model predictions at validation monitoring sites agreed well with observations, with predictions generally within 15% of measured values. Compared to the relationship between dependent variables and simple variables (such as traffic variables or meteorological condition variables), the relationship between dependent variables and integrated variables was more consistent with a linear relationship. Such integration has a discernable influence on both the overall model prediction and health effects assessment on the spatial distribution of air pollution in the city region. PMID:23513446
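
An LUR model of the kind described reduces to a multiple linear regression of measured concentrations on land-use predictors. The sketch below fits such a regression on synthetic data; the predictor names, coefficients, and data are illustrative stand-ins, not the Tianjin variables.

```python
import numpy as np

# Hedged LUR-style sketch: NO2 regressed on hypothetical buffer predictors.
rng = np.random.default_rng(0)
n = 60
industrial_area = rng.uniform(0, 5, n)   # km^2 within a hypothetical buffer
traffic = rng.uniform(0, 1, n)           # normalized traffic intensity
greenness = rng.uniform(0, 1, n)         # satellite-derived vegetation index
# Synthetic "measured" concentrations with known coefficients plus noise
no2 = (20.0 + 4.0 * industrial_area + 10.0 * traffic - 6.0 * greenness
       + rng.normal(0, 1.0, n))

# Ordinary least squares via the normal equations (lstsq)
X = np.column_stack([np.ones(n), industrial_area, traffic, greenness])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((no2 - pred) ** 2) / np.sum((no2 - no2.mean()) ** 2)
```

In practice the buffer radius for each source variable (1 to 20 km in the study) is itself chosen by comparing the fit of candidate regressions, and held-out monitoring sites are used for validation.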

  13. View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition.

    PubMed

    Muramatsu, Daigo; Makihara, Yasushi; Yagi, Yasushi

    2016-07-01

    Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference causes degradation of gait recognition accuracy, and so several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from that with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on a given gait feature pair, and causes an inhomogeneously biased dissimilarity score. Because it is well known that normalization of such inhomogeneously biased scores improves recognition accuracy in general, we therefore propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures, and use them to calculate the posterior probability that both gait features originate from the same subjects together with the biased dissimilarity score. The proposed method was evaluated against two gait datasets, a large population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings. PMID:26259209

  14. Incorporating dynamic collimator motion in Monte Carlo simulations: an application in modelling a dynamic wedge.

    PubMed

    Verhaegen, F; Liu, H H

    2001-02-01

    In radiation therapy, new treatment modalities employing dynamic collimation and intensity modulation increase the complexity of dose calculation because a new dimension, time, has to be incorporated into the traditional three-dimensional problem. In this work, we investigated two classes of sampling technique to incorporate dynamic collimator motion in Monte Carlo simulation. The methods were initially evaluated for modelling enhanced dynamic wedges (EDWs) from Varian accelerators (Varian Medical Systems, Palo Alto, USA). In the position-probability-sampling or PPS method, a cumulative probability distribution function (CPDF) was computed for the collimator position, which could then be sampled during simulations. In the static-component-simulation or SCS method, a dynamic field is approximated by multiple static fields in a step-and-shoot fashion. The weights of the particles or the number of particles simulated for each component field are computed from the probability distribution function (PDF) of the collimator position. The CPDF and PDF were computed from the segmented treatment tables (STTs) for the EDWs. An output correction factor had to be applied in this calculation to account for the backscattered radiation affecting monitor chamber readings. Comparison of the phase-space data from the PPS method (with the step-and-shoot motion) with those from the SCS method showed excellent agreement. The accuracy of the PPS method was further verified from the agreement between the measured and calculated dose distributions. Compared to the SCS method, the PPS method is more automated and efficient from an operational point of view. The principle of the PPS method can be extended to simulate other dynamic motions, and in particular, intensity-modulated beams using multileaf collimators. PMID:11229715
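
The core of the PPS method is inverse-transform sampling of the collimator position from a CPDF. The sketch below builds a CPDF from a made-up dwell-weight table (standing in for an STT) and maps uniform random numbers to positions; the positions and weights are illustrative, not real EDW data.

```python
import numpy as np

# Hedged PPS-style sketch: sample collimator positions from a cumulative
# probability distribution by inverse-transform sampling.
positions = np.linspace(0.0, 10.0, 11)                 # jaw positions (cm)
weights = np.arange(1.0, 12.0)                         # hypothetical dwell weights
pdf = weights / weights.sum()                          # normalize to a PDF
cpdf = np.cumsum(pdf)                                  # cumulative distribution

def sample_position(u):
    """Map a uniform random number u in [0, 1) to a collimator position."""
    return positions[np.searchsorted(cpdf, u)]

rng = np.random.default_rng(42)
samples = [sample_position(u) for u in rng.random(10000)]
```

During a Monte Carlo run, each source particle would draw its own collimator position this way, so the time-averaged fluence emerges automatically without splitting the simulation into discrete static fields as the SCS method does.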

  15. Bias in Diet Determination: Incorporating Traditional Methods in Bayesian Mixing Models

    PubMed Central

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G.; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no “universal methods” to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors in studying predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs-UP (uninformative priors), and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal’s diet, the sea lion’s diet did not show a clear dominance of any prey. In contrast, the SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced the SIMMs’ estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators’ diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of natural history of the predator species so as to reliably
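
The simplest relative of a SIMM is the two-source, one-isotope linear mixing model, where the contribution of one prey source follows in closed form from the δ-values. The values below are illustrative, not the fur seal or sea lion data.

```python
# Two-source, one-isotope mixing sketch (the simplest relative of a SIMM):
# the mixture signature is a linear blend of the two source signatures,
# so the proportion of source A can be solved for directly.
def source_proportion(d_mix, d_a, d_b):
    """Fraction of source A given δ-values of the mixture and both sources."""
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical δ13C values (per mil): mixture -16, prey A -14, prey B -20
p_a = source_proportion(d_mix=-16.0, d_a=-14.0, d_b=-20.0)
```

A full SIMM generalizes this to many sources and isotopes, treats the proportions as random variables, and is exactly where the informative versus uninformative priors discussed above enter the estimation.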

  16. Incorporating an advanced aerosol activation parameterization into WRF-CAM5: Model evaluation and parameterization intercomparison

    SciTech Connect

    Zhang, Yang; Zhang, Xin; Wang, Kai; He, Jian; Leung, Lai-Yung R.; Fan, Jiwen; Nenes, Athanasios

    2015-07-22

    Aerosol activation into cloud droplets is an important process that governs aerosol indirect effects. The advanced treatment of aerosol activation by Fountoukis and Nenes (2005) and its recent updates, collectively called the FN series, have been incorporated into a newly developed regional coupled climate-air quality model based on the Weather Research and Forecasting model with the physics package of the Community Atmosphere Model version 5 (WRF-CAM5) to simulate aerosol-cloud interactions in both resolved and convective clouds. The model is applied to East Asia for two full years, 2005 and 2010. A comprehensive model evaluation is performed for model predictions of meteorological, radiative, and cloud variables, chemical concentrations, and column mass abundances against satellite data and surface observations from air quality monitoring sites across East Asia. The model performs well overall for major meteorological variables including near-surface temperature, specific humidity, wind speed, precipitation, cloud fraction, precipitable water, downward shortwave and longwave radiation, and column mass abundances of CO, SO2, NO2, HCHO, and O3 in terms of both magnitudes and spatial distributions. Larger biases exist in the predictions of surface concentrations of CO and NOx at all sites and of SO2, O3, PM2.5, and PM10 at some sites, as well as of aerosol optical depth, cloud condensation nuclei over ocean, cloud droplet number concentration (CDNC), cloud liquid and ice water path, and cloud optical thickness. Compared with the default Abdul-Razzak and Ghan (2000) parameterization, simulations with the FN series produce ~107–113% higher CDNC, with half of the difference attributable to the higher aerosol activation fraction of the FN series and the remaining half due to feedbacks in subsequent cloud microphysical processes. With the higher CDNC, the FN series is more skillful in simulating cloud water path, cloud optical thickness, downward shortwave radiation

  17. Incorporating H2 Dynamics and Inhibition into a Microbially Based Methanogenesis Model for Restored Wetland Sediments

    NASA Astrophysics Data System (ADS)

    Pal, David; Jaffe, Peter

    2015-04-01

    Estimates of global CH4 emissions from wetlands indicate that wetlands are the largest natural source of CH4 to the atmosphere. In this paper, we propose that there is a missing component in these models that should be addressed. CH4 is produced in wetland sediments from the microbial degradation of organic carbon through multiple fermentation steps and methanogenesis pathways. There are multiple sources of carbon for methanogenesis; in vegetated wetland sediments, microbial communities consume root exudates as a major source of organic carbon. In many methane models, propionate is used as a model carbon molecule. Propionate is fermented into acetate and H2; the acetate is transformed into methane and CO2, while the H2 and CO2 are used to form an additional CH4 molecule. The hydrogenotrophic pathway thus involves the equilibrium of two dissolved gases, CH4 and H2. In an effort to limit CH4 emissions from wetlands, there has been growing interest in finding ways to limit plant transport of soil gases through root systems. Changing the planted species, or genetically modifying plants, may control this transport of soil gases. While this may decrease the direct emissions of methane, there is little understanding of how H2 dynamics may feed back into overall methane production. The results of an incubation study were combined with a new model of propionate degradation for methanogenesis that also examines other natural parameters (e.g. gas transport through plants). This presentation examines how we would expect this model to behave in a natural field setting with changing sulfate and carbon loading schemes. These changes can be controlled through new plant species and other management practices. Next, we compare the behavior of two variations of this model, with and without the incorporation of H2 interactions, under changing sulfate, carbon loading and root volatilization. Results show that while the models behave similarly there may be a discrepancy of nearly

  18. No-net-rotation model of current plate velocities incorporating plate motion model NUVEL-1

    NASA Technical Reports Server (NTRS)

    Argus, Donald F.; Gordon, Richard G.

    1991-01-01

    NNR-NUVEL1 is presented, a model of plate velocities relative to the unique reference frame defined by requiring no net rotation of the lithosphere while constraining relative plate velocities to equal those in the global plate motion model NUVEL-1 (DeMets et al., 1990). In NNR-NUVEL1, the Pacific plate rotates in a right-handed sense relative to the no-net-rotation reference frame at 0.67 deg/m.y. about 63 deg S, 107 deg E. At Hawaii the Pacific plate moves relative to the no-net-rotation reference frame at 70 mm/yr, which is 25 mm/yr slower than the Pacific plate moves relative to the hotspots. Differences between NNR-NUVEL1 and HS2-NUVEL1 are described. The no-net-rotation reference frame differs significantly from the hotspot reference frame. If the difference between reference frames is caused by motion of the hotspots relative to a mean-mantle reference frame, then the hotspots beneath the Pacific plate move with coherent motion towards the east-southeast. Alternatively, the difference between reference frames may indicate that the uniform-drag, no-net-torque reference frame, which is kinematically equivalent to the no-net-rotation reference frame, is based on a dynamically incorrect premise.

  19. Incorporating psychosocial health into biocultural models: preliminary findings from Turkana women of Kenya.

    PubMed

    Pike, Ivy L; Williams, Sharon R

    2006-01-01

    This paper investigates the potential benefits and limitations of including psychosocial stress data in a biocultural framework of human adaptability. Building on arguments within human biology on the importance of political economic perspectives for examining patterns of biological variation, this paper suggests that psychosocial perspectives may further refine our understanding of the mechanisms through which social distress yields differences in health and well-being. To assess a model that integrates psychosocial experiences, we conducted a preliminary study among nomadic pastoralist women from northern Kenya. We interviewed 45 women about current and past stressful experiences, and collected anthropometric data and salivary cortisol measures. Focus group and key informant interviews were conducted to refine our understanding of how the Turkana discuss and experience distress. The results suggest that the most sensitive indicators of Turkana women's psychosocial experiences were the culturally defined idioms of distress, which showed high concordance with measures of first-day salivary cortisol. Other differences in stress reactivity were associated with the frequent movement of encampments, major herd losses, and direct experiences of livestock raiding. Despite the preliminary nature of these data, we believe that the results offer important lessons and insights into the longer-term process of incorporating psychosocial models into human adaptability studies. PMID:17039478

  20. Anisotropic constitutive model incorporating multiple damage mechanisms for multiscale simulation of dental enamel.

    PubMed

    Ma, Songyun; Scheider, Ingo; Bargmann, Swantje

    2016-09-01

    An anisotropic constitutive model is proposed in the framework of finite deformation to capture several damage mechanisms occurring in the microstructure of dental enamel, a hierarchical bio-composite. It provides the basis for a homogenization approach for an efficient multiscale (in this case: multiple hierarchy levels) investigation of the deformation and damage behavior. The influence of tension-compression asymmetry and fiber-matrix interaction on the nonlinear deformation behavior of dental enamel is studied by 3D micromechanical simulations under different loading conditions and fiber lengths. The complex deformation behavior and the characteristics and interaction of three damage mechanisms in the damage process of enamel are well captured. The proposed constitutive model incorporating anisotropic damage is applied to the first hierarchical level of dental enamel and validated by experimental results. The effect of the fiber orientation on the damage behavior and compressive strength is studied by comparing micro-pillar experiments of dental enamel at the first hierarchical level in multiple directions of fiber orientation. A very good agreement between computational and experimental results is found for the damage evolution process of dental enamel. PMID:27294283

  1. Development and application of a mark-recapture model incorporating predicted sex and transitory behaviour

    USGS Publications Warehouse

    Conroy, M.J.; Senar, J.C.; Hines, J.E.; Domenech, J.

    1999-01-01

    We developed an extension of Cormack-Jolly-Seber models to handle a complex mark-recapture problem in which (a) the sex of birds cannot be determined prior to first moult, but can be predicted on the basis of body measurements, and (b) a significant portion of captured birds appear to be transients (i.e. are captured once but leave the area or otherwise become 'untrappable'). We applied this methodology to a data set of 4184 serins (Serinus serinus) trapped in northeastern Spain during 1985-96, in order to investigate age-, sex-, and time-specific variation in survival rates. Using this approach, we were able to successfully incorporate the majority of ringings of serins. Had we eliminated birds not previously captured (as has been advocated to avoid the problem of transience) we would have reduced our sample sizes by >2000 releases. In addition, we were able to include 1610 releases of birds of unknown (but predicted) sex; these data contributed to the precision of our estimates and the power of statistical tests. We discuss problems with data structure, encoding of the algorithms to compute parameter estimates, model selection, identifiability of parameters, and goodness-of-fit, and make recommendations for the design and analysis of future studies facing similar problems.

  3. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    PubMed Central

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
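
    The RANSAC pattern the abstract describes (a closed-form hypothesis generator followed by least-squares refinement over the inliers) can be illustrated on a simpler problem; the line-fitting setup below is a hypothetical stand-in for the paper's vehicle-motion parameters:

```python
# Minimal RANSAC sketch: closed-form hypotheses from minimal samples,
# inlier counting, then least-squares refinement over the best inlier set.
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)   # closed-form hypothesis from a minimal sample
        b = y1 - a * x1
        inliers = [(x, y) for (x, y) in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine the winning hypothesis: least squares over all inliers.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -5.0)]
a, b = ransac_line(pts)
print(a, b)  # recovers the clean line despite the outliers
```

    In the paper the minimal-sample solve is replaced by the linearized closed-form motion solution, and the refinement minimizes reprojection error rather than a 1-D residual.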

  4. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
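
    The single-scenario conditional mean that the exact calculation generalizes, and its weighting over multiple causal earthquakes, can be sketched as follows; the scenario weights, medians, dispersions, correlation, and epsilon values below are invented for illustration and are not from the paper:

```python
# Sketch of a conditional-spectrum mean combined over multiple causal
# scenarios. Per scenario, the conditional mean of ln Sa(Ti) given the
# conditioning spectral value at T* is mu + rho * eps * sigma; the exact
# multi-scenario mean weights these by deaggregation weights.
import math

def conditional_mean_lnSa(mu, sigma, rho, eps):
    """Single-scenario conditional mean of ln Sa(Ti) given epsilon at T*."""
    return mu + rho * eps * sigma

# Two hypothetical scenarios; weights are deaggregation contributions (sum to 1).
scenarios = [
    # (weight, mu_lnSa(Ti), sigma_lnSa(Ti), rho(Ti, T*), epsilon(T*))
    (0.6, math.log(0.20), 0.6, 0.7, 1.5),
    (0.4, math.log(0.35), 0.5, 0.7, 1.0),
]

cs_mean = sum(w * conditional_mean_lnSa(mu, s, r, e)
              for w, mu, s, r, e in scenarios)
print(math.exp(cs_mean))  # combined conditional mean Sa(Ti), in g
```

    The exact conditional standard deviation additionally includes a between-scenario variance term, which is what the approximate single-scenario calculation omits.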

  5. Incorporating a wheeled vehicle model in a new monocular visual odometry algorithm for dynamic outdoor environments.

    PubMed

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109

  6. A large-scale methane model by incorporating the surface water transport

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoliang; Zhuang, Qianlai; Liu, Yaling; Zhou, Yuyu; Aghakouchak, Amir

    2016-06-01

    The effect of surface water movement on methane emissions is not explicitly considered in most current methane models. In this study, a surface water routing scheme was coupled into our previously developed large-scale methane model. The revised methane model was then used to simulate global methane emissions during 2006-2010. In our simulations, the global mean annual maximum inundation extent is 10.6 ± 1.9 × 10^6 km2 and the methane emission is 297 ± 11 Tg C/yr over the study period. In comparison to the currently used TOPMODEL-based approach, we found that the incorporation of surface water routing leads to a 24.7% increase in the annual maximum inundation extent and a 30.8% increase in methane emissions at the global scale over the study period. The effect of surface water transport on methane emissions varies between regions: (1) the largest difference occurs in flat and moist regions, such as Eastern China; (2) high-latitude regions, hot spots of methane emissions, show a small increase in both inundation extent and methane emissions when surface water movement is considered; and (3) in arid regions, the new model yields significantly larger maximum flooded areas and a relatively small increase in methane emissions. Although surface water is a small component of the terrestrial water balance, it plays an important role in determining inundation extent and methane emissions, especially in flat regions. This study indicates that future quantifications of methane emissions should consider the effects of surface water transport.

  7. Incorporating Geochemical And Microbial Kinetics In Reactive Transport Models For Generation Of Acid Rock Drainage

    NASA Astrophysics Data System (ADS)

    Andre, B. J.; Rajaram, H.; Silverstein, J.

    2010-12-01

    A diffusion model at the scale of a single rock is developed incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the roles of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control the generation of acid rock drainage as porosity increases.

  8. Incorporation of in vitro drug metabolism data into physiologically-based pharmacokinetic models.

    PubMed

    Houston, J B; Carlile, D J

    1997-10-01

    The liver poses particular problems in constructing physiologically-based pharmacokinetic models since this organ is not only a distribution site for drugs/chemicals but frequently the major site of metabolism. The impact of hepatic drug metabolism in modelling is substantial, and it is crucial to the success of the model that in vitro data on biotransformation be incorporated in a judicious manner. The value of in vitro/in vivo extrapolation is clearly demonstrated by considering kinetic data from incubations with freshly isolated hepatocytes. The determination of easily measurable in vitro parameters, such as Vmax and Km, from initial rate studies, and scaling to the in vivo situation by accounting for hepatocellularity, provides intrinsic clearance estimates. A scaling factor of 1200 × 10^6 cells per 250 g rat has proved to be a robust parameter, independent of laboratory technique and insensitive to animal pretreatment. Similar procedures can also be adopted for other in vitro systems such as hepatic microsomes and liver slices. An appropriate scaling factor for microsomal studies is the microsomal recovery index, which allows for the incomplete recovery of cytochrome P-450 with standard differential centrifugation of liver homogenates. Scaling kinetic parameters from liver slices by slice hepatocellularity has proved unsatisfactory: the level of success varies from drug to drug, and because substrate diffusion competes with metabolism within the slice incubation system, low-clearance drugs are better predicted than high-clearance drugs. Three liver models (the venous-equilibration, undistributed sinusoidal and dispersion models) have been compared for predicting hepatic clearance from in vitro intrinsic clearance values. As no consistent advantage of one model over the others could be demonstrated, the simplest, the venous-equilibration model, is adequate for the currently available hepatocyte data. While these successes are
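
    The scaling chain described here (Vmax/Km to whole-liver intrinsic clearance via hepatocellularity, then to hepatic clearance via the venous-equilibration model) can be sketched as follows; the kinetic values and the hepatic blood flow of 14 mL/min are illustrative assumptions, while the 1200 × 10^6 cells per 250 g rat factor is the one quoted in the abstract:

```python
# Sketch of in vitro -> in vivo clearance scaling for a 250 g rat.
# Units assumed: Vmax in pmol/min per 10^6 cells, Km in uM (= pmol/uL),
# so Vmax/Km is in uL/min per 10^6 cells.

def intrinsic_clearance(vmax, km, cells_per_rat=1200e6, cells_per_unit=1e6):
    """Scale Vmax/Km (uL/min per 10^6 cells) to whole-liver CLint in mL/min."""
    clint_per_million = vmax / km                       # uL/min per 10^6 cells
    return clint_per_million * (cells_per_rat / cells_per_unit) / 1000.0

def hepatic_clearance_well_stirred(clint, q_h=14.0, fu=1.0):
    """Venous-equilibration model: CLh = Q*fu*CLint / (Q + fu*CLint), mL/min."""
    return q_h * fu * clint / (q_h + fu * clint)

clint = intrinsic_clearance(vmax=100.0, km=10.0)       # whole-liver CLint
clh = hepatic_clearance_well_stirred(clint)            # blood-flow-limited CLh
print(clint, clh)
```

    The venous-equilibration form makes the abstract's point visible: for high CLint, CLh saturates toward hepatic blood flow Q, so errors in CLint matter less for high-clearance drugs than for low-clearance ones.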

  9. Adolescent Decision-Making Processes regarding University Entry: A Model Incorporating Cultural Orientation, Motivation and Occupational Variables

    ERIC Educational Resources Information Center

    Jung, Jae Yup

    2013-01-01

    This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…

  10. Incorporating missingness for estimation of marginal regression models with multiple source predictors.

    PubMed

    Litman, Heather J; Horton, Nicholas J; Hernández, Bernardo; Laird, Nan M

    2007-02-28

    Multiple informant data refers to information obtained from different individuals or sources used to measure the same construct; for example, researchers might collect information regarding child psychopathology from the child's teacher and the child's parent. Frequently, studies with multiple informants have incomplete observations; in some cases the missingness of informants is substantial. We introduce a Maximum Likelihood (ML) technique to fit models with multiple informants as predictors that permits missingness in the predictors as well as the response. We provide closed form solutions when possible and analytically compare the ML technique to the existing Generalized Estimating Equations (GEE) approach. We demonstrate that the ML approach can be used to compare the effect of the informants on response without standardizing the data. Simulations incorporating missingness show that ML is more efficient than the existing GEE method. In the presence of MCAR missing data, we find through a simulation study that the ML approach is robust to a relatively extreme departure from the normality assumption. We implement both methods in a study investigating the association between physical activity and obesity with activity measured using multiple informants (children and their mothers). PMID:16755531

  11. Incorporation of GRACE Data into a Bayesian Model for Groundwater Drought Monitoring

    NASA Astrophysics Data System (ADS)

    Slinski, K.; Hogue, T. S.; McCray, J. E.; Porter, A.

    2015-12-01

    Groundwater drought, defined as the sustained occurrence of below average availability of groundwater, is marked by below average water levels in aquifers and reduced flows to groundwater-fed rivers and wetlands. The impact of groundwater drought on ecosystems, agriculture, municipal water supply, and the energy sector is an increasingly important global issue. However, current drought monitors heavily rely on precipitation and vegetative stress indices to characterize the timing, duration, and severity of drought events. The paucity of in situ observations of aquifer levels is a substantial obstacle to the development of systems to monitor groundwater drought in drought-prone areas, particularly in developing countries. Observations from the NASA/German Space Agency's Gravity Recovery and Climate Experiment (GRACE) have been used to estimate changes in groundwater storage over areas with sparse point measurements. This study incorporates GRACE total water storage observations into a Bayesian framework to assess the performance of a probabilistic model for monitoring groundwater drought based on remote sensing data. Overall, it is hoped that these methods will improve global drought preparedness and risk reduction by providing information on groundwater drought necessary to manage its impacts on ecosystems, as well as on the agricultural, municipal, and energy sectors.

  12. The metabolic pace-of-life model: incorporating ectothermic organisms into the theory of vertebrate ecoimmunology.

    PubMed

    Sandmeier, Franziska C; Tracy, Richard C

    2014-09-01

    We propose a new heuristic model that incorporates metabolic rate and pace of life to predict a vertebrate species' investment in adaptive immune function. Using reptiles as an example, we hypothesize that animals with low metabolic rates will invest more in innate immunity than in adaptive immunity. High metabolic rates and body temperatures should logically optimize the efficacy of the adaptive immune system through rapid replication of T and B cells, prolific production of induced antibodies, and fast kinetics of antibody-antigen interactions. In current theory, the precise mechanisms of vertebrate immune function are often inadequately considered as diverse selective pressures on the evolution of pathogens. We propose that the strength of adaptive immune function and pace of life together determine many of the important dynamics of host-pathogen evolution; namely, hosts with a short lifespan and innate immunity, or with a long lifespan and strong adaptive immunity, are expected to drive the rapid evolution of their populations of pathogens. Long-lived hosts that rely primarily on innate immune functions are more likely to use defense mechanisms of tolerance (instead of resistance), which are not expected to act as a selection pressure for the rapid evolution of pathogens' virulence. PMID:24760792

  13. Sensitivity studies for incorporating the direct effect of sulfate aerosols into climate models

    NASA Astrophysics Data System (ADS)

    Miller, Mary Rawlings Lamberton

    2000-09-01

    Aerosols have been identified as a major element of the climate system known to scatter and absorb solar and infrared radiation, but the development of procedures for representing them is still rudimentary. This study addresses the need to improve the treatment of sulfate aerosols in climate models by investigating how sensitive the radiative properties of sulfate particles are to variations in specific aerosol properties. The degree to which sulfate particles absorb or scatter radiation, termed the direct effect, varies with the size distribution of particles, the aerosol mass density, the aerosol refractive indices, the relative humidity and the concentration of the aerosol. This study develops 504 case studies varying sulfate aerosol chemistry, size distributions, refractive indices and densities at various ambient relative humidity conditions. Ammonium sulfate and sulfuric acid aerosols are studied with seven distinct size distributions at a given mode radius, with three corresponding standard deviations taken from field measurements. These test cases are evaluated for increasing relative humidity. As the relative humidity increases, the complex index of refraction and the mode radius for each distribution change correspondingly. Mie theory is employed to obtain the radiative properties for each case study. The case studies are then incorporated into a box model, the National Center for Atmospheric Research's (NCAR) column radiation model (CRM), and NCAR's Community Climate Model version 3 (CCM3) to determine how sensitive the radiative properties and potential climatic effects are to altering sulfate properties. This study found that the spatial variability of the sulfate aerosol leads to regional areas of intense aerosol forcing (W/m2). These areas are particularly sensitive to altering sulfate properties. Changes in the sulfate lognormal distribution standard deviation can lead to substantial regional differences in the annual aerosol forcing of greater than 2 W/m2. Changes in the
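
    The lognormal size distributions varied in this study have closed-form moments; the sketch below (with an assumed accumulation-mode radius) shows the effective radius that enters such radiative calculations:

```python
# For a lognormal distribution with geometric mean radius rg and geometric
# standard deviation sigma_g, the n-th radial moment is
#   <r^n> = rg^n * exp(n^2 * ln(sigma_g)^2 / 2),
# so the effective radius r_eff = <r^3>/<r^2> = rg * exp(2.5 * ln(sigma_g)^2).
import math

def effective_radius(rg_um, sigma_g):
    """Effective radius (um) of a lognormal size distribution."""
    return rg_um * math.exp(2.5 * math.log(sigma_g) ** 2)

# Assumed accumulation-mode parameters, not values from the study:
r_eff = effective_radius(0.05, 2.0)
print(r_eff)  # um
```

    This moment relation is one reason changes in the distribution's standard deviation alone can shift the computed forcing substantially: widening sigma_g grows r_eff rapidly even at fixed mode radius.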

  14. NexGen PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models

    EPA Science Inventory

    We examine how the integration of evolutionary and ecological processes in population dynamics – an emerging framework in ecology – could be incorporated into population viability analysis (PVA). Driven by parallel, complementary advances in population genomics and computational ...

  15. Dynamic modeling of the servovalves incorporated in the servo hydraulic system of the 70-meter DSN antennas

    NASA Technical Reports Server (NTRS)

    Bartos, R. D.

    1992-01-01

    As the pointing accuracy and service life requirements of the DSN 70-meter antenna increase, it is necessary to gain a more complete understanding of the servo hydraulic system in order to improve system designs to meet the new requirements. A mathematical model is developed for the servovalve incorporated into the hydraulic system of the 70-meter antenna, and experimental data are used to verify the validity of the model and to identify its parameters.

  16. Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling

    PubMed Central

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, the septa must be as thin as possible, resulting in a significant amount of septal penetration for high-energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations has been compared to experimental data for different radionuclides and collimators. Results of the RT technique match the experimental collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the look-up tables and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
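
    Pre-calculated look-up tables of the kind described here are queried at simulation time; below is a minimal sketch of that pattern, with an invented response table indexed by source distance and photon energy (the grid and numbers are illustrative, not from the paper):

```python
# Sketch: query a pre-calculated collimator-response table by bilinear
# interpolation over (source distance, photon energy). Values are invented.
import bisect

distances = [0.0, 5.0, 10.0, 20.0]   # cm from collimator face (assumed grid)
energies = [140.0, 364.0]            # keV (e.g. Tc-99m, I-131)
# response[i][j]: response width (mm) at distances[i], energies[j]
response = [
    [3.0, 4.0],
    [5.0, 6.5],
    [7.0, 9.0],
    [11.0, 14.0],
]

def lookup(d, e):
    """Bilinearly interpolate the response table at distance d, energy e."""
    i = max(0, min(bisect.bisect_right(distances, d) - 1, len(distances) - 2))
    j = max(0, min(bisect.bisect_right(energies, e) - 1, len(energies) - 2))
    td = (d - distances[i]) / (distances[i + 1] - distances[i])
    te = (e - energies[j]) / (energies[j + 1] - energies[j])
    r00, r01 = response[i][j], response[i][j + 1]
    r10, r11 = response[i + 1][j], response[i + 1][j + 1]
    return (1 - td) * ((1 - te) * r00 + te * r01) + td * ((1 - te) * r10 + te * r11)

print(lookup(7.5, 140.0))  # halfway between the 5 cm and 10 cm rows
```

    A denser grid trades storage for interpolation accuracy, which is the storage/accuracy balance the abstract discusses.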

  17. Incorporating remote sensing data in crop model to monitor crop growth and predict yield in regional area

    NASA Astrophysics Data System (ADS)

    Guo, Jianmao; Lu, Weisong; Zhang, Guoping; Qian, Yonglan; Yu, Qiang; Zhang, Jiahua

    2006-12-01

    Accurate crop growth monitoring and yield prediction are very important to food security and sustainable agricultural development. Crop models can be powerful tools for monitoring crop growth status and predicting yield over homogeneous areas; however, their application to larger spatial domains is hampered by a lack of sufficient spatial information about model inputs, such as the values of some parameters and initial conditions, which may differ greatly between regions and even between fields. The use of remote sensing data helps to overcome this problem. By incorporating remote sensing data into the WOFOST crop model (through LAI), it is possible to assimilate a remote sensing variable (a vegetation index) for each point of the spatial domain and to re-estimate, for that point, new values of the parameters or initial conditions to which the model is particularly sensitive. This paper describes the use of such a method on a local scale for winter wheat, focusing on the parameters describing emergence and early crop growth. These processes vary greatly depending on soil, climate and seedbed preparation, and affect yield significantly. The WOFOST crop model is calibrated under standard conditions and then evaluated under test conditions in which the emergence and early growth parameters of the model are adjusted by incorporating remote sensing data. Inversion of the combined model allows us to accurately monitor crop growth status and predict yield on a regional scale.
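    The re-estimation step described above can be illustrated with a toy stand-in for WOFOST: a logistic LAI curve whose emergence day is re-fitted so that simulated LAI matches remote-sensing observations. The curve shape, dates and parameter values are invented for the sketch:

```python
import numpy as np

# Minimal sketch of the parameter re-estimation step: a toy LAI curve
# stands in for WOFOST, and the emergence day is re-fitted so simulated
# LAI matches remote-sensing LAI observations. All numbers are illustrative.

def simulated_lai(day, emergence_day, lai_max=6.0, rate=0.08):
    """Toy logistic LAI growth after emergence (not the real WOFOST curve)."""
    return lai_max / (1.0 + np.exp(-rate * (day - emergence_day - 40)))

days_obs = np.array([60, 90, 120, 150])              # days of year with imagery
lai_obs = simulated_lai(days_obs, emergence_day=75)  # pretend satellite LAI

def refit_emergence(days, lai, candidates=np.arange(50, 101)):
    """Grid-search the emergence day minimizing the LAI misfit."""
    costs = [np.sum((simulated_lai(days, e) - lai) ** 2) for e in candidates]
    return candidates[int(np.argmin(costs))]

best = refit_emergence(days_obs, lai_obs)  # -> 75, the day used to generate lai_obs
```

    The same inversion idea carries over to the real model: the parameter value that best reproduces the observed vegetation index at a grid point replaces the regional default at that point.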

  18. Incorporating a Time Horizon in Rate-of-Return Estimations: Discounted Cash Flow Model in Electric Transmission Rate Cases

    SciTech Connect

    Chatterjee, Bishu; Sharp, Peter A.

    2006-07-15

    Electric transmission and other rate cases use a form of the discounted cash flow (DCF) model with a single long-term growth rate to estimate the rate of return on equity (ROE). That form cannot incorporate information about the time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
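    As a sketch of the contrast drawn here (not the methodology of any actual rate filing): the single-growth DCF gives ROE = D1/P + g directly, while a two-stage model applies the analysts' growth rate only over a finite horizon and must be solved numerically for the implied return. All numbers are illustrative:

```python
# Hedged sketch of the two models contrasted above. The single-growth DCF
# implies ROE = D1/P + g; the two-stage DCF applies analysts' near-term
# growth for `horizon` years before reverting to a long-run rate, and the
# implied ROE is found by bisection. Numbers are illustrative.

def constant_growth_roe(price, next_dividend, g):
    return next_dividend / price + g

def two_stage_price(r, d0, g_near, g_long, horizon=5, years=200):
    """Present value of dividends growing at g_near for `horizon` years,
    then at g_long (truncated at `years` as a stand-in for the infinite sum)."""
    pv, d = 0.0, d0
    for t in range(1, years + 1):
        d *= (1 + g_near) if t <= horizon else (1 + g_long)
        pv += d / (1 + r) ** t
    return pv

def two_stage_roe(price, d0, g_near, g_long, lo=0.01, hi=0.5):
    """Bisect for the discount rate equating the two-stage PV to the price."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if two_stage_price(mid, d0, g_near, g_long) > price:
            lo = mid  # PV too high -> the implied rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2
```

    Shortening or lengthening `horizon` directly changes the implied ROE, which is the time-horizon sensitivity the single-growth model cannot express.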

  19. Incorporating the effects of habitat edges into landscape models: Effective area models for cross-boundary management.

    SciTech Connect

    T.D. Sisk; N.M. Haddad

    2002-01-01

    Sisk, T.D., and N.M. Haddad. 2002. Incorporating the effects of habitat edges into landscape models: Effective area models for cross-boundary management. Chapter 8, Pp. 208-240 in J. Liu and W.W. Taylor, Integrating landscape ecology into natural resource management, Cambridge University Press, Cambridge, UK. Abstract: Natural resource managers are increasingly charged with meeting multiple, often conflicting goals in landscapes undergoing significant change due to shifts in land use. Conversion from native to anthropogenic habitats typically fragments the landscape, reducing the size and increasing the isolation of the resulting patches, with profound ecological impacts. These impacts occur both within and adjacent to areas under active management, creating extensive edges between habitat types. Boundaries established between management areas, for example, between timber harvest units or between reserves and adjacent agricultural fields, inevitably lead to differences in the quality of habitats on either side of the boundary, and a habitat edge results. Although edges are common components of undisturbed landscapes, the amount of edge proliferates rapidly as landscapes are fragmented. Insightful analysis of the complex issues associated with cross-boundary management necessitates an explicit focus on habitat quality in the boundary regions.

  20. Thermal Energy Balance Analysis of the Tokyo Metropolitan Area Using a Mesoscale Meteorological Model Incorporating an Urban Canopy Model

    NASA Astrophysics Data System (ADS)

    Ooka, Ryozo; Sato, Taiki; Harayama, Kazuya; Murakami, Shuzo; Kawamoto, Yoichi

    2011-01-01

    The summer climate around the Tokyo metropolitan area has been analysed on an urban scale, and the regional characteristics of the thermal energy balance of a bayside business district in the centre of Tokyo (Otemachi) have been compared with those of an inland residential district (Nerima), using a mesoscale meteorological model incorporating an urban canopy model. From the results of the analysis, the mechanism of diurnal change in air temperature and absolute humidity in these areas is quantitatively demonstrated, with a focus on the thermal energy balance. Moreover, effective countermeasures against urban heat islands are considered from the viewpoint of each region's thermal energy balance characteristics. In addition to thermal energy outflux by turbulent diffusion, advection by sea-breezes from Tokyo Bay discharges sensible heat in Otemachi, which mitigates temperature increases during the day. On the other hand, because sea-breezes must first cross the centre of Tokyo before reaching Nerima, they have less of a cooling effect there. As a result, the daytime air temperature in Nerima is higher than that in Otemachi.

  1. Anticipating and Incorporating Stakeholder Feedback When Developing Value-Added Models

    ERIC Educational Resources Information Center

    Balch, Ryan; Koedel, Cory

    2014-01-01

    State and local education agencies across the United States are increasingly adopting rigorous teacher evaluation systems. Most systems formally incorporate teacher performance as measured by student test-score growth, sometimes by state mandate. An important consideration that will influence the long-term persistence and efficacy of these systems…

  2. Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models

    ERIC Educational Resources Information Center

    James, Raven

    2007-01-01

    This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…

  3. Incorporating temperature-sensitive Q10 and foliar respiration acclimation algorithms modifies modeled ecosystem responses to global change

    NASA Astrophysics Data System (ADS)

    Wythers, Kirk R.; Reich, Peter B.; Bradford, John B.

    2013-03-01

    Evidence suggests that respiration acclimation (RA) to temperature in plants can have a substantial influence on ecosystem carbon balance. To assess the influence of RA on ecosystem response variables in the presence of global change drivers, we incorporated a temperature-sensitive Q10 of respiration and foliar basal RA into the ecosystem model PnET-CN. We examined the new algorithms' effects on modeled net primary production (NPP), total canopy foliage mass, foliar nitrogen concentration, net ecosystem exchange (NEE), and ecosystem respiration/gross primary production ratios. This latter ratio more closely matched eddy covariance long-term data when RA was incorporated in the model than when not. Averaged across four boreal ecotone sites and three forest types at year 2100, the enhancement of NPP in response to the combination of rising [CO2] and warming was 9% greater when RA algorithms were used, relative to responses using fixed respiration parameters. The enhancement of NPP response to global change was associated with concomitant changes in foliar nitrogen and foliage mass. In addition, impacts of RA algorithms on modeled responses of NEE closely paralleled impacts on NPP. These results underscore the importance of incorporating temperature-sensitive Q10 and basal RA algorithms into ecosystem models. Given the current evidence that atmospheric [CO2] and surface temperature will continue to rise, and that ecosystem responses to those changes appear to be modified by RA, which is a common phenotypic adjustment, the potential for misleading results increases if models fail to incorporate RA into their carbon balance calculations.
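    The temperature-sensitive Q10 idea can be sketched in a few lines. The declining relation Q10 = 3.22 - 0.046*T follows Tjoelker and colleagues' commonly used formulation; whether PnET-CN uses exactly this form and base temperature is an assumption here:

```python
# Sketch of a temperature-sensitive Q10: instead of a fixed Q10 = 2, Q10
# itself declines with temperature (Q10 = 3.22 - 0.046*T, the widely used
# Tjoelker-style relation). Base temperature and basal rate are assumed.

def q10_fixed(t_c):
    return 2.0

def q10_temp_sensitive(t_c):
    return 3.22 - 0.046 * t_c

def respiration(t_c, r_base=1.0, t_base=20.0, q10=q10_temp_sensitive):
    """Respiration scaled from a basal rate at t_base by Q10(t)."""
    return r_base * q10(t_c) ** ((t_c - t_base) / 10.0)
```

    At warm temperatures the temperature-sensitive Q10 yields less respiration than a fixed Q10 of 2, which is one route by which respiration acclimation alters modeled NPP and NEE.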

  4. A Temperate Alpine Glacier as a Reservoir of Polychlorinated Biphenyls: Model Results of Incorporation, Transport, and Release.

    PubMed

    Steinlin, Christine; Bogdal, Christian; Lüthi, Martin P; Pavlova, Pavlina A; Schwikowski, Margit; Zennegg, Markus; Schmid, Peter; Scheringer, Martin; Hungerbühler, Konrad

    2016-06-01

    In previous studies, the incorporation of polychlorinated biphenyls (PCBs) has been quantified in the accumulation areas of Alpine glaciers. Here, we introduce a model framework that quantifies mass fluxes of PCBs in glaciers and apply it to the Silvretta glacier (Switzerland). The models include PCB incorporation into the entire surface of the glacier, downhill transport with the flow of the glacier ice, and chemical fate in the glacial lake. The models are run for the years 1900-2100 and validated by comparing modeled and measured PCB concentrations in an ice core, a lake sediment core, and the glacial streamwater. The incorporation and release fluxes, as well as the storage of PCBs in the glacier increase until the 1980s and decrease thereafter. After a temporary increase in the 2000s, the future PCB release and the PCB concentrations in the glacial stream are estimated to be small but persistent throughout the 21st century. This study quantifies all relevant PCB fluxes in and from a temperate Alpine glacier over two centuries, and concludes that Alpine glaciers are a small secondary source of PCBs, but that the aftermath of environmental pollution by persistent and toxic chemicals can endure for decades. PMID:27164482

  5. Inference of gene regulatory networks incorporating multi-source biological knowledge via a state space model with L1 regularization.

    PubMed

    Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya

    2014-01-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models, but it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships, owing to computational difficulties. The statistical model-based approach uses highly abstract models to describe biological systems simply and to infer relationships among several hundred genes from the data; however, the high abstraction generates false regulations that are not biologically permitted. Thus, when dealing with several tens of genes whose relationships are partially known, a method is urgently required that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes while maximizing predictive ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. As an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid …
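    The core estimation step, a first-order VAR with an L1 penalty on the transition matrix, can be sketched with plain ISTA on synthetic data (the paper's state space formulation and its handling of prior knowledge are omitted):

```python
import numpy as np

# Minimal sketch: fit a first-order VAR x[t+1] ~ A x[t] with an L1 penalty
# on A using ISTA. The true 3-gene network and noise level are invented.

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def lasso_var(X, lam=0.05, iters=500):
    """X: (T, genes) time series. Returns a sparse transition matrix A."""
    Y, Z = X[1:], X[:-1]                     # targets and regressors
    step = 1.0 / np.linalg.norm(Z.T @ Z, 2)  # 1/L, L = Lipschitz constant
    A = np.zeros((X.shape[1], X.shape[1]))
    for _ in range(iters):
        grad = (A @ Z.T - Y.T) @ Z           # gradient of 0.5*||Y - Z A^T||^2
        A = soft_threshold(A - step * grad, step * lam)
    return A

rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.0, 0.0],
                   [0.5, 0.6, 0.0],
                   [0.0, 0.0, 0.7]])
X = np.zeros((200, 3))
for t in range(199):
    X[t + 1] = A_true @ X[t] + rng.normal(scale=0.1, size=3)
A_hat = lasso_var(X)
```

    The soft-thresholding drives most spurious entries of A toward zero, which is how an L1 penalty suppresses regulations that should not be there.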

  6. Incorporating environmental attitudes in discrete choice models: an exploration of the utility of the awareness of consequences scale.

    PubMed

    Hoyos, David; Mariel, Petr; Hess, Stephane

    2015-02-01

    Environmental economists are increasingly interested in better understanding how people cognitively organise their beliefs and attitudes towards environmental change in order to identify key motives and barriers that stimulate or prevent action. In this paper, we explore the utility of a commonly used psychometric scale, the awareness of consequences (AC) scale, in order to better understand stated choices. The main contribution of the paper is that it provides a novel approach to incorporate attitudinal information into discrete choice models for environmental valuation: firstly, environmental attitudes are incorporated using a reinterpretation of the classical AC scale recently proposed by Ryan and Spash (2012); and, secondly, attitudinal data is incorporated as latent variables under a hybrid choice modelling framework. This novel approach is applied to data from a survey conducted in the Basque Country (Spain) in 2008 aimed at valuing land-use policies in a Natura 2000 Network site. The results are relevant to policy-making because choice models that are able to accommodate underlying environmental attitudes may help in designing more effective environmental policies. PMID:25461111

  7. Solid lipid nanoparticles incorporating melatonin as new model for sustained oral and transdermal delivery systems.

    PubMed

    Priano, Lorenzo; Esposti, Daniele; Esposti, Roberto; Castagna, Giovanna; De Medici, Clotilde; Fraschini, Franco; Gasco, Maria Rosa; Mauro, Alessandro

    2007-10-01

    Melatonin (MT) is a hormone produced by the pineal gland at night and involved in the regulation of circadian rhythms. For clinical purposes, exogenous MT administration should mimic the typical nocturnal endogenous MT levels, but its pharmacokinetics is unfavourable owing to a short elimination half-life. The aim of this study was to examine the pharmacokinetics of MT incorporated in solid lipid nanoparticles (SLN), administered by the oral and transdermal routes. The peculiarity of SLN is that they can act as a reservoir, permitting constant, prolonged release of the drugs they include. In 7 healthy subjects, SLN incorporating MT 3 mg (MT-SLN-O) were orally administered at 8.30 a.m. MT 3 mg in standard formulation (MT-S) was then administered to the same subjects one week later at 8.30 a.m. as a control. In 10 healthy subjects, SLN incorporating MT were administered transdermally (MT-SLN-TD) by applying a patch at 8.30 a.m. for 24 hours. Compared with MT-S, Tmax after MT-SLN-O administration was delayed by about 20 minutes, while mean AUC and mean elimination half-life were significantly higher (169944.7 +/- 64954.4 pg/ml x hour vs. 85148.4 +/- 50642.6 pg/ml x hour, p = 0.018, and 93.1 +/- 37.1 min vs. 48.2 +/- 8.9 min, p = 0.009, respectively). MT absorption and elimination after MT-SLN-TD proved to be slow (mean absorption half-life: 5.3 +/- 1.3 hours; mean elimination half-life: 24.6 +/- 12.0 hours), so MT plasma levels above 50 pg/ml were maintained for at least 24 hours. This study demonstrates significant absorption of MT incorporated in SLN, with detectable plasma levels achieved for several hours, particularly after transdermal administration. As the dosages and concentrations of drugs included in SLN can be varied, different plasma-level profiles could be obtained, opening new possibilities for sustained delivery systems. PMID:18330178
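    A hedged illustration of why slow absorption sustains plasma levels: a one-compartment model with first-order absorption (the Bateman function), evaluated with the transdermal half-lives reported above. The dose/volume scale factor is arbitrary, so only the shape of the curve is meaningful, and this is not a fit to the study data:

```python
import math

# One-compartment model with first-order absorption, using the transdermal
# half-lives reported above (absorption t1/2 = 5.3 h, elimination
# t1/2 = 24.6 h). `scale` lumps dose, bioavailability and volume, and is
# arbitrary here: only the curve's shape is meaningful.

def bateman(t_h, t_half_abs=5.3, t_half_elim=24.6, scale=100.0):
    ka = math.log(2) / t_half_abs   # absorption rate constant, 1/h
    ke = math.log(2) / t_half_elim  # elimination rate constant, 1/h
    return scale * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))
```

    With a 5.3 h absorption half-life the peak arrives only after roughly 15 hours and decays slowly, consistent with plasma levels staying elevated for at least 24 hours.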

  8. Incorporating three dimensional shapes of buildings and structures in tsunami inundation modeling of the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    BABA, T.; Takahashi, N.; Kaneda, Y.; Inazawa, Y.; Kikkojin, M.

    2012-12-01

    The tsunami caused by the 2011 Tohoku-oki earthquake inundated a wide area, destroying or passing over buildings and structures on land. In conventional tsunami inundation modeling, which solves the non-linear shallow water equations with a finite difference scheme, the effect of buildings and structures is represented only by a bottom friction term; their 3D shapes are not included. But large, strong buildings may directly block an incoming tsunami, much like seawalls, rather than merely adding friction. LiDAR measurements are now being carried out along the Japanese coast by the Geospatial Information Authority of Japan; these collect pulses reflected from the top surface, such as building roofs, roads, bridges, and treetops, yielding what is called a digital surface model (DSM). We extracted from the DSM the buildings and structures that can affect tsunami inundation by comparison with the Fundamental Geospatial Data, which gives the locations of buildings and structures in the city; this step is needed to remove trees and river bridges in the DSM, beneath or through which a tsunami can pass. The 3D building data were incorporated as topography in a tsunami computation of the 2011 Tohoku-oki earthquake (the incorporated model) for comparison with the conventional model. The URSGA tsunami code (Jakeman et al. 2010) was used for its variable nesting capability. The finest topographic grid interval was 0.22 arc-seconds (about 5 m) in the coastal area. The initial sea-surface deformation was derived from the finite fault model version 1.1 provided by Tohoku University. In the incorporated model, the maximum inundation height at the front of coastal buildings and structures is higher than in the conventional model, by up to 63% (4.8 m). Behind the coastal buildings, the inundation height is conversely smaller than in the conventional model. The tsunami inundation area becomes smaller in the …
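    The building-extraction step can be sketched as a simple masking operation: keep DSM heights only inside mapped building footprints and revert to ground elevation elsewhere, so that trees and bridges do not block the modeled flow. The 4x4 arrays are invented for the sketch:

```python
import numpy as np

# Toy version of the extraction step: DSM heights are kept only inside
# mapped building footprints; everywhere else (trees, bridges) the grid
# falls back to bare-earth elevation. All arrays are invented examples.

dtm = np.zeros((4, 4))                      # bare-earth elevation (m)
dsm = np.array([[0, 0, 12, 12],             # surface elevation incl. roofs/trees
                [0, 8, 12, 12],
                [0, 8,  0,  0],
                [0, 0,  0,  0]], float)
footprint = np.array([[0, 0, 1, 1],         # 1 = building in the footprint data,
                      [0, 0, 1, 1],         # 0 = not a building (e.g. a tree row)
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]], bool)

topo = np.where(footprint, dsm, dtm)        # topography grid for the tsunami run
```

    Here the 8 m cells (say, a tree row) are reset to ground level while the 12 m building block remains, so only true obstacles enter the inundation computation.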

  9. [Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources]

    SciTech Connect

    Sullivan, T.J.

    1992-09-01

    A project was initiated in March 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months of performance, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling of the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy observed earlier between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically selected Adirondack lakes. Additional work is ongoing on model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.

  10. Computer simulation incorporating a helicopter model for evaluation of aircraft avionics systems

    NASA Technical Reports Server (NTRS)

    Ostroff, A. J.; Wood, R. B.

    1977-01-01

    A computer program was developed to integrate avionics research in navigation, guidance, controls, and displays with a realistic aircraft model. A user-oriented program is described that allows a flexible combination of user-supplied models to perform research in any avionics area. A preprocessor technique for selecting various models without significantly changing the memory storage is included. Also included are mathematical models for several avionics errors and for the CH-47 helicopter used in this program.

  11. A new general methodology for incorporating physico-chemical transformations into multi-phase wastewater treatment process models.

    PubMed

    Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P

    2015-05-01

    This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide Modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example, a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. PMID:25746499

  12. Nanofibers for drug delivery – incorporation and release of model molecules, influence of molecular weight and polymer structure

    PubMed Central

    Hrib, Jakub; Hobzova, Radka; Hampejsova, Zuzana; Bosakova, Zuzana; Munzarova, Marcela; Michalek, Jiri

    2015-01-01

    Summary Nanofibers were prepared from polycaprolactone, polylactide and polyvinyl alcohol using Nanospider™ technology. Polyethylene glycols with molecular weights of 2,000, 6,000, 10,000 and 20,000 g/mol, which can be used to moderate the release profile of incorporated pharmacologically active compounds, served as model molecules. They were terminated with an aromatic isocyanate and incorporated into the nanofibers, and their release into an aqueous environment was investigated. The influences of molecular length and of the chemical composition of the nanofibers on the release rate and the amount of released polyethylene glycol were evaluated. Longer molecules were released faster, as evidenced by a significantly higher amount of released molecules after 72 hours. However, the influence of the chemical composition of the nanofibers was even more distinct: the highest amount of polyethylene glycol was released from polyvinyl alcohol nanofibers, the lowest from polylactide nanofibers. PMID:26665065

  13. Incorporate Imaging Characteristics Into an Arteriovenous Malformation Radiosurgery Plan Evaluation Model

    SciTech Connect

    Zhang, Pengpeng; Wu, Leester; Liu, Tian; Kutcher, Gerald J.; Isaacson, Steven

    2008-04-01

    Purpose: To integrate imaging performance characteristics, specifically sensitivity and specificity, of magnetic resonance angiography (MRA) and digital subtraction angiography (DSA) into arteriovenous malformation (AVM) radiosurgery planning and evaluation. Methods and Materials: Images of 10 patients with AVMs located in critical brain areas were analyzed in this retrospective planning study. The image findings were first used to estimate the sensitivity and specificity of MRA and DSA. Instead of accepting the imaging observation as a binary (yes or no) mapping of AVM location, our alternative is to translate the image into an AVM probability distribution map by incorporating imagers' sensitivity and specificity, and to use this map as a basis for planning and evaluation. Three sets of radiosurgery plans, targeting the MRA and DSA positive overlap, MRA positive, and DSA positive were optimized for best conformality. The AVM obliteration rate (ORAVM) and brain complication rate served as endpoints for plan comparison. Results: In our 10-patient study, the specificities and sensitivities of MRA and DSA were estimated to be (0.95, 0.74) and (0.71, 0.95), respectively. The positive overlap of MRA and DSA accounted for 67.8% ± 4.9% of the estimated true AVM volume. Compared with plans targeting MRA and DSA-positive overlap, plans targeting MRA-positive or DSA-positive improved ORAVM by 4.1% ± 1.9% and 15.7% ± 8.3%, while also increasing the complication rate by 1.0% ± 0.8% and 4.4% ± 2.3%, respectively. Conclusions: The impact of imagers' quality should be quantified and incorporated in AVM radiosurgery planning and evaluation to facilitate clinical decision making.
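    The translation of a binary reading into an AVM probability can be sketched with Bayes' rule and the sensitivity/specificity estimates quoted in the record. Treating the two modalities as conditionally independent is an extra assumption made only for this sketch:

```python
# Sketch of turning binary image readings into an AVM probability via
# Bayes' rule, using the sensitivity/specificity estimates quoted above
# (MRA: spec 0.95, sens 0.74; DSA: spec 0.71, sens 0.95). Conditional
# independence of the two modalities is assumed for the sketch.

def posterior(prior, positive, sens, spec):
    like_avm = sens if positive else 1 - sens        # P(reading | AVM)
    like_bg = (1 - spec) if positive else spec       # P(reading | no AVM)
    p = like_avm * prior
    return p / (p + like_bg * (1 - prior))

def avm_probability(prior, mra_pos, dsa_pos):
    p = posterior(prior, mra_pos, sens=0.74, spec=0.95)  # update on MRA
    return posterior(p, dsa_pos, sens=0.95, spec=0.71)   # then on DSA
```

    A voxel positive on both modalities ends up with a far higher AVM probability than the prior, while a voxel negative on both drops far below it; the resulting map, rather than a binary contour, is what the planning optimization targets.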

  14. Improving weed germination models by incorporating seed microclimate and translocation by tillage

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Weed emergence models are of critical importance in deciding the timing of field weed control measures (tillage or chemical). However, the state of weed germination modeling is still in its infancy. Existing models do provide a baseline picture of emergence patterns, but improvements are needed to m...

  15. Incorporating Multi-model Ensemble Techniques Into a Probabilistic Hydrologic Forecasting System

    NASA Astrophysics Data System (ADS)

    Sonessa, M. Y.; Bohn, T. J.; Lettenmaier, D. P.

    2008-12-01

    Multi-model ensemble techniques have been shown to reduce bias and to aid in quantifying the effects of model uncertainty in hydrologic modeling. However, these techniques are only beginning to be applied in operational hydrologic forecast systems. To investigate the performance of a multi-model ensemble in the context of probabilistic hydrologic forecasting, we have extended the University of Washington's West-wide Seasonal Hydrologic Forecasting System to use an ensemble of three models: the Variable Infiltration Capacity (VIC) model version 4.0.6, the NCEP NOAH model version 2.7.1, and the NWS grid-based Sacramento/Snow-17 model (SAC). The objective of this presentation is to assess the performance of the three-model ensemble as compared with the performance of the models individually. Three forecast points within the West-wide forecast system domain were used for this research: the Feather River at Oroville, CA; the Salmon River at White horse, ID; and the Colorado River at Grand Junction. The forcing and observed streamflow data span 1951-2005 for the Feather and Salmon Rivers and 1951-2003 for the Colorado. The models were first run for the retrospective period and bias-corrected, and model weights were then determined using multiple linear regression. We assessed the performance of the ensemble against the individual models in terms of correlation with observed flows, root mean square error, and Nash-Sutcliffe efficiency. For retrospective simulations compared with observations, the ensemble performed better overall than any of the models individually, even though in a few individual months individual models performed slightly better than the ensemble. To test forecast skill, we performed Ensemble Streamflow Prediction (ESP) forecasts for each year of the retrospective period, using forcings from all other years, for the individual models and for the multi-model ensemble. To form the ensemble for the ESP …
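    The weighting step, multiple linear regression of observed flows on the bias-corrected simulations, can be sketched with synthetic data standing in for the VIC, NOAH and SAC output:

```python
import numpy as np

# Sketch of the regression-weighting step: regress observed flows on the
# individual model simulations to obtain multi-model weights. Synthetic
# flows stand in for the VIC/NOAH/SAC output; all numbers are invented.

rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=50.0, size=600)       # "observed" flows
sims = np.column_stack([obs + rng.normal(0, s, 600)    # three imperfect
                        for s in (15.0, 25.0, 40.0)])  # model simulations

def ensemble_weights(sims, obs):
    """Least-squares weights (with intercept) for the multi-model blend."""
    X = np.column_stack([np.ones(len(obs)), sims])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return coef                                        # [intercept, w1, w2, w3]

coef = ensemble_weights(sims, obs)
blend = np.column_stack([np.ones(len(obs)), sims]) @ coef

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

    By least-squares optimality the regression blend can do no worse, on the fitting period, than any single model it includes, which is consistent with the ensemble outperforming the individual models overall.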

  16. Source-mask selection using computational lithography: further investigation incorporating rigorous resist models

    NASA Astrophysics Data System (ADS)

    Kapasi, Sanjay; Robertson, Stewart; Biafore, John; Smith, Mark D.

    2009-12-01

    Recent publications have emphasized the criticality of computational lithography in source-mask selection for the 32 and 22 nm technology nodes. Lithographers often select illuminator geometries by analyzing aerial images for a limited set of structures using computational lithography tools. Last year, Biafore et al. [1] demonstrated the divergence between aerial image models and resist models in computational lithography. A follow-up study [2] illustrated that the optimal illuminator differs when selected based on a resist model rather than an aerial image model. In that study, optimal source shapes were evaluated for 1D logic patterns using an aerial image model and two distinct commercial resist models, with a physics-based lumped parameter resist model (LPM). Accurately calibrated full physical models are portable across imaging conditions, unlike lumped models. This study extends the previous work: full physical resist models (FPM) with calibrated resist parameters [3-6] are used to select optimum illumination geometries for 1D logic patterns. Several imaging parameters, such as numerical aperture (NA), source geometries (annular, quadrupole, etc.), and illumination configurations for different sizes and pitches, are explored. Our goal is to compare and analyze the optimal source shapes across various imaging conditions and, in the end, to recommend the optimal source-mask solution for a given set of designs based on all the models.

  17. Incorporating Cold Cap Behavior in a Joule-heated Waste Glass Melter Model

    SciTech Connect

    Varija Agarwal; Donna Post Guillen

    2013-08-01

    In this paper, an overview of Joule-heated waste glass melters used in the vitrification of high level waste (HLW) is presented, with a focus on the cold cap region. This region, in which feed-to-glass conversion reactions occur, is critical in determining the melting properties of any given glass melter. An existing 1D computer model of the cold cap, implemented in MATLAB, is described in detail. This model is a standalone model that calculates cold cap properties based on boundary conditions at the top and bottom of the cold cap. Efforts to couple this cold cap model with a 3D STAR-CCM+ model of a Joule-heated melter are then described. The coupling is being implemented in ModelCenter, a software integration tool. The ultimate goal of this model is to guide the specification of melter parameters that optimize glass quality and production rate.

  18. Incorporating age at onset of smoking into genetic models for nicotine dependence: Evidence for interaction with multiple genes

    PubMed Central

    Grucza, Richard A.; Johnson, Eric O.; Krueger, Robert F.; Breslau, Naomi; Saccone, Nancy L.; Chen, Li-Shiun; Derringer, Jaime; Agrawal, Arpana; Lynskey, Michael; Bierut, Laura J.

    2011-01-01

    Nicotine dependence is moderately heritable, but identified genetic associations explain only modest portions of this heritability. We analyzed 3,369 SNPs from 349 candidate genes, and investigated whether incorporation of SNP-by-environment interaction into association analyses might bolster gene discovery efforts and prediction of nicotine dependence. Specifically, we incorporated the interaction between allele count and age-at-onset of regular smoking (AOS) into association analyses of nicotine dependence. Subjects were from the Collaborative Genetic Study of Nicotine Dependence, and included 797 cases ascertained for Fagerström nicotine dependence and 811 non-nicotine-dependent smokers as controls, all of European descent. Compared with main-effect models, SNP x AOS interaction models resulted in higher numbers of nominally significant tests, increased predictive utility at individual SNPs, and higher predictive utility in a multi-locus model. Some SNPs previously documented in main-effect analyses exhibited improved fits in the joint analysis, including rs16969968 from CHRNA5 and rs2314379 from MAP3K4. CHRNA5 exhibited larger effects in later-onset smokers, in contrast with a previous report that suggested the opposite interaction (Weiss et al., PLoS Genetics, 4: e1000125, 2008). However, a number of SNPs that did not emerge in main-effect analyses were among the strongest findings in the interaction analyses. These include SNPs located in GRIN2B (p = 1.5 × 10^-5), which encodes a subunit of the NMDA receptor channel, a key molecule in mediating age-dependent synaptic plasticity. Incorporation of logically chosen interaction parameters, such as AOS, into genetic models of substance-use disorders may increase the degree of explained phenotypic variation and constitute a promising avenue for gene discovery. PMID:20624154
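    The SNP-by-AOS idea can be sketched as an interaction term in a logistic model: simulate a SNP whose effect on case status grows with age at onset, then compare main-effects and interaction fits. The effect sizes, sample, and plain gradient-descent fitter are all invented for the sketch:

```python
import numpy as np

# Sketch of the SNP x AOS interaction: a simulated SNP whose effect grows
# with age at onset of smoking (AOS). A main-effects logistic model is
# compared against one adding the interaction term. Data are invented.

rng = np.random.default_rng(1)
n = 4000
snp = rng.binomial(2, 0.35, n).astype(float)   # allele count 0/1/2
aos = rng.normal(0.0, 1.0, n)                  # standardized age at onset
logit = -0.5 + 0.1 * snp + 0.2 * aos + 0.6 * snp * aos
y = rng.binomial(1, 1 / (1 + np.exp(-logit))).astype(float)

def fit_loglik(X, y, lr=0.05, iters=3000):
    """Logistic regression by gradient descent; returns max log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        b += lr * X.T @ (y - p) / len(y)       # ascent on the Bernoulli log-lik
    p = 1 / (1 + np.exp(-X @ b))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

ll_main = fit_loglik(np.column_stack([snp, aos]), y)
ll_int = fit_loglik(np.column_stack([snp, aos, snp * aos]), y)
```

    Twice the log-likelihood gain of the interaction model is the usual 1-df likelihood-ratio statistic, which is how an AOS-dependent SNP effect shows up even when the main effect alone is weak.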

  19. A Model Incorporating Some of the Mechanical and Biochemical Factors Underlying Clot Formation and Dissolution in Flowing Blood

    DOE PAGES Beta

    Anand, M.; Rajagopal, K.; Rajagopal, K. R.

    2003-01-01

    Multiple interacting mechanisms control the formation and dissolution of clots to maintain blood in a state of delicate balance. In addition to a myriad of biochemical reactions, rheological factors also play a crucial role in modulating the response of blood to external stimuli. To date, a comprehensive model for clot formation and dissolution that takes into account the biochemical, medical, and rheological factors has not been put into place; the existing models emphasize one or the other of these factors. In this paper, after discussing the various biochemical, physiologic and rheological factors at some length, we develop a model for clot formation and dissolution that incorporates many of the relevant crucial factors that have a bearing on the problem. The model, though just a first step towards understanding a complex phenomenon, goes further than previous models in integrating the biochemical, physiologic and rheological factors that come into play.

  20. Incorporation of a radiation parameterization scheme into the Naval Research Laboratory limited area dynamical weather prediction model. Master's thesis

    SciTech Connect

    Stewart, P.C.

    1992-09-01

    This paper describes the incorporation of the Harshvardhan et al. (1987) radiation parameterization into the Naval Research Laboratory Limited Area Dynamical Weather Prediction Model. A comparison between model runs with the radiation scheme and runs without the scheme was made to examine three mesoscale phenomena along the west coast of the United States during the period 0000 UTC 02 May 1990 - 1200 UTC 03 May 1990: the land and sea breeze, the southerly surge and the Catalina eddy. In general the updated model with the radiation parameterization yielded a more accurate simulation of the layer temperatures, geopotential heights, cloud cover, and radiative processes as verified from synoptic, mesoscale, and satellite observations. Subsequently, the updated model also forecast a more realistic diurnal evolution of the sea and land breeze, the southerly surge and the Catalina eddy.

  1. Integrative modelling of animal movement: incorporating in situ habitat and behavioural information for a migratory marine predator.

    PubMed

    Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît

    2013-01-01

    A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems. PMID:23135676

  2. A strategy for residual error modeling incorporating scedasticity of variance and distribution shape.

    PubMed

    Dosne, Anne-Gaëlle; Bergstrand, Martin; Karlsson, Mats O

    2016-04-01

    Nonlinear mixed effects model parameters are commonly estimated using maximum likelihood. The properties of these estimators depend on the assumption that residual errors are independent and normally distributed with mean zero and correctly defined variance. Violations of this assumption can cause bias in parameter estimates, invalidate the likelihood ratio test and preclude simulation of realistic data. The choice of error model is mostly done on a case-by-case basis from a limited set of commonly used models. In this work, two strategies are proposed to extend and unify residual error modeling: a dynamic transform-both-sides approach combined with a power error model (dTBS) capable of handling skewed and/or heteroscedastic residuals, and a t-distributed residual error model allowing for symmetric heavy tails. Ten published pharmacokinetic and pharmacodynamic models as well as stochastic simulation and estimation were used to evaluate the two approaches. dTBS always led to significant improvements in objective function value, with most examples displaying some degree of right-skewness and variances proportional to predictions raised to powers between 0 and 1. The t-distribution led to significant improvement for 5 out of 10 models with degrees of freedom between 3 and 9. Six models were most improved by the t-distribution while four models benefited more from dTBS. Changes in other model parameter estimates were observed. In conclusion, the use of dTBS and/or t-distribution models provides a flexible and easy-to-use framework capable of characterizing all commonly encountered residual error distributions. PMID:26679003
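The two ingredients of the abstract above can be illustrated numerically: a power error model makes the residual standard deviation scale with the prediction, and a t-distribution assigns far more plausibility to a large residual than a normal does. The following is a minimal sketch with assumed parameter values (not the paper's estimates):

```python
import math

def power_error_sd(pred, sigma, theta):
    # residual SD proportional to prediction^theta
    # theta = 0 gives an additive error, theta = 1 a proportional error
    return sigma * pred ** theta

def t_logpdf(x, df, scale):
    # log density of a scaled Student-t; heavy tails for small df
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * math.log1p((x / scale) ** 2 / df))

def normal_logpdf(x, scale):
    # log density of a zero-mean normal
    return -0.5 * math.log(2 * math.pi) - math.log(scale) - 0.5 * (x / scale) ** 2

# a 5-SD residual is far less surprising under a t with 4 df than under a normal
print(normal_logpdf(5.0, 1.0))
print(t_logpdf(5.0, 4.0, 1.0))
```

In a fitting context the gain in log-likelihood from the heavier tail is what drives the objective function improvements reported in the abstract.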

  3. A Hall Thruster Performance Model Incorporating the Effects of a Multiply-Charged Plasma

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Jankovsky, Robert S.

    2002-01-01

    A Hall thruster performance model that predicts anode specific impulse, anode efficiency, and thrust is discussed. The model is derived as a function of a voltage loss parameter, an electron loss parameter, and the charge state of the plasma. Experimental data from SPT and TAL type thrusters up to discharge powers of 21.6 kW are used to determine the best fit for model parameters. General values for the model parameters are found, applicable to high power thrusters and irrespective of thruster type. Performance of a 50 kW thruster is calculated for an anode specific impulse of 2500 seconds or a discharge current of 100 A.

  4. Probabilistic model for pressure vessel reliability incorporating fracture mechanics and nondestructive examination

    SciTech Connect

    Tow, D.M.; Reuter, W.G.

    1998-03-01

    A probabilistic model has been developed for predicting the reliability of structures based on fracture mechanics and the results of nondestructive examination (NDE). The distinctive feature of this model is the way in which inspection results and the probability of detection (POD) curve are used to calculate a probability density function (PDF) for the number of flaws and the distribution of those flaws among the various size ranges. In combination with a probabilistic fracture mechanics model, this density function is used to estimate the probability of failure (POF) of a structure in which flaws have been detected by NDE. The model is useful for parametric studies of inspection techniques and material characteristics.
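The core idea of combining a flaw-count prior with a probability of detection can be sketched with a much simpler stand-in for the report's machinery: under a Poisson prior on flaw count and independent detection with probability POD, the Poisson thinning property gives the posterior distribution of undetected flaws in closed form. All numbers below are illustrative assumptions, not values from the report:

```python
import math

def posterior_missed_flaws(lam, pod, m_max=20):
    """Pmf of flaws missed by inspection (illustrative sketch).

    Assumes a Poisson(lam) prior on the number of flaws and that each flaw
    is detected independently with probability pod. By Poisson thinning, the
    undetected count is Poisson(lam * (1 - pod)), independent of how many
    flaws the NDE actually found.
    """
    mu = lam * (1.0 - pod)
    return [math.exp(-mu) * mu ** m / math.factorial(m) for m in range(m_max + 1)]

pmf = posterior_missed_flaws(lam=3.0, pod=0.9)
p_any_missed = 1.0 - pmf[0]
print(f"P(at least one undetected flaw) = {p_any_missed:.3f}")
```

A probabilistic fracture mechanics model would then weight each possible missed-flaw size by this count distribution to arrive at a probability of failure.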

  5. Incorporating NDVI in a gravity model setting to describe spatio-temporal patterns of Lyme borreliosis incidence

    NASA Astrophysics Data System (ADS)

    Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.

    2012-04-01

    Lyme borreliosis (LB) is the most common tick-borne disease in Europe, and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi, and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread depend greatly on environmental conditions impacting the habitat, demography and trophic interactions of ticks and the wide range of organisms that ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and, very importantly, of the fashion and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of the location and size of urban and vegetated areas and the seasonal and annual change in vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction force between two bodies is directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields such as trade modeling, health care service planning and disease mapping, among others. In our implementation, the sizes of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies', and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed during the last decade have been ascribed
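The gravity analogy described above can be sketched in a few lines. The towns, patch areas, NDVI weights and the inverse-square distance decay below are all hypothetical illustrations; the paper's actual formulation may differ:

```python
import math

def gravity_exposure(towns, patches, beta=2.0):
    """Toy gravity-model exposure index (illustrative, not the paper's model).

    Attraction between a settlement and a vegetated patch is proportional to
    the product of their 'masses' (population; patch area weighted by NDVI)
    and inversely proportional to distance**beta. Assumes distances > 0.
    """
    scores = {}
    for name, (tx, ty, pop) in towns.items():
        total = 0.0
        for (vx, vy, area, ndvi) in patches:
            d = math.hypot(tx - vx, ty - vy)
            total += pop * (area * ndvi) / d ** beta
        scores[name] = total
    return scores

towns = {"A": (0.0, 0.0, 10000), "B": (10.0, 0.0, 10000)}
patches = [(1.0, 0.0, 5.0, 0.8),   # large, green patch near town A
           (9.0, 0.0, 5.0, 0.3)]   # same size but lower NDVI, near town B
print(gravity_exposure(towns, patches))
```

Because NDVI enters the 'mass' term, re-evaluating the index with seasonal NDVI composites yields the time-varying exposure surface the abstract describes.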

  6. A rhenium tris-carbonyl derivative as a model molecule for incorporation into phospholipid assemblies for skin applications.

    PubMed

    Fernández, Estibalitz; Rodríguez, Gelen; Hostachy, Sarah; Clède, Sylvain; Cócera, Mercedes; Sandt, Christophe; Lambert, François; de la Maza, Alfonso; Policar, Clotilde; López, Olga

    2015-07-01

    A rhenium tris-carbonyl derivative (fac-[Re(CO)3Cl(2-(1-dodecyl-1H-1,2,3-triazol-4-yl)-pyridine)]) was incorporated into phospholipid assemblies, called bicosomes, and the penetration of this molecule into skin was monitored using Fourier-transform infrared microspectroscopy (FTIR). To evaluate the capacity of bicosomes to promote the penetration of this derivative, the skin penetration of the Re(CO)3 derivative dissolved in dimethyl sulfoxide (DMSO), a typical enhancer, was also studied. Dynamic light scattering (DLS) results showed an increase in the size of the bicosomes with the incorporation of the Re(CO)3 derivative, and the FTIR microspectroscopy showed that the Re(CO)3 derivative incorporated in bicosomes penetrated deeper into the skin than when dissolved in DMSO. When this molecule was applied on the skin using the bicosomes, 60% of the Re(CO)3 derivative was retained in the stratum corneum (SC) and 40% reached the epidermis (Epi). Otherwise, the application of this molecule via DMSO resulted in 95% of the Re(CO)3 derivative being in the SC and only 5% reaching the Epi. Using a Re(CO)3 derivative with a dodecyl chain as a model molecule, it was possible to determine the distribution of molecules with similar physicochemical characteristics in the skin using bicosomes. This fact makes these nanostructures promising vehicles for the application of lipophilic molecules inside the skin. PMID:25969419

  7. Combined harvesting of a stage structured prey-predator model incorporating cannibalism in competitive environment.

    PubMed

    Chakraborty, Kunal; Das, Kunal; Kar, Tapan Kumar

    2013-01-01

    In this paper, we propose a prey-predator system with stage structure for the predator. The proposed system incorporates cannibalism for predator populations in a competitive environment. The combined fishing effort is considered as a control used to harvest the populations. The steady states of the system are determined and the dynamical behavior of the system is discussed. Local stability of the system is analyzed and sufficient conditions are derived for the global stability of the system at the positive equilibrium point. The existence of the Hopf bifurcation phenomenon is examined at the positive equilibrium point of the proposed system. We consider harvesting effort as a control parameter and subsequently characterize the optimal control parameter in order to formulate the optimal control problem under the dynamic framework towards optimal utilization of the resource. Moreover, the optimal system is solved numerically to investigate the sustainability of the ecosystem using an iterative method with a Runge-Kutta fourth-order scheme. Simulation results show that the optimal control scheme can achieve a sustainable ecosystem. Results are analyzed with the help of graphical illustrations. PMID:23537768
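The numerical approach named in the abstract, integrating the harvested system with a classical fourth-order Runge-Kutta scheme, can be sketched as follows. The two-compartment ODEs and all parameter values here are hypothetical stand-ins, not the paper's stage-structured, cannibalistic system:

```python
def rk4_step(f, y, t, h):
    # classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def predator_prey(t, y, r=1.0, K=10.0, a=0.5, b=0.2, d=0.4, E=0.1):
    """Illustrative harvested prey-predator ODEs (hypothetical parameters):
    logistic prey, linear functional response, combined harvesting effort E
    applied to both populations."""
    x, p = y
    dx = r * x * (1 - x / K) - a * x * p - E * x
    dp = b * a * x * p - d * p - E * p
    return [dx, dp]

y, t, h = [5.0, 1.0], 0.0, 0.01
for _ in range(2000):   # integrate to t = 20
    y = rk4_step(predator_prey, y, t, h)
    t += h
print(y)  # both populations stay positive and bounded under this effort level
```

In an optimal-control setting, E would be updated iteratively between forward sweeps like this one and backward sweeps of the adjoint equations.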

  8. Model for the incorporation of plant detritus within clastic accumulating interdistributary bays

    SciTech Connect

    Gastaldo, R.A.; McCarroll, S.M.; Douglass, D.P.

    1985-01-01

    Plant-bearing clastic lithologies interpreted as interdistributary bay deposits are reported from rocks Devonian to Holocene in age. Often, these strata preserve accumulations of discrete, laterally continuous leaf beds or coaly horizons. Investigations within two modern interdistributary bays in the lower delta plain of the Mobile Delta, Alabama have provided insight into the phytotaphonomic processes responsible for the generation of carbonaceous lithologies, coaly horizons and laterally continuous leaf beds. Delvan and Chacaloochee Bays lie adjacent to the Tensaw River distributary channel and differ in the mode of clastic and plant detrital accumulation. Delvan Bay, lying west of the distributary channel, is accumulating detritus solely by overbank deposition. Chacaloochee Bay, lying east of the channel, presently is accumulating detritus by active crevasse-splay activity. Plant detritus is accumulating as transported assemblages in both bays, but the mode of preservation differs. In Delvan Bay, the organic component is highly degraded and incorporated within the clastic component, resulting in a carbonaceous silt. Little identifiable plant detritus can be recovered. On the other hand, the organic component in Chacaloochee Bay is accumulating in locally restricted allochthonous peat deposits up to 2 m in thickness, and discrete leaf beds generated by flooding events. In addition, autochthonous plant accumulations occur on subaerially and aerially exposed portions of the crevasse. The resultant distribution of plant remains is a complicated array of transported and non-transported organics.

  9. Augmenting watershed model calibration with incorporation of ancillary data sources and qualitative soft data sources

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Watershed simulation models can be calibrated using “hard data” such as temporal streamflow observations; however, users may find upon examination of detailed outputs that some of the calibrated models may not reflect summative actual watershed behavior. Thus, it is necessary to use “soft data” (i....

  10. A Preventative Model of School Consultation: Incorporating Perspectives from Positive Psychology

    ERIC Educational Resources Information Center

    Akin-Little, K. Angeleque; Little, Steven G.; Delligatti, Nina

    2004-01-01

    Using the principles of mental health and behavioral consultation, combined with concepts from positive psychology, this paper generates a new preventative model of school consultation. This model has two steps: (1) the school psychologist aids the teacher in the development and use of his/her personal positive psychology (e.g., optimism,…

  11. LINKING MICROBES TO CLIMATE: INCORPORATING MICROBIAL ACTIVITY INTO CLIMATE MODELS COLLOQUIUM

    SciTech Connect

    DeLong, Edward; Harwood, Caroline; Reid, Ann

    2011-01-01

    This report explains the connection between microbes and climate, discusses in general terms what modeling is and how it is applied to climate, and discusses the need for knowledge in microbial physiology, evolution, and ecology to contribute to the determination of fluxes and rates in climate models. It concludes by recommending a multi-pronged approach to address these gaps.

  12. Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain

    ERIC Educational Resources Information Center

    Wilson, Mark; Moore, Stephen

    2011-01-01

    This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…

  13. Incorporating Video Modeling into a School-Based Intervention for Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wilson, Kaitlyn P.

    2013-01-01

    Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…

  14. Applications of explicitly-incorporated/post-processing measurement uncertainty in watershed modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...

  15. A simple 2-D inundation model for incorporating flood damage in urban drainage planning

    NASA Astrophysics Data System (ADS)

    Pathirana, A.; Tsegaye, S.; Gersonius, B.; Vairavamoorthy, K.

    2008-11-01

    In this paper a new inundation model code is developed and coupled with the Storm Water Management Model, SWMM, to relate spatial information associated with urban drainage systems as criteria for planning of storm water drainage networks. The prime objective is to achieve a model code that is simple and fast enough to be used consistently in the planning stages of urban drainage projects. The formulation for the two-dimensional (2-D) surface flow model algorithms is based on the Navier-Stokes equations in two dimensions. An Alternating Direction Implicit (ADI) finite difference numerical scheme is applied to solve the governing equations. This numerical scheme is used to express the partial differential equations with time steps split into two halves. The model algorithm is written using the C++ computer programming language. This 2-D surface flow model is then coupled with SWMM for simulation of both the pipe flow component and surcharge-induced inundation in urban areas. In addition, a damage calculation block is integrated within the inundation model code. The coupled model is shown to be capable of dealing with various flow conditions, as well as being able to simulate wetting and drying processes that will occur as the flood flows over an urban area. It has been applied under idealized and semi-hypothetical cases to determine detailed inundation zones, depths and velocities due to surcharged water on the overland surface.
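The ADI splitting named in the abstract (each time step split into two halves, each implicit in one spatial direction) can be illustrated on a simpler stand-in problem, 2-D diffusion, where each half step reduces to tridiagonal solves. This is a sketch of the splitting idea only, not the paper's surface-flow solver, and all grid values are made up:

```python
def thomas(a, b, c, d):
    # solve a tridiagonal system: a sub-, b main-, c super-diagonal, d RHS
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def sweep(u, alpha):
    # one ADI half step: implicit along each row, explicit across rows
    n, m = len(u), len(u[0])
    out = []
    for j in range(n):
        d = []
        for i in range(m):
            up = u[j + 1][i] if j + 1 < n else u[j][i]
            dn = u[j - 1][i] if j - 1 >= 0 else u[j][i]
            d.append(u[j][i] + alpha * (up - 2 * u[j][i] + dn))
        a = [-alpha] * m; b = [1 + 2 * alpha] * m; c = [-alpha] * m
        a[0] = 0.0; c[-1] = 0.0; b[0] = 1 + alpha; b[-1] = 1 + alpha  # no-flux walls
        out.append(thomas(a, b, c, d))
    return out

def adi_step(u, alpha):
    # full step: implicit in x then, via transposes, implicit in y
    t = lambda g: [list(r) for r in zip(*g)]
    return t(sweep(t(sweep(u, alpha)), alpha))

u = [[0.0] * 8 for _ in range(8)]
u[3][3] = 64.0            # initial spike of 'water depth'
for _ in range(5):
    u = adi_step(u, 0.25)
print(sum(map(sum, u)))   # total volume is conserved by the scheme
```

The same structure, tridiagonal solves per row and per column, is what keeps ADI fast enough for the planning-stage use the paper targets.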

  16. A Dynamic Photovoltaic Model Incorporating Capacitive and Reverse-Bias Characteristics

    SciTech Connect

    Kim, KA; Xu, CY; Jin, L; Krein, PT

    2013-10-01

    Photovoltaics (PVs) are typically modeled only for their forward-biased dc characteristics, as in the commonly used single-diode model. While this approach accurately models the I-V curve under steady forward bias, it lacks dynamic and reverse-bias characteristics. The dynamic characteristics, primarily parallel capacitance and series inductance, affect operation when a PV cell or string interacts with switching converters or experiences sudden transients. Reverse-bias characteristics are often ignored because PV devices are not intended to operate in the reverse-biased region. However, when partial shading occurs on a string of PVs, the shaded cell can become reverse biased and develop into a hot spot that permanently degrades the cell. To fully examine PV behavior under hot spots and various other faults, reverse-bias characteristics must also be modeled. This study develops a comprehensive mathematical PV model based on circuit components that accounts for forward bias, reverse bias, and dynamic characteristics. Using a series of three experimental tests on an unilluminated PV cell, all required model parameters are determined. The model is implemented in MATLAB Simulink and accurately models the measured data.
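The forward-biased static backbone the abstract starts from, the single-diode model, can be sketched as an implicit equation solved by Newton iteration; the paper's dynamic model then layers parallel capacitance and series inductance, and a reverse-bias branch, on top of this characteristic. Parameter values below are illustrative, not fitted to any real cell:

```python
import math

def single_diode_current(v, iph=8.0, i0=1e-9, n=1.3, vt=0.02585, rs=0.02, rsh=50.0):
    """Terminal current of the single-diode PV model at voltage v, found by
    Newton iteration on
        i = iph - i0*(exp((v + i*rs)/(n*vt)) - 1) - (v + i*rs)/rsh
    (illustrative parameters: photocurrent iph, saturation current i0,
    ideality n, thermal voltage vt, series rs and shunt rsh resistances)."""
    i = iph
    for _ in range(50):
        e = math.exp((v + i * rs) / (n * vt))
        f = iph - i0 * (e - 1) - (v + i * rs) / rsh - i
        df = -i0 * e * rs / (n * vt) - rs / rsh - 1
        step = f / df
        i -= step
        if abs(step) < 1e-12:
            break
    return i

isc = single_diode_current(0.0)   # short-circuit current, slightly below iph
print(f"Isc = {isc:.3f} A")
```

A dynamic extension would add a capacitor current C*dV/dt in parallel with this static branch, which is what matters when the cell interacts with a switching converter.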

  17. Incorporating Fuzzy Systems Modeling and Possibility Theory in Hydrogeological Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.

    2008-12-01

    Hydrogeological predictions are subject to numerous uncertainties, including the development of conceptual, mathematical, and numerical models, as well as determination of their parameters. Stochastic simulations of hydrogeological systems and the associated uncertainty analysis are usually based on the assumption that the data characterizing spatial and temporal variations of hydrogeological processes are random, and the output uncertainty is quantified using a probability distribution. However, hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete or subjective information. One of the modern approaches to modeling and uncertainty quantification of such systems is based on using a combination of statistical and fuzzy-logic uncertainty analyses. The aims of this presentation are to: (1) present evidence of fuzziness in developing conceptual hydrogeological models, and (2) give examples of the integration of the statistical and fuzzy-logic analyses in modeling and assessing both aleatoric uncertainties (e.g., caused by vagueness in assessing the subsurface system heterogeneities of fractured-porous media) and epistemic uncertainties (e.g., caused by the selection of different simulation models) involved in hydrogeological modeling. The author will discuss several case studies illustrating the application of fuzzy modeling for assessing the water balance and water travel time in unsaturated-saturated media. These examples will include the evaluation of associated uncertainties using the main concepts of possibility theory, a comparison between the uncertainty evaluation using probabilistic and possibility theories, and a transformation of the probabilities into possibility distributions (and vice versa) for modeling hydrogeological processes.

  18. Eco-Evo PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models

    EPA Science Inventory

    We synthesize how advances in computational methods and population genomics can be combined within an Ecological-Evolutionary (Eco-Evo) PVA model. Eco-Evo PVA models are powerful new tools for understanding the influence of evolutionary processes on plant and animal population pe...

  19. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    Summary The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711
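The mixture structure described above, a point mass of non-susceptible subjects at the "none" category on top of a proportional-odds model for susceptible subjects, can be written down in a few lines. The cutpoints and mixing probability below are illustrative assumptions, not estimates from the paper:

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def zi_prop_odds_pmf(pi_nonsusceptible, cutpoints, eta):
    """Pmf of a zero-inflated proportional-odds outcome (illustrative sketch).

    With probability pi_nonsusceptible the subject is not susceptible and
    responds in category 0 ('none'); otherwise the response follows a
    proportional-odds model with cumulative logits P(Y <= k) = expit(c_k - eta).
    cutpoints must be increasing; eta is the linear predictor x'beta.
    """
    cum = [expit(c - eta) for c in cutpoints] + [1.0]
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    pi = pi_nonsusceptible
    return [pi + (1 - pi) * probs[0]] + [(1 - pi) * p for p in probs[1:]]

pmf = zi_prop_odds_pmf(0.4, cutpoints=[-1.0, 0.5, 2.0], eta=0.0)
print(pmf)  # mass in category 0 is inflated relative to the plain model
```

Fitting would maximize the likelihood built from this pmf; comparing it to the plain proportional-odds fit with a likelihood ratio test is the model-choice device the abstract describes.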

  20. Incorporation of a Left Ventricle Finite Element Model Defining Infarction Into the XCAT Imaging Phantom

    PubMed Central

    Veress, Alexander I.; Segars, W. Paul; Tsui, Benjamin M. W.; Gullberg, Grant T.

    2011-01-01

    The 4D extended cardiac-torso (XCAT) phantom was developed to provide a realistic and flexible model of the human anatomy and cardiac and respiratory motions for use in medical imaging research. A prior limitation to the phantom was that it did not accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). We overcame this limitation in a previous study by combining the phantom with a finite-element (FE) mechanical model of the left ventricle (LV) capable of more realistically simulating regional defects caused by ischemia. In the present work, we extend this model giving it the ability to accurately simulate motion abnormalities caused by myocardial infarction (MI), a far more complex situation in terms of altered mechanics compared with the modeling of acute ischemia. The FE model geometry is based on high resolution CT images of a normal male subject. An anterior region was defined as infarcted and the material properties and fiber distribution were altered, according to the bio-physiological properties of two types of infarction, i.e., fibrous and remodeled infarction (30% thinner wall than fibrous case). Compared with the original, surface-based 4D beating heart model of the XCAT, where regional abnormalities are modeled by simply scaling down the motion in those regions, the FE model was found to provide a more accurate representation of the abnormal motion of the LV due to the effects of fibrous infarction as well as depicting the motion of remodeled infarction. In particular, the FE models allow for the accurate depiction of dyskinetic motion. The average circumferential strain results were found to be consistent with measured dyskinetic experimental results. Combined with the 4D XCAT phantom, the FE model can be used to produce realistic multimodality sets of imaging data from a variety of patients in which the normal or abnormal cardiac function is accurately represented. PMID:21041157

  1. A model for arsenic anti-site incorporation in GaAs grown by hydride vapor phase epitaxy

    SciTech Connect

    Schulte, K. L.; Kuech, T. F.

    2014-12-28

    GaAs growth by hydride vapor phase epitaxy (HVPE) has regained interest as a potential route to low cost, high efficiency thin film photovoltaics. In order to attain the highest efficiencies, deep level defect incorporation in these materials must be understood and controlled. The arsenic anti-site defect, As{sub Ga} or EL2, is the predominant deep level defect in HVPE-grown GaAs. In the present study, the relationships between HVPE growth conditions and incorporation of EL2 in GaAs epilayers were determined. Epitaxial n-GaAs layers were grown under a wide range of deposition temperatures (T{sub D}) and gallium chloride partial pressures (P{sub GaCl}), and the EL2 concentration, [EL2], was determined by deep level transient spectroscopy. [EL2] agreed with equilibrium thermodynamic predictions in layers grown under conditions in which the growth rate, R{sub G}, was controlled by conditions near thermodynamic equilibrium. [EL2] fell below equilibrium levels when R{sub G} was controlled by surface kinetic processes, with the disparity increasing as R{sub G} decreased. The surface chemical composition during growth was determined to have a strong influence on EL2 incorporation. Under thermodynamically limited growth conditions, e.g., high T{sub D} and/or low P{sub GaCl}, the surface vacancy concentration was high and the bulk crystal was close to equilibrium with the vapor phase. Under kinetically limited growth conditions, e.g., low T{sub D} and/or high P{sub GaCl}, the surface attained a high GaCl coverage, blocking As adsorption. This competitive adsorption process reduced the growth rate and also limited the amount of arsenic that incorporated as As{sub Ga}. A defect incorporation model which accounted for the surface concentration of arsenic as a function of the growth conditions, was developed. This model was used to identify optimal growth parameters for the growth of thin films for photovoltaics, conditions in which a high growth rate and low [EL2] could be

  2. Development of a human eye model incorporated with intraocular scattering for visual performance assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chun; Jiang, Chong-Jhih; Yang, Tsung-Hsun; Sun, Ching-Cherng

    2012-07-01

    A biometry-based human eye model was developed by using the empirical anatomic and optical data of ocular parameters. The gradient refractive index of the crystalline lens was modeled by concentric conicoid isoindical surfaces and was adaptive to accommodation and age. The chromatic dispersion of ocular media was described by Cauchy equations. The intraocular scattering model was composed of volumetric Mie scattering in the cornea and the crystalline lens, and a diffusive-surface model at the retina fundus. The retina was regarded as a Lambertian surface and was assigned its corresponding reflectance at each wavelength. The optical performance of the eye model was evaluated in CodeV and ASAP and presented by the modulation transfer functions at single and multiple wavelengths. The chromatic optical powers obtained from this model resembled that of the average physiological eyes. The scattering property was assessed by means of glare veiling luminance and compared with the CIE general disability glare equation. By replacing the transparent lens with a cataractous lens, the disability glare curve of cataracts was generated to compare with the normal disability glare curve. This model has high potential for investigating visual performance in ordinary lighting and display conditions and under the influence of glare sources.

  3. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  4. Numerically simulating alpine landscapes: The geomorphologic consequences of incorporating glacial erosion in surface process models

    NASA Astrophysics Data System (ADS)

    Tomkin, Jonathan H.

    2009-01-01

    A numerical model that simulates the evolution of glaciated mountain landscapes is presented. By employing a popular, sliding-based glacial erosion model, many common glacial landforms are created. The numerical model builds on earlier work as it is fully two-dimensional and employs the first-order forcings on mountain evolution. These forcings include tectonic uplift, isostasy, hillslope processes, fluvial processes, and glacial processes. A climate-dependent model of ice dynamics is employed to determine ice coverage and ice flux. Two simulations are presented: one with generic model parameters, and a second with parameters that correspond to conditions in the Southern Alps of New Zealand. Landforms produced by the model include climate-dependent elevation lowering, similar to what might be expected from a "glacial buzz-saw", valley overdeepening, terminal moraines, and valley retreat. The model also predicts that current rates of sedimentation are higher than the long-term average, and that several tens of thousands of years are required for the landscape to adjust to a change in the dominant erosional forcing. Therefore, glaciated orogens are unlikely to achieve topographic steady state over Milankovitch timescales.

  5. Ionospheric model-observation comparisons: E layer at Arecibo - Incorporation of SDO-EVE solar irradiances

    NASA Astrophysics Data System (ADS)

    Sojka, Jan J.; Jensen, Joseph B.; David, Michael; Schunk, Robert W.; Woods, Tom; Eparvier, Frank; Sulzer, Michael P.; Gonzalez, Sixto A.; Eccles, J. Vincent

    2014-05-01

    This study evaluates how the new irradiance observations from the NASA Solar Dynamics Observatory (SDO) Extreme Ultraviolet Variability Experiment (EVE) can, with their high spectral resolution and 10 s cadence, improve the modeling of the E region. To demonstrate this, a campaign combining EVE observations with those of the NSF Arecibo incoherent scatter radar (ISR) was conducted. The ISR provides E region electron density observations with high altitude resolution (300 m) and absolute densities using the plasma line technique. Two independent ionospheric models were used, the Utah State University Time-Dependent Ionospheric Model (TDIM) and Space Environment Corporation's Data-Driven D Region (DDDR) model. Each used the same EVE irradiance spectrum binned at 1 nm resolution from 0.1 to 106 nm. At the E region peak the modeled TDIM density is 20% lower and that of the DDDR is 6% higher than observed. These differences could correspond to a 36% lower (TDIM) and 12% higher (DDDR) production rate if the differences were entirely attributed to the solar irradiance source. The detailed profile shapes, including the E region altitude and that of the valley region, were only qualitatively similar to observations. Differences on the order of a neutral scale height were present. Neither model captured a distinct dawn-to-dusk tilt in the E region peak altitude. A model sensitivity study demonstrated how future improved spectral resolution of the 0.1 to 7 nm irradiance could account for some of these model shortcomings, although other relevant processes are also poorly modeled.

  6. Incorporating representation of agricultural ecosystems and management within a dynamic biosphere model: Approach, validation, and significance

    NASA Astrophysics Data System (ADS)

    Kucharik, C.

    2004-12-01

    At the scale of individual fields, crop models have long been used to examine the interactions between soils, vegetation, the atmosphere and human management, using varied levels of numerical sophistication. While previous efforts have contributed significantly towards the advancement of modeling tools, the models themselves are not typically applied across larger continental scales due to a lack of crucial data. Furthermore, crop models are often used to study a single quantity, process, or cycle in isolation, limiting their value in considering the important tradeoffs between competing ecosystem services such as food production, water quality, and sequestered carbon. In response to the need for a more integrated agricultural modeling approach across the continental scale, an updated agricultural version of a dynamic biosphere model (IBIS) now integrates representations of land-surface physics and soil physics, canopy physiology, terrestrial carbon and nitrogen balance, crop phenology, solute transport, and farm management into a single framework. This version of the IBIS model (Agro-IBIS) uses a short (20- to 60-minute) timestep to simulate the rapid exchange of energy, carbon, water, and momentum between soils, vegetative canopies, and the atmosphere. The model can be driven either by site-specific meteorological data or by gridded climate datasets. Mechanistic crop models for corn, soybean, and wheat use physiologically based representations of leaf photosynthesis, stomatal conductance, and plant respiration. Model validation has been performed using data collected at a variety of temporal scales and at the following spatial scales: (1) the precision-agriculture scale (5 m), (2) the individual field experiment scale (AmeriFlux), and (3) regional and continental scales using annual USDA county-level yield data and monthly satellite (AVHRR) observations of vegetation characteristics at 0.5 degree resolution. To date, the model has been used with great success to

  7. Incorporation of NREL Solar Advisor Model Photovoltaic Capabilities with GridLAB-D

    SciTech Connect

    Tuffner, Francis K.; Hammerstrom, Janelle L.; Singh, Ruchi

    2012-10-19

    This report provides a summary of the work updating the photovoltaic model inside GridLAB-D. The National Renewable Energy Laboratory Solar Advisor Model (SAM) was utilized as a basis for algorithms and validation of the new implementation. Subsequent testing revealed that the two implementations are nearly identical in both solar impacts and power output levels. This synergized model not only aids the system-level impact studies of GridLAB-D, but also allows more specific details of a particular site to be explored via the SAM software.

  8. Alzheimer's disease: analysis of a mathematical model incorporating the role of prions.

    PubMed

    Helal, Mohamed; Hingant, Erwan; Pujo-Menjouet, Laurent; Webb, Glenn F

    2014-11-01

    We introduce a mathematical model of the in vivo progression of Alzheimer's disease with focus on the role of prions in memory impairment. Our model consists of differential equations that describe the dynamic formation of β-amyloid plaques based on the concentrations of Aβ oligomers, PrP(C) proteins, and the Aβ-x-PrP(C) complex, which are hypothesized to be responsible for synaptic toxicity. We prove the well-posedness of the model and provide stability results for its unique equilibrium, when the polymerization rate of β-amyloid is constant and also when it is described by a power law. PMID:24146290

  9. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) optimization tools; 2) gas turbine models for hybrid systems; 3) increased interplay between subsystems; 4) off-design modeling capabilities; 5) altitude effects; and 6) existing transient modeling architecture. Other factors include: 1) easier transfer between users and groups of users; 2) general aerospace industry acceptance and familiarity; and 3) a flexible analysis tool that can also be used for ground power applications.

  10. Simple model atmospheres incorporating new opacities of VO and TiO

    NASA Astrophysics Data System (ADS)

    Brett, J. M.

    1989-11-01

    The effect of the VO A-X, VO B-X and TiO υ molecular band opacities upon the atmospheres of red giants is investigated by construction of plane-parallel straight-mean opacity models. From these simple preliminary models a limited heating of the upper atmosphere is found, producing a temperature rise of up to 100 K. This result and its effect upon computed band strengths suggest that these opacities should be included in more accurate model atmospheres of late M stars.

  11. On Boundary Misorientation Distribution Functions and How to Incorporate them into 3D Models of Microstructural Evolution

    SciTech Connect

    Godfrey, A.W.; Holm, E.A.; Hughes, D.A.; Miodownik, M.

    1998-12-23

    The fundamental difficulties of incorporating experimentally obtained boundary misorientation distributions (BMDs) into 3D microstructural models are discussed. An algorithm is described which overcomes these difficulties. The boundary misorientations are treated as a statistical ensemble which is evolved toward the desired BMD using a Monte Carlo method. The application of this algorithm to a number of complex arbitrary BMDs shows that the approach is effective for both conserved and non-conserved textures. The algorithm is successfully used to create the BMDs observed in deformation microstructures containing both incidental dislocation boundaries (IDBs) and geometrically necessary boundaries (GNBs).
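
    The ensemble-evolution step described in the abstract can be sketched in a few lines: misorientation angles are perturbed one boundary at a time, and a move is kept when it brings the ensemble histogram closer to the target BMD. The function name, bin layout, and greedy acceptance rule below are illustrative assumptions, not the authors' code (a Metropolis acceptance rule would serve equally well).

```python
import random

def evolve_bmd(angles, target_hist, bins, steps=8000, max_angle=62.8, seed=0):
    """Evolve an ensemble of boundary misorientation angles toward a target
    distribution (BMD) by Monte Carlo moves. Illustrative sketch only."""
    rng = random.Random(seed)
    n = len(angles)
    width = max_angle / bins

    def hist(a):
        h = [0.0] * bins
        for x in a:
            h[min(int(x / width), bins - 1)] += 1.0 / len(a)
        return h

    def cost(a):
        # L1 distance between the ensemble histogram and the target BMD
        return sum(abs(p - q) for p, q in zip(hist(a), target_hist))

    c = cost(angles)
    for _ in range(steps):
        i = rng.randrange(n)
        old = angles[i]
        angles[i] = rng.uniform(0.0, max_angle)  # propose a new misorientation
        c_new = cost(angles)
        if c_new <= c:          # greedy acceptance of non-worsening moves
            c = c_new
        else:
            angles[i] = old     # reject the move
    return angles, c
```

    Starting from a uniform ensemble, the cost drops as the histogram is driven toward, e.g., a low-angle-dominated target, regardless of whether the target texture is conserved.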

  12. On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi

    2010-12-01

    The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved from two aspects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on the existing physiological and psychological research findings of human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve the rate control precision effectively.

  13. On boundary misorientation distribution functions and how to incorporate them into three-dimensional models of microstructural evolution

    SciTech Connect

    Miodownik, M.; Holm, E.A.; Godfrey, A.W.; Hughes, D.A.

    1999-07-09

    The fundamental difficulties of incorporating experimentally obtained boundary misorientation distributions (BMDs) into three-dimensional microstructural models are discussed. An algorithm is described which overcomes these difficulties. The boundary misorientations are treated as a statistical ensemble which is evolved toward the desired BMD using a Monte Carlo method. The application of this algorithm to a number of complex arbitrary BMDs shows that the approach is effective for both conserved and non-conserved textures. The algorithm is successfully used to create the BMDs observed in deformation microstructures containing both incidental dislocation boundaries (IDBs) and geometrically necessary boundaries (GNBs). The application of an algorithm to grain boundary engineering is discussed.

  14. Incorporating Daily Flood Control Objectives Into a Monthly Stochastic Dynamic Programing Model for a Hydroelectric Complex

    NASA Astrophysics Data System (ADS)

    Druce, Donald J.

    1990-01-01

    A monthly stochastic dynamic programing model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model establishes the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.

  15. A creep model for austenitic stainless steels incorporating cavitation and wedge cracking

    NASA Astrophysics Data System (ADS)

    Mahesh, S.; Alur, K. C.; Mathew, M. D.

    2011-01-01

    A model of damage evolution in austenitic stainless steels under creep loading at elevated temperatures is proposed. The initial microstructure is idealized as a space-tiling aggregate of identical rhombic dodecahedral grains, which undergo power-law creep deformation. Damage evolution in the form of cavitation and wedge cracking on grain-boundary facets is considered. Both diffusion- and deformation-driven grain-boundary cavity growth are treated. Cavity and wedge-crack length evolution are derived from an energy balance argument that combines and extends the models of Cottrell (1961 Trans. AIME 212 191-203), Williams (1967 Phil. Mag. 15 1289-91) and Evans (1971 Phil Mag. 23 1101-12). The time to rupture predicted by the model is in good agreement with published experimental data for a type 316 austenitic stainless steel under uniaxial creep loading. Deformation and damage evolution at the microscale predicted by the present model are also discussed.

  16. Chemical evolution of dwarf spheroidal galaxies based on model calculations incorporating observed star formation histories

    NASA Astrophysics Data System (ADS)

    Homma, H.; Murayama, T.

    We investigate a chemical evolution model that simultaneously explains the chemical composition and the star formation histories (SFHs) of dwarf spheroidal galaxies (dSphs). Recently, wide-field imaging photometry and multi-object spectroscopy have provided a wealth of data. We therefore develop a chemical evolution model based on an SFH given by photometric observations, which estimates a metallicity distribution function (MDF) for comparison with spectroscopic observations. With this new model we calculate the chemical evolution of four dSphs (Fornax, Sculptor, Leo II, Sextans), and find that a delay time of 0.1 Gyr for Type Ia SNe is too short to explain the observed [alpha/Fe] vs. [Fe/H] diagrams.

  17. Incorporating Results of Avian Toxicity Tests into a Model of Annual Reproductive Success

    EPA Science Inventory

    This manuscript presents a modeling approach for translating results from laboratory avian reproduction tests into an estimate of pesticide-caused change in the annual reproductive success of birds, also known as fecundity rate.

  18. Incorporation of prior information on parameters into nonlinear regression groundwater flow models. l. Theory.

    USGS Publications Warehouse

    Cooley, R.L.

    1982-01-01

    Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: 1) prior information having known reliability (that is, bias and random error structure), and 2) prior information consisting of best available estimates of unknown reliability. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.

  19. Creating a process for incorporating epidemiological modelling into outbreak management decisions.

    PubMed

    Akselrod, Hana; Mercon, Monica; Kirkeby Risoe, Petter; Schlegelmilch, Jeffrey; McGovern, Joanne; Bogucki, Sandy

    2012-01-01

    Modern computational models of infectious diseases greatly enhance our ability to understand new infectious threats and assess the effects of different interventions. The recently-released CDC Framework for Preventing Infectious Diseases calls for increased use of predictive modelling of epidemic emergence for public health preparedness. Currently, the utility of these technologies in preparedness and response to outbreaks is limited by gaps between modelling output and information requirements for incident management. The authors propose an operational structure that will facilitate integration of modelling capabilities into action planning for outbreak management, using the Incident Command System (ICS) and Synchronization Matrix framework. It is designed to be adaptable and scalable for use by state and local planners under the National Response Framework (NRF) and Emergency Support Function #8 (ESF-8). Specific epidemiological modelling requirements are described, and integrated with the core processes for public health emergency decision support. These methods can be used in checklist format to align prospective or real-time modelling output with anticipated decision points, and guide strategic situational assessments at the community level. It is anticipated that formalising these processes will facilitate translation of the CDC's policy guidance from theory to practice during public health emergencies involving infectious outbreaks. PMID:22948107

  20. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
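
    The two-source idea can be illustrated with a toy generative model: Gaussian noise is added to the stimulation strength before a sigmoidal recruitment curve, and log-normal noise is applied after it. The function name and all parameter values below are illustrative assumptions, not the fitted model from the paper.

```python
import math
import random

def mep_sample(x, rng, slope=8.0, x50=0.6, vmax=5.0,
               sigma_x=0.05, sigma_y=0.25):
    """Draw one simulated MEP amplitude (mV) at stimulation strength x,
    with independent variability sources on both sides of a sigmoidal
    recruitment nonlinearity. Illustrative sketch only."""
    eps_x = rng.gauss(0.0, sigma_x)   # variability before the nonlinearity
    eps_y = rng.gauss(0.0, sigma_y)   # variability after the nonlinearity
    recruit = vmax / (1.0 + math.exp(-slope * (x + eps_x - x50)))
    return recruit * math.exp(eps_y)  # log-normal output scatter, always > 0
```

    Near threshold the steep sigmoid amplifies the input-side noise, producing the wide, skewed amplitude distributions the authors describe, while on the plateau the output-side log-normal term dominates; a single post-nonlinearity noise term cannot reproduce this change in distribution shape along the IO curve.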

  1. Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.

    2014-12-01

    A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically-sized remediation site, a minimum of about 70 million grid cells are required to achieve desired cm-level thickness among low-permeability lenses responsible for driving the back-diffusion phenomena. Such discretization is nearly three orders of magnitude more than is typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity scales of grid resolution. These extensions have facilitated development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide high fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.

  2. A modified capacitance model of RF MEMS shunt switch incorporating fringing field effects of perforated beam

    NASA Astrophysics Data System (ADS)

    Guha, Koushik; Kumar, Mithlesh; Agarwal, Saurabh; Baishya, Srimanta

    2015-12-01

    This paper deals with an approach to accurately model the capacitance of a non-uniform meander-based RF MEMS shunt switch with a perforated structure. A general analytical model of capacitance is proposed for both the up-state and down-state conditions of the switch. The model also accounts for fringing capacitance due to beam thickness and etched holes on the beam. Calculated results are validated against simulations from the full 3D FEM solver Coventorware in both switch states. Variations of the up-state and down-state capacitances with different dielectric thicknesses and voltages are plotted, and the error of the analytical values is estimated and analyzed. Three benchmark models of parallel-plate capacitance are modified for MEMS switch operation and their results are compared with the proposed model. The percentage contribution of fringing capacitance in the up-state and down-state is approximately 25% and 2%, respectively, of the total capacitance. The model shows good accuracy, with a mean error of -4.45% in the up-state and -5.78% in the down-state for a wide range of parameter variations, and -2.13% for a ligament efficiency of μ = 0.3.
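
    For context, a classical first-order fringing correction for a parallel plate (a Palmer-type formula) can be written as follows. This is a generic textbook sketch, not the modified model proposed in the paper, and the dimensions in the example are arbitrary; modeling the perforation and ligament efficiency requires the paper's additional terms.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def palmer_capacitance(width, length, gap, eps_r=1.0):
    """Parallel-plate capacitance with a first-order fringing-field
    correction applied along both plate edges (Palmer-type formula)."""
    c_pp = eps_r * EPS0 * width * length / gap
    fw = 1.0 + gap / (math.pi * width) * (1.0 + math.log(2.0 * math.pi * width / gap))
    fl = 1.0 + gap / (math.pi * length) * (1.0 + math.log(2.0 * math.pi * length / gap))
    return c_pp * fw * fl
```

    For a hypothetical 300 µm square plate over a 3 µm gap, the edge correction adds a few percent over the plain parallel-plate value; the much larger up-state fringing fraction reported in the paper comes from the beam thickness and the perforation holes, which this plain-plate formula ignores.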

  3. Delay-driven pattern formation in a reaction-diffusion predator-prey model incorporating a prey refuge

    NASA Astrophysics Data System (ADS)

    Lian, Xinze; Wang, Hailing; Wang, Weiming

    2013-04-01

    In this paper, we consider a diffusive predation model with a delay effect, which is based on a modified version of the Leslie-Gower scheme incorporating a prey refuge. We mainly investigate the effects of time delay on the stability of the homogeneous state point and the formation of spatial patterns. We give the conditions for diffusion-driven and delay-diffusion-driven instability in detail. Furthermore, we illustrate the spatial patterns via numerical simulations, which show that the model dynamics exhibits a delay and diffusion controlled formation growth not only of spots, stripes and holes, but also of self-replicating spiral patterns. The results indicate that the delay plays an important role in the pattern selection. This may enrich the pattern dynamics for a delay diffusive model.

  4. Incorporating GIS data into an agent-based model to support planning policy making for the development of creative industries

    NASA Astrophysics Data System (ADS)

    Liu, Helin; Silva, Elisabete A.; Wang, Qian

    2016-06-01

    This paper presents an extension to the agent-based model "Creative Industries Development-Urban Spatial Structure Transformation" by incorporating GIS data. Three agent classes, creative firms, creative workers and urban government, are considered in the model, and the spatial environment represents a set of GIS data layers (i.e. road network, key housing areas, land use). With the goal to facilitate urban policy makers to draw up policies locally and optimise the land use assignment in order to support the development of creative industries, the improved model exhibited its capacity to assist the policy makers conducting experiments and simulating different policy scenarios to see the corresponding dynamics of the spatial distributions of creative firms and creative workers across time within a city/district. The spatiotemporal graphs and maps record the simulation results and can be used as a reference by the policy makers to adjust land use plans adaptively at different stages of the creative industries' development process.

  6. Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study

    NASA Astrophysics Data System (ADS)

    Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2013-04-01

    The European Union Water Framework Directive (EU-WFD) called its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, we first identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Second, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, given the high number of parameters of the SWAT model, we first screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique
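
    The multiplicative input-error corruption described above amounts to scaling the rainfall within each independent storm period by its own latent multiplier, which the calibration then estimates alongside the model parameters. A minimal sketch (the function name and the series layout are assumptions, not the GESZ project's code):

```python
def apply_rain_multipliers(rain, periods, multipliers):
    """Scale a daily rainfall series with one latent multiplier per
    independent rainfall period. Each period is a (start, end) index
    pair with `end` exclusive; days outside all periods are unchanged."""
    out = list(rain)
    for (start, end), m in zip(periods, multipliers):
        for t in range(start, end):
            out[t] = rain[t] * m
    return out
```

    During uncertainty analysis these multipliers are sampled or optimized like any other parameter, so rainfall measurement error propagates explicitly into the predicted stream flows.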

  7. Documentation of Hybrid Hydride Model for Incorporation into Moose-Bison and Validation Strategy.

    SciTech Connect

    Weck, Philippe F; Tikare, Veena; Schultz, Peter Andrew; Clark, B; Mitchell, J; Glazoff, Michael V.; Homer, Eric R.

    2014-10-01

    This report documents the development, demonstration and validation of a mesoscale, microstructural evolution model for simulation of zirconium hydride δ-ZrH1.5 precipitation in the cladding of used nuclear fuels that may occur during long-term dry storage. While the Zr-based claddings are manufactured free of any hydrogen, they absorb hydrogen during service in the reactor, by a process commonly termed 'hydrogen pick-up'. The precipitation and growth of zirconium hydrides during dry storage is one of the most likely fuel rod integrity failure mechanisms, either by embrittlement or delayed hydride cracking of the cladding (Hanson et al., 2011). While the phenomenon is well documented and identified as a potential key failure mechanism during long-term dry storage (Birk et al., 2012 and NUREG/CR-7116), the ability to actually predict the formation of hydrides is poor. The model documented in this work is a computational capability for the prediction of hydride formation in different claddings of used nuclear fuels. This work supports the Used Fuel Disposition Research and Development Campaign in assessing the structural engineering performance of the cladding during and after long-term dry storage. In this work, a model to numerically simulate hydride precipitation at the microstructural scale, in a wide variety of Zr-based claddings, under dry-storage conditions is developed. It will be used to aid in the evaluation of the mechanical integrity of used fuel rods during dry storage and transportation by providing the structural conditions from the microstructural scale to continuum-scale and engineering-component-scale models, to predict whether the used fuel rods will perform without failure under normal and off-normal conditions. The microstructure, especially the hydride structure, is thought to be a primary determinant of cladding failure; thus this component of UFD's storage and transportation analysis program is critical. The model

  8. Incorporating landslide erosion into the SWAT for modeling suspended sediment discharge from snowmelt-dependent watersheds in northern Japan

    NASA Astrophysics Data System (ADS)

    Mizugaki, S.; Kubo, M.; Tanise, A.; Hirai, Y.; Hamamoto, S.

    2015-12-01

    Landslide erosion is a key driver of sediment yield at the watershed scale. In snowy cold regions, landslides are especially active during the snowmelt season, leading to increased lateral erosion during snowmelt floods. The snowmelt flood in spring is therefore crucial for the water, sediment and nutrient cycles from mountain to coast. Common hydrological models such as SWAT and WEPP are useful tools to evaluate and/or predict hydrology and sediment dynamics at the watershed scale in snowy cold regions, but do not include the landslide erosion process. To develop the SWAT model for modeling water and suspended sediment discharge in snowmelt-dependent watersheds, we investigated the suspended sediment yield using a fingerprinting technique and correlated it with landslide distribution using GIS. Suspended sediment yield varied with each lithological group in the watersheds, with the highest sediment yield in the metamorphic rock area represented by serpentine rock. From this result, the suspended sediment yield at the sub-basin scale was estimated according to the geological composition and correlated with landslide density. The correlation analysis showed that the suspended sediment yield increases in response to the landslide area along the stream channel. To incorporate the landslide erosion process into the SWAT model, the parameter associated with channel bank erosion was weighted by the landslide density along the main channel, assuming lateral erosion to be a key driver of landslide erosion. After calibration for sediment discharge, the Nash-Sutcliffe coefficient improved for the landslide-incorporated model relative to the model with the default parameter set.

  9. Modeling Mode Choice Behavior Incorporating Household and Individual Sociodemographics and Travel Attributes Based on Rough Sets Theory

    PubMed Central

    Chen, Xuewu; Wei, Ming; Wu, Jingxian; Hou, Xianyao

    2014-01-01

    Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem in which the choices between alternatives are reflected in the explanatory variables. This paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses detailed travel-diary survey data from Changxing County, which contain information on both household and individual travel behaviors, for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements, or rules, which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals, and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified for determining travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the multinomial logit (MNL) model also shows that the rough sets model gives superior prediction accuracy and coverage in travel mode choice modeling. PMID:25431585
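
    The rough-sets machinery behind such IF-THEN rules can be illustrated compactly: group records into indiscernibility classes over condition attributes, then compute lower and upper approximations of a decision class. The records and attribute values below are hypothetical stand-ins for the travel-diary data, a minimal sketch rather than the paper's implementation.

```python
# Minimal rough-sets sketch (illustrative; not the paper's implementation).
from collections import defaultdict

# (gender, distance_band, income_band) -> chosen mode (hypothetical records)
records = [
    (("F", "short", "low"), "bus"),
    (("F", "short", "low"), "bus"),
    (("M", "long", "high"), "car"),
    (("M", "long", "low"), "bus"),
    (("M", "long", "low"), "car"),   # inconsistent with the previous record
]

# Indiscernibility classes: records sharing all condition attributes.
classes = defaultdict(list)
for cond, mode in records:
    classes[cond].append(mode)

# Lower approximation: profiles whose records ALL chose "bus"; each yields
# a certain rule, e.g. IF gender=F AND distance=short AND income=low THEN bus.
lower_bus = [c for c, modes in classes.items() if set(modes) == {"bus"}]
# Upper approximation: profiles where "bus" is at least possible.
upper_bus = [c for c, modes in classes.items() if "bus" in modes]
```

Inconsistent profiles (same attributes, different modes) fall in the upper but not the lower approximation, which is how rough sets expresses uncertain rules.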

  10. Incorporating Detailed Chemical Characterization of Biomass Burning Emissions into Air Quality Models

    NASA Astrophysics Data System (ADS)

    Barsanti, K.; Hatch, L. E.; Yokelson, R. J.; Stockwell, C.; Orlando, J. J.; Emmons, L. K.; Knote, C. J.; Wiedinmyer, C.

    2015-12-01

    Approximately 500 Tg/yr of non-methane organic compounds (NMOCs) are emitted by biomass burning (BB) to the global atmosphere, leading to the photochemical production of ozone (O3) and secondary particulate matter (PM). Until recently, in studies of BB emissions, a significant mass fraction of NMOCs (up to 80%) remained uncharacterized or unidentified. Models used to simulate the air quality impacts of BB thus have relied on very limited chemical characterization of the emitted compounds. During the Fourth Fire Lab at Missoula Experiment (FLAME-IV), an unprecedented fraction of emitted NMOCs were identified and quantified through the application of advanced analytical techniques. Here we use FLAME-IV data to improve BB emissions speciation profiles for individual fuel types. From box model simulations we evaluate the sensitivity of predicted precursor and pollutant concentrations (e.g., formaldehyde, acetaldehyde, and terpene oxidation products) to differences in the emission speciation profiles, for a range of ambient conditions (e.g., high vs. low NOx). Appropriate representation of emitted NMOCs in models is critical for the accurate prediction of downwind air quality. Explicit simulation of hundreds of NMOCs is not feasible; therefore we also investigate the consequences of using existing assumptions and lumping schemes to map individual NMOCs to model surrogates and we consider alternative strategies. The updated BB emissions speciation profiles lead to markedly different surrogate compound distributions than the default speciation profiles, and box model results suggest that these differences are likely to affect predictions of PM and important gas-phase species in chemical transport models. This study highlights the potential for further BB emissions characterization studies, with concerted model development efforts, to improve the accuracy of BB predictions using necessarily simplified mechanisms.
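
    The lumping step, mapping individual NMOCs onto mechanism surrogates, can be sketched as a mass-conserving aggregation. The species, emission factors, and surrogate assignments below are hypothetical, not FLAME-IV values or any particular mechanism's actual mapping.

```python
# Illustrative lumping sketch: sum speciated biomass-burning emission
# factors into lumped mechanism surrogate bins. All values and the
# surrogate assignments are hypothetical.

emission_factors = {      # g compound per kg fuel burned (hypothetical)
    "formaldehyde": 1.8,
    "acetaldehyde": 0.9,
    "alpha-pinene": 0.4,
    "limonene": 0.3,
    "furan": 0.5,
}

surrogate_map = {         # individual NMOC -> lumped model surrogate
    "formaldehyde": "HCHO",
    "acetaldehyde": "ALD2",
    "alpha-pinene": "TERP",
    "limonene": "TERP",
    "furan": "UNASSIGNED",  # updated profiles may reassign such species
}

lumped = {}
for species, ef in emission_factors.items():
    key = surrogate_map[species]
    lumped[key] = lumped.get(key, 0.0) + ef
```

Changing the speciation profile or the mapping changes the surrogate distribution handed to the chemical mechanism, which is the sensitivity the study probes.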

  11. Incorporating floating surface objects into a fully dispersive surface wave model

    NASA Astrophysics Data System (ADS)

    Orzech, Mark D.; Shi, Fengyan; Veeramony, Jayaram; Bateman, Samuel; Calantoni, Joseph; Kirby, James T.

    2016-06-01

    The shock-capturing, non-hydrostatic, three-dimensional (3D) finite-volume model NHWAVE was originally developed to simulate wave propagation and landslide-generated tsunamis in finite water depth (Ma, G., Shi, F., Kirby, J. T., 2012. Ocean Model. 43-44, 22-35). The model is based on the incompressible Navier-Stokes equations, in which the z-axis is transformed to a σ-coordinate that tracks the bed and surface. As part of an ongoing effort to simulate waves in polar marginal ice zones (MIZs), the model has now been adapted to allow objects of arbitrary shape and roughness to float on or near its water surface. The shape of the underside of each floating object is mapped onto an upper σ-level slightly below the surface. In areas without floating objects, this σ-level continues to track the surface and bed as before. Along the sides of each floating object, an immersed boundary method is used to interpolate the effects of the object onto the neighboring fluid volume. Provided with the object's shape, location, and velocity over time, NHWAVE determines the fluid fluxes and pressure variations from the corresponding accelerations at neighboring cell boundaries. The system was validated by comparison with analytical solutions and a VOF model for a 2D floating box and with laboratory measurements of wave generation by a vertically oscillating sphere. A steep wave simulation illustrated the high efficiency of NHWAVE relative to a VOF model. In a more realistic MIZ simulation, the adapted model produced qualitatively reasonable results for wave attenuation, diffraction, and scattering.

  12. Incorporating harvest rates into the sex-age-kill model for white-tailed deer

    USGS Publications Warehouse

    Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.

    2013-01-01

    Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models have largely not been tested against empirical measures of variability and bias. We statistically evaluated the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured with data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate the precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates of adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining

  13. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    USGS Publications Warehouse

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, Anthony; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong

    2015-01-01

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have recently proliferated as tools for quantifying this role, yet dormancy, a common strategy among microorganisms, has seldom been represented in these models and tested against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy produced a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr−1). Spatial correlation analysis showed that soil organic carbon content was the dominant factor (correlation coefficient = 0.4–0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that the soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = −0.43 to −0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.

  14. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    NASA Astrophysics Data System (ADS)

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, Anthony D.; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong

    2015-12-01

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have recently proliferated as tools for quantifying this role, yet dormancy, a common strategy among microorganisms, has seldom been represented in these models and tested against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy produced a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominant factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that the soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
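
    A minimal sketch of the dormancy idea: only active microbes respire, and moisture stress shifts biomass between an active and a dormant pool. The two-pool model below, integrated with forward Euler, uses hypothetical rate constants and is far simpler than the microbial-enzyme model the two records above describe.

```python
# Two-pool dormancy sketch (illustrative; much simpler than the paper's
# model): only active microbes respire, and moisture stress moves biomass
# into a dormant pool. Forward-Euler stepping; all rates hypothetical.

def step(active, dormant, moisture, dt=0.1):
    uptake, resp = 0.6, 0.3            # active-pool growth and respiration
    to_dormant, to_active = 0.5, 0.4   # switching rate coefficients
    stress = 1.0 - moisture            # 0 = wet, 1 = dry

    rh = resp * active                 # heterotrophic respiration (RH)
    d_act = ((uptake - resp) * active
             - to_dormant * stress * active
             + to_active * moisture * dormant)
    d_dor = to_dormant * stress * active - to_active * moisture * dormant
    return active + dt * d_act, dormant + dt * d_dor, rh

rh_wet = step(1.0, 0.0, moisture=0.9)[2]   # respiration when wet
a, d = 1.0, 0.0
for _ in range(50):                         # prolonged dry spell
    a, d, rh = step(a, d, moisture=0.1)
# Biomass shifts into the dormant pool and respiration declines.
```

Without the dormant pool, the same stress would have to be absorbed by biomass death or unrealistically high respiration, which is the bias dormancy representation corrects.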

  15. Incorporating advanced language models into the P300 speller using particle filtering

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.

    2015-08-01

    Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
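
    The sequential importance resampling loop at the heart of this approach can be sketched generically: propagate particles through a language model, weight them by a classifier likelihood, and resample. The uniform bigram model and the likelihood values below are toy stand-ins, not the paper's probabilistic automaton or actual EEG classifier scores.

```python
# Generic sequential importance resampling (SIR) sketch in the spirit of
# the paper's approach. The language model and likelihoods are toy
# stand-ins, not the paper's automaton or P300 classifier.
import random

random.seed(0)

VOCAB = "abc"
# Toy uniform bigram "language model": P(next char | previous char).
BIGRAM = {prev: {c: 1.0 / len(VOCAB) for c in VOCAB} for prev in VOCAB + "^"}

def toy_likelihood(char, observed):
    # Stand-in for a P300 classifier score: favor the attended character.
    return 0.8 if char == observed else 0.1

def sir_step(particles, observed):
    # 1) Propagate: extend each particle by sampling the language model.
    extended = []
    for seq in particles:
        prev = seq[-1] if seq else "^"
        chars = list(BIGRAM[prev])
        weights = [BIGRAM[prev][c] for c in chars]
        extended.append(seq + random.choices(chars, weights=weights)[0])
    # 2) Weight each particle by the likelihood of its newest character.
    w = [toy_likelihood(seq[-1], observed) for seq in extended]
    # 3) Resample with replacement, proportional to weight.
    return random.choices(extended, weights=w, k=len(extended))

particles = [""] * 200
for obs in "ab":                      # simulated attended characters
    particles = sir_step(particles, obs)

# Most surviving particles should spell out the attended sequence.
best = max(set(particles), key=particles.count)
```

Because only promising sequences survive resampling, the posterior over text is approximated without enumerating the full state space, which is what makes richer language models tractable here.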

  16. Simulated masking of right whale sounds by shipping noise: incorporating a model of the auditory periphery.

    PubMed

    Cunningham, Kane A; Mountain, David C

    2014-03-01

    Many species of large, mysticete whales are known to produce low-frequency communication sounds. These low-frequency sounds are susceptible to communication masking by shipping noise, which also tends to be low frequency in nature. The size of these species makes behavioral assessment of auditory capabilities in controlled, captive environments nearly impossible, and field-based playback experiments are expensive and necessarily limited in scope. Hence, it is desirable to produce a masking model for these species that can aid in determining the potential effects of shipping and other anthropogenic noises on these protected animals. The aim of this study was to build a model that combines a sophisticated representation of the auditory periphery with a spectrogram-based decision stage to predict masking levels. The output of this model can then be combined with a habitat-appropriate propagation model to calculate the potential effects of noise on communication range. For this study, the model was tested on three common North Atlantic right whale communication sounds, both to demonstrate the method and to probe how shipping noise affects the detection of sounds with varying spectral and temporal characteristics. PMID:24606298

  17. Incorporating advanced language models into the P300 speller using particle filtering

    PubMed Central

    Speier, W; Arnold, CW; Deshpande, A; Knall, J

    2015-01-01

    Objective The P300 speller is a common brain–computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram (EEG) signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main Result This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance. PMID:26061188

  18. Incorporating a Turbulence Transport Model into 2-D Hybrid Hall Thruster Simulations

    NASA Astrophysics Data System (ADS)

    Cha, Eunsun; Cappelli, Mark A.; Fernandez, Eduardo

    2014-10-01

    2-D hybrid simulations of Hall plasma thrusters that do not resolve the fluctuations responsible for cross-field transport require a model to capture how electrons migrate across the magnetic field. We describe the results of integrating a turbulent electron transport model into simulations of plasma behavior in a plane spanned by the E and B field vectors. The simulations treat the electrons as a fluid and the heavy species (ions/neutrals) as discrete particles. The transport model assumes that the turbulent eddy cascade in the electron fluid to smaller scales is the primary means of electron energy dissipation. Using this model, we compare simulations to experimental measurements made on a laboratory Hall discharge over a range of discharge voltages. Both the current-voltage trends and plasma properties such as plasma temperature, electron density, and ion velocities agree favorably with experiments, whereas a simple Bohm transport model tends to perform poorly in capturing much of the discharge behavior.

  19. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    PubMed

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model component stoichiometry and formation enthalpies, the proposed methodology successfully integrates the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in biochemical reactors. The methodology has been implemented in a plant-wide modelling framework in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, two case studies are described as illustrative examples of the capability of the methodology. In the first, a dynamic predenitrification-nitrification process is analysed, with the aim of demonstrating the easy integration of the methodology into any system. In the second case study, the simulation of a thermal model for an autothermal thermophilic aerobic digester (ATAD) shows the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. PMID:24852412
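
    The core calculation, obtaining the enthalpy change of a reaction from formation enthalpies via Hess's law, reduces to a stoichiometry-weighted sum. The sketch below uses standard textbook values for aerobic glucose oxidation; the matrix-style bookkeeping, not these particular numbers, is what the paper's multi-phase structure generalizes.

```python
# Hess's-law sketch: the enthalpy change of reaction is the stoichiometry-
# weighted sum of formation enthalpies (products minus reactants). Values
# are standard textbook formation enthalpies (kJ/mol) for aerobic glucose
# oxidation, used purely for illustration.

H_F = {                      # standard formation enthalpies, kJ/mol
    "glucose": -1273.3,
    "O2": 0.0,
    "CO2": -393.5,
    "H2O(l)": -285.8,
}

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O; negative coefficients mark reactants.
STOICH = {"glucose": -1, "O2": -6, "CO2": 6, "H2O(l)": 6}

dH = sum(nu * H_F[sp] for sp, nu in STOICH.items())   # kJ per mol glucose
exothermic = dH < 0           # heat released by the transformation
```

Stacking one such stoichiometric row per transformation, alongside the mass-balance rows, is exactly the expandable matrix arrangement the abstract describes.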

  20. A Modeling Framework to Incorporate Effects of Infrastructure in Sociohydrological Systems

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, R.

    2014-12-01

    In studying coupled natural-human systems, most modeling efforts focus on humans and the natural resources. In reality, however, humans rarely interact with these resources directly; the relationships between humans and resources are mediated by infrastructure. In sociohydrological systems, examples include dams and irrigation canals. Such infrastructure has important characteristics, such as threshold behavior and a separate entity or organization tasked with maintaining it. These characteristics influence social dynamics within the system, which in turn determine the state of the infrastructure and water use, thereby feeding back onto the hydrological processes. Infrastructure is thus a necessary ingredient for modeling the co-evolution of humans and water in sociohydrological systems. A conceptual framework to address this gap was proposed by Anderies, Janssen, and Ostrom (2004). Here we develop a model to operationalize the framework and report some preliminary results. Simple in its setup, the model highlights the structure of the social dilemmas involved and how they affect the system's sustainability. The model also offers a platform to explore how the system's sustainability may respond to external shocks from globalization and global climate change.

  1. A magnetospheric field model incorporating the OGO 3 and 5 magnetic field observations.

    NASA Technical Reports Server (NTRS)

    Sugiura, M.; Poros, D. J.

    1973-01-01

    A magnetospheric field model is presented in which the usually assumed toroidal ring current is replaced by a circular disk current of finite thickness that extends from the tail to geocentric distances of less than 3 earth radii. The drastic departure of this model from the concept of the conventional ring current lies in the fact that the current is continuous from the tail to the inner magnetosphere. This conceptual change was required to account for recent results of analysis of the OGO 3 and 5 magnetic field observations. In the present model the cross-tail current flows along circular arcs concentric with the earth and completes the circuit via surface currents on the magnetopause. Apart from these return currents in the tail magnetopause, Mead's (1964) model is used for the field from the magnetopause current. The difference scalar field, Delta B, defined as the difference between the scalar field calculated from the present model and the magnitude of the dipole field, is found to be in gross agreement with the observed Delta B.

  2. Ultrasonically assisted drilling: A finite-element model incorporating acoustic softening effects

    NASA Astrophysics Data System (ADS)

    Phadnis, V. A.; Roy, A.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically assisted drilling (UAD) is a novel machining technique suitable for drilling in hard-to-machine quasi-brittle materials such as carbon fibre reinforced polymer composites (CFRP). UAD has been shown to possess several advantages compared to conventional drilling (CD), including reduced thrust forces, diminished burr formation at drill exit and an overall improvement in roundness and surface finish of the drilled hole. Recently, our in-house experiments of UAD in CFRP composites demonstrated remarkable reductions in thrust-force and torque measurements (average force reductions in excess of 80%) when compared to CD with the same machining parameters. In this study, a 3D finite-element model of drilling in CFRP is developed. In order to model acoustic (ultrasonic) softening effects, a phenomenological model, which accounts for ultrasonically induced plastic strain, was implemented in ABAQUS/Explicit. The model also accounts for dynamic frictional effects, which likewise contribute to the overall improved machining characteristics in UAD. The model is validated against experimental findings, showing an excellent correlation with the measured reductions in thrust force and torque.

  3. On incorporating damping and gravity effects in models of structural dynamics of the SCOLE configuration

    NASA Technical Reports Server (NTRS)

    Taylor, Larry; Leary, Terry; Stewart, Eric

    1987-01-01

    The damping for structural dynamic models of flexible spacecraft is usually ignored and then added after modal frequencies and mode shapes are calculated. It is common practice to assume the same damping ratio for all modes, although it is known that damping due to bending and that due to torsion are sometimes ignored. Two methods of including damping in the modeling process from its onset are examined. First, the partial derivative equations of motion are analyzed for a pinned-pinned beam with damping. The end conditions are altered to handle bodies with mass and inertia for the Spacecraft Control Laboratory Experiment (SCOLE) configuration. Second, a massless beam approximation is used for the modes with low frequencies, and a clamped-clamped system is used to approximate the modes for arbitrarily high frequency. The model is then modified to include gravity effects and is compared with experimental results.

  4. Approximate world models: Incorporating qualitative and linguistic information into vision systems

    SciTech Connect

    Pinhanez, C.S.; Bobick, A.F.

    1996-12-31

    Approximate world models are coarse descriptions of the elements of a scene, and are intended to be used in the selection and control of vision routines in a vision system. In this paper we present a control architecture in which the approximate models represent the complex relationships among the objects in the world, allowing the vision routines to be situation or context specific. Moreover, because of their reduced accuracy requirements, approximate world models can employ qualitative information such as that provided by linguistic descriptions of the scene. The concept is demonstrated in the development of automatic cameras for a TV studio, called SmartCams. Results are shown in which SmartCams use vision processing of real imagery and information written in the script of a TV show to achieve TV-quality framing.

  5. Incorporating Hydrologic Routing into Reservoir Optimization Model: Implications for Hydropower Production Planning

    NASA Astrophysics Data System (ADS)

    Zmijewski, N.; Worman, A. L. E.; Bottacin-Busolin, A.

    2014-12-01

    Renewable and intermittent energy sources are likely to become more important in the future and consequently lead to a change in hydropower production strategies to account for expected production fluctuations. Optimization models are used as a tool to achieve the best overall result in a network of reservoirs and hydropower plants. The computational demand increases for large networks, making simplification of the physical description of the stream flows necessary. In management optimization models, the flow behavior is often described using a constant time lag for water on flow stretches, i.e., the release of water mass from an upstream reservoir is time-shifted as inflow to the subsequent reservoir. We developed an optimization model that includes the kinematic-diffusion wave equation for flow on stretches, and used it to evaluate the role of this model improvement for short-term production planning in the Dalälven River, a case study with 36 hydropower stations and 13 major reservoirs. The richer time-lag distributions of the streams resulting from the kinematic-diffusion wave equation, compared with the constant time-lag model, were found to be highly important for many situations in hydropower production planning in a regulated water system. The deviation in optimized power production between the two models (time-lag and kinematic-diffusive) increases with decreasing Peclet number of the flow stretches, the latter being evaluated for all included stretches. A procedure emulating the data assimilation present in modern systems, using the receding-horizon approach, was used to describe the dynamic effect of the resulting flow prediction deviation. The procedure also demonstrated the importance of high-frequency data assimilation for strongly affected streams, which implies that the error in predicted power production decreases with a decreasing time step of forecast updating.
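
    The contrast between the two routing descriptions can be sketched with a pulse hydrograph: a constant time lag shifts the inflow unchanged, while a simple linear-storage routing (used here as a crude stand-in for kinematic-diffusion wave behavior) attenuates and spreads it. All parameters below are hypothetical.

```python
# Sketch contrasting a constant time-lag routing with a simple
# linear-storage routing (a crude stand-in for diffusive-wave
# attenuation). Parameters are hypothetical.

def lag_route(inflow, lag_steps):
    # Constant time-lag model: outflow is the inflow shifted in time.
    return [0.0] * lag_steps + list(inflow[:len(inflow) - lag_steps])

def linear_reservoir_route(inflow, k=3.0, dt=1.0):
    # dS/dt = I - O with O = S/k, stepped explicitly.
    out, o = [], 0.0
    for i in inflow:
        o += dt / k * (i - o)
        out.append(o)
    return out

pulse = [0, 10, 10, 0, 0, 0, 0, 0, 0, 0]   # inflow pulse (m^3/s)
lagged = lag_route(pulse, 2)
routed = linear_reservoir_route(pulse)

peak_lag = max(lagged)       # the time-lag model preserves the peak
peak_routed = max(routed)    # the storage routing attenuates it
```

The difference between the two outflow peaks is the kind of deviation that propagates into optimized production schedules when the simpler routing is used.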

  6. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    SciTech Connect

    Lumb, Matthew P.; Steiner, Myles A.; Geisz, John F.; Walters, Robert J.

    2014-11-21

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.
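
    A common first-order way to account for photon recycling, not necessarily the paper's exact formulation, is to enhance the radiative lifetime by the photon reabsorption probability and recompute the minority-carrier diffusion length. All material parameters below are hypothetical GaAs-like values.

```python
# First-order photon-recycling sketch (a common approximation, not
# necessarily the paper's exact treatment): reabsorption of re-emitted
# photons enhances the effective radiative lifetime, lengthening the
# minority-carrier diffusion length L = sqrt(D * tau_eff).
import math

def effective_lifetime(tau_rad, tau_nr, gamma):
    """gamma: probability an emitted photon is reabsorbed (recycled)."""
    tau_rad_eff = tau_rad / (1.0 - gamma)   # enhanced radiative lifetime
    return 1.0 / (1.0 / tau_rad_eff + 1.0 / tau_nr)

D = 80.0        # cm^2/s, electron diffusivity (hypothetical)
tau_rad = 1e-8  # s, radiative lifetime (hypothetical)
tau_nr = 1e-6   # s, nonradiative lifetime (hypothetical)

L_no_pr = math.sqrt(D * effective_lifetime(tau_rad, tau_nr, 0.0))
L_pr = math.sqrt(D * effective_lifetime(tau_rad, tau_nr, 0.9))
# With 90% recycling, the diffusion length roughly triples here.
```

Near the radiative limit the enhancement dominates the cell behavior, which is why the abstract singles out the diffusion length as the quantity most affected.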

  7. Effect of Control-released Basic Fibroblast Growth Factor Incorporated in β-Tricalcium Phosphate for Murine Cranial Model

    PubMed Central

    Shimizu, Azusa; Tajima, Satoshi; Tobita, Morikuni; Tanaka, Rica; Tabata, Yasuhiko

    2014-01-01

    Background: β-Tricalcium phosphate (β-TCP) is used clinically as a bone substitute, but complete osteoinduction is slow. Basic fibroblast growth factor (bFGF) is important in bone regeneration, but the biological effects are very limited because of the short half-life of the free form. Incorporation in gelatin allows slow release of growth factors during degradation. The present study evaluated whether control-released bFGF incorporated in β-TCP can promote bone regeneration in a murine cranial defect model. Methods: Bilateral cranial defects of 4 mm in diameter were made in 10-week-old male Sprague-Dawley rats treated as follows: group 1, 20 μl saline as control; group 2, β-TCP disk in 20 μl saline; group 3, β-TCP disk in 50 μg bFGF solution; and group 4, β-TCP disk in 50 μg bFGF-containing gelatin hydrogel (n = 6 each). Histological and imaging analyses were performed at 1, 2, and 4 weeks after surgery. Results: The computed tomography value was lower in groups 3 and 4, whereas the rate of osteogenesis was higher histologically in group 4 than in the other groups. The appearance of tartrate-resistant acid phosphatase-positive cells and osteocalcin-positive cells and the disappearance of osteopontin-positive cells occurred earlier in group 4 than in the other groups. Conclusions: These findings suggest that control-released bFGF incorporated in β-TCP can accelerate bone regeneration in the murine cranial defect model and may be promising for the clinical treatment of cranial defects. PMID:25289319

  8. The effects of hyaluronic acid incorporated as a wetting agent on lysozyme denaturation in model contact lens materials.

    PubMed

    Weeks, Andrea; Boone, Adrienne; Luensmann, Doerte; Jones, Lyndon; Sheardown, Heather

    2013-09-01

    Conventional and silicone hydrogels as models for contact lenses were prepared to determine the effect of the presence of hyaluronic acid on lysozyme sorption and denaturation. Hyaluronic acid was loaded into poly(2-hydroxyethyl methacrylate) and poly(2-hydroxyethyl methacrylate)/TRIS (methacryloxypropyltris(trimethylsiloxy)silane) hydrogels, which served as models for conventional and silicone hydrogel contact lens materials. The hyaluronic acid was cross-linked using 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide in the presence of dendrimers. Active lysozyme was quantified using a Micrococcus lysodeikticus assay while total lysozyme was determined using 125I-radiolabeled protein. To examine the location of hyaluronic acid in the gels, 6-aminofluorescein labeled hyaluronic acid was incorporated into the gels using 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide chemistry and the gels were examined using confocal laser scanning microscopy. Hyaluronic acid incorporation significantly reduced lysozyme sorption in poly(2-hydroxyethyl methacrylate) (p < 0.00001) and poly(2-hydroxyethyl methacrylate)/TRIS (p < 0.001) hydrogels, with the modified materials sorbing only 20% and 16% that of the control, respectively. More importantly, hyaluronic acid also decreased lysozyme denaturation in poly(2-hydroxyethyl methacrylate) (p < 0.005) and poly(2-hydroxyethyl methacrylate)/TRIS (p < 0.02) hydrogels. The confocal laser scanning microscopy results showed that the hyaluronic acid distribution was dependent on both the material type and the molecular weight of hyaluronic acid. This study demonstrates that hyaluronic acid incorporated as a wetting agent has the potential to reduce lysozyme sorption and denaturation in contact lens applications. The distribution of hyaluronic acid within hydrogels appears to affect denaturation, with more surface mobile, lower

  9. SAI (SYSTEMS APPLICATIONS, INCORPORATED) AIRSHED MODEL OPERATIONS MANUALS. VOLUME 2. SYSTEMS MANUAL

    EPA Science Inventory

    This report describes the Systems Applications, Inc. (SAI) Airshed Model System from a programmer's point of view. Included are discussions of all subroutines and how they fit together, run-time core allocation techniques, internal methods of segment handling using secondary stor...

  10. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method that incorporates an accurate power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.

  11. A Model for Incorporating Content on Aging into the Curriculum: K-12.

    ERIC Educational Resources Information Center

    Blackwell, David L.; Hunt, Sara Stockard

    Following a statement of the problem of putting aging education in the elementary secondary curriculum, and a review of the relevant literature, a model for developing a curriculum on aging is presented. An overview of the 3-year project, developed in Baton Rouge, Louisiana schools for grades K-12, is offered, including activities and yearly…

  12. Incorporating Retention Time to Refine Models Predicting Thermal Regimes of Stream Networks Across New England

    EPA Science Inventory

    Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...

  13. Incorporation of Monitoring Systems to Model Irrigated Cotton at a Landscape Level

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advances in computer speed, industry IT core capabilities, and available soils and weather information have resulted in the need for “cropping system models” that address in detail the spatial and temporal water, energy and carbon balance of the system at a landscape scale. Many of these models have...

  14. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  15. Scale and hierarchical relationships when incorporating observed data into fish models

    EPA Science Inventory

    Identifying correlations between environmental variables and fish presence or density is usually the main focus of efforts to model fish-habitat relationships. These relationships, however, can be confounded by scale and hierarchical effects. In particular the strength of fish –...

  16. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    ERIC Educational Resources Information Center

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  17. Process model for ammonia volatilization from anaerobic swine lagoons incorporating varying wind speeds and biogas bubbling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ammonia volatilization from treatment lagoons varies widely with the total ammonia concentration, pH, temperature, suspended solids, atmospheric ammonia concentration above the water surface, and wind speed. Ammonia emissions were estimated with a process-based mechanistic model integrating ammonia ...

  18. Incorporating Artificial Neural Networks in the dynamic thermal-hydraulic model of a controlled cryogenic circuit

    NASA Astrophysics Data System (ADS)

    Carli, S.; Bonifetto, R.; Savoldi, L.; Zanino, R.

    2015-09-01

A model based on Artificial Neural Networks (ANNs) is developed for the heated-line portion of a cryogenic circuit in which supercritical helium (SHe) flows; the circuit also includes a cold circulator, valves, pipes/cryolines and heat exchangers between the main loop and a saturated liquid helium (LHe) bath. The heated line mimics the heat load coming from the superconducting magnets to their cryogenic cooling circuits during the operation of a tokamak fusion reactor. An ANN is trained, using the output from simulations of the circuit performed with the 4C thermal-hydraulic (TH) code, to reproduce the dynamic behavior of the heated line, including for the first time scenarios where different types of controls act on the circuit. The ANN is then implemented in the 4C circuit model as a new component, which substitutes for the original 4C heated-line model. For different operational scenarios and control strategies, good agreement is shown between the simplified ANN model results and the original 4C results, as well as with experimental data from the HELIOS facility, confirming the suitability of this new approach, which, extended to an entire magnet system, can lead to real-time control of the cooling loops and fast assessment of control strategies for smoothing the heat load to the cryoplant.

  19. Teaching Note--Incorporating Journal Clubs into Social Work Education: An Exploratory Model

    ERIC Educational Resources Information Center

    Moore, Megan; Fawley-King, Kya; Stone, Susan I.; Accomazzo, Sarah M.

    2013-01-01

    This article outlines the implementation of a journal club for master's and doctoral social work students interested in mental health practice. It defines educational journal clubs and discusses the history of journal clubs in medical education and the applicability of the model to social work education. The feasibility of implementing…

  20. Optimizing hydrological consistency by incorporating hydrological signatures into model calibration objectives

    NASA Astrophysics Data System (ADS)

    Shafii, Mahyar; Tolson, Bryan A.

    2015-05-01

    The simulated outcome of a calibrated hydrologic model should be hydrologically consistent with the measured response data. Hydrologic modelers typically calibrate models to optimize residual-based goodness-of-fit measures, e.g., the Nash-Sutcliffe efficiency measure, and then evaluate the obtained results with respect to hydrological signatures, e.g., the flow duration curve indices. The literature indicates that the consideration of a large number of hydrologic signatures has not been addressed in a full multiobjective optimization context. This research develops a model calibration methodology to achieve hydrological consistency using goodness-of-fit measures, many hydrological signatures, as well as a level of acceptability for each signature. The proposed framework relies on a scoring method that transforms any hydrological signature to a calibration objective. These scores are used to develop the hydrological consistency metric, which is maximized to obtain hydrologically consistent parameter sets during calibration. This consistency metric is implemented in different signature-based calibration formulations that adapt the sampling according to hydrologic signature values. These formulations are compared with the traditional formulations found in the literature for seven case studies. The results reveal that Pareto dominance-based multiobjective optimization yields the highest level of consistency among all formulations. Furthermore, it is found that the choice of optimization algorithms does not affect the findings of this research.
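
    The scoring idea described above can be sketched in a few lines: each signature is mapped to a score that saturates once the simulated value falls within its acceptability level, and the per-signature scores are aggregated into a single consistency metric to be maximized during calibration. The scoring and aggregation functions below are illustrative stand-ins, not the paper's exact formulations.

```python
def signature_score(simulated, observed, acceptability):
    """Map one hydrological signature onto a [0, 1] calibration score.

    A relative error within the acceptability level scores 1 (fully
    consistent); larger errors decay linearly toward 0. This scoring
    function is an illustrative stand-in, not the paper's exact form.
    """
    rel_error = abs(simulated - observed) / abs(observed)
    if rel_error <= acceptability:
        return 1.0
    return max(0.0, 1.0 - (rel_error - acceptability))

def consistency_metric(scores):
    """Aggregate per-signature scores into the single metric that is
    maximized to obtain hydrologically consistent parameter sets."""
    return sum(scores) / len(scores)

# three hypothetical signatures: FDC slope, baseflow index, runoff ratio
scores = [signature_score(0.52, 0.50, 0.10),   # within tolerance
          signature_score(0.30, 0.50, 0.10),   # 40% relative error
          signature_score(0.45, 0.50, 0.10)]   # exactly at tolerance
print(round(consistency_metric(scores), 3))    # 0.9
```

    In a residual-based formulation these scores would be combined with goodness-of-fit measures such as Nash-Sutcliffe, e.g. as additional objectives in a Pareto dominance-based multiobjective search.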

  1. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    ERIC Educational Resources Information Center

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

This study examined the artwork of 30 first-year graphic design students, applying critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively using descriptive statistical techniques. Scores were examined as mean scores and frequencies to determine students' performance in their critical ability.…

  2. A probabilistic model of visual working memory: Incorporating higher order regularities into working memory capacity estimates.

    PubMed

    Brady, Timothy F; Tenenbaum, Joshua B

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no higher order structure and treats items independently from one another. We present a probabilistic model of change detection that attempts to bridge this gap by formalizing the role of perceptual organization and allowing for richer, more structured memory representations. Using either standard visual working memory displays or displays in which the items are purposefully arranged in patterns, we find that models that take into account perceptual grouping between items and the encoding of higher order summary information are necessary to account for human change detection performance. Considering the higher order structure of items in visual working memory will be critical for models to make useful predictions about observers' memory capacity and change detection abilities in simple displays as well as in more natural scenes. PMID:23230888

  3. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons

    NASA Astrophysics Data System (ADS)

    Peterson, S. W.; Polf, J.; Bues, M.; Ciangaru, G.; Archambault, L.; Beddar, S.; Smith, A.

    2009-05-01

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. Comparison of the 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The position of measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.
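
    The 80% distal fall-off comparison reported above amounts to locating, on the falling edge of the depth-dose curve beyond the Bragg peak, the depth at which dose drops to 80% of its maximum. A minimal sketch (the function name and toy data are ours for illustration, not from the paper):

```python
def distal_falloff_depth(depths, doses, level=0.8):
    """Depth (same units as `depths`) at which dose falls to `level`
    of the maximum on the distal side of the Bragg peak, found by
    linear interpolation between samples. Illustrative only; clinical
    comparisons use measured/simulated profiles at finer resolution."""
    dmax = max(doses)
    target = level * dmax
    i_peak = doses.index(dmax)
    for i in range(i_peak, len(doses) - 1):
        if doses[i] >= target >= doses[i + 1]:
            # linear interpolation between samples i and i+1
            frac = (doses[i] - target) / (doses[i] - doses[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    raise ValueError("dose never falls below the requested level")

# toy depth-dose curve (depth in cm, dose in arbitrary units)
depths = [0, 2, 4, 6, 8, 10, 11, 12]
doses  = [30, 32, 35, 40, 60, 100, 50, 5]
print(round(distal_falloff_depth(depths, doses), 2))  # 10.4
```

    Running the same extraction on a measured and a simulated profile and differencing the two depths yields the agreement figure quoted (within 1 mm).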

  4. Incorporating patch subspace model in Mumford-Shah type active contours.

    PubMed

    Wang, Junyan; Chan, Kap Luk

    2013-11-01

In this paper, we propose a unified energy minimization model for segmentation of non-smooth image structures, e.g., textures, based on Mumford-Shah functional and linear patch model. We consider that image patches of a non-smooth image structure can be modeled by a patch subspace, and image patches of different non-smooth image structures belong to different patch subspaces, which leads to a computational framework for segmentation of non-smooth image structures. Motivated by the Mumford-Shah model, we show that this segmentation framework is equivalent to minimizing a piecewise linear patch reconstruction energy. We also prove that the error of segmentation is bounded by the error of the linear patch reconstruction, meaning that improving the linear patch reconstruction for each region leads to reduction of the segmentation error. In addition, we derive an algorithm for the linear patch reconstruction with proven global optimality and linear rate of convergence. The segmentation in our method is achieved by minimizing a single energy functional without requiring predefined features. Hence, compared with the previous methods that require predefined texture features, our method can be more suitable for handling general textures in unsupervised segmentation. As a by-product, our method also produces a dictionary of optimized orthonormal descriptors for each segmented region. We mainly evaluate our method on the Brodatz textures. The experiments validate our theoretical claims and show the clearly superior performance of our method over other related methods for segmentation of the textures. PMID:23893721

  5. A Theory and Model of Conflict Detection in Air Traffic Control: Incorporating Environmental Constraints

    ERIC Educational Resources Information Center

    Loft, Shayne; Bolland, Scott; Humphreys, Michael S.; Neal, Andrew

    2009-01-01

    A performance theory for conflict detection in air traffic control is presented that specifies how controllers adapt decisions to compensate for environmental constraints. This theory is then used as a framework for a model that can fit controller intervention decisions. The performance theory proposes that controllers apply safety margins to…

  6. Incorporating Biological, Chemical and Toxicological Knowledge into Predictive Models of Toxicity: Letter to the Editor

    EPA Science Inventory

    Thomas et al. (2012) recently published an evaluation of statistical models for classifying in vivo toxicity endpoints from ToxRefDB (Knudsen et al. 2009; Martin et al. 2009a and 2009b) using ToxCast in vitro bioactivity data (Judson et al. 2010) and chemical structure descriptor...

  7. URBAN AEROSOL MODELING: INCORPORATION OF AN SO2 PHOTOCHEMICAL OXIDATION MODULE IN AROSOL

    EPA Science Inventory

Modules for the conversion of SO2 to sulfate have been included in the urban scale K-theory particulate model, AROSOL. Two modules are included: one is an empirical first order SO2 conversion scheme termed EMM and the other is a series of chemical kinetic reactions based on the Carbon Bo...

  8. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons.

    PubMed

    Peterson, S W; Polf, J; Bues, M; Ciangaru, G; Archambault, L; Beddar, S; Smith, A

    2009-05-21

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. Comparison of the 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The position of measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy. PMID:19420426

  9. Incorporating Stakeholder Decision Support Needs into an Integrated Regional Earth System Model

    SciTech Connect

    Rice, Jennie S.; Moss, Richard H.; Runci, Paul J.; Anderson, K. L.; Malone, Elizabeth L.

    2012-03-21

    A new modeling effort exploring the opportunities, constraints, and interactions between mitigation and adaptation at regional scale is utilizing stakeholder engagement in an innovative approach to guide model development and demonstration, including uncertainty characterization, to effectively inform regional decision making. This project, the integrated Regional Earth System Model (iRESM), employs structured stakeholder interactions and literature reviews to identify the most relevant adaptation and mitigation alternatives and decision criteria for each regional application of the framework. The information is used to identify important model capabilities and to provide a focus for numerical experiments. This paper presents the stakeholder research results from the first iRESM pilot region. The pilot region includes the Great Lakes Basin in the Midwest portion of the United States as well as other contiguous states. This geographic area (14 states in total) permits cohesive modeling of hydrologic systems while also providing gradients in climate, demography, land cover/land use, and energy supply and demand. The results from the stakeholder research indicate that iRESM should prioritize addressing adaptation alternatives in the water resources, urban infrastructure, and agriculture sectors, such as water conservation, expanded water quality monitoring, altered reservoir releases, lowered water intakes, urban infrastructure upgrades, increased electric power reserves in urban areas, and land use management/crop selection changes. Regarding mitigation alternatives, the stakeholder research shows a need for iRESM to focus on policies affecting the penetration of renewable energy technologies, and the costs and effectiveness of energy efficiency, bioenergy production, wind energy, and carbon capture and sequestration.

  10. Incorporating Stage-Specific Drug Action into Pharmacological Modeling of Antimalarial Drug Treatment

    PubMed Central

    2016-01-01

Pharmacological modeling of antiparasitic treatment based on a drug's pharmacokinetic and pharmacodynamic properties plays an increasingly important role in identifying optimal drug dosing regimens and predicting their potential impact on control and elimination programs. Conventional modeling of treatment relies on methods that do not distinguish between parasites at different developmental stages. This is problematic for malaria parasites, as their sensitivity to drugs varies substantially during their 48-h developmental cycle. We investigated four drug types (short or long half-lives with or without stage-specific killing) to quantify the accuracy of the standard methodology. The treatment dynamics of three drug types were well characterized with standard modeling. The exception was short-half-life drugs with stage-specific killing (i.e., artemisinins) because, depending on time of treatment, parasites might be in highly drug-sensitive stages or in much less sensitive stages. We describe how to bring such drugs into pharmacological modeling by including additional variation into the drug's maximal killing rate. Finally, we show that artemisinin kill rates may have been substantially overestimated in previous modeling studies because (i) the parasite reduction ratio (PRR) (generally estimated to be 10⁴) is based on observed changes in circulating parasite numbers, which generally overestimate the “true” PRR, which should include both circulating and sequestered parasites, and (ii) the third dose of artemisinin at 48 h targets exactly those stages initially hit at time zero, so it is incorrect to extrapolate the PRR measured over 48 h to predict the impact of doses at 48 h and later. PMID:26902760

  11. Incorporating Stage-Specific Drug Action into Pharmacological Modeling of Antimalarial Drug Treatment.

    PubMed

    Hodel, Eva Maria; Kay, Katherine; Hastings, Ian M

    2016-05-01

Pharmacological modeling of antiparasitic treatment based on a drug's pharmacokinetic and pharmacodynamic properties plays an increasingly important role in identifying optimal drug dosing regimens and predicting their potential impact on control and elimination programs. Conventional modeling of treatment relies on methods that do not distinguish between parasites at different developmental stages. This is problematic for malaria parasites, as their sensitivity to drugs varies substantially during their 48-h developmental cycle. We investigated four drug types (short or long half-lives with or without stage-specific killing) to quantify the accuracy of the standard methodology. The treatment dynamics of three drug types were well characterized with standard modeling. The exception was short-half-life drugs with stage-specific killing (i.e., artemisinins) because, depending on time of treatment, parasites might be in highly drug-sensitive stages or in much less sensitive stages. We describe how to bring such drugs into pharmacological modeling by including additional variation into the drug's maximal killing rate. Finally, we show that artemisinin kill rates may have been substantially overestimated in previous modeling studies because (i) the parasite reduction ratio (PRR) (generally estimated to be 10(4)) is based on observed changes in circulating parasite numbers, which generally overestimate the "true" PRR, which should include both circulating and sequestered parasites, and (ii) the third dose of artemisinin at 48 h targets exactly those stages initially hit at time zero, so it is incorrect to extrapolate the PRR measured over 48 h to predict the impact of doses at 48 h and later. PMID:26902760
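
    The core issue, that a short-half-life drug's effect depends on where a synchronous parasite cohort sits in its 48-h cycle at dose time, can be sketched with a toy stage-dependent kill rate. The sensitive-window boundaries and rate constants below are illustrative assumptions, not fitted values from the study:

```python
import math

def stage_kill_rate(age_h, k_max=2.0, sensitive=(6, 26)):
    """Kill rate (per hour) for a parasite of age `age_h` within the
    48-h cycle: maximal inside a drug-sensitive window, much lower
    outside it. Window and rates are hypothetical, not fitted."""
    lo, hi = sensitive
    return k_max if lo <= (age_h % 48) < hi else 0.1 * k_max

def survival_after_dose(age_at_dose, duration_h=6.0, dt=0.1):
    """Fraction of a synchronous cohort (aged `age_at_dose` h at dose
    time) surviving a short drug pulse of `duration_h` hours."""
    steps = int(round(duration_h / dt))
    log_surv = 0.0
    for i in range(steps):
        log_surv -= stage_kill_rate(age_at_dose + i * dt) * dt
    return math.exp(log_surv)

# cohorts dosed inside vs outside the sensitive window fare very
# differently -- the variation the authors fold into the maximal
# killing rate
print(survival_after_dose(10) < survival_after_dose(40))  # True
```

    Averaging such dose-time-dependent survival over the circulating (observable) stages only, rather than over circulating plus sequestered parasites, is one way the "true" PRR can be overestimated.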

  12. Computationally Efficient Finite Element Analysis Method Incorporating Virtual Equivalent Projected Model For Metallic Sandwich Panels With Pyramidal Truss Cores

    SciTech Connect

    Seong, Dae-Yong; Jung, Chang Gyun; Yang, Dong-Yol

    2007-05-17

Metallic sandwich panels composed of two face sheets and cores with low relative density have lightweight characteristics and various static and dynamic load bearing functions. To predict the forming characteristics, performance, and formability of these structured materials, full 3D modeling and analysis involving tremendous computational time and memory are required. Some constitutive continuum models, including homogenization approaches, have limitations with respect to the prediction of local buckling of face sheets and inner structures. In this work, a computationally efficient FE-analysis method incorporating a virtual equivalent projected model that enables the simulation of local buckling modes is newly introduced for analysis of metallic sandwich panels. Two-dimensional models using the projected shapes of 3D structures have the same equivalent elastic-plastic properties as the original geometries, which have anisotropic stiffness, yield strength, and hardening functions. The sizes and isotropic properties of the virtual equivalent projected model have been estimated analytically to match the equivalent properties and face buckling strength of the full model. The 3-point bending processes with quasi-two-dimensional loads and boundary conditions are simulated to establish the validity of the proposed method. The deformed shapes and load-displacement curves of the virtual equivalent projected model are found to be almost the same as those of a full three-dimensional FE-analysis while reducing computational time drastically.

  13. A density dependent delayed predator-prey model with Beddington-DeAngelis type function response incorporating a prey refuge

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Abbas, Syed; Thakur, Manoj

    2015-05-01

    This paper describes a predator-prey model incorporating a prey refuge. The feeding rate of consumers (predators) per consumer (i.e. functional response) is considered to be of Beddington-DeAngelis type. The Beddington-DeAngelis functional response is similar to the Holling-type II functional response but contains an extra term describing mutual interference by predators. We investigate the role of prey refuge and degree of mutual interference among predators in the dynamics of system. The dynamics of the system is discussed mainly from the point of view of permanence and stability. We obtain conditions that affect the persistence of the system. Local and global asymptotic stability of various equilibrium solutions is explored to understand the dynamics of the model system. The global asymptotic stability of positive interior equilibrium solution is established using suitable Lyapunov functional. The dynamical behaviour of the delayed system is further analyzed through incorporating discrete type gestation delay of predator. It is found that Hopf bifurcation occurs when the delay parameter τ crosses some critical value. The analytical results found in the paper are illustrated with the help of numerical examples.
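
    As a rough illustration of the non-delayed dynamics, the Beddington-DeAngelis response with a constant-proportion prey refuge can be integrated numerically. All parameter values here are arbitrary choices for the sketch, not those analyzed in the paper:

```python
def bd_response(N, P, a=1.0, b=0.5, c=0.3, m=0.4):
    """Beddington-DeAngelis functional response with a fixed
    proportion m of prey protected in a refuge; the c*P term is the
    mutual-interference correction absent from Holling type II.
    All parameter values are illustrative, not from the paper."""
    Ne = (1.0 - m) * N                  # prey exposed outside the refuge
    return a * Ne / (1.0 + b * Ne + c * P)

def step(N, P, r=1.0, K=10.0, e=0.6, d=0.3, dt=0.01):
    """One explicit-Euler step of the non-delayed model."""
    f = bd_response(N, P)
    dN = r * N * (1.0 - N / K) - f * P  # logistic growth minus predation
    dP = e * f * P - d * P              # conversion minus mortality
    return N + dt * dN, P + dt * dP

N, P = 5.0, 2.0
for _ in range(10_000):                 # integrate to t = 100
    N, P = step(N, P)
print(N > 0 and P > 0)                  # True: both populations persist
```

    Studying permanence analytically, as the paper does, amounts to proving such persistence for whole parameter ranges rather than observing it for one numerical run; the gestation delay would enter by evaluating the conversion term at time t - τ.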

  14. Incorporating Ecosystem Processes Controlling Carbon Balance Into Models of Coupled Human-Natural Systems

    NASA Astrophysics Data System (ADS)

    Currie, W.; Brown, D. G.; Brunner, A.; Fouladbash, L.; Hadzick, Z.; Hutchins, M.; Kiger, S. E.; Makino, Y.; Nassauer, J. I.; Robinson, D. T.; Riolo, R. L.; Sun, S.

    2012-12-01

    A key element in the study of coupled human-natural systems is the interactions of human populations with vegetation and soils. In human-dominated landscapes, vegetation production and change results from a combination of ecological processes and human decision-making and behavior. Vegetation is often dramatically altered, whether to produce food for humans and livestock, to harvest fiber for construction and other materials, to harvest fuel wood or feedstock for biofuels, or simply for cultural preferences as in the case of residential lawns with sparse trees in the exurban landscape. This alteration of vegetation and its management has a substantial impact on the landscape carbon balance. Models can be used to simulate scenarios in human-natural systems and to examine the integration of processes that determine future trajectories of carbon balance. However, most models of human-natural systems include little integration of the human alteration of vegetation with the ecosystem processes that regulate carbon balance. Here we illustrate a few case studies of pilot-study models that strive for this integration from our research across various types of landscapes. We focus greater detail on a fully developed research model linked to a field study of vegetation and soils in the exurban residential landscape of Southeastern Michigan, USA. The field study characterized vegetation and soil carbon storage in 5 types of ecological zones. Field-observed carbon storage in the vegetation in these zones ranged widely, from 150 g C/m2 in turfgrass zones, to 6,000 g C/m2 in zones defined as turfgrass with sparse woody vegetation, to 16,000 g C/m2 in a zone defined as dense trees and shrubs. Use of these zones facilitated the scaling of carbon pools to the landscape, where the areal mixtures of zone types had a significant impact on landscape C storage. Use of these zones also facilitated the use of the ecosystem process model Biome-BGC to simulate C trajectories and also

  15. Finite-volume model for chemical vapor infiltration incorporating radiant heat transfer. Interim report

    SciTech Connect

    Smith, A.W.; Starr, T.L.

    1995-05-01

Most finite-volume thermal models account for the diffusion and convection of heat and may include volume heating. However, for certain simulation geometries, a large percentage of heat flux is due to thermal radiation. In this paper a finite-volume computational procedure for the simulation of heat transfer by conduction, convection and radiation in three-dimensional complex enclosures is developed. The radiant heat transfer is included as a source term in each volume element, which is derived by Monte Carlo ray tracing from all possible radiating and absorbing faces. The importance of radiative heat transfer is illustrated in the modeling of chemical vapor infiltration (CVI) of tubes. The temperature profile through the tube preform matches experimental measurements only when radiation is included. An alternative, empirical approach using an "effective" thermal conductivity for the gas space can match the initial temperature profile but does not match temperature changes that occur during preform densification.
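
    The Monte Carlo ray-tracing step, sampling cosine-weighted rays from an emitting face and counting hits on a receiving face, is the same computation as a view-factor estimate. A self-contained sketch for two coaxial parallel disks (the geometry, sample count, and function name are our choices for illustration, not the paper's enclosure):

```python
import math
import random

def viewfactor_disk_to_disk(r1=1.0, r2=1.0, h=1.0, n=100_000, seed=1):
    """Monte Carlo estimate of the view factor from a disk of radius
    r1 to a coaxial parallel disk of radius r2 a distance h away,
    using cosine-weighted rays from the emitting face (the same idea
    applied per volume-element face when building a radiant source
    term)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform sample point on the emitting disk
        rad = r1 * math.sqrt(rng.random())
        ang = 2.0 * math.pi * rng.random()
        x0, y0 = rad * math.cos(ang), rad * math.sin(ang)
        # cosine-weighted direction over the hemisphere
        xi1, xi2 = rng.random(), rng.random()
        sin_t, cos_t = math.sqrt(xi1), math.sqrt(1.0 - xi1)
        phi = 2.0 * math.pi * xi2
        # intersect the ray with the receiving plane z = h
        x = x0 + h * sin_t * math.cos(phi) / cos_t
        y = y0 + h * sin_t * math.sin(phi) / cos_t
        if x * x + y * y <= r2 * r2:
            hits += 1
    return hits / n

est = viewfactor_disk_to_disk()
# analytic view factor for equal coaxial disks with r = h is ~0.382
print(0.36 < est < 0.40)  # True
```

    In the paper's procedure the hit counts are turned into exchange factors between faces, which then feed the radiant source term of each volume element.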

  16. Time dependent reliability model incorporating continuum damage mechanics for high-temperature ceramics

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1989-01-01

    Presently there are many opportunities for the application of ceramic materials at elevated temperatures. In the near future ceramic materials are expected to supplant high temperature metal alloys in a number of applications. It thus becomes essential to develop a capability to predict the time-dependent response of these materials. The creep rupture phenomenon is discussed, and a time-dependent reliability model is outlined that integrates continuum damage mechanics principles and Weibull analysis. Several features of the model are presented in a qualitative fashion, including predictions of both reliability and hazard rate. In addition, a comparison of the continuum and the microstructural kinetic equations highlights a strong resemblance in the two approaches.

  17. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    SciTech Connect

    Yuan, Lulin Wu, Q. Jackie; Yin, Fang-Fang; Yoo, David; Jiang, Yuliang; Ge, Yaorong

    2014-02-15

    Purpose: Sparing of single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician prescribed single-side parotid sparing preferences. The single-side sparing model was trained with cases which had single-side sparing preferences, while the standard model was trained with the remainder of cases. A receiver operating characteristics (ROC) analysis was performed to determine the best criterion that separates the two case groups using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes into account the single-side sparing by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement of prediction accuracy by the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19, respectively. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy by both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined

  18. Extrapolation of extreme sea levels: incorporation of Over-Threshold-Modeling to the Joint Probability Method

    NASA Astrophysics Data System (ADS)

    Mazas, Franck; Hamm, Luc; Kergadallan, Xavier

    2013-04-01

    In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity of an accurate estimation of extreme sea levels for risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and the stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge component, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, and the distribution of extreme sea levels is then obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indices for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
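    The Poisson-GPD step can be sketched with SciPy: fit a Generalized Pareto Distribution to threshold excesses and combine it with the annual peak rate to obtain return levels. The synthetic surge record, the threshold choice, and the omission of proper declustering are simplifying assumptions for illustration only.

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical 20-year hourly surge record (m); in practice use observed
# meteorological residuals (observed level minus predicted tide).
rng = np.random.default_rng(0)
surges = rng.gumbel(loc=0.0, scale=0.15, size=20 * 365 * 24)

# Peaks-over-threshold: keep exceedances above a high threshold.
u = np.quantile(surges, 0.995)          # threshold choice is a study in itself
exceed = surges[surges > u]

# Fit the Generalized Pareto Distribution to the threshold excesses.
shape, loc, scale = genpareto.fit(exceed - u, floc=0.0)

# Poisson-GPD return level for return period T, with peak rate lam (events/yr).
lam = len(exceed) / 20.0                # crude rate: no declustering applied
T = 100.0                               # 100-year return period
if abs(shape) > 1e-6:
    x_T = u + scale / shape * ((lam * T) ** shape - 1.0)
else:
    x_T = u + scale * np.log(lam * T)
print(round(float(x_T), 2))
```

    In the method described above this fitted surge distribution would then be convolved with the empirical tide distribution to obtain the full sea level distribution.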

  19. Terrestrial Feedbacks Incorporated in Global Vegetation Models through Observed Trait-Environment Responses

    NASA Astrophysics Data System (ADS)

    Bodegom, P. V.

    2015-12-01

    Most global vegetation models used to evaluate climate change impacts rely on plant functional types to describe vegetation responses to environmental stresses. In a traditional set-up in which vegetation characteristics are considered constant within a vegetation type, the possibilities for implementing and inferring feedback mechanisms are limited, as feedbacks will likely involve a changing expression of community trait values. Based on community assembly concepts, we implemented functional trait-environment relationships into a global dynamic vegetation model to quantitatively assess this feature. For the current climate, a different global vegetation distribution was calculated with and without the inclusion of trait variation, emphasizing the importance of feedbacks, in interaction with competitive processes, for the prevailing global patterns. However, these trait-environment responses do not necessarily imply adaptive responses of vegetation to changing conditions, and may locally lead to a faster turnover in vegetation upon climate change. Indeed, when running climate projections, simulations with trait variation did not yield a more stable or resilient vegetation than those without. Through the different feedback expressions, however, global and regional carbon and water fluxes were strongly altered. At a global scale, model projections including trait variation suggest an increased productivity, and hence an increased carbon sink, over the coming decades. However, by the end of the century, a reduced carbon sink is projected. This effect is due to a downregulation of photosynthesis rates, particularly in the tropical regions, even when accounting for CO2-fertilization effects. Altogether, the various global model simulations suggest the critical importance of including vegetation functional responses to changing environmental conditions to grasp terrestrial feedback mechanisms at global scales in the light of climate change.

  20. Incorporating biologically based models into assessments of risk from chemical contaminants

    NASA Technical Reports Server (NTRS)

    Bull, R. J.; Conolly, R. B.; De Marini, D. M.; MacPhail, R. C.; Ohanian, E. V.; Swenberg, J. A.

    1993-01-01

    The general approach to assessment of risk from chemical contaminants in drinking water involves three steps: hazard identification, exposure assessment, and dose-response assessment. Traditionally, the risks to humans associated with different levels of a chemical have been derived from the toxic responses observed in animals. It is becoming increasingly clear, however, that further information is needed if risks to humans are to be assessed accurately. Biologically based models help clarify the dose-response relationship and reduce uncertainty.

  1. A microstructurally based continuum model of cartilage viscoelasticity and permeability incorporating measured statistical fiber orientations.

    PubMed

    Pierce, David M; Unterberger, Michael J; Trobin, Werner; Ricken, Tim; Holzapfel, Gerhard A

    2016-02-01

    The remarkable mechanical properties of cartilage derive from an interplay of isotropically distributed, densely packed and negatively charged proteoglycans; a highly anisotropic and inhomogeneously oriented fiber network of collagens; and an interstitial electrolytic fluid. We propose a new 3D finite strain constitutive model capable of simultaneously addressing both solid (reinforcement) and fluid (permeability) dependence of the tissue's mechanical response on the patient-specific collagen fiber network. To represent fiber reinforcement, we integrate the strain energies of single collagen fibers-weighted by an orientation distribution function (ODF) defined over a unit sphere-over the distributed fiber orientations in 3D. We define the anisotropic intrinsic permeability of the tissue with a structure tensor based again on the integration of the local ODF over all spatial fiber orientations. By design, our modeling formulation accepts structural data on patient-specific collagen fiber networks as determined via diffusion tensor MRI. We implement our new model in 3D large strain finite elements and study the distributions of interstitial fluid pressure, fluid pressure load support and shear stress within a cartilage sample under indentation. Results show that the fiber network dramatically increases interstitial fluid pressure and focuses it near the surface. Inhomogeneity in the tissue's composition also increases fluid pressure and reduces shear stress in the solid. Finally, a biphasic neo-Hookean material model, as is available in commercial finite element codes, does not capture important features of the intra-tissue response, e.g., distributions of interstitial fluid pressure and principal shear stress. PMID:26001349
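    The ODF-weighted averaging of single-fiber contributions described above can be illustrated with a Monte Carlo integration over the unit sphere. The fiber energy law and the von Mises-Fisher-style ODF below are placeholder assumptions; the paper uses measured DT-MRI-derived fiber distributions and a physically calibrated collagen fiber model.

```python
import numpy as np

def fiber_energy_density(stretch):
    # Toy single-fiber strain energy (exponential stiffening); a placeholder
    # for the real collagen-fiber law. Only tensed fibers contribute.
    return np.where(stretch > 1.0, np.exp(5.0 * (stretch**2 - 1.0)) - 1.0, 0.0)

def odf(n, mean_dir, kappa=2.0):
    # Illustrative orientation distribution concentrated around mean_dir
    # (assumption; the paper's ODFs come from diffusion tensor MRI).
    return np.exp(kappa * (n @ mean_dir) ** 2)

def anisotropic_energy(F, n_samples=20000, seed=0):
    """Integrate ODF-weighted fiber energies over the unit sphere by
    Monte Carlo sampling of fiber directions (sketch of the averaging idea)."""
    rng = np.random.default_rng(seed)
    n = rng.normal(size=(n_samples, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # uniform on the sphere
    w = odf(n, np.array([0.0, 0.0, 1.0]))
    w /= w.sum()                                     # normalized ODF weights
    stretch = np.linalg.norm(n @ F.T, axis=1)        # fiber stretch |F n|
    return float(np.sum(w * fiber_energy_density(stretch)))

F = np.diag([1.0, 1.0, 1.1])   # 10% stretch along the mean fiber direction
print(anisotropic_energy(F))
```

    The same ODF would also enter the structure tensor governing the anisotropic permeability, which is why a single measured fiber network can drive both the solid and fluid parts of the model.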

  2. Towards an improvement of carbon accounting for wildfires: incorporation of charcoal production into carbon emission models

    NASA Astrophysics Data System (ADS)

    Doerr, Stefan H.; Santin, Cristina; de Groot, Bill

    2015-04-01

    Every year fires release to the atmosphere the equivalent of 20-30% of the carbon (C) emissions from fossil fuel consumption, with future emissions from wildfires expected to increase under a warming climate. Critically, however, part of the biomass C affected by fire is not emitted during burning, but converted into charcoal, which is very resistant to environmental degradation and thus contributes to long-term C sequestration. The magnitude of charcoal production from wildfires as a long-term C sink remains essentially unknown and, to date, charcoal production has not been included in wildfire emission and C budget models. Here we present complete inventories of charcoal production in two fuel-rich, but otherwise very different, ecosystems: i) a boreal conifer forest (experimental stand-replacing crown fire; Canada, 2012) and ii) a dry eucalyptus forest (high-intensity fuel reduction burn; Australia, 2014). Our data show that, when considering all the fuel components and quantifying all the charcoal produced from each (i.e. bark, dead wood debris, fine fuels), the overall amount of charcoal produced is significant: up to a third of the biomass C affected by fire. These findings indicate that charcoal production from wildfires could represent a major and currently unaccounted-for error in the estimation of the effects of wildfires on the global C balance. We suggest an initial approach to include charcoal production in C emission models, using our case study of a boreal forest fire and the Canadian Fire Effects Model (CanFIRE). We also provide recommendations on how a 'conversion factor' for charcoal production could be estimated relatively easily once emission factors for different types of fuels and fire conditions are experimentally obtained. Ultimately, this presentation is a call for integrative collaboration between the fire emission modelling community and the charcoal community to work together towards the improvement of C accounting for wildfires.
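    The suggested bookkeeping, partitioning fire-affected carbon into emitted and charcoal pools via a conversion factor, amounts to simple arithmetic. The function and the fractions below are illustrative assumptions, not CanFIRE values or measured data.

```python
def fire_carbon_budget(c_affected_t, emission_fraction, char_fraction):
    """Partition fire-affected biomass carbon (tonnes) into emitted C,
    charcoal C, and a remainder, using a charcoal 'conversion factor'
    (illustrative bookkeeping only)."""
    assert emission_fraction + char_fraction <= 1.0
    emitted = c_affected_t * emission_fraction
    charcoal = c_affected_t * char_fraction
    unburned = c_affected_t - emitted - charcoal
    return {"emitted": emitted, "charcoal": charcoal, "unburned": unburned}

# With up to ~1/3 of affected C converted to charcoal (the study's upper
# bound), emission estimates that ignore charcoal can overstate the fire's
# net atmospheric contribution markedly.
budget = fire_carbon_budget(100.0, emission_fraction=0.6, char_fraction=0.3)
print(round(budget["charcoal"], 6))  # → 30.0
```

    A practical conversion factor would, as the abstract notes, need to vary by fuel type and fire conditions rather than being a single constant.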

  3. Incorporating the effect of gas in modelling the impact of CBM extraction on regional groundwater systems

    NASA Astrophysics Data System (ADS)

    Herckenrath, Daan; Doherty, John; Panday, Sorab

    2015-04-01

    Production of Coalbed Methane (CBM) requires extraction of large quantities of groundwater. To date, standard groundwater flow simulators have mostly been used to assess the impact of this extraction on regional groundwater systems. Recent research has demonstrated that predictions of regional impact assessment made by such models may be seriously compromised unless account is taken of the presence of a gas phase near extraction wells. At the same time, CBM impact assessment must accommodate the traditional requirements of regional groundwater modelling. These include representation of surficial groundwater processes and up-scaled rock properties as well as the need for calibration and predictive uncertainty quantification. The study documented herein (1) quantifies errors in regional drawdown predictions incurred through neglect of the presence of a gas phase near CBM extraction centres, and (2) evaluates the extent to which these errors can be mitigated by simulating near-well desaturation using a modified Richards equation formulation within a standard groundwater flow simulator. Two synthetic examples are provided to quantify the impact of the gas phase and verify the proposed modelling approach (implemented in MODFLOW-USG) against rigorous multiphase flow simulations (undertaken using ECLIPSE).

  4. A Partition-Based Active Contour Model Incorporating Local Information for Image Segmentation

    PubMed Central

    Wu, Jiaji; Jiao, Licheng; Gong, Maoguo

    2014-01-01

    Active contour models are generally designed on the assumption that images are approximated by regions with piecewise-constant intensities. This assumption, however, cannot be satisfied for intensity-inhomogeneous images, which frequently occur in real world images and induce considerable difficulties in image segmentation. A milder assumption, that the image is statistically homogeneous within different local regions, may better suit real world images. By taking local image information into consideration, an enhanced active contour model is proposed to overcome difficulties caused by intensity inhomogeneity. In addition, according to curve evolution theory, only the region near contour boundaries is supposed to be evolved in each iteration. We try to detect the regions near contour boundaries adaptively to satisfy the requirement of curve evolution theory. In the proposed method, pixels within a selected region near contour boundaries have the opportunity to be updated in each iteration, which enables the contour to be evolved gradually. Experimental results on synthetic and real world images demonstrate the advantages of the proposed model when dealing with intensity-inhomogeneous images. PMID:25147868

  5. Finite Element Surface Registration Incorporating Curvature, Volume Preservation, and Statistical Model Information

    PubMed Central

    Lüthi, Marcel; Vetter, Thomas

    2013-01-01

    We present a novel method for nonrigid registration of 3D surfaces and images. The method can be used to register surfaces by means of their distance images, or to register medical images directly. It is formulated as a minimization problem of a sum of several terms representing the desired properties of a registration result: smoothness, volume preservation, matching of the surface, its curvature, and possible other feature images, as well as consistency with previous registration results of similar objects, represented by a statistical deformation model. While most of these concepts are already known, we present a coherent continuous formulation of these constraints, including the statistical deformation model. This continuous formulation renders the registration method independent of its discretization. The finite element discretization we present is, while independent of the registration functional, the second main contribution of this paper. The local discontinuous Galerkin method has not previously been used in image registration, and it provides an efficient and general framework to discretize each of the terms of our functional. Computational efficiency and modest memory consumption are achieved thanks to parallelization and locally adaptive mesh refinement. This allows for the first time the use of otherwise prohibitively large 3D statistical deformation models. PMID:24187581

  6. A refined model for the structure of acireductone dioxygenase from Klebsiella ATCC 8724 incorporating residual dipolar couplings

    PubMed Central

    Pochapsky, Thomas C.; Pochapsky, Susan S.; Ju, Tingting; Hoefler, Chris; Liang, Jue

    2006-01-01

    Acireductone dioxygenase (ARD) from Klebsiella ATCC 8724 is a metalloenzyme that is capable of catalyzing different reactions with the same substrates (acireductone and O2) depending upon the metal bound in the active site. A model for the solution structure of the paramagnetic Ni2+-containing ARD has been refined using residual dipolar couplings (RDCs) measured in two media. Additional dihedral restraints based on chemical shift (TALOS) were included in the refinement, and backbone structure in the vicinity of the active site was modeled from a crystallographic structure of the mouse homolog of ARD. The incorporation of residual dipolar couplings into the structural refinement alters the relative orientations of several structural features significantly, and improves local secondary structure determination. Comparisons between the solution structures obtained with and without RDCs are made, and structural similarities and differences between mouse and bacterial enzymes are described. Finally, the biological significance of these differences is considered. PMID:16518698

  7. Incorporation of SemiSpan SuperSonic Transport (S4T) Aeroservoelastic Models into SAREC-ASV Simulation

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Pototzky, Anthony S.; Stevens, William L.

    2010-01-01

    The Simulink-based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) was modified to incorporate linear models representing aeroservoelastic characteristics of the SemiSpan SuperSonic Transport (S4T) wind-tunnel model. The S4T planform is for a Technology Concept Aircraft (TCA) design from the 1990s. The model has three control surfaces and is instrumented with accelerometers and strain gauges. Control laws developed for wind-tunnel testing for Ride Quality Enhancement, Gust Load Alleviation, and Flutter Suppression System functions were implemented in the simulation. The simulation models open- and closed-loop response to turbulence and to control excitation. It provides time histories for closed-loop stable conditions above the open-loop flutter boundary. The simulation is useful for assessing the potential impact of closed-loop control rate and position saturation. It also provides a means to assess fidelity of system identification procedures by providing time histories for a known plant model, with and without unmeasured turbulence as a disturbance. Sets of linear models representing different Mach number and dynamic pressure conditions were implemented as MATLAB Linear Time Invariant (LTI) objects. Configuration changes were implemented by selecting which LTI object to use in a Simulink template block. A limited comparison of simulation versus wind-tunnel results is shown.

  8. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  9. Incorporating Satellite Remote Sensing Data into Hydrologic Models: Towards Improved Performance in Modeling the Past and Reduced Uncertainty in Predicting the Future

    NASA Astrophysics Data System (ADS)

    Parr, D.; Wang, G.

    2014-12-01

    In many regions of the world, studies of past hydrological variability have to rely on hydrological models, either because river gauge measurements are not available or because measurements do not reflect the natural flow due to water diversion or reservoir regulation. However, results from these studies are subject to major uncertainty related to the challenges in quantifying vegetation conditions and evapotranspiration, both of which are important for surface water and energy budgets. This study incorporates satellite remote sensing data for ET and vegetation into the VIC model to improve the model performance in simulating the surface water budget, hydrological seasonality, and timing of hydrological extremes. Using the Connecticut River Basin as an example, and driven with the NASA NLDAS-2 meteorological forcing data, the VIC model has been modified to read in LAI and ET data derived from MODIS, among other sources. The MODIS LAI data provide VIC with the inter-annually varying seasonal cycle of vegetation, and the MODIS ET data replace the model-simulated ET. The data-enhanced model performs significantly better in simulating river discharge (its magnitude, seasonality, and timing) as well as soil moisture and its temporal variation. Incorporation of the ET data led to an increase in streamflow correlations between model and observations at daily and biweekly temporal scales, and the seasonality is better represented at a monthly scale, with particular magnitude improvements during the summer when ET is greatest. Incorporation of the LAI data led to improved simulation of inter-annual variability. This joint application of remote sensing and modeling helps quantify the extent to which remote sensing data improve model performance, facilitates a more accurate understanding and attribution of past hydrological variability and changes, and helps characterize the range of model-related uncertainties in future predictions.

  10. Estimation of freshwater runoff into Glacier Bay, Alaska and incorporation into a tidal circulation model

    NASA Astrophysics Data System (ADS)

    Hill, D. F.; Ciavola, S. J.; Etherington, L.; Klaar, M. J.

    2009-03-01

    Freshwater discharge is one of the most critical parameters driving water properties within fjord estuarine environments. To date, however, little attention has been paid to the issue of freshwater runoff into Glacier Bay, a recently deglaciated fjord in southeastern Alaska. Estimates of discharge into Glacier Bay and the outlying waters of Icy Strait and Cross Sound are therefore presented. Existing regression equations for southcentral and southeastern coastal Alaska are applied to Glacier Bay to arrive at the estimates. A limited set of acoustic Doppler current profiler (ADCP) measurements generally support the predictions of the regression equations. The results suggest that discharge into the bay ranges from a few hundred to a few thousand m³ s⁻¹ during a typical year. Peak discharges can be much higher, approximately 10,000 m³ s⁻¹ for the 10-year flow event. Estimates of the seasonal variation of discharge are also obtained and reveal a broad peak during the summer months. The hydrologic estimates are then coupled with a barotropic tidal circulation model (ADCIRC - ADvanced CIRCulation model) of Glacier Bay waters. This coupling is achieved by treating the entire coastline boundary as a non-zero normal-flux boundary. Numerical simulations with the inclusion of runoff allow for the estimation of parameters such as the estuarine Richardson number, which is an indicator of estuary mixing. Simulations also allow for the comparison of Lagrangian trajectories in the presence and absence of runoff. The results of the present paper are intended to complement a comprehensive and recently-published dataset on the oceanographic conditions of Glacier Bay. The results will also guide continuing efforts to model three-dimensional circulations in the bay.
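    The estuarine Richardson number mentioned above can be computed from freshwater discharge, estuary width, and tidal velocity. A minimal sketch follows, using the Fischer et al. form of the number; the input values are illustrative assumptions, not results from the Glacier Bay study.

```python
def estuarine_richardson(qf, width, u_tidal, drho_over_rho=0.025, g=9.81):
    """Estuarine Richardson number, Ri = (Δρ/ρ) g Q_f / (W u_t³),
    after Fischer et al. Large Ri suggests strong stratification,
    small Ri a well-mixed estuary (thresholds are rough guides only)."""
    return drho_over_rho * g * qf / (width * u_tidal**3)

# Illustrative numbers: a summer freshwater input of 2000 m³/s into a
# 10 km wide section with 1 m/s tidal currents.
print(round(estuarine_richardson(qf=2000.0, width=10_000.0, u_tidal=1.0), 3))
# → 0.049
```

    Because Ri scales with the cube of the tidal velocity, the same discharge can imply very different mixing regimes in different reaches of the bay.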

  11. A 3-D probabilistic stability model incorporating the variability of root reinforcement

    NASA Astrophysics Data System (ADS)

    Cislaghi, Alessio; Chiaradia, Enrico; Battista Bischetti, Gian

    2016-04-01

    Process-oriented models of hillslope stability have great potential to improve spatially-distributed landslide hazard analyses. At the same time, they may have severe limitations, and among them the variability and uncertainty of the parameters play a key role. In this context, the application of a probabilistic approach through Monte Carlo techniques can be the right practice to deal with the variability of each input parameter by considering a proper probability distribution. In forested areas an additional point must be taken into account: the reinforcement due to roots permeating the soil, and its variability and uncertainty. While the probability distributions of geotechnical and hydrological parameters have been widely investigated, little is known concerning the variability and the spatial heterogeneity of root reinforcement. Moreover, there are still many difficulties in measuring and evaluating such a variable. In our study we aim to: (i) implement a robust procedure to evaluate the variability of root reinforcement as a probability distribution, according to the stand characteristics of forests, such as tree density, average diameter at breast height, and minimum distance among trees; and (ii) combine a multidimensional process-oriented model with a Monte Carlo simulation technique to obtain a probability distribution of the Factor of Safety. The proposed approach has been applied to a small Alpine area, mainly covered by a coniferous forest and characterized by steep slopes and a high landslide hazard. The obtained results show good reliability of the model when compared against the landslide inventory map. Ultimately, our findings contribute to improving the reliability of landslide hazard mapping in forested areas and help forest managers evaluate different management scenarios.
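    The coupling of a slope-stability model with Monte Carlo sampling can be sketched with an infinite-slope factor of safety and additive root cohesion. All distributions and parameter values below are illustrative assumptions, not the calibrated ones from the study, and the study's model is multidimensional rather than the 1D infinite-slope form used here.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
g, rho_w = 9.81, 1000.0

# Probability distributions for the inputs (illustrative choices; the study
# derives the root-reinforcement distribution from forest stand data).
phi    = np.radians(rng.normal(33.0, 2.0, N))     # friction angle
c_soil = rng.lognormal(np.log(2000.0), 0.3, N)    # soil cohesion, Pa
c_root = rng.lognormal(np.log(5000.0), 0.6, N)    # root reinforcement, Pa
gamma  = 18000.0                                  # soil unit weight, N/m^3
z, beta, m = 1.5, np.radians(35.0), 0.5           # depth, slope, saturation

# Infinite-slope factor of safety with additive root cohesion.
tau_resist = (c_soil + c_root
              + (gamma - m * rho_w * g) * z * np.cos(beta)**2 * np.tan(phi))
tau_drive = gamma * z * np.sin(beta) * np.cos(beta)
fos = tau_resist / tau_drive

# The Monte Carlo ensemble yields a distribution of FoS rather than a
# single value; its lower tail gives a probability of failure.
print(round(float(np.mean(fos < 1.0)), 3))
```

    Mapping this failure probability per cell, instead of a deterministic FoS, is what lets the hazard map reflect the spatial variability of the forest stand.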

  12. Improving Public Health DSSs by Including Saharan Dust Forecasts Through Incorporation of NASA's GOCART Model Results

    NASA Technical Reports Server (NTRS)

    Berglund, Judith

    2007-01-01

    Approximately 2-3 billion metric tons of soil dust are estimated to be transported in the Earth's atmosphere each year. Global transport of desert dust is believed to play an important role in many geochemical, climatological, and environmental processes. This dust carries minerals and nutrients, but it has also been shown to carry pollutants and viable microorganisms capable of harming human, animal, plant, and ecosystem health. Saharan dust, which impacts the eastern United States (especially Florida and the southeast) and U.S. Territories in the Caribbean primarily during the summer months, has been linked to increases in respiratory illnesses in this region and has been shown to carry other human, animal, and plant pathogens. For these reasons, this candidate solution recommends integrating Saharan dust distribution and concentration forecasts from the NASA GOCART global dust cycle model into a public health DSS (decision support system), such as the CDC's (Centers for Disease Control and Prevention's) EPHTN (Environmental Public Health Tracking Network), for the eastern United States and Caribbean for early warning purposes regarding potential increases in respiratory illnesses or asthma attacks, potential disease outbreaks, or bioterrorism. This candidate solution pertains to the Public Health National Application but also has direct connections to Air Quality and Homeland Security. In addition, the GOCART model currently uses the NASA MODIS aerosol product as an input and uses meteorological forecasts from the NASA GEOS-DAS (Goddard Earth Observing System Data Assimilation System) GEOS-4 AGCM. In the future, VIIRS aerosol products and perhaps CALIOP aerosol products could be assimilated into the GOCART model to improve the results.

  13. Conditional simulation of Thwaites Glacier bed topography for flow models: Incorporating inhomogeneous statistics and channelized morphology

    NASA Astrophysics Data System (ADS)

    Goff, J. A.; Powell, E.; Young, D. A.; Blankenship, D. D.

    2012-12-01

    Thwaites Glacier, Antarctica, is a large glacier experiencing rapid change whose mass could, if disgorged into the ocean, lead to global sea level rise on the order of 1 m. Efforts to model flow for Thwaites Glacier are strongly dependent on an accurate topographic model of the ice bed. Airborne radar data collected in 2004/5 provides 35,000 line km of bed topography measurements sampled every 20 m along track on a grid survey covering much of the glacier. However, at ~15 km track spacing, this extensive data set misses considerable important detail, particularly: (1) resolution of mesoscale channelized morphology that can guide glacier flow; and (2) resolution of small-scale roughness between the track lines that is critical for determining topographic resistance to flow. Both issues are addressed using a hybrid conditional simulation methodology that merges an unconditional stochastic realization surface with a mean surface. Channelized morphology is established in the mean surface using an algorithm developed earlier for interpolating sinuous river channels. This algorithm applies a coordinate transformation to channel picks, where the X-axis is distance along-channel, and the Y-axis is distance across-channel. Interpolation in channel space ensures along-channel continuity where interpolation in Cartesian space would not. Inverse transformation brings the interpolated channel back into Cartesian space, where a spline-in-tension interpolation completes the mean surface for areas not identified as channels. The statistical characteristics of the bed topography are modeled with an isotropic von Kármán spectrum, which specifies rms height, characteristic scale, and fractal dimension. These parameters are estimated from the data using a covariance analysis, and are determined as a function of position across the grid. RMS heights and characteristic scales are well resolved by this estimation, whereas fractal dimension is better constrained through an
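    The von Kármán statistical model can be sketched as a power spectral density used to filter white noise into an unconditional topography realization, the stochastic half of the hybrid methodology above. The Goff-Jordan normalization below is one common convention (conventions vary), and the parameter values are illustrative, not estimates from the Thwaites data.

```python
import numpy as np

def von_karman_psd_2d(k, rms_height, char_scale, nu):
    """2D von Kármán power spectral density (Goff-Jordan form, sketch):
    P(k) = 4*pi*nu*H^2*a^2 / (1 + (k*a)^2)^(nu+1),
    where H is rms height, a the characteristic scale, and nu relates to
    fractal dimension via D = 3 - nu."""
    H, a = rms_height, char_scale
    return 4.0 * np.pi * nu * H**2 * a**2 / (1.0 + (k * a) ** 2) ** (nu + 1)

# Synthesize an unconditional realization on a grid by filtering white
# noise with the square root of the PSD (standard spectral simulation).
n, dx = 256, 500.0                       # grid size, spacing in metres
kx = np.fft.fftfreq(n, d=dx) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx)
K = np.hypot(KX, KY)
psd = von_karman_psd_2d(K, rms_height=50.0, char_scale=5e3, nu=0.5)
noise = np.fft.fft2(np.random.default_rng(1).normal(size=(n, n)))
topo = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
topo *= 50.0 / topo.std()                # rescale to the target rms height
print(topo.shape)
```

    Conditioning this realization on the along-track radar picks, and letting H, a, and nu vary across the grid, are the additional steps the abstract's methodology performs.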

  14. An Expanded Notch-Delta Model Exhibiting Long-Range Patterning and Incorporating MicroRNA Regulation

    PubMed Central

    Chen, Jerry S.; Gumbayan, Abygail M.; Zeller, Robert W.; Mahaffy, Joseph M.

    2014-01-01

    Notch-Delta signaling is a fundamental cell-cell communication mechanism that governs the differentiation of many cell types. Most existing mathematical models of Notch-Delta signaling are based on a feedback loop between Notch and Delta leading to lateral inhibition of neighboring cells. These models result in a checkerboard spatial pattern whereby adjacent cells express opposing levels of Notch and Delta, leading to alternate cell fates. However, a growing body of biological evidence suggests that Notch-Delta signaling produces other patterns that are not checkerboard, and therefore a new model is needed. Here, we present an expanded Notch-Delta model that builds upon previous models, adding a local Notch activity gradient, which affects long-range patterning, and the activity of a regulatory microRNA. This model is motivated by our experiments in the ascidian Ciona intestinalis showing that the peripheral sensory neurons, whose specification is in part regulated by the coordinate activity of Notch-Delta signaling and the microRNA miR-124, exhibit a sparse spatial pattern whereby consecutive neurons may be spaced over a dozen cells apart. We perform rigorous stability and bifurcation analyses, and demonstrate that our model is able to accurately explain and reproduce the neuronal pattern in Ciona. Using Monte Carlo simulations of our model along with miR-124 transgene over-expression assays, we demonstrate that the activity of miR-124 can be incorporated into the Notch decay rate parameter of our model. Finally, we motivate the general applicability of our model to Notch-Delta signaling in other animals by providing evidence that microRNAs regulate Notch-Delta signaling in analogous cell types in other organisms, and by discussing evidence in other organisms of sparse spatial patterns in tissues where Notch-Delta signaling is active. PMID:24945987
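    The core lateral-inhibition mechanism, with microRNA activity folded into the Notch decay rate as the abstract concludes, can be sketched as a Collier-style ODE system on a ring of cells. The equations and parameter values are illustrative assumptions, not the paper's expanded model (which adds a local Notch activity gradient for long-range patterning).

```python
import numpy as np

def simulate_notch_delta(n_cells=30, steps=20000, dt=0.01,
                         mir_activity=0.0, seed=3):
    """Minimal Collier-style lateral-inhibition model on a 1D ring (sketch).

    Each cell's Notch (N) is activated by its neighbours' Delta and, per the
    abstract's conclusion, degraded at a rate increased by microRNA activity;
    Delta (D) is repressed by the cell's own Notch.
    """
    rng = np.random.default_rng(seed)
    N = rng.uniform(0, 0.1, n_cells)
    D = rng.uniform(0, 0.1, n_cells)
    decay_N = 1.0 + mir_activity          # miR-124 accelerates Notch decay
    for _ in range(steps):
        Dbar = 0.5 * (np.roll(D, 1) + np.roll(D, -1))   # neighbour Delta
        dN = Dbar**2 / (0.01 + Dbar**2) - decay_N * N   # trans-activation
        dD = 1.0 / (1.0 + 100.0 * N**2) - D             # repression by Notch
        N += dt * dN
        D += dt * dD
    return N, D

N, D = simulate_notch_delta()
print(round(float(D.max()), 2), round(float(D.min()), 2))
```

    With these feedbacks the homogeneous state is unstable and cells diverge into high-Delta and low-Delta fates; raising `mir_activity` weakens Notch signaling, which is how the microRNA's effect can be explored within a single decay parameter.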

  15. A process-based evapotranspiration model incorporating coupled soil water-atmospheric controls

    NASA Astrophysics Data System (ADS)

    Haghighi, Erfan; Kirchner, James

    2016-04-01

    Despite many efforts to develop evapotranspiration models (in the framework of the Penman-Monteith equation) with improved parametrizations of various resistance terms for water vapor transfer into the atmosphere, evidence suggests that estimates of evapotranspiration and its partitioning are prone to bias. Much of this bias could arise from the exclusion of surface hydro-thermal properties and of physical interactions close to the surface where heat and water vapor fluxes originate. Recent progress has been made in mechanistic modeling of surface-turbulence interactions, accounting for localized heat and mass exchange rates from bare soil surfaces covered by protruding obstacles. We seek to extend these results to partially vegetated surfaces, to improve the predictive capabilities and accuracy of remote sensing techniques for quantifying evapotranspiration fluxes. The governing equations of liquid water, water vapor, and energy transport dynamics in the soil-plant-atmosphere system are coupled to resolve diffusive vapor fluxes from isolated pores (plant stomata and soil pores) across a near-surface viscous sublayer, explicitly accounting for pore-scale transport mechanisms and environmental forcing. Preliminary results suggest that this approach offers unique opportunities for directly linking transport properties in plants and adjacent bare soil with the resulting plant transpiration and localized bare soil evaporation rates. It thus provides an essential building block for interpreting and upscaling results to field and landscape scales for a range of vegetation covers and atmospheric conditions.

  16. A Framework for Incorporating Dyads in Models of HIV-Prevention

    PubMed Central

    Hops, Hyman; Redding, Colleen A.; Reis, Harry T.; Rothman, Alexander J.; Simpson, Jeffry A.

    2014-01-01

    Although HIV is contracted by individuals, it is typically transmitted in dyads. Most efforts to promote safer sex practices, however, focus exclusively on individuals. The goal of this paper is to provide a theoretical framework that specifies how models of dyadic processes and relationships can inform models of HIV-prevention. At the center of the framework is the proposition that safer sex between two people requires a dyadic capacity for successful coordination. According to this framework, relational, individual, and structural variables that affect the enactment of safer sex do so through their direct and indirect effects on that dyadic capacity. This dyadic perspective does not require an ongoing relationship between two individuals; rather, it offers a way of distinguishing between dyads along a continuum from anonymous strangers (with minimal coordination of behavior) to long-term partners (with much greater coordination). Acknowledging the dyadic context of HIV-prevention offers new targets for interventions and suggests new approaches to tailoring interventions to specific populations. PMID:20838872

  17. Incorporating Information on (micro)Topography when Modelling Soil Erosion at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Cerdan, O.

    2015-12-01

In the context of shallow flows, the spatial distribution of the flow is strongly influenced by the micro-topography. For instance, local oriented depressions may exist in which the flow depth and velocity exceed the threshold for soil erosion initiation; if a mean uniform flow shear stress is used to characterize the area, it would be smaller and therefore may not initiate erosion. However, management of water and sediment fluxes requires analysis and modeling at the watershed scale in order to integrate the relations between upstream and downstream areas. At this scale, high-resolution information on the microtopography is usually unavailable and would in any case require excessive computational resources to be integrated explicitly into a model. Moreover, in agricultural contexts, this information is likely to change during the year depending on agricultural practices. The objective of this study is therefore to propose a parameterisation of the influence of microtopography on erosion within the framework of the shallow water equations. For each cell, the proportion of wetted area is used as a microtopography indicator. For the case of erosion, the system is coupled to the sediment transport equations, and an additional equation describing the evolution of the micro-topography caused by erosion is introduced. Different case studies will be presented to investigate the potential of the approach.
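The per-cell wetted-area indicator described in this record can be sketched as a simple sub-grid calculation: the fraction of a cell's micro-relief lying below the free surface. The micro-DEM below is synthetic and the function name is hypothetical.

```python
import numpy as np

def wetted_fraction(microtopo, water_level):
    """Fraction of a grid cell's sub-grid bed surface lying below the water level.
    microtopo: array of sub-grid bed elevations (m); water_level: free-surface elevation (m)."""
    return float(np.mean(microtopo < water_level))

rng = np.random.default_rng(1)
# Hypothetical 1 m cell sampled at 10 cm resolution, with ~2 cm of micro-relief
bed = 0.02 * rng.random((10, 10))
for h in (0.005, 0.01, 0.02):
    print(h, wetted_fraction(bed, h))
```

At shallow depths the flow concentrates in a small wetted fraction of the cell, which is exactly why a cell-mean shear stress understates local erosive stress.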

  18. Incorporating Religiosity into a Developmental Model of Positive Family Functioning across Generations

    PubMed Central

    Spilman, Sarah K.; Neppl, Tricia K.; Donnellan, M. Brent; Schofield, Thomas J.; Conger, Rand D.

    2012-01-01

    This study evaluated a developmental model of intergenerational continuity in religiosity and its association with observed competency in romantic and parent-child relationships across two generations. Using multi-informant data from the Family Transitions Project, a 20-year longitudinal study of families that began during early adolescence (N = 451), we found that parental religiosity assessed during the youth’s adolescence was positively related to the youth’s own religiosity during adolescence which, in turn, predicted their religiosity after the transition to adulthood. The findings also supported the theoretical model guiding the study, which proposes that religiosity acts as a personal resource that will be uniquely and positively associated with the quality of family relationships. Especially important, the findings demonstrate support for the role of religiosity in a developmental process that promotes positive family functioning after addressing earlier methodological limitations in this area of study, such as cross-sectional research designs, single informant measurement, retrospective reports, and the failure to control for other individual differences. PMID:22545832

  19. Incorporation of measurement models in the IHCP: validation of methods for computing correction kernels

    NASA Astrophysics Data System (ADS)

    Woolley, J. W.; Wilson, H. B.; Woodbury, K. A.

    2008-11-01

    Thermocouples or other measuring devices are often imbedded into a solid to provide data for an inverse calculation. It is well-documented that such installations will result in erroneous (biased) sensor readings, unless the thermal properties of the measurement wires and surrounding insulation can be carefully matched to those of the parent domain. Since this rarely can be done, or doing so is prohibitively expensive, an alternative is to include a sensor model in the solution of the inverse problem. In this paper we consider a technique in which a thermocouple model is used to generate a correction kernel for use in the inverse solver. The technique yields a kernel function with terms in the Laplace domain. The challenge of determining the values of the correction kernel function is the focus of this paper. An adaptation of the sequential function specification method[1] as well as numerical Laplace transform inversion techniques are considered for determination of the kernel function values. Each inversion method is evaluated with analytical test functions which provide simulated "measurements". Reconstruction of the undisturbed temperature from the "measured" temperature and the correction kernel is demonstrated.

  20. Study of an intraurban travel demand model incorporating commuter preference variables

    NASA Technical Reports Server (NTRS)

    Holligan, P. E.; Coote, M. A.; Rushmer, C. R.; Fanning, M. L.

    1971-01-01

The model is based on the substantial travel data base for the nine-county San Francisco Bay Area, provided by the Metropolitan Transportation Commission. The model is of the abstract type, and makes use of commuter attitudes towards modes and simple demographic characteristics of zones in a region to predict interzonal travel by mode for the region. A characterization of the STOL/VTOL mode was extrapolated by means of a subjective comparison of its expected characteristics with those of modes characterized by the survey. Predictions of STOL demand were made for the Bay Area and an aircraft network was developed to serve this demand. When this aircraft system is compared to the base case system, the demand for STOL service has increased fivefold and the resulting economics show considerable benefit from the increased scale of operations. In the previous study all systems required subsidy in varying amounts. The new system shows a substantial profit at an average fare of $3.55 per trip.

  1. On the tectonics and metallogenesis of West Africa: a model incorporating new geophysical data

    USGS Publications Warehouse

    Hastings, David A.

    1982-01-01

    The gold, diamond and manganese deposits of Ghana have attracted commercial interest, but appropriate geophysical data to delineate the tectonic setting of these and other deposits have been lacking until recently. Recent gravity surveys, however, now cover about 75% of the country. When used in a synthesis of the sometimes contradictory existing theories about the geology and metallogenesis of West Africa, the available gravity, magnetic, and seismic data lead to a preliminary tectonic model that postulates rifting at the time of the (1800-2000 m.y. old) Eburnean orogeny and is consistent with the occurrences of mineral deposits in the region. In this model, diamond-bearing kimberlites formed during the commencement of rifting during the Eburnean orogenesis. Later emplacement of kimberlites was associated with the initiation of Mesozoic rifting of Gondwanaland. Primary gold vein deposits were probably formed by the migration of hydrothermal fluids (associated with the formation of granitoids) into dilatant zones, such as rift-related faults and anticlinal axial areas, toward the end of the Eburnean orogeny. At this time, the major concordant granitoids were formed, with smaller plutonic granitoids forming on the fringes of the concordant masses as partial melting fractions of the latter. Sedimentary manganese deposits were formed along the margins of rift lakes toward the end of the orogeny.

  2. Quantifying the regional water footprint of biofuel production by incorporating hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Wu, M.; Chiu, Y.; Demissie, Y.

    2012-10-01

    A spatially explicit life cycle water analysis framework is proposed, in which a standardized water footprint methodology is coupled with hydrologic modeling to assess blue water, green water (rainfall), and agricultural grey water discharge in the production of biofuel feedstock at county-level resolution. Grey water is simulated via SWAT, a watershed model. Evapotranspiration (ET) estimates generated with the Penman-Monteith equation and crop parameters were verified by using remote sensing results, a satellite-imagery-derived data set, and other field measurements. Crop irrigation survey data are used to corroborate the estimate of irrigation ET. An application of the concept is presented in a case study for corn-stover-based ethanol grown in Iowa (United States) within the Upper Mississippi River basin. Results show vast spatial variations in the water footprint of stover ethanol from county to county. Producing 1 L of ethanol from corn stover growing in the Iowa counties studied requires from 4.6 to 13.1 L of blue water (with an average of 5.4 L), a majority (86%) of which is consumed in the biorefinery. The county-level green water (rainfall) footprint ranges from 760 to 1000 L L-1. The grey water footprint varies considerably, ranging from 44 to 1579 L, a 35-fold difference, with a county average of 518 L. This framework can be a useful tool for watershed- or county-level biofuel sustainability metric analysis to address the heterogeneity of the water footprint for biofuels.

  3. Assessing New Dry Deposition Parameterization Schemes for Incorporation into Global Atmospheric Transport Models

    NASA Astrophysics Data System (ADS)

    Khan, T.; Perlinger, J. A.; Wu, S.; Fairall, C. W.

    2014-12-01

    Dry deposition is a key process in atmosphere-surface exchange and is an important transmission route for atmospheric gases and aerosols to enter terrestrial and aquatic ecosystems. Vertical transport of atmospheric aerosols to Earth's surface is governed by several processes including turbulent transfer, interception, inertial impaction, settling, diffusion, turbophoresis, thermophoresis, and electrostatic effects. In global transport models (GTMs), particle dry deposition velocity (vd) from the lowest model layer to the surface is often parameterized using an electrical resistance analogy. This resistance analogy is widely used in a modified form to compute vd for steady-state dry deposition flux. Recently, a mass conservative formulation of dry deposition applicable to smooth and rough surfaces was proposed. Here, we evaluate dry deposition velocities computed using five different schemes with measurement results from a variety of surfaces including bare soil, grass, and coniferous, broad-leaf, and deciduous forest canopies. Based on this assessment, we provide suggestions for optimal treatment of dry deposition processes in GTMs and evaluate implementation of new dry deposition schemes.
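The resistance analogy this record refers to is often written (e.g. in the Seinfeld-Pandis form) as vd = 1/(ra + rb + ra·rb·vs) + vs, with vs the gravitational settling velocity. A sketch under that common form follows; the resistance values are illustrative, not from any of the five schemes assessed.

```python
import math

def settling_velocity(dp, rho_p=1500.0, mu=1.8e-5, g=9.81):
    """Stokes settling velocity (m/s) for particle diameter dp (m).
    Slip correction is ignored (acceptable for ~1 um and larger particles)."""
    return rho_p * dp**2 * g / (18.0 * mu)

def deposition_velocity(ra, rb, vs):
    """Resistance-analogy dry deposition velocity (m/s):
    vd = 1/(ra + rb + ra*rb*vs) + vs, with ra the aerodynamic and
    rb the quasi-laminar sublayer resistance (s/m)."""
    return 1.0 / (ra + rb + ra * rb * vs) + vs

vs = settling_velocity(1e-6)                         # 1 um particle
vd = deposition_velocity(ra=50.0, rb=300.0, vs=vs)   # illustrative resistances
print(vs, vd)
```

For a 1 um particle the settling term is tiny, so vd is controlled almost entirely by the two resistances, which is why the choice of rb parameterization over different surfaces matters so much in GTMs.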

  4. Etoposide incorporated into camel milk phospholipids liposomes shows increased activity against fibrosarcoma in a mouse model.

    PubMed

    Maswadeh, Hamzah M; Aljarbou, Ahmad N; Alorainy, Mohammed S; Alsharidah, Mansour S; Khan, Masood A

    2015-01-01

    Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). Anticancer drug etoposide (ETP) was entrapped in liposomes, prepared from camel milk phospholipids, to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP) and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped camel milk phospholipids liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC-liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slow progression of tumors and increased survival compared to free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs. PMID:25821817

  5. Constitutive Modeling and Testing of Polymer Matrix Composites Incorporating Physical Aging at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Veazie, David R.

    1998-01-01

    Advanced polymer matrix composites (PMC's) are desirable for structural materials in diverse applications such as aircraft, civil infrastructure and biomedical implants because of their improved strength-to-weight and stiffness-to-weight ratios. For example, the next generation military and commercial aircraft requires applications for high strength, low weight structural components subjected to elevated temperatures. A possible disadvantage of polymer-based composites is that the physical and mechanical properties of the matrix often change significantly over time due to the exposure of elevated temperatures and environmental factors. For design, long term exposure (i.e. aging) of PMC's must be accounted for through constitutive models in order to accurately assess the effects of aging on performance, crack initiation and remaining life. One particular aspect of this aging process, physical aging, is considered in this research.

  6. An ecosystem modelling framework for incorporating climate regime shifts into fisheries management

    NASA Astrophysics Data System (ADS)

    Fu, Caihong; Perry, R. Ian; Shin, Yunne-Jai; Schweigert, Jake; Liu, Huizhu

    2013-08-01

Ecosystem-based approaches to fisheries management (EBM) attempt to account for fishing, climate variability and species interactions when formulating fisheries management advice. Ecosystem models that investigate the combined effects of ecological processes are vital to support the implementation of EBM by assessing the effectiveness of management strategies in an ecosystem context. In this study, an individual-based ecosystem model was used to demonstrate how species at different trophic levels and of different life histories responded to climate regimes and how well different single- or various multi-species fisheries at different intensities perform in terms of human benefits (yield) and trade-offs (fishery closures) as well as their impacts on the ecosystem. In addition, other performance indicators were also used to evaluate management strategies. The simulations indicated that under no fishing, each species responded to the regimes differently due to different life history traits and different trophic interactions. Fishing at the level of natural mortality (F = M) produced the highest yields within each fishery; however, an F adjusted for the current productivity conditions (regime) resulted in much fewer fishery closures compared with F = M, indicating the advantage of implementing a policy of a regime-specific F from the standpoint of conservation and fishery stability. Furthermore, a regime-specific F strategy generally resulted in higher yield and fewer fishery closures compared with F = 0.5M. Other performance indicators also pointed to the advantage of using a regime-specific F strategy in terms of the stability of both ecosystem and fishery production. As a specific example, fishing the predators of Pacific herring under all multi-species fisheries scenarios increased the yield of Pacific herring and reduced the number of herring fishery closures. This supports the conclusion that an exploitation strategy which is balanced across all trophic levels

  7. Incorporating Plasticity of the Interfibrillar Matrix in Shear Lag Models is Necessary to Replicate the Multiscale Mechanics of Tendon Fascicles

    PubMed Central

    Szczesny, Spencer E.; Elliott, Dawn M.

    2015-01-01

Despite current knowledge of tendon structure, the fundamental deformation mechanisms underlying tendon mechanics and failure are unknown. We recently showed that a shear lag model, which explicitly assumed plastic interfibrillar load transfer between discontinuous fibrils, could explain the multiscale fascicle mechanics, suggesting that fascicle yielding is due to plastic deformation of the interfibrillar matrix. However, it is unclear whether alternative physical mechanisms, such as elastic interfibrillar deformation or fibril yielding, also contribute to fascicle mechanical behavior. The objective of the current work was to determine if plasticity of the interfibrillar matrix is uniquely capable of explaining the multiscale mechanics of tendon fascicles including the tissue post-yield behavior. This was examined by comparing the predictions of a continuous fibril model and three separate shear lag models incorporating an elastic, plastic, or elastoplastic interfibrillar matrix with multiscale experimental data. The predicted effects of fibril yielding on each of these models were also considered. The results demonstrated that neither the continuous fibril model nor the elastic shear lag model can successfully predict the experimental data, even if fibril yielding is included. Only the plastic or elastoplastic shear lag models were capable of reproducing the multiscale tendon fascicle mechanics. Differences between these two models were small, although the elastoplastic model did improve the fit of the experimental data at low applied tissue strains. These findings suggest that while interfibrillar elasticity contributes to the initial stress response, plastic deformation of the interfibrillar matrix is responsible for tendon fascicle post-yield behavior. This information sheds light on the physical processes underlying tendon failure, which is essential to improve our understanding of tissue pathology and guide the development of successful repair. PMID:25262202
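A perfectly plastic interfibrillar matrix of the kind this record invokes gives a Kelly-Tyson-style linear stress build-up along each discontinuous fibril: axial stress rises from zero at the free ends at a rate set by the constant interfacial shear stress. A minimal sketch follows; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def fibril_stress(x, length, radius, tau):
    """Kelly-Tyson-style axial stress in a discontinuous fibril whose ends
    transfer load through a perfectly plastic interfibrillar shear stress tau:
    sigma(x) = 2*tau*d/r, where d is the distance to the nearer free end."""
    d_end = np.minimum(x, length - x)   # distance to the nearer fibril end
    return 2.0 * tau * d_end / radius

# Illustrative values: 100 um fibril, 50 nm radius, 1 kPa plastic shear stress
L, r, tau = 100e-6, 50e-9, 1e3
x = np.linspace(0.0, L, 201)
sigma = fibril_stress(x, L, r, tau)
print(sigma.max(), sigma.mean())
```

Because the plastic shear stress caps load transfer, the mean fibril stress saturates as tissue strain grows, which is the mechanism behind the post-yield plateau the plastic and elastoplastic models reproduce.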

  8. Incorporation of an explosive cloud rise code into ARAC's (Atmospheric Release Advisory Capability) ADPIC transport and diffusion model

    SciTech Connect

Foster, K.T.; Freis, R.P.; Nasstrom, J.S.

    1990-04-01

The US Department of Energy's Atmospheric Release Advisory Capability (ARAC) supports various government agencies by modeling the transport and diffusion of radiological material released into the atmosphere. ARAC provides this support principally in the form of computer-generated isopleths of radionuclide concentrations. In order to supply these concentration estimates in a timely manner, a suite of operational computer models is maintained by the ARAC staff. One of the primary tools used by ARAC is the ADPIC transport and diffusion computer model. This three-dimensional, particle-in-cell code simulates the release of a pollutant into the atmosphere by injecting marker particles into a gridded, mass-consistent modeled wind field. The particles are then moved through the gridded domain by applying the appropriate advection, diffusion, and gravitational fall velocities. A cloud rise module has been incorporated into ARAC's ADPIC dispersion model to allow better simulation of particle distribution early after an explosive release of source material. The module is based on the conservation equations of mass, momentum, and energy, which are solved for the cloud radius, height, temperature, and velocity as a function of time. 6 refs., 5 figs., 2 tabs.

  9. Development of a model of a multi-lymphangion lymphatic vessel incorporating realistic and measured parameter values.

    PubMed

    Bertram, C D; Macaskill, C; Davis, M J; Moore, J E

    2014-04-01

    Our published model of a lymphatic vessel consisting of multiple actively contracting segments between non-return valves has been further developed by the incorporation of properties derived from observations and measurements of rat mesenteric vessels. These included (1) a refractory period between contractions, (2) a highly nonlinear form for the passive part of the pressure-diameter relationship, (3) hysteretic and transmural-pressure-dependent valve opening and closing pressure thresholds and (4) dependence of active tension on muscle length as reflected in local diameter. Experimentally, lymphatic valves are known to be biased to stay open. In consequence, in the improved model, vessel pumping of fluid suffers losses by regurgitation, and valve closure is dependent on backflow first causing an adverse valve pressure drop sufficient to reach the closure threshold. The assumed resistance of an open valve therefore becomes a critical parameter, and experiments to measure this quantity are reported here. However, incorporating this parameter value, along with other parameter values based on existing measurements, led to ineffective pumping. It is argued that the published measurements of valve-closing pressure threshold overestimate this quantity owing to neglect of micro-pipette resistance. An estimate is made of the extent of the possible resulting error. Correcting by this amount, the pumping performance is improved, but still very inefficient unless the open-valve resistance is also increased beyond the measured level. Arguments are given as to why this is justified, and other areas where experimental data are lacking are identified. The model is capable of future adaptation as new experimental data appear. PMID:23801424

  10. Incorporating covariates into fisheries stock assessment models with application to Pacific herring.

    PubMed

    Deriso, Richard B; Maunder, Mark N; Pearson, Walter H

    2008-07-01

    We present a framework for evaluating the cause of fishery declines by integrating covariates into a fisheries stock assessment model. This allows the evaluation of fisheries' effects vs. natural and other human impacts. The analyses presented are based on integrating ecological science and statistics and form the basis for environmental decision-making advice. Hypothesis tests are described to rank hypotheses and determine the size of a multiple covariate model. We extend recent developments in integrated analysis and use novel methods to produce effect size estimates that are relevant to policy makers and include estimates of uncertainty. Results can be directly applied to evaluate trade-offs among alternative management decisions. The methods and results are also broadly applicable outside fisheries stock assessment. We show that multiple factors influence populations and that analysis of factors in isolation can be misleading. We illustrate the framework by applying it to Pacific herring of Prince William Sound, Alaska (USA). The Pacific herring stock that spawns in Prince William Sound is a stock that has collapsed, but there are several competing or alternative hypotheses to account for the initial collapse and subsequent lack of recovery. Factors failing the initial screening tests for statistical significance included indicators of the 1989 Exxon Valdez oil spill, coho salmon predation, sea lion predation, Pacific Decadal Oscillation, Northern Oscillation Index, and effects of containment in the herring egg-on-kelp pound fishery. The overall results indicate that the most statistically significant factors related to the lack of recovery of the herring stock involve competition or predation by juvenile hatchery pink salmon on herring juveniles. Secondary factors identified in the analysis were poor nutrition in the winter, ocean (Gulf of Alaska) temperature in the winter, the viral hemorrhagic septicemia virus, and the pathogen Ichthyophonus hoferi. The
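The idea of integrating an environmental covariate into a population model can be sketched with a linearised Ricker stock-recruitment relationship fitted to synthetic data: log(R/S) = a − bS + cX, where X is the covariate (e.g. a predator index) and a significantly negative c would flag it as a suppressing factor. This is a toy illustration of the covariate idea, not the paper's integrated assessment model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
S = rng.uniform(100.0, 1000.0, n)   # spawning biomass (synthetic)
X = rng.normal(0.0, 1.0, n)         # environmental covariate (synthetic predator index)
a, b, c = 2.0, 1e-3, -0.5           # true parameters (c < 0: covariate suppresses recruitment)
logRS = a - b * S + c * X + rng.normal(0.0, 0.1, n)  # observed log(R/S) with noise

# Linearised Ricker fit via ordinary least squares: log(R/S) = a - b*S + c*X
A = np.column_stack([np.ones(n), -S, X])
a_hat, b_hat, c_hat = np.linalg.lstsq(A, logRS, rcond=None)[0]
print(a_hat, b_hat, c_hat)
```

The fit recovers the covariate effect c alongside the density-dependence parameters, which is the essence of ranking candidate factors by their statistical support within the stock assessment.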

  11. A method for incorporating equilibrium chemical reactions into multiphase flow models for CO2 storage

    NASA Astrophysics Data System (ADS)

    Saaltink, Maarten W.; Vilarrasa, Victor; De Gaspari, Francesca; Silva, Orlando; Carrera, Jesús; Rötting, Tobias S.

    2013-12-01

CO2 injection and storage in deep saline aquifers involves many coupled processes, including multiphase flow, heat and mass transport, rock deformation and mineral precipitation and dissolution. Coupling is especially critical in carbonate aquifers, where minerals will tend to dissolve in response to the dissolution of CO2 into the brine. The resulting neutralization will drive further dissolution of both CO2 and calcite. This suggests that large cavities may be formed and that proper simulation may require full coupling of reactive transport and multiphase flow. We show that solving the latter may suffice whenever two requirements are met: (1) all reactions can be assumed to occur in equilibrium and (2) the chemical system can be calculated as a function of the state variables of the multiphase flow model (i.e., liquid and gas pressure, and temperature). We redefine the components of multiphase flow codes (traditionally, water and CO2), so that they are conservative for all reactions of the chemical system. This requires modifying the traditional constitutive relationships of the multiphase flow codes, but yields the concentrations of all species and all reaction rates by simply performing speciation and mass balance calculations at the end of each time step. We applied this method to the H2O-CO2-Na-Cl-CaCO3 system, so as to model CO2 injection into a carbonate aquifer containing brine. Results were very similar to those obtained with traditional formulations, which implies that full coupling of reactive transport and multiphase flow is not really needed for this kind of system, but the resulting simplifications may make it advisable even for cases where the above requirements are not met. Regarding the behavior of carbonate rocks, we find that porosity development near the injection well is small because of the low solubility of calcite. Moreover, dissolution concentrates at the front of the advancing CO2 plume because the brine below the plume tends to reach

  12. Incorporation of cooling-induced crystallization into a 2-dimensional axisymmetric conduit heat flow model

    NASA Astrophysics Data System (ADS)

    Heptinstall, David; Bouvet de Maisonneuve, Caroline; Neuberg, Jurgen; Taisne, Benoit; Collinson, Amy

    2016-04-01

Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is first computed with MELTS, as a function of pressure and temperature, for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed: two with initial pressures of 34 MPa (runs 1 and 2) and one of 25 MPa (run 3). The decompression rate was varied between 0.1 MPa/°C (runs 1 and 3) and 0.2 MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69E5, 9.32E5, and 9.49E5 J/kg*K, respectively). The 2D axisymmetric conductive cooling simulations consider a 30 m diameter conduit that extends from the surface to a depth of 1500 m (34 MPa). The temporal evolution of temperature is closely tracked at depths of 10 m, 750 m and 1400 m in the centre of the conduit, at the conduit walls, and 20 m from the walls into the host rock. Following initial cooling by 7-15°C at 10 m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-131°C and crystallize to 75 wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400 m depth. We find a wide range in cooling timescales, particularly at depths of 750 m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phase, albite-rich plagioclase feldspar. Run 1 is shown to cool the fastest and run 3 the slowest, with surface emissivity having the strongest cooling
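The conductive core of such a model can be sketched as a radial (axisymmetric) explicit finite-difference scheme with the latent heat of crystallization folded into an effective heat capacity over the crystallization interval. All parameter values below are illustrative, not those of the SHV simulations, and this 1D-radial sketch omits the depth dimension and the latent-heat rebound the full model captures.

```python
import numpy as np

# Radial conductive cooling of a magma-filled conduit into host rock.
k, rho, cp = 2.0, 2600.0, 1200.0      # conductivity (W/m/K), density (kg/m3), heat capacity (J/kg/K)
L_lat = 3.5e5                          # latent heat of crystallization (J/kg), illustrative
T_liq, T_sol = 1000.0, 850.0           # crystallization interval (degC), illustrative
R_conduit, R_max, dr = 15.0, 100.0, 1.0
r = np.arange(0.0, R_max + dr, dr)
T = np.where(r <= R_conduit, 980.0, 200.0)   # initial state: hot magma, cooler host rock

dt = 2.0e5                             # time step (s), within the explicit stability limit
for _ in range(int(3.15e7 / dt)):      # integrate for ~1 year
    # Latent heat as an enhanced effective heat capacity inside the crystallization interval
    cp_eff = np.where((T > T_sol) & (T < T_liq), cp + L_lat / (T_liq - T_sol), cp)
    alpha = k / (rho * cp_eff)         # local thermal diffusivity (m2/s)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2 \
                + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
    lap[0] = 4.0 * (T[1] - T[0]) / dr**2   # symmetry condition at the conduit axis
    T = T + dt * alpha * lap
    T[-1] = 200.0                      # far-field boundary held at host-rock temperature

print(T[0], T[int(R_conduit / dr)])    # axis and conduit-wall temperatures after ~1 year
```

After a year the conduit wall has cooled substantially while the axis of a 30 m diameter conduit is nearly unchanged, illustrating why the record finds such different cooling timescales at the centre versus the walls.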

  13. Incorporating induced seismicity in the 2014 United States National Seismic Hazard Model: results of the 2014 workshop and sensitivity studies

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.

    2015-01-01

The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report; however, it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after

  14. New Direction in Hydrogeochemical Transport Modeling: Incorporating Multiple Kinetic and Equilibrium Reaction Pathways

    SciTech Connect

    Steefel, C.I.

    2000-02-02

    At least two distinct kinds of hydrogeochemical models have evolved historically for use in analyzing contaminant transport, but each has important limitations. One kind, focusing on organic contaminants, treats biodegradation reactions as parts of relatively simple kinetic reaction networks with no or limited coupling to aqueous and surface complexation and mineral dissolution/precipitation reactions. A second kind, evolving out of the speciation and reaction path codes, is capable of handling a comprehensive suite of multicomponent complexation (aqueous and surface) and mineral precipitation and dissolution reactions, but has not been able to treat reaction networks characterized by partial redox disequilibrium and multiple kinetic pathways. More recently, various investigators have begun to consider biodegradation reactions in the context of comprehensive equilibrium and kinetic reaction networks (e.g. Hunter et al. 1998, Mayer 1999). Here we explore two examples of multiple equilibrium and kinetic reaction pathways using the reactive transport code GIMRT98 (Steefel, in prep.): (1) a computational example involving the generation of acid mine drainage due to oxidation of pyrite, and (2) a computational/field example where the rates of chlorinated VOC degradation are linked to the rates of major redox processes occurring in organic-rich wetland sediments overlying a contaminated aerobic aquifer.
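One of the kinetic pathways in such mixed equilibrium/kinetic networks, sequential first-order degradation (as in chlorinated VOC chains), has the classical Bateman analytic solution. A minimal sketch with illustrative rate constants follows; a full reactive transport code like the GIMRT98 example couples many such pathways to speciation and transport.

```python
import numpy as np

# First-order sequential degradation chain (parent -> daughter), a minimal
# stand-in for one kinetic pathway in a mixed equilibrium/kinetic network.
k1, k2 = 0.05, 0.02                 # 1/day, illustrative rate constants (k1 != k2)
t = np.linspace(0.0, 200.0, 401)    # time (days)
C0 = 1.0                            # initial parent concentration (normalised)
parent = C0 * np.exp(-k1 * t)
# Bateman solution for the intermediate daughter species
daughter = C0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
print(parent[-1], daughter.max())
```

The daughter species peaks part-way through the simulation and then decays, the transient behaviour that purely equilibrium formulations cannot represent and that motivates multiple kinetic pathways.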

  15. Toward More Realistic Analytic Models of the Heliotail: Incorporating Magnetic Flattening via Distortion Flows

    NASA Astrophysics Data System (ADS)

    Kleimann, Jens; Röken, Christian; Fichtner, Horst; Heerikhuisen, Jacob

    2016-01-01

    Both physical arguments and simulations of the global heliosphere indicate that the tailward heliopause is flattened considerably in the direction perpendicular to both the incoming flow and the large-scale interstellar magnetic field. Despite this fact, all of the existing global analytical models of the outer heliosheath's magnetic field assume a circular cross section of the heliotail. To eliminate this inconsistency, we introduce a mathematical procedure by which any analytically or numerically given magnetic field can be deformed in such a way that the cross sections along the heliotail axis attain freely prescribed, spatially dependent values for their total area and aspect ratio. The distorting transformation of this method honors both the solenoidality condition and the stationary induction equation with respect to an accompanying flow field, provided that both constraints were already satisfied for the original magnetic and flow fields prior to the transformation. In order to obtain realistic values for the above parameters, we present the first quantitative analysis of the heliotail's overall distortion as seen in state-of-the-art three-dimensional hybrid MHD-kinetic simulations.

  16. Modelling how incorporation of divalent cations affects calcite wettability–implications for biomineralisation and oil recovery

    NASA Astrophysics Data System (ADS)

    Andersson, M. P.; Dideriksen, K.; Sakuma, H.; Stipp, S. L. S.

    2016-06-01

    Using density functional theory and geochemical speciation modelling, we predicted how solid-fluid interfacial energy changes when divalent cations substitute into a calcite surface. The effect on wettability can be dramatic. Trace metal uptake can impact organic compound adsorption, with effects, for example, on the ability of organisms to control crystal growth and our ability to predict the wettability of pore surfaces. Wettability influences how easily an organic phase can be removed from a surface, whether organic compounds from contaminated soil or crude oil from a reservoir. In our simulations, transition metals substituted exothermically into calcite and more favourably into sites at the surface than in the bulk, meaning that surface properties are more strongly affected than results from bulk experiments imply. As a result of divalent cation substitution, calcite-fluid interfacial energy is significantly altered, enough to change the macroscopic contact angle by tens of degrees. Substitution of Sr, Ba and Pb makes surfaces more hydrophobic. With substitution of Mg and the transition metals, calcite becomes more hydrophilic, weakening organic compound adsorption. For biomineralisation, this provides a switch for turning on and off the activity of organic crystal growth inhibitors, thereby controlling the shape of the associated mineral phase.
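The link from interfacial energy to contact angle that underlies this record can be sketched with Young's equation. The interfacial energies below are hypothetical, chosen only to show how a modest change in the solid-liquid term shifts the contact angle by several degrees.

```python
import math

def contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Young's equation: cos(theta) = (gamma_sv - gamma_sl) / gamma_lv."""
    c = (gamma_sv - gamma_sl) / gamma_lv
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Illustrative energies in mJ/m^2 (not the paper's DFT values):
theta_pure = contact_angle_deg(60.0, 40.0, 72.0)  # unsubstituted calcite
theta_subs = contact_angle_deg(60.0, 25.0, 72.0)  # lower solid-liquid energy
```

Lowering the solid-liquid interfacial energy lowers the contact angle, i.e. the surface becomes more hydrophilic, consistent with the Mg and transition-metal trend reported above.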

  17. Classifying Multifunctional Enzymes by Incorporating Three Different Models into Chou's General Pseudo Amino Acid Composition.

    PubMed

    Zou, Hong-Liang; Xiao, Xuan

    2016-08-01

    With the avalanche of newly found protein sequences in the post-genomic epoch, there is an increasing need to annotate newly discovered enzyme sequences. Enzymes are considered one of the largest classes of proteins; they take part in most biochemical reactions and play a key role in metabolic pathways. A multifunctional enzyme is an enzyme that plays multiple physiological roles. Given a multifunctional enzyme sequence, how can we identify its class? In particular, how can we deal with the multi-class problem, since an enzyme may simultaneously belong to two or more functional classes? To address these problems, which are obviously very important to both basic research and drug development, a multi-label classifier was developed via three different prediction models with a multi-label K-nearest neighbor algorithm. Experimental results obtained on a stringent benchmark dataset of enzymes by jackknife cross-validation show that the predictions were encouraging, indicating that the current method could be an effective and promising high-throughput method in enzyme research. We hope it can play an important complementary role to existing predictors in identifying the classes of enzymes. PMID:27113936
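The multi-label K-nearest-neighbour idea can be sketched in a few lines: find the k training sequences closest in feature space and assign every functional class held by a majority of them. The toy two-dimensional feature vectors below stand in for the pseudo amino acid composition; they are not from the benchmark dataset.

```python
def ml_knn(train, query, k=3):
    """Minimal multi-label k-nearest-neighbour: train is a list of
    (feature_vector, label_set) pairs; predict every label carried by a
    majority of the k closest training items."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = set().union(*(lab for _, lab in nearest))
    return {l for l in labels
            if sum(l in lab for _, lab in nearest) > k / 2}

# Toy feature vectors standing in for pseudo amino acid composition:
train = [([0.1, 0.9], {"oxidoreductase"}),
         ([0.2, 0.8], {"oxidoreductase", "transferase"}),
         ([0.9, 0.1], {"hydrolase"})]
pred = ml_knn(train, [0.15, 0.85], k=3)
```

Because the prediction is a set, one query can legitimately receive two or more classes, which is exactly the multi-class situation the abstract raises.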

  18. Incorporating pattern identification of Chinese medicine into precision medicine: An integrative model for individualized medicine.

    PubMed

    Li, Jin-gen; Xu, Hao

    2015-11-01

    On 20 January, 2015, U.S. President Obama announced an ambitious plan called the "Precision Medicine (PM) Initiative", aiming to deliver genetics-based medical treatments. PM has shown a promising prospect by tailoring disease treatment and prevention to individuals. However, a predominantly genetics-based method restricts its benefit and applicability in most chronic and complex diseases. Pattern identification (PI) is one of the representative characteristics of Chinese medicine, embodying the concepts of holism and individualized treatment. It is another classification method, one that takes environmental, psychosocial and other individual factors into account. Integrating PI with the disease diagnosis of Western medicine will provide a strong complement to genetics-based PM, thus establishing an integrative model for individualized medicine. PI provides new perspectives for PM, not only in clinical practice, but also in new drug development and clinical trial design. We expect that this integrative approach will ultimately lead to safer, more convenient and more effective patient-centered healthcare, from which most patients will benefit in the era of PM. PMID:26519373

  19. Modelling how incorporation of divalent cations affects calcite wettability-implications for biomineralisation and oil recovery.

    PubMed

    Andersson, M P; Dideriksen, K; Sakuma, H; Stipp, S L S

    2016-01-01

    Using density functional theory and geochemical speciation modelling, we predicted how solid-fluid interfacial energy changes when divalent cations substitute into a calcite surface. The effect on wettability can be dramatic. Trace metal uptake can impact organic compound adsorption, with effects, for example, on the ability of organisms to control crystal growth and our ability to predict the wettability of pore surfaces. Wettability influences how easily an organic phase can be removed from a surface, whether organic compounds from contaminated soil or crude oil from a reservoir. In our simulations, transition metals substituted exothermically into calcite and more favourably into sites at the surface than in the bulk, meaning that surface properties are more strongly affected than results from bulk experiments imply. As a result of divalent cation substitution, calcite-fluid interfacial energy is significantly altered, enough to change the macroscopic contact angle by tens of degrees. Substitution of Sr, Ba and Pb makes surfaces more hydrophobic. With substitution of Mg and the transition metals, calcite becomes more hydrophilic, weakening organic compound adsorption. For biomineralisation, this provides a switch for turning on and off the activity of organic crystal growth inhibitors, thereby controlling the shape of the associated mineral phase. PMID:27352933

  20. Incorporating elastic and plastic work rates into energy balance for long-term tectonic modeling

    NASA Astrophysics Data System (ADS)

    Ahamed, M. S.; Choi, E.

    2014-12-01

    Deformation-related energy budget is usually considered in the simplest form or even completely omitted from the energy balance equation. We derive an energy balance equation that accounts not only for heat energy but also for elastic and plastic work. Such a general description of the energy balance principle will be useful for modeling complicated interactions between geodynamic processes such as thermoelasticity, thermoplasticity and mechanical consequences of metamorphism. Following the theory of large deformation plasticity, we start from the assumption that Gibbs free energy (g) is a function of temperature (T), the second Piola-Kirchhoff stress (S), density (ρ) and internal variables (qj, j=1…n). In this formulation, new terms are derived, which are related to the energy dissipated through plastic work and the elastically stored energy that are not seen in the usual form of the energy balance equation used in geodynamics. We then simplify the generic equation to one involving more familiar quantities such as Cauchy stress and material density assuming that the small deformation formulation holds for our applications. The simplified evolution equation for temperature is implemented in DynEarthSol3D, an unstructured finite element solver for long-term tectonic deformation. We calculate each of the newly derived terms separately in simple settings and compare the numerical results with a corresponding analytic solution. We also present the effects of the new energy balance on the evolution of a large offset normal fault.
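A schematic small-deformation form of the temperature evolution the record describes might be written as follows; the Taylor-Quinney-type factor and the thermoelastic coupling term are illustrative notation, not the authors' exact derivation:

```latex
% Schematic temperature evolution with the added work-rate terms:
% conduction + dissipated plastic work (fraction chi) - thermoelastic
% coupling + heat production. Illustrative form only.
\rho c_p \frac{\partial T}{\partial t}
  \;=\; \nabla \cdot \left( k \, \nabla T \right)
  \;+\; \chi \, \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p}
  \;-\; \alpha \, T \, \operatorname{tr}\dot{\boldsymbol{\sigma}}
  \;+\; \rho H
```

The second and third right-hand terms correspond to the plastic dissipation and elastically stored energy contributions that the usual geodynamic energy balance omits.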

  1. Treatment of penetrating brain injury in a rat model using collagen scaffolds incorporating soluble Nogo receptor.

    PubMed

    Elias, Paul Z; Spector, Myron

    2015-02-01

    Injuries and diseases of the central nervous system (CNS) have the potential to cause permanent loss of brain parenchyma, with severe neurological consequences. Cavitary defects in the brain may afford the possibility of treatment with biomaterials that fill the lesion site while delivering therapeutic agents. This study examined the treatment of penetrating brain injury (PBI) in a rat model with collagen biomaterials and a soluble Nogo receptor (sNgR) molecule. sNgR was aimed at neutralizing myelin proteins that hinder axon regeneration by inducing growth cone collapse. Scaffolds containing sNgR were implanted in the brains of adult rats 1 week after injury and analysed 4 weeks or 8 weeks later. Histological analysis revealed that the scaffolds filled the lesion sites, remained intact with open pores and were infiltrated with cells and extracellular matrix. Immunohistochemical staining demonstrated the composition of the cellular infiltrate to include macrophages, astrocytes and vascular endothelial cells. Isolated regions of the scaffold borders showed integration with surrounding viable brain tissue that included neurons and oligodendrocytes. While axon regeneration was not detected in the scaffolds, the cellular infiltration and vascularization of the lesion site demonstrated a modification of the injury environment with implications for regenerative strategies. PMID:23038669

  2. Searching for the True Diet of Marine Predators: Incorporating Bayesian Priors into Stable Isotope Mixing Models

    PubMed Central

    Chiaradia, André; Forero, Manuela G.; McInnes, Julie C.; Ramírez, Francisco

    2014-01-01

    Reconstructing the diet of top marine predators is of great significance in several key areas of applied ecology, requiring accurate estimation of their true diet. However, from conventional stomach content analysis to recent stable isotope and DNA analyses, no one method is bias or error free. Here, we evaluated the accuracy of recent methods in estimating the actual proportions of a controlled diet fed to a top-predator seabird, the Little penguin (Eudyptula minor). We combined published DNA data from penguin scats with blood plasma δ15N and δ13C values to reconstruct the diet of individual penguins fed experimentally. Mismatch between the controlled (true) ingested diet and dietary estimates obtained through the separate use of stable isotope and DNA data suggested some degree of difference in prey assimilation (stable isotope) and digestion rates (DNA analysis). In contrast, an isotope mixing model combined with DNA-derived Bayesian priors provided the closest match to the true diet. We provide the first evidence suggesting that the combined use of these complementary techniques may give better estimates of the actual diet of top marine predators, a powerful tool in applied ecology in the search for the true consumed diet. PMID:24667296
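The combination the record describes, an isotope mixing model with DNA-informed priors, can be sketched as importance sampling: draw diet proportions from a Dirichlet prior (standing in for the DNA scat data) and weight each draw by a Gaussian likelihood of the observed isotope value. The prey signatures and prior strengths below are invented for illustration.

```python
import math
import random

random.seed(0)

def posterior_diet(prior_alpha, source_d15N, obs_d15N, sd=0.5, n=20000):
    """Toy isotope mixing model: Dirichlet prior over diet proportions
    (e.g., informed by DNA scat data), Gaussian likelihood of the observed
    d15N value. Returns importance-weighted posterior mean proportions."""
    sums = [0.0] * len(prior_alpha)
    wtot = 0.0
    for _ in range(n):
        g = [random.gammavariate(a, 1.0) for a in prior_alpha]
        p = [x / sum(g) for x in g]                      # Dirichlet draw
        mix = sum(pi * s for pi, s in zip(p, source_d15N))
        w = math.exp(-0.5 * ((mix - obs_d15N) / sd) ** 2)
        sums = [s + w * pi for s, pi in zip(sums, p)]
        wtot += w
    return [s / wtot for s in sums]

# Two hypothetical prey species with distinct d15N signatures:
post = posterior_diet([4.0, 2.0], [10.0, 14.0], obs_d15N=11.0)
```

The posterior blends both data streams: the DNA prior favours prey 1, and the isotope observation pulls the estimate toward the mixture that reproduces the measured signature.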

  3. Performance model incorporating weather related constraints for fields of unattended ground sensors

    NASA Astrophysics Data System (ADS)

    Swanson, David C.

    1999-07-01

    Tactical sensing on the battlefield involves real-time information processing of acoustic, seismic, electromagnetic, and environmental sensor data to obtain and exploit an automated situational awareness. The value-added of tactical sensing is to give the war-fighter real-time situational information without requiring human interpretation of the underlying scientific data. For example, acoustic, seismic, and magnetic signatures of a ground vehicle can be used in a pattern recognition algorithm to identify a tank, truck, or TEL, but all the war-fighter wants to know is how many of each vehicle type are present and which way they are going. However, the confidences of this automated information processing are dependent on environmental conditions and background interference. A key feature of tactical ground sensing is the ability to integrate objective statistical confidences into the process to intelligently suppress false alarms, thus allowing the war-fighter to concentrate on war fighting. This paper presents how the situation confidence metric is generated starting with the sensor SNR going all the way through to the target classification and track confidences. This technique also allows modeling of ground sensor performance in hypothetical environments such as bad weather.

  4. Incorporating social anxiety into a model of college student problematic drinking

    PubMed Central

    Ham, Lindsay S.; Hope, Debra A.

    2009-01-01

    College problem drinking and social anxiety are significant public health concerns with highly negative consequences. College students are faced with a variety of novel social situations and situations encouraging alcohol consumption. The current study involved developing a path model of college problem drinking, including social anxiety, in 316 college students referred to an alcohol intervention due to a campus alcohol violation. Contrary to hypotheses, social anxiety generally had an inverse relationship with problem drinking. As expected, perceived drinking norms had important positive, direct effects on drinking variables. However, the results generally did not support the hypotheses regarding the mediating or moderating function of the valuations of expected effects and provided little support for the mediating function of alcohol expectancies in the relations between social anxiety and alcohol variables. Therefore, it seems that the influence of peers may be more important for college students than alcohol expectancies and valuations of alcohol’s effects are. College students appear to be a unique population with respect to social anxiety and problem drinking. The implications of these results for college prevention and intervention programs are discussed. PMID:15561454

  5. A heat transfer model for incorporating carbon foam fabrics in firefighter's garment

    NASA Astrophysics Data System (ADS)

    Elgafy, Ahmed; Mishra, Sarthak

    2014-04-01

    In the present work, a numerical study was performed to predict and investigate the performance of a thermal protection system for firefighter's garments consisting of carbon foam fabric in both the outer shell and the thermal liner elements. Several types of carbon foam with different thermal conductivity, porosity, and density were introduced to conduct a parametric study. Additionally, the thickness of the introduced carbon foam fabrics was varied to acquire an optimum design. Simulation was conducted for a square planar 2D geometry of the clothing comprising the different fabric layers, using a double-precision, pressure-based implicit solver under transient conditions. The new thermal protection system was tested under the harsh thermal environmental conditions that firefighters are exposed to. The parametric study showed that, by employing carbon foam fabric with one set of design parameters, a weight reduction of 33 % in the outer shell and 56 % in the thermal liner and a temperature reduction of 2 % at the inner edge of the garment were achieved when compared to the traditional firefighter garment model used by Song et al. (Int J Occup Saf Ergon 14:89-106, 2008). Carbon foam fabric with another set of design parameters resulted in a weight reduction of 25 % in the outer shell, 28 % in the thermal liner and a temperature reduction of 6 % at the inner edge of the garment. As a result, carbon foam fabrics make the firefighter's garment more protective, durable, and lighter in weight.
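The layered transient conduction at the core of such a simulation can be sketched with an explicit finite-difference scheme. The diffusivities below are invented stand-ins for the outer shell, carbon-foam liner, and inner layer, not the study's material properties.

```python
def layered_temperature(alphas, t_hot=800.0, t_skin=35.0,
                        dx=1e-3, dt=0.01, steps=20000):
    """Explicit (FTCS) finite-difference sketch of transient 1-D conduction
    through a stack of fabric layers; alphas lists the thermal diffusivity
    (m^2/s) at each interior node. The flame-side and skin-side boundary
    nodes are held at fixed temperatures."""
    T = [t_skin] * (len(alphas) + 2)
    T[0] = t_hot
    for _ in range(steps):
        Tn = T[:]
        for i, a in enumerate(alphas, start=1):
            r = a * dt / dx ** 2          # stability requires r <= 0.5
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        T = Tn
    return T

# Hypothetical three-node stack: outer shell, carbon-foam liner, inner layer
profile = layered_temperature([1.5e-7, 5e-8, 1.5e-7])
```

Varying the liner diffusivity in this sketch changes how quickly heat reaches the skin-side node, which is the quantity the parametric study optimises.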

  6. Incorporating Vibration Test Results for the Advanced Stirling Convertor into the System Dynamic Model

    NASA Technical Reports Server (NTRS)

    Meer, David W.; Lewandowski, Edward J.

    2010-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. During these tests, a data system collects several performance-related parameters from the convertor under test for health monitoring and analysis. Recently, an additional sensor recorded the slip table position during vibration testing to qualification level. The System Dynamic Model (SDM) integrates Stirling cycle thermodynamics, heat flow, mechanical mass, spring, damper systems, and electrical characteristics of the linear alternator and controller. This paper presents a comparison of the performance of the ASC when exposed to vibration to that predicted by the SDM when exposed to the same vibration.

  7. Modelling how incorporation of divalent cations affects calcite wettability–implications for biomineralisation and oil recovery

    PubMed Central

    Andersson, M. P.; Dideriksen, K.; Sakuma, H.; Stipp, S. L. S.

    2016-01-01

    Using density functional theory and geochemical speciation modelling, we predicted how solid-fluid interfacial energy changes when divalent cations substitute into a calcite surface. The effect on wettability can be dramatic. Trace metal uptake can impact organic compound adsorption, with effects, for example, on the ability of organisms to control crystal growth and our ability to predict the wettability of pore surfaces. Wettability influences how easily an organic phase can be removed from a surface, whether organic compounds from contaminated soil or crude oil from a reservoir. In our simulations, transition metals substituted exothermically into calcite and more favourably into sites at the surface than in the bulk, meaning that surface properties are more strongly affected than results from bulk experiments imply. As a result of divalent cation substitution, calcite-fluid interfacial energy is significantly altered, enough to change the macroscopic contact angle by tens of degrees. Substitution of Sr, Ba and Pb makes surfaces more hydrophobic. With substitution of Mg and the transition metals, calcite becomes more hydrophilic, weakening organic compound adsorption. For biomineralisation, this provides a switch for turning on and off the activity of organic crystal growth inhibitors, thereby controlling the shape of the associated mineral phase. PMID:27352933

  8. Development of Advanced Continuum Models that Incorporate Nanomechanical Deformation into Engineering Analysis.

    SciTech Connect

    Zimmerman, Jonathan A.; Jones, Reese E.; Templeton, Jeremy Alan; McDowell, David L.; Mayeur, Jason R.; Tucker, Garritt J.; Bammann, Douglas J.; Gao, Huajian

    2008-09-01

    Materials with characteristic structures at nanoscale sizes exhibit significantly different mechanical responses from those predicted by conventional, macroscopic continuum theory. For example, nanocrystalline metals display an inverse Hall-Petch effect whereby the strength of the material decreases with decreasing grain size. The origin of this effect is believed to be a change in deformation mechanisms from dislocation motion across grains and pileup at grain boundaries at microscopic grain sizes to rotation of grains and deformation within grain boundary interface regions for nanostructured materials. These rotational defects are represented by the mathematical concept of disclinations. The ability to capture these effects within continuum theory, thereby connecting nanoscale materials phenomena and macroscale behavior, has eluded the research community. The goal of our project was to develop a consistent theory to model both the evolution of disclinations and their kinetics. Additionally, we sought to develop approaches to extract continuum mechanical information from nanoscale structure to verify any developed continuum theory that includes dislocation and disclination behavior. These approaches yield engineering-scale expressions to quantify elastic and inelastic deformation in all varieties of materials, even those that possess highly directional bonding within their molecular structures such as liquid crystals, covalent ceramics, polymers and biological materials. This level of accuracy is critical when engineering design and thermo-mechanical analysis is performed in micro- and nanosystems. The research proposed here innovates on how these nanoscale deformation mechanisms should be incorporated into a continuum mechanical formulation, and provides the foundation upon which to develop a means for predicting the performance of advanced engineering materials.

  9. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. Also, the effect of noise processes and initial
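The sideband behaviour that motivates the method can be reproduced with a toy linear time periodic (LTP) system: exciting it at one frequency produces output not only at that frequency but also at sidebands offset by the coefficient frequency, which a linear time invariant (LTI) model cannot represent. The system, frequencies, and coupling strength below are invented for illustration.

```python
import cmath
import math

def simulate(coupling, dt=0.001, t_total=30.0):
    """Forward-Euler simulation of x' = (-1 + coupling*cos(W t)) x + cos(w t),
    a scalar LTP system; coupling = 0 reduces it to the LTI case."""
    W, w = 2 * math.pi * 0.2, 2 * math.pi * 1.0   # coefficient and input freqs
    x, out = 0.0, []
    for k in range(int(t_total / dt)):
        t = k * dt
        x += ((-1.0 + coupling * math.cos(W * t)) * x + math.cos(w * t)) * dt
        if t >= 20.0:                 # discard the start-up transient
            out.append(x)
    return out, dt

def amplitude(y, dt, f):
    """Single-frequency DFT magnitude over an integer number of cycles."""
    s = sum(v * cmath.exp(-2j * math.pi * f * k * dt) for k, v in enumerate(y))
    return 2.0 * abs(s) / len(y)

y_ltp, dt = simulate(0.8)
y_lti, _ = simulate(0.0)
side_ltp = amplitude(y_ltp, dt, 1.2)   # sideband at input + coefficient freq
side_lti = amplitude(y_lti, dt, 1.2)   # essentially zero for the LTI case
```

The nonzero sideband amplitude in the LTP case is the feature the modulated complex Fourier series and harmonic transfer function machinery are built to capture.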

  10. Photocatalytic degradation and reactor modeling of 17α-ethynylestradiol employing titanium dioxide-incorporated foam concrete.

    PubMed

    Wang, Yuming; Li, Yi; Zhang, Wenlong; Wang, Qing; Wang, Dawei

    2015-03-01

    Photocatalytic degradation of 17α-ethynylestradiol (EE2) using TiO2 photocatalysts incorporated with foam concrete (TiO2/FC) was investigated for the first time. Scanning electron microscopy (SEM) study of the samples revealed a narrow air void size distribution on the surface of FC cubes with 5 wt% addition of P25 TiO2, and TiO2 particles were distributed heterogeneously on the surface of TiO2/FC samples. The sorption and photocatalytic degradation of EE2 under UV-light irradiation by TiO2/FC cubes were investigated. The adsorption capacity for EE2 of the TiO2/FC and blank foam concrete (FC) samples was similar, while the degradation rates showed a great difference. More than 50 % of EE2 was removed by TiO2/FC within 3.5 h, compared with 5 % by blank FC. The EE2 removal process was then studied in a photoreactor modified from an ultraviolet disinfection pool and constructed with TiO2/FC materials. An integrated model including a plate adsorption-scattering model and a modified flow diffusion model was established to simulate the photocatalytic degradation process with different radiation fields, contaminant loads, and flow velocities. A satisfactory agreement was observed between the model simulations and experimental results, showing a potential for the design and scale-up of the modified photocatalytic reactor. PMID:25242591
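The removal figures quoted above are consistent with simple pseudo-first-order photocatalysis kinetics, a common assumption for dilute contaminants; the rate constant below is hypothetical, not fitted to the study's data.

```python
import math

def first_order_removal(k_per_h, t_h):
    """Fraction of EE2 removed assuming pseudo-first-order photocatalysis,
    C(t) = C0 * exp(-k t)."""
    return 1.0 - math.exp(-k_per_h * t_h)

# The record reports >50 % removal in 3.5 h, which under first-order
# kinetics implies k > ln(2) / 3.5 per hour:
k_min = math.log(2) / 3.5                    # ~0.198 1/h
removal = first_order_removal(0.25, 3.5)     # hypothetical rate constant
```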

  11. Global Hopf bifurcation analysis of a susceptible-infective-removed epidemic model incorporating media coverage with time delay.

    PubMed

    Zhao, Huitao; Zhao, Miaochan

    2017-12-01

    A susceptible-infective-removed epidemic model incorporating media coverage with time delay is proposed. The stability of the disease-free equilibrium and the endemic equilibrium is studied. Then, conditions which guarantee the existence of a local Hopf bifurcation are given. Furthermore, we show that the local Hopf bifurcation implies a global Hopf bifurcation after the second critical value of the delay. The obtained results show that the time delay in media coverage cannot affect the stability of the disease-free equilibrium when the basic reproduction number is less than unity. However, the time delay affects the stability of the endemic equilibrium and produces limit cycle oscillations when the basic reproduction number is greater than unity. Finally, some examples with numerical simulations are included to support the theoretical prediction. PMID:27627694
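The delayed media-coverage transmission term described above can be sketched with a forward-Euler integration that keeps a history buffer for I(t - tau). The saturating form of beta and all parameter values below are illustrative, not the paper's.

```python
def simulate_sir_media(beta1=0.5, beta2=0.2, m=10.0, gamma=0.2,
                       tau=5.0, dt=0.01, t_end=100.0):
    """Forward-Euler sketch of an SIR model whose transmission rate is
    reduced by media coverage driven by the delayed infective count:
    beta(t) = beta1 - beta2 * I(t - tau) / (m + I(t - tau))."""
    steps, lag = int(t_end / dt), int(tau / dt)
    S, I, R = 990.0, 10.0, 0.0
    N = S + I + R
    hist = [I] * (lag + 1)            # constant history for t <= 0
    for _ in range(steps):
        I_lag = hist[0]
        beta = beta1 - beta2 * I_lag / (m + I_lag)
        new_inf = beta * S * I / N * dt
        rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
        hist.append(I)
        hist.pop(0)                   # slide the delay window
    return S, I, R

S_end, I_end, R_end = simulate_sir_media()
```

Scanning tau in this sketch is the numerical counterpart of the paper's bifurcation analysis: for small delays I(t) settles to the endemic level, while large delays can leave it oscillating.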

  12. Segmenting multiple overlapping objects via a hybrid active contour model incorporating shape priors: applications to digital pathology

    NASA Astrophysics Data System (ADS)

    Ali, Sahirzeeshan; Madabhushi, Anant

    2011-03-01

    Active contours and active shape models (ASM) have been widely employed in image segmentation. A major limitation of active contours, however, is their (a) inability to resolve the boundaries of intersecting objects and (b) inability to handle occlusion. Multiple overlapping objects are typically segmented out as a single object. On the other hand, ASMs are limited by point correspondence issues, since object landmarks need to be identified across multiple objects for initial object alignment. ASMs are also constrained in that they can usually only segment a single object in an image. In this paper, we present a novel synergistic boundary- and region-based active contour model that incorporates shape priors in a level set formulation. We demonstrate an application of these synergistic active contour models using multiple level sets to segment nuclear and glandular structures on digitized histopathology images of breast and prostate biopsy specimens. Unlike previous related approaches, our model is able to resolve object overlap and separate the occluded boundaries of multiple objects simultaneously. The energy functional of the active contour is comprised of three terms. The first term is the shape prior, modeled on the object of interest, thereby constraining the deformation achievable by the active contour. The second term, a boundary-based term, detects object boundaries from image gradients. The third term drives the shape prior and the contour towards the object boundary based on region statistics. The results of qualitative and quantitative evaluation on 100 prostate and 14 breast cancer histology images for the task of detecting and segmenting nuclei, lymphocytes, and glands reveal that the model easily outperforms two state-of-the-art segmentation schemes (Geodesic Active Contour (GAC) and Rousson's shape-based model) and resolves up to 92% of overlapping/occluded lymphocytes and nuclei on prostate and breast cancer histology images.
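The three-term energy functional described above might be written schematically as follows; the weights and the specific forms are illustrative notation, not the authors' exact formulation:

```latex
% Schematic three-term energy: shape prior + boundary (gradient) +
% region statistics. Illustrative notation: phi is the level set,
% psi the shape prior, I the image, C the contour, and c_in/c_out
% the mean intensities inside and outside the contour.
E(\phi) = \beta_s \int_{\Omega} (\phi - \psi)^2 \, d\mathbf{x}
        + \beta_b \int_{0}^{1} g\!\left(|\nabla I(C(s))|\right) |C'(s)| \, ds
        + \beta_r \sum_{i \in \{\mathrm{in},\,\mathrm{out}\}} \int_{\Omega_i} (I - c_i)^2 \, d\mathbf{x}
```

Minimising such an energy with one level set per object is what lets overlapping nuclei be separated: the shape term keeps each contour close to a plausible nucleus even where the boundary evidence is shared.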

  13. Incorporation of ABCB1-mediated transport into a physiologically-based pharmacokinetic model of docetaxel in mice

    PubMed Central

    Hudachek, Susan F.

    2015-01-01

    Docetaxel is one of the most widely used anticancer agents. While this taxane has proven to be an effective chemotherapeutic drug, noteworthy challenges exist in relation to docetaxel administration due to the considerable interindividual variability in efficacy and toxicity associated with the use of this compound, largely attributable to differences between individuals in their ability to metabolize and eliminate docetaxel. Regarding the latter, the ATP-binding cassette transporter B1 (ABCB1, PGP, MDR1) is primarily responsible for docetaxel elimination. To further understand the role of ABCB1 in the biodistribution of docetaxel in mice, we utilized physiologically-based pharmacokinetic (PBPK) modeling that included ABCB1-mediated transport in relevant tissues. Transporter function was evaluated by studying docetaxel pharmacokinetics in wild-type FVB and Mdr1a/b constitutive knockout (KO) mice and incorporating this concentration–time data into a PBPK model comprised of eight tissue compartments (plasma, brain, heart, lung, kidney, intestine, liver and slowly perfused tissues) and, in addition to ABCB1-mediated transport, included intravenous drug administration, specific binding to intracellular tubulin, intestinal and hepatic metabolism, glomerular filtration and tubular reabsorption. For all tissues in both the FVB and KO cohorts, the PBPK model simulations closely mirrored the observed data. Furthermore, both models predicted AUC values that were within 15 % of the observed AUC values, indicating that our model-simulated drug exposures accurately reflected the observed tissue exposures. Overall, our PBPK model furthers the understanding of the role of ABCB1 in the biodistribution of docetaxel. Additionally, this exemplary model structure can be applied to investigate the pharmacokinetics of other ABCB1 transporter substrates. PMID:23616082
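The role ABCB1 plays in such a model can be sketched with a toy two-compartment version: flow-limited plasma-tissue exchange plus an efflux clearance that returns drug from tissue to plasma, so removing the transporter (the knockout case) raises tissue exposure. Volumes, flows, and clearances below are invented, not the paper's fitted mouse parameters.

```python
def simulate_pbpk(efflux_cl, dose=10.0, t_end=24.0, dt=0.001):
    """Toy two-compartment PBPK sketch: plasma <-> tissue flow-limited
    exchange, systemic elimination from plasma, and an ABCB1-like efflux
    clearance pumping drug from tissue back to plasma. efflux_cl = 0
    mimics an Mdr1a/b knockout. Returns the tissue AUC (mg*h/L)."""
    V_p, V_t = 2.0, 5.0              # compartment volumes (L)
    Q, CL = 1.0, 0.5                 # tissue flow, systemic clearance (L/h)
    A_p, A_t = dose, 0.0             # drug amounts (mg), IV bolus to plasma
    auc_tissue = 0.0
    for _ in range(int(t_end / dt)):
        C_p, C_t = A_p / V_p, A_t / V_t
        into_tissue = Q * (C_p - C_t) - efflux_cl * C_t
        A_p += (-CL * C_p - into_tissue) * dt
        A_t += into_tissue * dt
        auc_tissue += C_t * dt
    return auc_tissue

auc_wt = simulate_pbpk(efflux_cl=2.0)   # functional transporter
auc_ko = simulate_pbpk(efflux_cl=0.0)   # knockout: no efflux
```

Comparing the two runs reproduces the qualitative wild-type versus knockout contrast that the mouse data provide for fitting the full eight-compartment model.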

  14. Towards a Predictive Thermodynamic Model of Oxidation States of Uranium Incorporated in Fe (hydr) oxides

    SciTech Connect

    Bagus, Paul S.

    2013-01-01

    -Level Excited States: Consequences For X-Ray Absorption Spectroscopy”, J. Elec. Spectros. and Related Phenom., 200, 174 (2015) describes our first application of these methods. As well as applications to problems and materials of direct interest for our PNNL colleagues, we have pursued applications of fundamental theoretical significance for the analysis and interpretation of XPS and XAS spectra. These studies are important for the development of the fields of core-level spectroscopies as well as to advance our capabilities for applications of interest to our PNNL colleagues. An excellent example is our study of the surface core-level shifts, SCLS, for the surface and bulk atoms of an oxide that provides a new approach to understanding how the surface electronic structure of oxides differs from that in the bulk of the material. This work has the potential to lead to a new key to understanding the reactivity of oxide surfaces. Our theoretical studies use cluster models with finite numbers of atoms to describe the properties of condensed phases and crystals. This approach has allowed us to focus on the local atomistic, chemical interactions. For these clusters, we obtain orbitals and spinors through the solution of the Hartree-Fock, HF, and the fully relativistic Dirac HF equations. These orbitals are used to form configuration mixing wavefunctions which treat the many-body effects responsible for the open shell angular momentum coupling and for the satellites of the core-level spectra. Our efforts have been in two complementary directions. As well as the applications described above, we have placed major emphasis on the enhancement and extension of our theoretical and computational capabilities so that we can treat complex systems with a greater range of many-body effects. 
Noteworthy accomplishments in terms of method development and enhancement have included: (1) An improvement in our treatment of the large matrices that must be handled when many-body effects are treated. (2

  15. KINETIC MODELING OF A FISCHER-TROPSCH REACTION OVER A COBALT CATALYST IN A SLURRY BUBBLE COLUMN REACTOR FOR INCORPORATION INTO A COMPUTATIONAL MULTIPHASE FLUID DYNAMICS MODEL

    SciTech Connect

    Anastasia Gribik; Doona Guillen, PhD; Daniel Ginosar, PhD

    2008-09-01

    Currently multi-tubular fixed bed reactors, fluidized bed reactors, and slurry bubble column reactors (SBCRs) are used in commercial Fischer Tropsch (FT) synthesis. There are a number of advantages of the SBCR compared to fixed and fluidized bed reactors. The main advantage of the SBCR is that temperature control and heat recovery are more easily achieved. The SBCR is a multiphase chemical reactor where a synthesis gas, comprised mainly of H2 and CO, is bubbled through a liquid hydrocarbon wax containing solid catalyst particles to produce specialty chemicals, lubricants, or fuels. The FT synthesis reaction is the polymerization of methylene groups [-(CH2)-] forming mainly linear alkanes and alkenes, ranging from methane to high molecular weight waxes. The Idaho National Laboratory is developing a computational multiphase fluid dynamics (CMFD) model of the FT process in a SBCR. This paper discusses the incorporation of absorption and reaction kinetics into the current hydrodynamic model. A phased approach for incorporation of the reaction kinetics into a CMFD model is presented here. Initially, a simple kinetic model is coupled to the hydrodynamic model, with increasing levels of complexity added in stages. The first phase of the model includes incorporation of the absorption of gas species from both large and small bubbles into the bulk liquid phase. The driving force for the gas across the gas liquid interface into the bulk liquid is dependent upon the interfacial gas concentration in both small and large bubbles. However, because it is difficult to measure the concentration at the gas-liquid interface, coefficients for convective mass transfer have been developed for the overall driving force between the bulk concentrations in the gas and liquid phases. It is assumed that there are no temperature effects from mass transfer of the gas phases to the bulk liquid phase, since there are only small amounts of dissolved gas in the liquid phase. The product from the
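The overall gas-to-liquid driving force described in the record above (bulk-phase concentrations used in place of the unmeasurable interfacial concentration) can be sketched as follows. This is an illustrative formulation, not the INL model; the Henry's-law constant and volumetric mass-transfer coefficient are assumed example values:

```python
# Sketch of an overall-driving-force absorption rate (assumed parameters).
# c_star = p / H is the liquid concentration in equilibrium with the bulk
# gas partial pressure (Henry's law); rate = kLa * (c_star - c_liq).

def absorption_rate(p_gas_bar, c_liq_mol_m3, H_bar_m3_per_mol, kLa_per_s):
    """Volumetric absorption rate (mol/m^3/s) from bulk-phase concentrations."""
    c_star = p_gas_bar / H_bar_m3_per_mol
    return kLa_per_s * (c_star - c_liq_mol_m3)

# Example: 10 bar CO partial pressure, H ~ 0.02 bar*m^3/mol (assumed),
# bulk liquid at 100 mol/m^3, kLa = 0.1 1/s.
rate = absorption_rate(10.0, 100.0, 0.02, 0.1)  # -> 40.0 mol/m^3/s
```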

  16. Incorporation of Decision and Game Theories in Early-Stage Complex Product Design to Model End-Use

    NASA Astrophysics Data System (ADS)

    Mesmer, Bryan L.

    The need for design models that accurately capture the complexities of products increases as products grow ever more complicated. The accuracies of these models depend upon the inputs and the methods used on those inputs to determine an output. Product designers must determine the dominant inputs and make assumptions concerning inputs that have less of an effect. Properly capturing the important inputs in the early design stages, where designs are being simulated, allows for modifications of the design at a relatively low cost. In this dissertation, an input that has a high impact on product performance but is usually neglected until later design stages is examined. The end-users of a product interact with the product and with each other in ways that affect the performance of that product. End-users are typically brought in at the later design stages, or as representations on the design team. They are rarely used as input variables in the product models. By incorporating the end-users in the early models and simulations, the end-users' impact on performance is captured when modifications to the designs are cheaper. The methodology of capturing end-user decision making in product models, developed in this dissertation, is created using the methods of decision and game theory. These theories give a mathematical basis for decision making based on the end-users' beliefs and preferences. Due to the variations that are present in end-users' preferences, their interactions with the product cause variations in the performance. This dissertation shows that capturing the end-user interactions in simulations enables the designer to create products that are more robust to the variations of the end-users. The manipulation of a game that an individual plays to drive an outcome desired by a designer is referred to as mechanism design. This dissertation also shows how a designer can influence the end-users' decisions to optimize the designer's goals. How product controlled

  17. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    PubMed

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has been often demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporate variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (that does not model variability) achieves overall agreement to clinical practice around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model can significantly increase performance to around 88.1% (95% CI: 88.0-88.2%), p<0.001. We conclude that non-stationary fuzzy models provide a valuable new approach that may be applied to clinical decision support systems in any application domain. PMID:22265814
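One way to picture the non-stationary fuzzy sets described above is a membership function whose parameters are perturbed on each evaluation, so repeated inferences on the same input yield slightly different degrees, mimicking intra-expert variability. This is an assumption-level illustration, not the authors' implementation:

```python
import math
import random

def gaussian_mf(x, centre, sigma):
    """Standard (stationary) Gaussian membership function."""
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

def nonstationary_mf(x, centre=5.0, sigma=1.0, perturb_sd=0.2, rng=random):
    """Membership degree with a randomly shifted centre (non-stationary set)."""
    shifted = centre + rng.gauss(0.0, perturb_sd)
    return gaussian_mf(x, shifted, sigma)

# Repeated evaluation of the same input gives varying (but bounded) degrees.
random.seed(0)
degrees = [nonstationary_mf(4.5) for _ in range(5)]
```

Aggregating decisions over many such perturbed evaluations is what allows the model's outputs to spread in the way a panel of human experts would.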

  18. Thermal remote sensing of surface soil water content with partial vegetation cover for incorporation into climate models

    NASA Technical Reports Server (NTRS)

    Gillies, Robert R.; Carlson, Toby N.

    1995-01-01

    This study outlines a method for the estimation of regional patterns of surface moisture availability (M(sub 0)) and fractional vegetation (Fr) in the presence of spatially variable vegetation cover. The method requires relating variations in satellite-derived (NOAA, Advanced Very High Resolution Radiometer (AVHRR)) surface radiant temperature to a vegetation index (computed from satellite visible and near-infrared data) while coupling this association to an inverse modeling scheme. More than merely furnishing surface soil moisture values, the method constitutes a new conceptual and practical approach for combining thermal infrared and vegetation index measurements for incorporating the derived values of M(sub 0) into hydrologic and atmospheric prediction models. Application of the technique is demonstrated for a region in and around the city of Newcastle upon Tyne situated in the northeast of England. A regional estimate of M(sub 0) is derived and is probably good for fractional vegetation cover up to 80% before errors in the estimated soil water content become unacceptably large. Moreover, a normalization scheme is suggested from which a nomogram, `universal triangle,' is constructed and is seen to fit the observed data well. The universal triangle also simplifies the inclusion of remotely derived M(sub 0) in hydrology and meteorological models and is perhaps a practicable step toward integrating derived data from satellite measurements in weather forecasting.

  19. Early-stage hypogene karstification in a mountain hydrologic system: A coupled thermohydrochemical model incorporating buoyant convection

    NASA Astrophysics Data System (ADS)

    Chaudhuri, A.; Rajaram, H.; Viswanathan, H.

    2013-09-01

    The early stage of hypogene karstification is investigated using a coupled thermohydrochemical model of a mountain hydrologic system, in which water enters along a water table and descends to significant depth (˜1 km) before ascending through a central high-permeability fracture. The model incorporates reactive alteration driven by dissolution/precipitation of limestone in a carbonic acid system, due to both temperature- and pressure-dependent solubility, and kinetics. Simulations were carried out for homogeneous and heterogeneous initial fracture aperture fields, using the FEHM (Finite Element Heat and Mass Transfer) code. Initially, retrograde solubility is the dominant mechanism of fracture aperture growth. As the fracture transmissivity increases, a critical Rayleigh number value is exceeded at some stage. Buoyant convection is then initiated and controls the evolution of the system thereafter. For an initially homogeneous fracture aperture field, deep well-organized buoyant convection rolls form. For initially heterogeneous aperture fields, preferential flow suppresses large buoyant convection rolls, although a large number of smaller rolls form. Even after the onset of buoyant convection, dissolution in the fracture is sustained along upward flow paths by retrograde solubility and by additional "mixing corrosion" effects closer to the surface. Aperture growth patterns in the fracture are very different from those observed in simulations of epigenic karst systems, and retain imprints of both buoyant convection and preferential flow. Both retrograde solubility and buoyant convection contribute to these differences. The paper demonstrates the potential value of coupled models as tools for understanding the evolution and behavior of hypogene karst systems.
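The onset criterion in the record above — buoyant convection initiating once fracture transmissivity growth pushes a Rayleigh number past a critical value — can be sketched as a back-of-the-envelope check. The porous-slab form of Ra and all parameter values below are assumed for illustration, not taken from the paper:

```python
import math

# Porous-medium Rayleigh number for a fluid-saturated layer:
#   Ra = g * beta * dT * K * H / (nu * kappa)
# compared with the classical critical value Ra_c = 4*pi^2 for a layer
# heated from below. All inputs are assumed example values.

def rayleigh(g, beta, dT, K, H, nu, kappa):
    """Dimensionless porous-medium Rayleigh number."""
    return g * beta * dT * K * H / (nu * kappa)

RA_CRITICAL = 4.0 * math.pi ** 2  # ~39.5

# Example: ~1 km circulation depth, 30 K temperature contrast; as dissolution
# raises the effective permeability K, Ra eventually exceeds Ra_c.
ra = rayleigh(g=9.81, beta=2.1e-4, dT=30.0, K=1e-12, H=1000.0,
              nu=1e-6, kappa=1e-6)
convecting = ra > RA_CRITICAL
```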

  20. Thermal Remote Sensing of Surface Soil Water Content with Partial Vegetation Cover for Incorporation into Climate Models.

    NASA Astrophysics Data System (ADS)

    Gillies, Robert R.; Carlson, Toby N.

    1995-04-01

    This study outlines a method for the estimation of regional patterns of surface moisture availability (M0) and fractional vegetation (Fr) in the presence of spatially variable vegetation cover. The method requires relating variations in satellite-derived (NOAA, Advanced Very High Resolution Radiometer) surface radiant temperature to a vegetation index (computed from satellite visible and near-infrared data) while coupling this association to an inverse modeling scheme. More than merely furnishing surface soil moisture values, the method constitutes a new conceptual and practical approach for combining thermal infrared and vegetation index measurements for incorporating the derived values of M0 into hydrologic and atmospheric prediction models. Application of the technique is demonstrated for a region in and around the city of Newcastle upon Tyne situated in the northeast of England. A regional estimate of M0 is derived and is probably good for fractional vegetation cover up to 80% before errors in the estimated soil water content become unacceptably large. Moreover, a normalization scheme is suggested from which a nomogram, `universal triangle,' is constructed and is seen to fit the observed data well. The universal triangle also simplifies the inclusion of remotely derived M0 in hydrology and meteorological models and is perhaps a practicable step toward integrating derived data from satellite measurements in weather forecasting.

  1. A Global Model of Mantle Convection that Incorporates Plate Bending Forces, Slab Pull, and Seismic Constraints on the Plate Stress.

    NASA Astrophysics Data System (ADS)

    Lewis, K.; Buffett, B.; Becker, T.

    2008-12-01

    We introduce a global mantle convection model employing mantle density anomalies inferred from seismic tomography to determine present day plate motions. Our approach addresses two aspects that are not usually considered in previous work. First, we include forces associated with the bending of subducting plates. The bending forces oppose the plate motion, and may be comparable in magnitude to other important forces at subduction zones, including slab pull. Second, our model incorporates data from the Global CMT Catalog. We use the focal mechanisms of earthquakes associated with subducting slabs to estimate the relative occurrence of compressional and tensional axes in the down-dip direction of subducting slabs. This information is used to infer the state of stress in the subducting slab, which we use to calculate slab pull forces. We investigate regional variations in slab pull by comparing plate motions derived using seismic constraints with those derived using slab pull forces based solely on the age of subducting plates. Furthermore, we constrain the rheology of subducted plates by comparing plate motions predicted with and without bending forces. Although our current model uses only radial variations in mantle viscosity, we include the capability of permitting lateral variations in viscosity by calculating buoyancy and plate-driven flows using Citcom

  2. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions

    PubMed Central

    Fee, Michale S.

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current “time” in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources. PMID:22754501
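The gating idea in the record above — reward-driven potentiation of context inputs that occurs only when the efference-copy input is active, while the efference-copy weights themselves are not potentiated — can be written as a one-line update rule. The exact form below is our toy sketch, not the paper's equations:

```python
# Toy gated, reward-modulated update for context synapses onto a striatal
# unit: delta_w_i = lr * reward * efference_gate * context_i (assumed form).

def update_context_weights(w_context, context, efference_gate, reward, lr=0.1):
    """Return new context weights after one trial; efference weights are
    deliberately untouched (only context inputs are potentiated)."""
    return [w + lr * reward * efference_gate * c
            for w, c in zip(w_context, context)]

# A rewarded action with the efference gate active strengthens only the
# active context input; with the gate closed, nothing changes.
w = [0.5, 0.5]
w_rewarded = update_context_weights(w, [1.0, 0.0], efference_gate=1.0, reward=1.0)
w_ungated = update_context_weights(w, [1.0, 0.0], efference_gate=0.0, reward=1.0)
```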

  3. Incorporating Field Studies into Species Distribution and Climate Change Modelling: A Case Study of the Koomal Trichosurus vulpecula hypoleucus (Phalangeridae)

    PubMed Central

    Davis, Robert A.; van Etten, Eddie J. B.

    2016-01-01

    Species distribution models (SDMs) are an effective way of predicting the potential distribution of species and their response to environmental change. Most SDMs apply presence data to a relatively generic set of predictive variables such as climate. However, this weakens the modelling process by overlooking the responses to more cryptic predictive variables. In this paper we demonstrate a means by which data gathered from an intensive animal trapping study can be used to enhance SDMs by combining field data with bioclimatic modelling techniques to determine the future potential distribution for the koomal (Trichosurus vulpecula hypoleucus). The koomal is a geographically isolated subspecies of the common brushtail possum, endemic to south-western Australia. Since European settlement this taxon has undergone a significant reduction in distribution due to its vulnerability to habitat fragmentation, introduced predators and tree/shrub dieback caused by a virulent group of plant pathogens of the genus Phytophthora. An intensive field study found: 1) the home range for the koomal rarely exceeded 1 km in length at its widest point; 2) areas heavily infested with dieback were not occupied; 3) gap crossing between patches (>400 m) was common behaviour; 4) koomal presence was linked to the extent of suitable vegetation; and 5) where the needs of koomal were met, populations in fragments were demographically similar to those found in contiguous landscapes. We used this information to resolve a more accurate SDM for the koomal than that created from bioclimatic data alone. Specifically, we refined spatial coverages of remnant vegetation and dieback, to develop a set of variables that we combined with selected bioclimatic variables to construct models. 
We conclude that the utility value of an SDM can be enhanced and given greater resolution by identifying variables that reflect observed, species-specific responses to landscape parameters and incorporating these responses

  4. Incorporating Field Studies into Species Distribution and Climate Change Modelling: A Case Study of the Koomal Trichosurus vulpecula hypoleucus (Phalangeridae).

    PubMed

    Molloy, Shaun W; Davis, Robert A; van Etten, Eddie J B

    2016-01-01

    Species distribution models (SDMs) are an effective way of predicting the potential distribution of species and their response to environmental change. Most SDMs apply presence data to a relatively generic set of predictive variables such as climate. However, this weakens the modelling process by overlooking the responses to more cryptic predictive variables. In this paper we demonstrate a means by which data gathered from an intensive animal trapping study can be used to enhance SDMs by combining field data with bioclimatic modelling techniques to determine the future potential distribution for the koomal (Trichosurus vulpecula hypoleucus). The koomal is a geographically isolated subspecies of the common brushtail possum, endemic to south-western Australia. Since European settlement this taxon has undergone a significant reduction in distribution due to its vulnerability to habitat fragmentation, introduced predators and tree/shrub dieback caused by a virulent group of plant pathogens of the genus Phytophthora. An intensive field study found: 1) the home range for the koomal rarely exceeded 1 km in length at its widest point; 2) areas heavily infested with dieback were not occupied; 3) gap crossing between patches (>400 m) was common behaviour; 4) koomal presence was linked to the extent of suitable vegetation; and 5) where the needs of koomal were met, populations in fragments were demographically similar to those found in contiguous landscapes. We used this information to resolve a more accurate SDM for the koomal than that created from bioclimatic data alone. Specifically, we refined spatial coverages of remnant vegetation and dieback, to develop a set of variables that we combined with selected bioclimatic variables to construct models. 
We conclude that the utility value of an SDM can be enhanced and given greater resolution by identifying variables that reflect observed, species-specific responses to landscape parameters and incorporating these responses

  5. Establishment of an integrated model incorporating standardised uptake value and N-classification for predicting metastasis in nasopharyngeal carcinoma

    PubMed Central

    Mao, Yan-Ping; Zhou, Guan-Qun; Peng, Hao; Sun, Ying; Liu, Qing; Chen, Lei; Ma, Jun

    2016-01-01

    Background Previous studies reported a correlation between the maximum standardised uptake value (SUVmax) obtained by 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) and distant metastasis in nasopharyngeal carcinoma (NPC). However, an integrated model incorporating SUVmax and anatomic staging for stratifying metastasis risk has not been reported. Results The median SUVmax for primary tumour (SUV-T) and cervical lymph nodes (SUV-N) was 13.6 (range, 2.2 to 39.3) and 8.4 (range, 2.6 to 40.9), respectively. SUV-T (HR, 3.396; 95% CI, 1.451-7.947; P = 0.005), SUV-N (HR, 2.688; 95%CI, 1.250-5.781; P = 0.011) and N-classification (HR, 2.570; 95%CI, 1.422-4.579; P = 0.001) were identified as independent predictors for DMFS from multivariate analysis. Three valid risk groups were derived by RPA: low risk (N0-1 + SUV-T <10.45), medium risk (N0-1 + SUV-T >10.45) and high risk (N2-3). The three risk groups contained 100 (22.3%), 226 (50.3%), and 123 (27.4%) patients, respectively, with corresponding 3-year DMFS rates of 99.0%, 91.5%, and 77.5% (P <0.001). Moreover, multivariate analysis confirmed the RPA-based prognostic grouping as the only significant prognostic indicator for DMFS (HR, 3.090; 95%CI, 1.975-4.835; P <0.001). Methods Data from 449 patients with histologically-confirmed, stage I-IVB NPC treated with radiotherapy or chemoradiotherapy were retrospectively analysed. A prognostic model for distant metastasis-free survival (DMFS) was derived by recursive partitioning analysis (RPA) combining independent predictors identified by multivariate analysis. Conclusion SUV-T, SUV-N and N-classification were identified as independent predictors for DMFS. An integrated RPA-based prognostic model for DMFS incorporating SUV-N and N-classification was proposed. PMID:26871291
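The three RPA-derived risk groups reported above reduce to a short decision rule (SUV-T cutoff 10.45; N0-1 vs N2-3). The function name and interface below are our own; boundary handling at exactly 10.45 is not specified in the abstract and is assumed here:

```python
# Decision rule transcribed from the reported RPA grouping:
#   high risk:   N2-3 (regardless of SUV-T)
#   low risk:    N0-1 with SUV-T < 10.45
#   medium risk: N0-1 with SUV-T >= 10.45 (boundary handling assumed)

def rpa_risk_group(n_classification, suv_t):
    """Return 'low', 'medium', or 'high' metastasis risk group."""
    if n_classification in ("N2", "N3"):
        return "high"
    return "low" if suv_t < 10.45 else "medium"
```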

  6. Evaluating the Capacity of Global CO2 Flux and Atmospheric Transport Models to Incorporate New Satellite Observations

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Collatz, G. J.; Erickson, D. J.; Denning, A. S.; Wofsy, S. C.; Andrews, A. E.

    2007-01-01

    As we enter the new era of satellite remote sensing for CO2 and other carbon cycle-related quantities, advanced modeling and analysis capabilities are required to fully capitalize on the new observations. Model estimates of CO2 surface flux and atmospheric transport are required for initial constraints on inverse analyses, to connect atmospheric observations to the location of surface sources and sinks, and ultimately for future projections of carbon-climate interactions. For application to current, planned, and future remotely sensed CO2 data, it is desirable that these models are accurate and unbiased at time scales from less than daily to multi-annual and at spatial scales from several kilometers or finer to global. Here we focus on simulated CO2 fluxes from terrestrial vegetation and atmospheric transport mutually constrained by analyzed meteorological fields from the Goddard Modeling and Assimilation Office for the period 1998 through 2006. Use of assimilated meteorological data enables direct model comparison to observations across a wide range of scales of variability. The biospheric fluxes are produced by the CASA model at 1x1 degrees on a monthly mean basis, modulated hourly with analyzed temperature and sunlight. Both physiological and biomass burning fluxes are derived using satellite observations of vegetation, burned area (as in GFED-2), and analyzed meteorology. For the purposes of comparison to CO2 data, fossil fuel and ocean fluxes are also included in the transport simulations. In this presentation we evaluate the model's ability to simulate CO2 flux and mixing ratio variability in comparison to in situ observations at sites in Northern mid latitudes and the continental tropics. The influence of key process representations is inferred. We find that the model can resolve much of the hourly to synoptic variability in the observations, although there are limits imposed by vertical resolution of boundary layer processes. The seasonal cycle and its

  7. Incorporating a Generic Model of Subcutaneous Insulin Absorption into the AIDA v4 Diabetes Simulator 3. Early Plasma Insulin Determinations

    PubMed Central

    Lehmann, Eldon D.; Tarín, Cristina; Bondia, Jorge; Teufel, Edgar; Deutsch, Tibor

    2009-01-01

    Introduction AIDA is an interactive educational diabetes simulator that has been available without charge via the Internet for over 12 years. Recent articles have described the incorporation of a novel generic model of insulin absorption into AIDA as a way of enhancing its capabilities. The basic model components to be integrated have been overviewed, with the aim being to provide simulations of regimens utilizing insulin analogues, as well as insulin doses greater than 40 IU (the current upper limit within the latest release of AIDA [v4.3a]). Some preliminary calculated insulin absorption results have also recently been described. Methods This article presents the first simulated plasma insulin profiles from the integration of the generic subcutaneous insulin absorption model, and the currently implemented model in AIDA for insulin disposition. Insulin absorption has been described by the physiologically based model of Tarín and colleagues. A single compartment modeling approach has been used to specify how absorbed insulin is distributed in, and eliminated from, the human body. To enable a numerical solution of the absorption model, a spherical subcutaneous depot for the injected insulin dose has been assumed and spatially discretized into shell compartments with homogeneous concentrations, having as its center the injection site. The number of these compartments will depend on the dose and type of insulin. Insulin inflow arises as the sum of contributions to the different shells. For this report the first bench testing of plasma insulin determinations has been done. 
Results Simulated plasma insulin profiles are provided for currently available insulin preparations, including a rapidly acting insulin analogue (e.g., lispro/Humalog or aspart/Novolog), a short-acting (regular) insulin preparation (e.g., Actrapid), intermediate-acting insulins (both Semilente and neutral protamine Hagedorn types), and a very long-acting insulin analogue (e.g., glargine/Lantus), as
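The spatial discretization described above — a spherical depot split into concentric shell compartments centred on the injection site — has a simple geometric core. The sketch below only computes the shell volumes; the shell count and equal-thickness choice are illustrative, whereas the actual model ties the discretization to dose and insulin type:

```python
import math

# Volumes of n equal-thickness concentric shells of a sphere of given total
# volume (illustrative discretization of an injected insulin depot).

def shell_volumes(total_volume, n_shells):
    """Return a list of n shell volumes, innermost first."""
    r_outer = (3.0 * total_volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    radii = [r_outer * (i + 1) / n_shells for i in range(n_shells)]
    vols, prev = [], 0.0
    for r in radii:
        v = 4.0 / 3.0 * math.pi * r ** 3
        vols.append(v - prev)   # volume between consecutive radii
        prev = v
    return vols

# The shell volumes sum back to the depot volume, so drug mass is conserved
# when concentrations are tracked per shell.
vols = shell_volumes(total_volume=0.1, n_shells=5)  # e.g. a 0.1 mL depot
```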

  8. Incorporating moisture content in modeling the surface energy balance of debris-covered Changri Nup Glacier, Nepal

    NASA Astrophysics Data System (ADS)

    Giese, Alexandra; Boone, Aaron; Morin, Samuel; Lejeune, Yves; Wagnon, Patrick; Dumont, Marie; Hawley, Robert

    2016-04-01

    Glaciers whose ablation zones are covered in supraglacial debris comprise a significant portion of glaciers in High Mountain Asia and two-thirds of those in the South Central Himalaya. Such glaciers evade traditional proxies for mass balance because they are difficult to delineate remotely and because they lose volume via thinning rather than via retreat. Additionally, their surface energy balance is significantly more complicated than their clean counterparts' due to a conductive heat flux from the debris-air interface to the ice-debris boundary, where melt occurs. This flux is a function of the debris' thickness; thermal, radiative, and physical properties; and moisture content. To date, few surface energy balance models have accounted for debris moisture content and phase changes despite the fact that they are well-known to affect fluxes of mass, latent heat, and conduction. In this study, we introduce a new model, ISBA-DEB, which is capable of solving not only the heat equation but also moisture transport and retention in the debris. The model is based upon Meteo-France's Interactions between Soil, Biosphere, and Atmosphere (ISBA) soil and vegetation model, significantly adapted for debris and coupled with the snowpack model Crocus within the SURFEX platform. We drive the model with continuous ERA-Interim reanalysis data, adapted to the local topography (i.e. considering local elevation and shadowing) and downscaled and de-biased using 5 years of in-situ meteorological data at Changri Nup glacier (27.859°N, 86.847°E) in the Khumbu Himal. The 1-D model output is then evaluated through comparison with measured temperature in and ablation under a 10-cm thick debris layer on Changri Nup. We have found that introducing a non-equilibrium model for water flow, rather than using the mixed-form Richards equation alone, promotes greater consistency with moisture observations. This explicit incorporation of moisture processes improves simulation of the snow-debris-ice column
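The conductive flux from the debris-air interface to the ice-debris boundary, and the melt it sustains, can be illustrated with an order-of-magnitude sketch. This assumes a linear temperature profile through dry debris and a melting ice surface at 0 degC; it is not ISBA-DEB, and all parameter values are assumed:

```python
# Sub-debris melt from conduction through a debris layer (assumed values).
RHO_ICE = 900.0      # ice density, kg/m^3
L_FUSION = 3.34e5    # latent heat of fusion, J/kg

def sub_debris_melt_rate(k_debris, t_surface_c, thickness_m):
    """Melt rate (m of ice per second) under a debris layer, assuming a
    linear temperature gradient from the surface to 0 degC at the ice."""
    q = k_debris * (t_surface_c - 0.0) / thickness_m   # W/m^2
    return max(q, 0.0) / (RHO_ICE * L_FUSION)

# 10 cm of debris with k ~ 1 W/m/K and a 15 degC surface sustains roughly
# 4 cm of ice melt per day in this simplified picture.
rate = sub_debris_melt_rate(1.0, 15.0, 0.10)
melt_per_day_m = rate * 86400.0
```

Moisture changes both the effective conductivity and the latent-heat terms, which is precisely why the record above argues for explicit moisture transport in the debris.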

  9. Known unknowns in an imperfect world: incorporating uncertainty in recruitment estimates using multi-event capture–recapture models

    PubMed Central

    Desprez, Marine; McMahon, Clive R; Hindell, Mark A; Harcourt, Robert; Gimenez, Olivier

    2013-01-01

    Studying the demography of wild animals remains challenging as several of the critical parts of their life history may be difficult to observe in the field. In particular, determining with certainty when an individual breeds for the first time is not always obvious. This can be problematic because uncertainty about the transition from a prebreeder to a breeder state – recruitment – leads to uncertainty in vital rate estimates and in turn in population projection models. To avoid this issue, the common practice is to discard imperfect data from the analyses. However, this practice can generate a bias in vital rate estimates if uncertainty is related to a specific component of the population and reduces the sample size of the dataset and consequently the statistical power to detect effects of biological interest. Here, we compared the demographic parameters assessed from a standard multistate capture–recapture approach to the estimates obtained from the newly developed multi-event framework that specifically accounts for uncertainty in state assessment. Using a comprehensive longitudinal dataset on southern elephant seals, we demonstrated that the multi-event model enabled us to use all the data collected (6639 capture–recapture histories vs. 4179 with the multistate model) by accounting for uncertainty in breeding states, thereby increasing the precision and accuracy of the demographic parameter estimates. The multi-event model allowed us to incorporate imperfect data into demographic analyses. The gain in precision obtained has important implications in the conservation and management of species because limiting uncertainty around vital rates will permit predicting population viability with greater accuracy. PMID:24363895

  10. Recent Progresses in Incorporating Human Land-Water Management into Global Land Surface Models Toward Their Integration into Earth System Models

    NASA Technical Reports Server (NTRS)

    Pokhrel, Yadu N.; Hanasaki, Naota; Wada, Yoshihide; Kim, Hyungjun

    2016-01-01

    The global water cycle has been profoundly affected by human land-water management. As the changes in the water cycle on land can affect the functioning of a wide range of biophysical and biogeochemical processes of the Earth system, it is essential to represent human land-water management in Earth system models (ESMs). During the recent past, noteworthy progress has been made in large-scale modeling of human impacts on the water cycle but sufficient advancements have not yet been made in integrating the newly developed schemes into ESMs. This study reviews the progresses made in incorporating human factors in large-scale hydrological models and their integration into ESMs. The study focuses primarily on the recent advancements and existing challenges in incorporating human impacts in global land surface models (LSMs) as a way forward to the development of ESMs with humans as integral components, but a brief review of global hydrological models (GHMs) is also provided. The study begins with the general overview of human impacts on the water cycle. Then, the algorithms currently employed to represent irrigation, reservoir operation, and groundwater pumping are discussed. Next, methodological deficiencies in current modeling approaches and existing challenges are identified. Furthermore, light is shed on the sources of uncertainties associated with model parameterizations, grid resolution, and datasets used for forcing and validation. Finally, representing human land-water management in LSMs is highlighted as an important research direction toward developing integrated models using ESM frameworks for the holistic study of human-water interactions within the Earth system.

  11. Modeling regional cropland GPP by empirically incorporating sun-induced chlorophyll fluorescence into a coupled photosynthesis-fluorescence model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Guanter, L.; Van der Tol, C.; Joiner, J.; Berry, J. A.

    2015-12-01

    Global sun-induced chlorophyll fluorescence (SIF) retrievals are currently available from several satellites. SIF is intrinsically linked to photosynthesis, so the new data sets allow us to link remotely sensed vegetation parameters to the actual photosynthetic activity of plants. In this study, we used space-based measurements of SIF together with the Soil-Canopy Observation of Photosynthesis and Energy balance (SCOPE) model to simulate regional photosynthetic uptake of croplands in the US corn belt. SCOPE couples fluorescence and photosynthesis at the leaf and canopy levels. To do this, we first retrieved a key parameter of the photosynthesis model, the maximum rate of carboxylation (Vcmax), from field measurements of CO2 and water flux during 2007-2012 at several crop eddy covariance flux sites in the Midwestern US. We then empirically calibrated Vcmax against apparent fluorescence yield, i.e., SIF divided by PAR. SIF retrievals are from the European GOME-2 instrument onboard the MetOp-A platform. The resulting apparent fluorescence yield shows a stronger relationship with Vcmax during the growing season than the widely used vegetation indices EVI and NDVI. New seasonal and regional Vcmax maps were derived from the calibration model for the cropland of the corn belt. The uncertainties of Vcmax were also estimated through Gaussian error propagation. With the newly derived Vcmax maps, we modeled regional cropland GPP during the growing season for the Midwestern USA, with meteorological data from the MERRA reanalysis and LAI from the MODIS product (MCD15A2). The results show improvement in the seasonal and spatial patterns of cropland productivity in comparison with both flux tower and agricultural inventory data.
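
    The empirical calibration step described above, regressing flux-derived Vcmax on apparent fluorescence yield (SIF divided by PAR), can be sketched as a simple linear fit. All numbers below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical tower-site values (illustrative only, not GOME-2 / flux data).
sif = np.array([0.8, 1.2, 1.6, 2.1, 2.4])       # SIF retrievals (arb. units)
par = np.array([400., 450., 500., 520., 540.])  # PAR (umol m-2 s-1)
vcmax = np.array([55., 75., 95., 120., 130.])   # flux-derived Vcmax

afy = sif / par                                 # apparent fluorescence yield
slope, intercept = np.polyfit(afy, vcmax, 1)    # empirical calibration line
vcmax_pred = slope * afy + intercept            # Vcmax predicted from AFY
```

    With a real calibration, the fitted line would then be applied to gridded SIF/PAR fields to map Vcmax regionally.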

  12. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    NASA Astrophysics Data System (ADS)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data is not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function soil drainage rate, the ratio of drainage (D) over storage (S), decreases exponentially with relative water content. This function is controlled by three parameters, the soil water storage at field capacity (SFC), the drainage fraction at field capacity (KFC) and a drainage function exponent (β): D/S = KFC exp(-β (1 - S/SFC)). To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non
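
    A minimal sketch of the simplified drainage function, using the parameter names from the abstract (SFC, KFC, β); the functional form is reconstructed from the text's description, so treat the values below as placeholders.

```python
import math

def drainage_fraction(S, SFC, KFC, beta):
    """AWRA-L-style simplified soil drainage fraction D/S (per day).

    D/S = KFC * exp(-beta * (1 - S/SFC))
    At S == SFC the rate equals KFC; it decays exponentially as the
    store dries below field capacity.
    """
    return KFC * math.exp(-beta * (1.0 - S / SFC))
```

    Fitting SFC, KFC and β per soil profile type (as the abstract describes with HYDRUS-1D runs) then yields spatially variable drainage behaviour at negligible run-time cost compared with solving Richards' equation.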

  13. First steps in incorporating data-driven modelling to flood early warning in Norway's Flood Forecasting Service

    NASA Astrophysics Data System (ADS)

    Borsányi, Péter; Hamududu, Byman; Wong Kwok, Wai; Magnusson, Jan; Shi, Min

    2016-04-01

    -describers will be refined to help identify seasonal, geographical, or other physical or temporal features, which will allow the preparation for hybrid use of data-driven and conceptual models. In this way, low forecast performance is expected to be improved by incorporating DDM into the existing conceptual modelling framework.

  14. Incorporating Uncertainty Into the Ranking of SPARROW Model Nutrient Yields From Mississippi/Atchafalaya River Basin Watersheds1

    PubMed Central

    Robertson, Dale M; Schwarz, Gregory E; Saad, David A; Alexander, Richard B

    2009-01-01

    Excessive loads of nutrients transported by tributary rivers have been linked to hypoxia in the Gulf of Mexico. Management efforts to reduce the hypoxic zone in the Gulf of Mexico and improve the water quality of rivers and streams could benefit from targeting nutrient reductions toward watersheds with the highest nutrient yields delivered to sensitive downstream waters. One challenge is that most conventional watershed modeling approaches (e.g., mechanistic models) used in these management decisions do not consider uncertainties in the predictions of nutrient yields and their downstream delivery. The increasing use of parameter estimation procedures to statistically estimate model coefficients, however, allows uncertainties in these predictions to be reliably estimated. Here, we use a robust bootstrapping procedure applied to the results of a previous application of the hybrid statistical/mechanistic watershed model SPARROW (Spatially Referenced Regression On Watershed attributes) to develop a statistically reliable method for identifying “high priority” areas for management, based on a probabilistic ranking of delivered nutrient yields from watersheds throughout a basin. The method is designed to be used by managers to prioritize watersheds where additional stream monitoring and evaluations of nutrient-reduction strategies could be undertaken. Our ranking procedure incorporates information on the confidence intervals of model predictions and the corresponding watershed rankings of the delivered nutrient yields. From this quantified uncertainty, we estimate the probability that individual watersheds are among a collection of watersheds that have the highest delivered nutrient yields. We illustrate the application of the procedure to 818 eight-digit Hydrologic Unit Code watersheds in the Mississippi/Atchafalaya River basin by identifying 150 watersheds having the highest delivered nutrient yields to the Gulf of Mexico. Highest delivered yields were from
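
    The probabilistic ranking idea can be illustrated with a toy bootstrap: for each replicate, mark which watersheds fall in the top k by delivered yield, then average over replicates to estimate each watershed's probability of belonging to the high-priority set. The data below are synthetic, not SPARROW output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 watersheds, 1000 bootstrap replicates of
# delivered nutrient yield (rows = replicates, columns = watersheds).
n_boot, n_sheds, k = 1000, 20, 5
true_yield = rng.gamma(shape=2.0, scale=10.0, size=n_sheds)
boot = true_yield + rng.normal(0.0, 3.0, size=(n_boot, n_sheds))

# For each replicate, flag the k highest-yield watersheds; averaging the
# flags over replicates estimates P(watershed is among the top k).
ranks = np.argsort(-boot, axis=1)               # descending yield order
in_top_k = np.zeros((n_boot, n_sheds), dtype=bool)
rows = np.arange(n_boot)[:, None]
in_top_k[rows, ranks[:, :k]] = True
p_top_k = in_top_k.mean(axis=0)                 # membership probabilities
```

    Watersheds with high `p_top_k` are robustly high priority even under prediction uncertainty, which is the ranking logic the abstract describes.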

  15. Can comprehensive background knowledge be incorporated into substitution models to improve phylogenetic analyses? A case study on major arthropod relationships

    PubMed Central

    von Reumont, Björn M; Meusemann, Karen; Szucsich, Nikolaus U; Dell'Ampio, Emiliano; Gowri-Shankar, Vivek; Bartel, Daniela; Simon, Sabrina; Letsch, Harald O; Stocsits, Roman R; Luan, Yun-xia; Wägele, Johann Wolfgang; Pass, Günther; Hadrys, Heike; Misof, Bernhard

    2009-01-01

    Background Whenever different data sets arrive at conflicting phylogenetic hypotheses, only testable causal explanations of sources of errors in at least one of the data sets allow us to critically choose among the conflicting hypotheses of relationships. The large (28S) and small (18S) subunit rRNAs are among the most popular markers for studies of deep phylogenies. However, some nodes supported by these data are suspected of being artifacts caused by peculiarities of the evolution of these molecules. Arthropod phylogeny is an especially controversial subject dotted with conflicting hypotheses which are dependent on data set and method of reconstruction. We assume that phylogenetic analyses based on these genes can be improved further i) by enlarging the taxon sample, ii) by employing more realistic models of sequence evolution incorporating non-stationary substitution processes, and iii) by considering covariation and pairing of sites in rRNA genes. Results We analyzed a large set of arthropod sequences, applied new tools for quality control of data prior to tree reconstruction, and increased the biological realism of substitution models. Although the split-decomposition network indicated a high noise content in the data set, our measures were able to both improve the analyses and give causal explanations for some incongruities mentioned from analyses of rRNA sequences. However, misleading effects did not completely disappear. Conclusion Analyses of data sets that result in ambiguous phylogenetic hypotheses demand methods that not only filter stochastic noise, but also allow us to differentiate phylogenetic signal from systematic biases. Such methods can only rely on our findings regarding the evolution of the analyzed data. Analyses on independent data sets are then crucial to test the plausibility of the results. Our approach can easily be extended to genomic data as well, whereby layers of quality assessment are set up applicable to phylogenetic

  16. Incorporation of the C-GOLDSTEIN efficient climate model into the GENIE framework: "eb_go_gs" configurations of GENIE

    NASA Astrophysics Data System (ADS)

    Marsh, R.; Müller, S. A.; Yool, A.; Edwards, N. R.

    2011-11-01

    A computationally efficient, intermediate complexity ocean-atmosphere-sea ice model (C-GOLDSTEIN) has been incorporated into the Grid ENabled Integrated Earth system modelling (GENIE) framework. This involved decoupling of the three component modules that were re-coupled in a modular way, to allow replacement with alternatives and coupling of further components within the framework. The climate model described here (referred to as "eb_go_gs" for short) is the most basic version of GENIE in which atmosphere, ocean and sea ice all play an active role. Among improvements on the original C-GOLDSTEIN model, latitudinal grid resolution is generalized to allow a wider range of surface grids to be used. The ocean, atmosphere and sea-ice components of the "eb_go_gs" configuration of GENIE are individually described, along with details of their coupling. The setup and results from simulations using four different meshes are presented. The four alternative meshes comprise the widely-used 36 × 36 equal-area-partitioning of the Earth surface with 16 depth layers in the ocean, a version in which horizontal and vertical resolution are doubled, a setup matching the horizontal resolution of the dynamic atmospheric component available in the GENIE framework, and a setup with enhanced resolution in high-latitude areas. Results are presented for a spin-up experiment with a baseline parameter set and wind forcing typically used for current studies in which "eb_go_gs" is coupled with the ocean biogeochemistry module of GENIE, as well as for an experiment with a modified parameter set, revised wind forcing, and additional cross-basin transport pathways (Indonesian and Bering Strait throughflows). The latter experiment is repeated with the four mesh variants, with common parameter settings throughout, except for time-step length. Selected state variables and diagnostics are compared in two regards: (i) between simulations at lowest resolution that are obtained with the baseline and

  17. A neural network for incorporating the thermal effect on the magnetic hysteresis of the 3F3 material using the Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Nouicer, A.; Nouicer, E.; Feliachi, Mouloud

    2015-01-01

    The present paper deals with a temperature-dependent modeling approach for the generation of hysteresis loops of ferromagnetic materials. The physical model is developed to study the effect of temperature on the magnetic hysteresis loop using the Jiles-Atherton (J-A) model. The thermal effects were incorporated through temperature-dependent hysteresis parameters of the J-A model. The temperature-dependent J-A model was validated by measurements made on the ferrite material. The results of the proposed model were in good agreement with the measurements.
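
    As a rough sketch of how temperature can enter the J-A parameters, the anhysteretic (Langevin) part of the model is shown below with an assumed power-law scaling of the saturation magnetization. The scaling form, exponent, and all numeric values are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def m_anhysteretic(H, M_s, a, alpha=0.0, M=0.0):
    """Langevin anhysteretic magnetization of the J-A model.

    He = H + alpha*M is the effective field; a is the shape parameter.
    """
    He = H + alpha * M
    if abs(He) < 1e-12:
        return 0.0                      # Langevin function vanishes at zero field
    x = He / a
    return M_s * (1.0 / math.tanh(x) - 1.0 / x)

def m_s_of_T(M_s0, T, T_c, beta=0.5):
    """Hypothetical temperature scaling of saturation magnetization:
    M_s(T) = M_s0 * (1 - T/T_c)**beta  (illustrative, not the paper's fit)."""
    return M_s0 * (1.0 - T / T_c) ** beta
```

    In a full J-A loop simulation, each temperature-dependent parameter (M_s, a, k, c, alpha) would be evaluated at the operating temperature before integrating dM/dH.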

  18. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    SciTech Connect

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  19. Predicting Secretory Proteins of Malaria Parasite by Incorporating Sequence Evolution Information into Pseudo Amino Acid Composition via Grey System Model

    PubMed Central

    Lin, Wei-Zhong; Fang, Jian-An; Xiao, Xuan; Chou, Kuo-Chen

    2012-01-01

    The malaria disease has become a cause of poverty and a major hindrance to economic development. The culprit of the disease is the parasite, which secretes an array of proteins within the host erythrocyte to facilitate its own survival. Accordingly, the secretory proteins of the malaria parasite have become a logical target for drug design against malaria. Unfortunately, with the increasing resistance to the drugs thus developed, the situation has become more complicated. To cope with the drug resistance problem, one strategy is to identify in a timely manner the proteins secreted by the malaria parasite, which can serve as potential drug targets. However, it is both expensive and time-consuming to identify the secretory proteins of the malaria parasite by experiments alone. To expedite the process of developing effective drugs against malaria, a computational predictor called "iSMP-Grey" was developed that can identify the secretory proteins of the malaria parasite based on protein sequence information alone. During the prediction process a protein sample was formulated with a 60D (dimensional) feature vector formed by incorporating the sequence evolution information into the general form of PseAAC (pseudo amino acid composition) via a grey system model, which is particularly useful for solving complicated problems that lack sufficient information or involve uncertain information. It was observed by the jackknife test that iSMP-Grey achieved an overall success rate of 94.8%, remarkably higher than those of the existing predictors in this area. As a user-friendly web-server, iSMP-Grey is freely accessible to the public at http://www.jci-bioinfo.cn/iSMP-Grey. Moreover, for the convenience of most experimental scientists, a step-by-step guide is provided on how to use the web-server to get the desired results without the need to follow the complicated mathematical equations involved in this paper. PMID:23189138
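
    The paper's grey-model feature construction is specific to PseAAC, but the classic GM(1,1) grey system model it builds on can be sketched generically. This is a textbook GM(1,1), shown only to illustrate what a grey system model does with short, information-poor series; it is not the iSMP-Grey formulation.

```python
import numpy as np

def gm11(x0, n_forecast=0):
    """Classic GM(1,1) grey model: fit a short positive series and
    optionally extend it n_forecast steps (generic illustration)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                    # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])         # background (mean) sequence
    # Least-squares estimate of a, b in x0[k] + a*z1[k] = b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0) + n_forecast
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO recovers the fitted/forecast series
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
```

    GM(1,1) fits near-exponential trends almost exactly, which is why grey models are popular when only a handful of uncertain observations are available.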

  20. Precursory surface deformation expected from a strike-slip fault model into which rheological properties of the lithosphere are incorporated

    NASA Astrophysics Data System (ADS)

    Yamashita, Teruo; Ohnaka, Mitiyasu

    1992-09-01

    Yamashita, T. and Ohnaka, M., 1992. Precursory surface deformation expected from a strike-slip fault model into which rheological properties of the lithosphere are incorporated. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 179-199. Earthquake prediction is one of the important problems with which seismologists are confronted. Much observational effort has been made to detect precursory surface deformation before earthquake occurrence. However, the physical mechanism that generates such precursory deformation is not fully understood. In this paper, we theoretically study the growth process of a strike-slip fault from nucleation to instability and the possibility of detecting precursory surface deformation. Analyses are made on the basis of a breakdown zone crack model, which has been successfully applied in many aspects of earthquake rupture. We specifically attempt to simulate earthquake occurrence at the San Andreas fault, California, taking account of geological and geophysical conditions there. The most important parameters of the breakdown zone crack model are the peak shear stress σ_p near the crack tip, the sliding frictional stress σ_f, and the critical slip displacement Dc. For the depth variation of these parameters we assume a three-layer model, which is composed of a brittle upper layer, a plastic lower layer and an intervening semibrittle layer. We model the depth variations of σ_p and σ_f, modifying the shear resistance profile appropriate for the San Andreas fault obtained by Sibson. The depth distribution of Dc is assumed to be constant D0 in the brittle layer and to increase exponentially with depth in the semibrittle and plastic layers on the basis of the study of Ohnaka; the depth distribution of Dc is described by two parameters, D0 and S, the latter standing for the increase rate of Dc in the lower two layers. Since there appears to exist much
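
    The assumed depth profile of the critical slip displacement Dc, constant D0 in the brittle layer and increasing exponentially with rate S below it, can be written directly. The layer depth and parameter values below are placeholders, not the paper's calibrated values.

```python
import math

def critical_slip(z, z_brittle, D0, S):
    """Depth profile of critical slip displacement Dc (illustrative):
    constant D0 down to the base of the brittle layer at z_brittle,
    then increasing exponentially with rate S below it."""
    if z <= z_brittle:
        return D0
    return D0 * math.exp(S * (z - z_brittle))
```

    The profile is continuous at the brittle-semibrittle boundary, so fault growth simulations see a smooth transition in fracture energy with depth.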

  1. From Numerical Problem Solving to Model-Based Experimentation Incorporating Computer-Based Tools of Various Scales into the ChE Curriculum

    ERIC Educational Resources Information Center

    Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima

    2009-01-01

    A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…

  2. Molecular view modeling of atmospheric organic particulate matter: Incorporating molecular structure and co-condensation of water

    NASA Astrophysics Data System (ADS)

    Pankow, James F.; Marks, Marguerite C.; Barsanti, Kelley C.; Mahmud, Abdullah; Asher, William E.; Li, Jingyi; Ying, Qi; Jathar, Shantanu H.; Kleeman, Michael J.

    2015-12-01

    Most urban and regional models used to predict levels of organic particulate matter (OPM) are based on fundamental equations for gas/particle partitioning, but make the highly simplifying, anonymized-view (AV) assumptions that OPM levels are not affected by either: a) the molecular characteristics of the condensing organic compounds (other than simple volatility); or b) co-condensation of water as driven by non-zero relative humidity (RH) values. The simplifying assumptions have allowed parameterized chamber results for formation of secondary organic aerosol (SOA) (e.g., "two-product" (2p) coefficients) to be incorporated in chemical transport models. However, a return towards a less simplistic (and more computationally demanding) molecular view (MV) is needed that acknowledges that atmospheric OPM is a mixture of organic compounds with differing polarities, water, and in some cases dissolved salts. The higher computational cost of MV modeling results from a need for iterative calculations of the composition-dependent gas/particle partition coefficient values. MV modeling of OPM that considered water uptake (but not dissolved salts) was carried out for the southeast United States for the period August 29 through September 7, 2006. Three model variants were used at three universities: CMAQ-RH-2p (at PSU), UCD/CIT-RH-2p (at UCD), and CMAQ-RH-MCM (at TAMU). With the first two, MV structural characteristics (carbon number and numbers of functional groups) were assigned to each of the 2p products used in CMAQv.4.7.1 such that resulting predicted Kp,i values matched those in CMAQv.4.7.1. When water uptake was allowed, most runs assumed that uptake occurred only into the SOA portion, and imposed immiscibility of SOA with primary organic aerosol (POA). (POA is often viewed as rather non-polar, while SOA is commonly viewed as moderately-to-rather polar. Some runs with UCD/CIT-RH-2p were used to investigate the effects of POA/SOA miscibility.) CMAQ-RH-MCM used MCM to
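
    The "iterative calculations of the composition-dependent gas/particle partition coefficient values" mentioned above can be illustrated with a standard absorptive-partitioning fixed-point iteration. This is a generic volatility-basis sketch under simple assumptions (no water, no POA/SOA immiscibility), not the CMAQ or UCD/CIT implementation.

```python
def partition(c_total, c_star, c_oa_init=1.0, tol=1e-9, max_iter=200):
    """Fixed-point iteration for absorptive gas/particle partitioning.

    xi_i = 1 / (1 + C*_i / C_OA)   particle-phase fraction of species i
    C_OA = sum_i xi_i * C_i,total  total absorbing organic mass
    The partitioned fractions depend on C_OA, and C_OA depends on the
    fractions, hence the iteration.
    """
    c_oa = c_oa_init
    for _ in range(max_iter):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        new_c_oa = sum(x * c for x, c in zip(xi, c_total))
        if abs(new_c_oa - c_oa) < tol:
            break
        c_oa = new_c_oa
    return c_oa, xi
```

    The extra cost of MV modeling comes precisely from repeating this kind of composition-dependent iteration (with activity coefficients and water uptake) in every grid cell and time step.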

  3. Zinc Incorporation Into Hydroxylapatite

    SciTech Connect

    Tang, Y.; Chappell, H; Dove, M; Reeder, R; Lee, Y

    2009-01-01

    By theoretical modeling and X-ray absorption spectroscopy, the local coordination structure of Zn incorporated into hydroxylapatite was examined. Density functional theory (DFT) calculations show that Zn favors the Ca2 site over the Ca1 site, and favors tetrahedral coordination. X-ray absorption near edge structure (XANES) spectroscopy results suggest one dominant coordination environment for the incorporated Zn, and no evidence was observed for other Zn-containing phases. Extended X-ray absorption fine structure (EXAFS) fitting of the synthetic samples confirms that Zn occurs in tetrahedral coordination, with two P shells at 2.85-3.07 Å, and two higher Ca shells at 3.71-4.02 Å. These fit results are consistent with the most favored DFT model for Zn substitution in the Ca2 site.

  4. Incorporation model of N into GaInNAs alloys grown by radio-frequency plasma-assisted molecular beam epitaxy

    SciTech Connect

    Aho, A.; Korpijärvi, V.-M.; Tukiainen, A.; Puustinen, J.; Guina, M.

    2014-12-07

    We present a model, based on the Maxwell-Boltzmann electron energy distribution, for the incorporation rate of nitrogen into GaInNAs grown by molecular beam epitaxy (MBE) using a radio-frequency plasma source. Nitrogen concentration is predicted as a function of the radio-frequency system primary resistance, N flow, RF power, and group III growth rate. The semi-empirical model is shown to be repeatable with a maximum error of 6%. The model was validated for two different MBE systems by growing GaInNAs on GaAs(100) with variable nitrogen composition of 0%–6%.

  5. Modelling fires in the terrestrial carbon balance by incorporating SPITFIRE into the global vegetation model ORCHIDEE - Part 1: Simulating historical global burned area and fire regime

    NASA Astrophysics Data System (ADS)

    Yue, C.; Ciais, P.; Cadule, P.; Thonicke, K.; Archibald, S.; Poulter, B.; Hao, W. M.; Hantson, S.; Mouillot, F.; Friedlingstein, P.; Maignan, F.; Viovy, N.

    2014-04-01

    Fire is an important global ecological process that determines the distribution of biomes, with consequences for carbon, water, and energy budgets. The modelling of fire is critical for understanding its role in both historical and future changes in terrestrial ecosystems and the climate system. This study incorporates the process-based prognostic fire module SPITFIRE into the global vegetation model ORCHIDEE, which was then used to simulate the historical burned area and the fire regime for the 20th century. For 2001-2006, the simulated global spatial extent of fire occurrence agrees well with that given by the satellite-derived burned area datasets (L3JRC, GLOBCARBON, GFED3.1) and captures 78-92% of global total burned area depending on which dataset is used for comparison. The simulated global annual burned area is 329 Mha yr-1, which falls within the range of 287-384 Mha yr-1 given by the three global observation datasets and is close to the 344 Mha yr-1 given by GFED3.1 data when crop fires are excluded. The simulated long-term trends of burned area agree best with the observation data in regions where fire is mainly driven by climate variation, such as boreal Russia (1920-2009), and the US state of Alaska and Canada (1950-2009). At the global scale, the simulated decadal fire trend over the 20th century is in moderate agreement with the historical reconstruction, possibly because of the uncertainties of past estimates, and because land-use change fires and fire suppression are not explicitly included in the model. Over the globe, the size of large fires (the 95th quantile fire size) is systematically underestimated by the model compared with the fire patch data as reconstructed from MODIS 500 m burned area data. Two case studies of fire size distribution in boreal North America and southern Africa indicate that both the number and the size of big fires are underestimated, which could be related to an overly low fire spread rate (in the case of static

  6. Quantitative measurements of regional glucose utilization and rate of valine incorporation into proteins by double-tracer autoradiography in the rat brain tumor model

    SciTech Connect

    Kirikae, M.; Diksic, M.; Yamamoto, Y.L.

    1989-02-01

    We examined the rate of glucose utilization and the rate of valine incorporation into proteins using 2-[18F]fluoro-2-deoxyglucose and L-[1-14C]valine in a rat brain tumor model by quantitative double-tracer autoradiography. We found that in the implanted tumor the rate of valine incorporation into proteins was about 22 times and the rate of glucose utilization was about 1.5 times that in the contralateral cortex. (In the ipsilateral cortex, the tumor had a profound effect on glucose utilization but no effect on the rate of valine incorporation into proteins.) Our findings suggest that it is more useful to measure protein synthesis than glucose utilization to assess the effectiveness of antitumor agents and their toxicity to normal brain tissue. We compared two methods to estimate the rate of valine incorporation: kinetic (quantitation done using an operational equation and the average brain rate coefficients) and washed slices (unbound labeled valine removed by washing brain slices in 10% trichloroacetic acid). The results were the same using either method. It would seem that the kinetic method can thus be used for quantitative measurement of protein synthesis in brain tumors and normal brain tissue using [11C]valine with positron emission tomography.

  7. A flexible statistical model for alignment of label-free proteomics data – incorporating ion mobility and product ion information

    PubMed Central

    2013-01-01

    Background The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing - the matching of peptide measurements across samples. Results We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Conclusions Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods. PMID:24341404

  8. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

    This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. PMID:27153525
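
    A zero-inflated negative binomial, as used in the dual-state comparison above, mixes a point mass at zero (structurally crash-free zones) with an ordinary negative binomial count state. A minimal probability mass function, with an assumed size/probability NB parameterization, is:

```python
import math

def nb_pmf(y, r, p):
    """Negative binomial pmf with size r and success probability p,
    computed in log space for numerical stability."""
    log_pmf = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(p) + y * math.log(1.0 - p))
    return math.exp(log_pmf)

def zinb_pmf(y, pi, r, p):
    """Zero-inflated NB: P(0) = pi + (1-pi)*NB(0), P(y>0) = (1-pi)*NB(y),
    where pi is the probability of the structural-zero state."""
    return (pi if y == 0 else 0.0) + (1.0 - pi) * nb_pmf(y, r, p)
```

    Fitting such a model maximizes the sum of log `zinb_pmf` over zones, with both pi and the NB mean typically linked to zonal covariates.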

  9. Modification of a method-of-characteristics solute-transport model to incorporate decay and equilibrium-controlled sorption or ion exchange

    USGS Publications Warehouse

    Goode, D.J.; Konikow, L.F.

    1989-01-01

    The U.S. Geological Survey computer model of two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978) has been modified to incorporate the following types of chemical reactions: (1) first-order irreversible rate-reaction, such as radioactive decay; (2) reversible equilibrium-controlled sorption with linear, Freundlich, or Langmuir isotherms; and (3) reversible equilibrium-controlled ion exchange for monovalent or divalent ions. Numerical procedures are developed to incorporate these processes in the general solution scheme that uses the method-of-characteristics with particle tracking for advection and finite-difference methods for dispersion. The first type of reaction is accounted for by an exponential decay term applied directly to the particle concentration. The second and third types of reactions are incorporated through a retardation factor, which is a function of concentration for nonlinear cases. The model is evaluated and verified by comparison with analytical solutions for linear sorption and decay, and by comparison with other numerical solutions for nonlinear sorption and ion exchange.
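
    The reaction treatment described above, an exponential decay factor applied directly to particle concentrations plus a retardation factor (shown here in the standard linear-isotherm form R = 1 + ρb·Kd/θ), can be sketched as follows. Parameter names and default values are illustrative, not from the report.

```python
import math

def step_particle(conc, dt, half_life=None, rho_b=1.6, theta=0.35, k_d=0.0):
    """One MOC-style reaction step for a tracked particle.

    First-order decay is applied as an exponential factor on the particle's
    concentration; linear equilibrium sorption enters as a retardation
    factor R = 1 + rho_b * K_d / theta that slows effective transport.
    Returns (updated concentration, retardation factor).
    """
    R = 1.0 + rho_b * k_d / theta
    if half_life is not None:
        lam = math.log(2.0) / half_life      # decay constant from half-life
        conc *= math.exp(-lam * dt)
    return conc, R
```

    In the actual scheme, R divides the advective velocity (and dispersion) seen by the particles, while the decay factor is applied each transport step; nonlinear isotherms make R concentration-dependent.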

  10. Optimizing stem cell functions and antibacterial properties of TiO2 nanotubes incorporated with ZnO nanoparticles: experiments and modeling

    PubMed Central

    Liu, Wenwen; Su, Penglei; Gonzales, Arthur; Chen, Su; Wang, Na; Wang, Jinshu; Li, Hongyi; Zhang, Zhenting; Webster, Thomas J

    2015-01-01

    To optimize mesenchymal stem cell differentiation and antibacterial properties of titanium (Ti), nano-sized zinc oxide (ZnO) particles with tunable concentrations were incorporated into TiO2 nanotubes (TNTs) using a facile hydrothermal strategy. It is revealed here for the first time that TNTs incorporated with ZnO nanoparticles exhibited better biocompatibility compared with pure Ti samples (controls) and that the amount of ZnO (tailored by the concentration of Zn(NO3)2 in the precursor) introduced into TNTs played a crucial role in their osteogenic properties. Not only was the alkaline phosphatase activity improved to about 13.8 U/g protein, but osterix, collagen-I, and osteocalcin gene expression of mesenchymal stem cells was also enhanced compared to controls. To further explore the mechanism by which TNTs decorated with ZnO affect cell functions, a response surface mathematical model was used for the first time to optimize the concentration of ZnO incorporated into the Ti nanotubes for stem cell differentiation and antibacterial properties. Both experimental and modeling results confirmed (R2 values of 0.8873–0.9138 and 0.9596–0.9941, respectively) that Ti incorporated with appropriate concentrations of ZnO (with an initial concentration of Zn(NO3)2 at 0.015 M) can provide exceptional osteogenic properties for stem cell differentiation into bone cells with strong antibacterial effects, properties important for improving dental and orthopedic implant efficacy. PMID:25792833
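The response-surface idea can be sketched as a quadratic fit over concentration-response data, with the optimum at the vertex of the fitted parabola. The scores below are invented for illustration; the study fit a full response-surface model to real assay data.

```python
import numpy as np

# Invented differentiation scores at several Zn(NO3)2 precursor concentrations (M).
conc  = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.030])
score = np.array([1.0,   1.8,   2.4,   2.7,   2.5,   1.6])

# One-factor quadratic response surface fitted by least squares.
b2, b1, b0 = np.polyfit(conc, score, 2)
best_conc = -b1 / (2.0 * b2)   # vertex = predicted optimal concentration
```

The same least-squares machinery generalises to several factors (concentration, time, temperature) with cross terms, which is the usual response-surface formulation.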

  11. A statistical learning approach to the modeling of chromatographic retention of oligonucleotides incorporating sequence and secondary structure data

    PubMed Central

    Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver

    2007-01-01

    We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
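The feature-based regression setup can be sketched with scikit-learn's NuSVR on synthetic sequences. Only base-composition features are used here, and the retention times are invented; the paper additionally used predicted secondary-structure features.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
bases = np.array(list("ACGT"))

def features(seq):
    """Toy oligonucleotide features: length plus mononucleotide fractions."""
    s = np.array(list(seq))
    return [len(seq)] + [(s == b).mean() for b in bases]

# Synthetic training set: retention time grows with length and G content.
seqs = ["".join(rng.choice(bases, rng.integers(10, 30))) for _ in range(200)]
X = np.array([features(s) for s in seqs])
t = 0.5 * X[:, 0] + 4.0 * X[:, 3] + rng.normal(0.0, 0.1, len(seqs))

# nu-support vector regression, as named in the abstract.
model = NuSVR(kernel="linear", nu=0.5, C=10.0).fit(X, t)
```

In the paper's setting, temperature-dependent secondary-structure features would be appended to each row of X, which is what lets a single trained model interpolate across column temperatures.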

  12. Evaluation of Boundless Biogeochemical Cycle through Development of Process-Based Eco-Hydrological and Biogeochemical Cycle Model to Incorporate Terrestrial-Aquatic Continuum

    NASA Astrophysics Data System (ADS)

    Nakayama, T.; Maksyutov, S. S.

    2014-12-01

    Inland waters may act as an important transport pathway in the continental biogeochemical cycle, although their contribution remains uncertain due to a paucity of data (Battin et al. 2009). The author has developed the process-based National Integrated Catchment-based Eco-hydrology (NICE) model (Nakayama, 2008a-b, 2010, 2011a-b, 2012a-c, 2013; Nakayama and Fujita, 2010; Nakayama and Hashimoto, 2011; Nakayama and Shankman, 2013a-b; Nakayama and Watanabe, 2004, 2006, 2008a-b; Nakayama et al., 2006, 2007, 2010, 2012), which incorporates surface-groundwater interactions, includes up- and down-scaling processes between local, regional, and global scales, and can iteratively simulate nonlinear feedbacks between hydrologic, geomorphic, and ecological processes. Because NICE incorporates a 3-D groundwater sub-model, extending previous 1-D, 2-D, or steady-state treatments, it can simulate the lateral transport that is pronounced on steeper slopes and in riparian/floodplain areas with surface-groundwater connectivity. River discharge and groundwater levels simulated by NICE agreed reasonably with those in previous studies (Niu et al., 2007; Fan et al., 2013), and the results further clarified that lateral subsurface flow also plays an important role in the global hydrologic cycle (Nakayama, 2011b; Nakayama and Shankman, 2013b), though the resolution was coarser. NICE was further developed to incorporate the biogeochemical cycle, including reactions between inorganic and organic carbon in terrestrial and aquatic ecosystems. A previously missing component of the carbon cycle simulated by NICE, CO2 evasion from inland water (with a global total flux estimated at about 1.0 PgC/yr), was in relatively good agreement with estimates from empirical relations based on previous pCO2 data (Aufdenkampe et al., 2011; Laruelle et al., 2013). The model should play an important role in identifying the greenhouse gas balance of the biosphere and its spatio-temporal hot spots, and in bridging the gap between top-down and bottom-up approaches (Cole et al. 2007; Frei et al. 2012).

  13. Modeling responses of large-river fish populations to global climate change through downscaling and incorporation of predictive uncertainty

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima

    2012-01-01

    Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled through Regional Climate Models to watersheds, to river hydrology, and finally to population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community-level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
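The "linking by formal rules of probability" idea can be illustrated with a toy Monte Carlo chain in which uncertainty from each stage propagates to the next, so the final forecast is a distribution rather than a point estimate. All distributions and coefficients below are invented; the study uses a full hierarchical Bayesian formulation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo draws

# Stage 1: downscaled climate forecast (warming in deg C), with uncertainty.
warming = rng.normal(2.0, 0.5, n)
# Stage 2: river discharge given climate, plus hydrologic-model uncertainty.
discharge = 100.0 - 8.0 * warming + rng.normal(0.0, 5.0, n)
# Stage 3: fish population growth rate given discharge, plus ecological uncertainty.
growth = 0.02 * (discharge - 80.0) + rng.normal(0.0, 0.05, n)

# Summarise the predictive distribution of the end-of-chain quantity.
lo, hi = np.percentile(growth, [5, 95])
```

Because each stage conditions on draws from the previous one, the 5th-95th percentile band on `growth` automatically reflects climate, hydrologic, and ecological uncertainty together.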

  14. Evaluation of the Doraiswamy-Thompson winter wheat crop calendar model incorporating a modified spring restart sequence

    NASA Technical Reports Server (NTRS)

    Taylor, T. W.; Ravet, F. W.; Smika, D. (Principal Investigator)

    1981-01-01

    The Robertson phenology was used to provide growth stage information to a wheat stress indicator model. A stress indicator model demands two accurate predictions from a crop calendar: the date of spring growth initiation and the crop calendar stage at growth initiation. Several approaches for restarting the Robertson phenology model at spring growth initiation were studied. Although best results were obtained with a solar thermal unit method, an alternate approach that treats soil temperature as the controlling parameter for spring growth initiation was selected and tested. The modified model (Doraiswamy-Thompson) is compared to LACIE-Robertson model predictions.

  15. Incorporating landscape depressions and tile drainages of a northern German lowland catchment into a semi-distributed model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models need to be adapted to specific hydrological characteristics of the catchment in which they are applied. In the lowland region of northern Germany, tile drains and depressions are prominent features of the landscape though are often neglected in hydrological modelling on the catch...

  16. Incorporation of Reaction Kinetics into a Multiphase, Hydrodynamic Model of a Fischer Tropsch Slurry Bubble Column Reactor

    SciTech Connect

    Donna Guillen, PhD; Anastasia Gribik; Daniel Ginosar, PhD; Steven P. Antal, PhD

    2008-11-01

    This paper describes the development of a computational multiphase fluid dynamics (CMFD) model of the Fischer Tropsch (FT) process in a Slurry Bubble Column Reactor (SBCR). The CMFD model is fundamentally based, which allows it to be applied to different industrial processes and reactor geometries. The NPHASE CMFD solver [1] is used as the robust computational platform. Results from the CMFD model include gas distribution, species concentration profiles, and local temperatures within the SBCR. This type of model can provide valuable information for process design, operations, and troubleshooting of FT plants. An ensemble-averaged, turbulent, multi-fluid solution algorithm for the multiphase, reacting flow with heat transfer was employed. Mechanistic models applicable to churn-turbulent flow have been developed to provide a fundamentally based closure set for the equations. In this four-field model formulation, two of the fields are used to track the gas phase (i.e., small spherical and large slug/cap bubbles), and the other two fields are used for the liquid and catalyst particles. Reaction kinetics for a cobalt catalyst are based upon values reported in the published literature. An initial reaction-kinetics model has been developed and exercised to demonstrate the viability of the overall solution scheme. The model will continue to be developed, with improved physics added in stages.

  17. A model model: a commentary on DiFrancesco and Noble (1985) ‘A model of cardiac electrical activity incorporating ionic pumps and concentration changes’

    PubMed Central

    Dibb, Katharine; Trafford, Andrew; Zhang, Henggui; Eisner, David

    2015-01-01

    This paper summarizes the advances made by the DiFrancesco and Noble (DFN) model of cardiac cellular electrophysiology, which was published in Philosophical Transactions B in 1985. This model was developed at a time when the introduction of new techniques and provision of experimental data had resulted in an explosion of knowledge about the cellular and biophysical properties of the heart. It advanced the cardiac modelling field from a period when computer models considered only the voltage-dependent channels in the surface membrane. In particular, it included a consideration of changes of both intra- and extracellular ionic concentrations. In this paper, we summarize the most important contributions of the DiFrancesco and Noble paper. We also describe how computer modelling has developed subsequently with the extension from the single cell to the whole heart as well as its use in understanding disease and predicting the effects of pharmaceutical interventions. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750236

  18. Incorporation of measured photosynthetic rate in a mathematical model for calculation of non-structural saccharide concentration

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Raper, C. D., Jr.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)

    1989-01-01

    A simple mathematical model for calculating the concentration of mobile carbon skeletons in the shoot of soya bean plants [Glycine max (L.) Merrill cv. Ransom] was built to examine the suitability of measured net photosynthetic rates (PN) for calculation of saccharide flux into the plant. The results suggest that either measurement of instantaneous PN overestimated saccharide influx or respiration rates utilized in the model were underestimated. If neither of these is the case, end-product inhibition of photosynthesis or waste respiration through the alternative pathway should be included in modelling of CH2O influx or efflux; and even if either of these is the case, the model output at a low coefficient of leaf activity indicates that PN still may be controlled by either end-product inhibition or alternative respiration.

  19. Dynamics of a producer-grazer model incorporating the effects of excess food nutrient content on grazer's growth.

    PubMed

    Peace, Angela; Wang, Hao; Kuang, Yang

    2014-09-01

    Modeling under the framework of ecological stoichiometry allows the investigation of the effects of food quality on food web population dynamics. Recent discoveries in ecological stoichiometry suggest that grazer dynamics are affected by insufficient food nutrient content (low phosphorus (P):carbon (C) ratio) as well as excess food nutrient content (high P:C). This phenomenon is known as the "stoichiometric knife edge." While previous models have captured this phenomenon, they do not explicitly track P in the producer or in the media that supports the producer, which calls the validity of their predictions into question. Here, we extend a Lotka-Volterra-type stoichiometric model by mechanistically deriving and tracking P in the producer and free P in the environment in order to investigate the growth response of Daphnia to algae of varying P:C ratios. Bifurcation analysis and numerical simulations of the full model, which explicitly tracks phosphorus, lead to quantitatively different predictions than previous models that neglect to track free nutrients. The full model shows that the fate of the grazer population can be very sensitive to excess nutrient concentrations. A dynamic free-nutrient pool seems to induce extreme grazer population density changes when total nutrient is in an intermediate range. PMID:25124765
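The "knife edge" can be captured by a grazer growth-efficiency term that falls off on both sides of an optimal food P:C window. The functional form and threshold values below are purely illustrative, not the paper's derivation.

```python
def growth_efficiency(q, e_max=0.8, q_min=0.004, q_max=0.03):
    """Grazer growth efficiency as a function of food P:C ratio q.

    Efficiency is reduced below q_min (P-limited growth) and above q_max
    (the excess-P "knife edge"); between the thresholds the grazer converts
    food at its maximum efficiency e_max.
    """
    if q <= 0.0:
        return 0.0
    return e_max * min(1.0, q / q_min) * min(1.0, q_max / q)
```

Plugging such a term into the grazer equation of a Lotka-Volterra-type model produces the unimodal growth response to food P:C that the abstract describes.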

  20. Computational model of the fathead minnow hypothalamic-pituitary-gonadal axis: Incorporating protein synthesis in improving predictability of responses to endocrine active chemicals.

    PubMed

    Breen, Miyuki; Villeneuve, Daniel L; Ankley, Gerald T; Bencic, David; Breen, Michael S; Watanabe, Karen H; Lloyd, Alun L; Conolly, Rory B

    2016-01-01

    There is international concern about chemicals that alter endocrine system function in humans and/or wildlife and subsequently cause adverse effects. We previously developed a mechanistic computational model of the hypothalamic-pituitary-gonadal (HPG) axis in female fathead minnows exposed to a model aromatase inhibitor, fadrozole (FAD), to predict dose-response and time-course behaviors for apical reproductive endpoints. Initial efforts to develop a computational model describing adaptive responses to endocrine stress that provides good fits to empirical plasma 17β-estradiol (E2) data in exposed fish were only partially successful, suggesting that additional regulatory biological processes need to be considered. In this study, we addressed shortcomings of the previous model by incorporating additional details concerning CYP19A (aromatase) protein synthesis. Predictions based on the revised model were evaluated using plasma E2 concentrations and ovarian cytochrome P450 (CYP) 19A aromatase mRNA data from two fathead minnow time-course experiments with FAD, as well as from a third 4-day study. The extended model provides better fits to measured E2 time-course concentrations, and it accurately predicts CYP19A mRNA fold changes and plasma E2 dose-response from the 4-day concentration-response study. This study suggests that aromatase protein synthesis is an important process to include when modeling the effects of FAD exposure. PMID:26875912

  1. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
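The core bias correction can be sketched with a simple moment estimator (the paper fits a joint likelihood, so this only conveys the intuition):

```python
def corrected_harvest_rate(tags_recovered, tags_released,
                           telemetry_survived, telemetry_tracked):
    """Harvest rate corrected for tagging-to-harvest mortality.

    The naive Brownie-style estimate r / n is biased low when some tagged
    animals die before the season opens; dividing by the telemetry-based
    tagging-to-harvest survival estimate S_hat removes that bias
    (moment-estimator sketch, not the full joint likelihood).
    """
    s_hat = telemetry_survived / telemetry_tracked
    return (tags_recovered / tags_released) / s_hat
```

For example, if 20 of 100 reward tags are recovered but telemetry shows only 80% of animals survive from capture to the season, the naive 0.20 estimate is corrected upward to 0.25.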

  2. Incorporating Anthropogenic Influences into Fire Probability Models: Effects of Human Activity and Climate Change on Fire Activity in California

    PubMed Central

    Batllori, Enric; Moritz, Max A.; Waller, Eric K.; Berck, Peter; Flint, Alan L.; Flint, Lorraine E.; Dolfi, Emmalee

    2016-01-01

    The costly interactions between humans and wildfires throughout California demonstrate the need to understand the relationships between them, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires, with previously published estimates of increases ranging from nine to fifty-three percent by the end of the century. Our goal is to assess the role of climate and anthropogenic influences on the state’s fire regimes from 1975 to 2050. We develop an empirical model that integrates estimates of biophysical indicators relevant to plant communities and anthropogenic influences at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of explanatory power in the model. We also find that the total area burned is likely to increase, with burned area expected to increase by 2.2 and 5.0 percent by 2050 under climatic bookends (PCM and GFDL climate models, respectively). Our two climate models show considerable agreement, but due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid inland deserts and coastal areas of the south. Given the strength of human-related variables in some regions, however, it is clear that comprehensive projections of future fire activity should include both anthropogenic and biophysical influences. Previous findings of substantially increased numbers of fires and burned area for California may be tied to omitted variable bias from the exclusion of human influences. The omission of anthropogenic variables in our model would overstate the importance of climatic ones by at least 24%. As such, the failure to include anthropogenic effects in many models likely overstates the response of wildfire to climatic change. PMID:27124597

  3. A New Model Incorporating Variably Saturated Flow That Accounts for Capillary-Fringe Elongation in Unconfined-Aquifer Tests

    NASA Astrophysics Data System (ADS)

    Moench, A. F.

    2006-12-01

    A seven-day, constant-rate aquifer test conducted by University of Waterloo researchers at Canadian Forces Base Borden in Ontario, Canada is useful for advancing understanding of fluid flow processes in response to pumping from an unconfined aquifer. Measured data included detailed water content in the unsaturated zone through time and space and drawdown in the saturated zone. The water content data reveal downward translation of the soil-moisture profiles and simultaneous elongations of the capillary fringe. Estimates of capillary-fringe thicknesses made use of model-calculated water-table elevations. Using drawdown data only, parameter estimation with a numerical model that solves Richards' equation for fluid flow and uses Brooks and Corey functional relations to represent unsaturated-zone characteristics yielded simulated drawdowns in the saturated zone that compared favorably with measured drawdowns. However, the modeled soil-moisture profile bore no resemblance to measured soil-moisture profiles and the model did not accurately simulate capillary-fringe elongation. I propose a modified model that largely decouples the Brooks and Corey soil-moisture and relative hydraulic conductivity functions by using two pore-size distribution functions, one for each functional relation. With the proposed model the general shape of the measured soil-moisture profiles was reproduced, there were time-varying vertical extensions of the capillary fringe consistent with observations, and there was satisfactory agreement between simulated and measured drawdowns in the saturated zone. The model does not account for appreciable radial variations in the thickness of the capillary fringe. For example, in seven days of pumping the capillary fringe grew from 35 to 58 cm at a distance of 1 m and 41 to 50 cm at a distance of 20 m. The analysis shows that drawdown measurements in the saturated zone by themselves without supporting soil-moisture measurements are not sufficient to
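The proposed decoupling can be sketched with Brooks-Corey-type curves in which the retention and relative-conductivity functions use separate pore-size distribution indices. The parameter values and the particular conductivity exponent below are illustrative assumptions, not the paper's calibrated values.

```python
def brooks_corey(h, h_b=0.35, lam_ret=2.0, lam_k=0.5):
    """Effective saturation Se and relative conductivity kr at capillary head h (m).

    Standard Brooks-Corey uses one pore-size index for both curves; here the
    retention curve uses lam_ret while a Burdine-type conductivity exponent
    uses a separate lam_k, loosely following the decoupling idea in the text.
    """
    if h <= h_b:                          # at or below the air-entry head: saturated
        return 1.0, 1.0
    se = (h_b / h) ** lam_ret             # retention (effective saturation) curve
    kr = (h_b / h) ** (2.0 + 3.0 * lam_k) # relative hydraulic conductivity curve
    return se, kr
```

With a single shared index the two curves are rigidly linked; giving each its own index lets a model match both the soil-moisture profile and the drawdown response, which is the motivation stated in the abstract.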

  4. Incorporating Anthropogenic Influences into Fire Probability Models: Effects of Human Activity and Climate Change on Fire Activity in California.

    PubMed

    Mann, Michael L; Batllori, Enric; Moritz, Max A; Waller, Eric K; Berck, Peter; Flint, Alan L; Flint, Lorraine E; Dolfi, Emmalee

    2016-01-01

    The costly interactions between humans and wildfires throughout California demonstrate the need to understand the relationships between them, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires, with previously published estimates of increases ranging from nine to fifty-three percent by the end of the century. Our goal is to assess the role of climate and anthropogenic influences on the state's fire regimes from 1975 to 2050. We develop an empirical model that integrates estimates of biophysical indicators relevant to plant communities and anthropogenic influences at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of explanatory power in the model. We also find that the total area burned is likely to increase, with burned area expected to increase by 2.2 and 5.0 percent by 2050 under climatic bookends (PCM and GFDL climate models, respectively). Our two climate models show considerable agreement, but due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid inland deserts and coastal areas of the south. Given the strength of human-related variables in some regions, however, it is clear that comprehensive projections of future fire activity should include both anthropogenic and biophysical influences. Previous findings of substantially increased numbers of fires and burned area for California may be tied to omitted variable bias from the exclusion of human influences. The omission of anthropogenic variables in our model would overstate the importance of climatic ones by at least 24%. As such, the failure to include anthropogenic effects in many models likely overstates the response of wildfire to climatic change. PMID:27124597
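The omitted-variable effect described here is easy to reproduce with a toy regression: when a human-influence covariate that is correlated with climate is dropped, the climate coefficient absorbs part of its effect. All coefficients and the correlation structure below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

climate = rng.normal(0.0, 1.0, n)
human = 0.5 * climate + rng.normal(0.0, 1.0, n)   # human influence correlated with climate
burned = 1.0 * climate + 1.0 * human + rng.normal(0.0, 0.5, n)

# Fit with and without the anthropogenic covariate (intercept included).
X_full = np.column_stack([climate, human, np.ones(n)])
X_omit = np.column_stack([climate, np.ones(n)])
beta_full, *_ = np.linalg.lstsq(X_full, burned, rcond=None)
beta_omit, *_ = np.linalg.lstsq(X_omit, burned, rcond=None)
```

The omitted-variable regression inflates the climate coefficient toward 1.5 (the true 1.0 plus the correlated human effect), mirroring the abstract's point that excluding human influences overstates the apparent response of wildfire to climate.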

  5. Development of a New Analog Test System Capable of Modeling Tectonic Deformation Incorporating the Effects of Pore Fluid Pressure

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Nakajima, H.; Takeda, M.; Aung, T. T.

    2005-12-01

    Understanding and predicting the tectonic deformation within geologic strata has been a very important research subject in many fields such as structural geology and petroleum geology. In recent years, such research has also become a fundamental necessity for the assessment of active fault migration, site selection for geological disposal of radioactive nuclear waste, and exploration for methane hydrate. Although analog modeling techniques have played an important role in the elucidation of tectonic deformation mechanisms, traditional approaches have typically used dry materials and ignored the effects of pore fluid pressure. In order for analog models to properly depict the tectonic deformation of the targeted, large-prototype system within a small laboratory-scale configuration, physical properties of the models, including geometry, force, and time, must be correctly scaled. Model materials representing brittle rock behavior require an internal friction identical to that of the prototype rock and virtually zero cohesion. Dry granular materials such as sand, glass beads, or steel beads have been preferred for this reason, in addition to their availability and ease of handling. Modeling protocols for dry granular materials have been well established, but such model tests cannot account for pore fluid effects. Although the concept of effective stress has long been recognized and the role of pore-fluid pressure in tectonic deformation processes is evident, there have been few analog model studies that consider the effects of pore fluid movement. Some new applications require a thorough understanding of the coupled deformation and fluid flow processes within the strata. Taking the field of waste management as an example, deep geological disposal of radioactive waste has been thought to be an appropriate methodology for the safe isolation of the wastes from the human environment until the toxicity of the wastes decays to non-hazardous levels. For the

  6. Incorporating positive body image into the treatment of eating disorders: A model for attunement and mindful self-care.

    PubMed

    Cook-Cottone, Catherine P

    2015-06-01

    This article provides a model for understanding the role positive body image can play in the treatment of eating disorders and methods for guiding patients away from symptoms and toward flourishing. The Attuned Representational Model of Self (Cook-Cottone, 2006) and a conceptual model detailing flourishing in the context of body image and eating behavior (Cook-Cottone et al., 2013) are discussed. The flourishing inherent in positive body image comes hand-in-hand with two critical ways of being: (a) having healthy, embodied awareness of the internal and external aspects of self (i.e., attunement) and (b) engaging in mindful self-care. Attunement and mindful self-care thus are considered as potential targets of actionable therapeutic work in the cultivation of positive body image among those with disordered eating. For context, best-practices in eating disorder treatment are also reviewed. Limitations in current research are detailed and directions for future research are explicated. PMID:25886712

  7. A cortical folding model incorporating stress-dependent growth explains gyral wavelengths and stress patterns in the developing brain

    PubMed Central

    Bayly, PV; Okamoto, RJ; Xu, G.; Shi, Y; Taber, LA

    2013-01-01

    In humans and many other mammals, the cortex (the outer layer of the brain) folds during development. The mechanics of folding are not well understood; leading explanations are either incomplete or at odds with physical measurements. We propose a mathematical model in which (i) folding is driven by tangential expansion of the cortex and (ii) deeper layers grow in response to the resulting stress. In this model the wavelength of cortical folds depends predictably on the rate of cortical growth relative to the rate of stress-induced growth. We show analytically and in simulations that faster cortical expansion leads to shorter gyral wavelengths; slower cortical expansion leads to long wavelengths or even smooth (lissencephalic) surfaces. No inner or outer (skull) constraint is needed to produce folding, but initial shape and mechanical heterogeneity influence the final shape. The proposed model predicts patterns of stress in the tissue that are consistent with experimental observations. PMID:23357794

  8. A production-inventory model with permissible delay incorporating learning effect in random planning horizon using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kar, Mohuya B.; Bera, Shankar; Das, Debasis; Kar, Samarjit

    2015-10-01

    This paper presents a production-inventory model for deteriorating items with stock-dependent demand under inflation in a random planning horizon. The supplier offers the retailer fully permissible delay in payment. It is assumed that the time horizon of the business period is random in nature and follows an exponential distribution with a known mean. Learning effects are also introduced for the production cost and setup cost. The model is formulated as a profit maximization problem with respect to the retailer and solved with the help of a genetic algorithm (GA) and particle swarm optimization (PSO). Moreover, the convergence of the two methods is studied against generation number, and it is seen that GA converges more rapidly than PSO. The optimum results from the two methods are compared both numerically and graphically, and it is observed that the performance of GA is marginally better than that of PSO. We provide several numerical examples and sensitivity analyses to illustrate the model.
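The GA side of the comparison can be sketched with a minimal real-coded genetic algorithm maximizing a one-variable profit function. The operators and parameter values below are generic textbook choices, not the paper's implementation, and the profit function is a stand-in for the model's objective.

```python
import numpy as np

def ga_maximize(f, bounds, pop_size=40, gens=100, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        fit = np.array([f(x) for x in pop])
        elite = pop[np.argmax(fit)]                    # best individual so far
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where(fit[i] > fit[j], pop[i], pop[j])   # tournament selection
        partners = rng.permutation(parents)
        w = rng.uniform(0.0, 1.0, pop_size)
        children = w * parents + (1.0 - w) * partners  # blend crossover
        children += rng.normal(0.0, 0.1 * (hi - lo), pop_size)  # Gaussian mutation
        pop = np.clip(children, lo, hi)
        pop[0] = elite                                 # elitism keeps the best solution
    fit = np.array([f(x) for x in pop])
    return pop[np.argmax(fit)], fit.max()
```

A PSO implementation would differ only in how candidates are updated (velocity terms pulling toward personal and global bests), which is what the paper's convergence-versus-generation comparison contrasts.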

  9. Incorporating 3-D Subsurface Hydrologic Processes within the Community Land Surface Model (CLM): Coupling PFLOTRAN and CLM

    NASA Astrophysics Data System (ADS)

    Bisht, G.; Mills, R. T.; Hoffman, F. M.; Thornton, P. E.; Lichtner, P. C.; Hammond, G. E.

    2011-12-01

    Numerous studies have shown a positive soil moisture-rainfall feedback through observational data as well as modeling studies. Spatial variability of topography, soils, and vegetation plays a significant role in determining the response of land surface states (soil moisture) and fluxes (runoff, evapotranspiration), but their explicit accounting within Land Surface Models (LSMs) is computationally expensive. Additionally, anthropogenic climate change is altering the hydrologic cycle at global and regional scales. Characterizing the sensitivity of groundwater recharge is critical for understanding the effects of climate change on water resources. In order to explicitly represent lateral redistribution of soil moisture and a unified treatment of the unsaturated-saturated zone in the subsurface within the CLM, we propose coupling PFLOTRAN and CLM. PFLOTRAN is a parallel multiphase-multicomponent subsurface reactive flow and transport code for modeling subsurface processes and has been developed under a DOE SciDAC-2 project. PFLOTRAN is written in Fortran 90 using a modular, object-oriented approach. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). The PFLOTRAN model is capable of simulating fluid flow through porous media with fluid phases of air, water, and supercritical CO2. PFLOTRAN has been successfully employed on up to 131,072 cores on Jaguar, the massively parallel Cray XT4/XT5 at ORNL, for problems composed of up to 2 billion degrees of freedom. In this work, we will present a strategy for coupling the two models, CLM and PFLOTRAN, along with preliminary results obtained from the coupled model.

  10. Incorporation of modified dynamic inverse Jiles-Atherton model in finite volume time domain for nonlinear electromagnetic field computation

    NASA Astrophysics Data System (ADS)

    Hamimid, M.; Mimoune, S. M.; Feliachi, M.

    2013-01-01

    In this paper, a time-stepping finite volume method (FVM) associated with the modified inverse Jiles-Atherton model for nonlinear electromagnetic field computation is presented. To describe the dynamic behavior in the conducting media, the effective field is modified by adding two counter-fields associated respectively with the eddy-current and excess losses. The hysteresis loss can be estimated by integration over the obtained hysteresis loop at each frequency. To examine the validity of the proposed dynamic model coupled with FVM, the computed total losses and hysteresis loops are compared with experiments.
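    The hysteresis-loss step described above amounts to integrating H dB around the simulated loop. A minimal sketch of that integration for a sampled, closed B-H loop (a synthetic ellipse is used for checking; this is not the authors' FVM code):

```python
import numpy as np

def loop_loss_density(H, B):
    """Energy lost per cycle and unit volume, W = closed-loop integral
    of H dB, computed as the area enclosed by a sampled B-H loop
    (shoelace formula). H in A/m, B in T -> result in J/m^3."""
    H = np.asarray(H, dtype=float)
    B = np.asarray(B, dtype=float)
    return 0.5 * abs(np.dot(H, np.roll(B, -1)) - np.dot(np.roll(H, -1), B))

# Synthetic elliptical loop; its exact enclosed area is pi * H0 * B0 * sin(phi).
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
H0, B0, phi = 100.0, 1.2, 0.3
H = H0 * np.cos(t)
B = B0 * np.cos(t - phi)
print(loop_loss_density(H, B))  # close to pi * 100 * 1.2 * sin(0.3) = 111.4 J/m^3
```

    Total loss at a given frequency would then be this loop area times frequency and core volume.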

  11. Incorporating cold-air pooling into downscaled climate models increases potential refugia for snow-dependent species within the Sierra Nevada Ecoregion, CA

    USGS Publications Warehouse

    Curtis, Jennifer A.; Flint, Lorraine E.; Flint, Alan L.; Lundquist, Jessica D.; Hudgens, Brian; Boydston, Erin E.; Young, Julie K.

    2014-01-01

    We present a unique water-balance approach for modeling snowpack under historic, current and future climates throughout the Sierra Nevada Ecoregion. Our methodology uses a finer scale (270 m) than previous regional studies and incorporates cold-air pooling, an atmospheric process that sustains cooler temperatures in topographic depressions, thereby mitigating snowmelt. Our results are intended to support management and conservation of snow-dependent species, which requires characterization of suitable habitat under current and future climates. We use the wolverine (Gulo gulo) as an example species and investigate potential habitat based on the depth and extent of spring snowpack within four National Park units with proposed wolverine reintroduction programs. Our estimates of change in spring snowpack conditions under current and future climates are consistent with recent studies that generally predict declining snowpack. However, model development at a finer scale and incorporation of cold-air pooling increased the persistence of April 1st snowpack. More specifically, incorporation of cold-air pooling into future climate projections increased April 1st snowpack by 6.5% when spatially averaged over the study region, and the trajectory of declining April 1st snowpack reverses at mid-elevations, where snowpack losses are mitigated by topographic shading and cold-air pooling. Under future climates with sustained or increased precipitation, our results indicate a high likelihood for the persistence of late spring snowpack at elevations above approximately 2,800 m and identify potential climate refugia sites for snow-dependent species at mid-elevations, where significant topographic shading and cold-air pooling potential exist.

  14. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    PubMed

    Malik, Rajat; Deardon, Rob; Kwong, Grace P S

    2016-01-01

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets. PMID:26731666
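    The approximation at the heart of the paper, replacing the exact infectious-pressure sum with a scaled random subsample, can be sketched as follows (a toy bounded distance kernel and simple random sampling; the paper's fitted ILMs and spatially stratified schemes are richer):

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(d):
    """A bounded spatial infection kernel (illustrative choice)."""
    return 1.0 / (1.0 + d ** 2)

def pressure_full(sus_xy, inf_xy):
    """Exact infectious pressure on each susceptible: sum of the
    kernel over every infectious individual."""
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[None, :, :], axis=2)
    return kernel(d).sum(axis=1)

def pressure_sampled(sus_xy, inf_xy, m, rng):
    """Approximate pressure from a simple random sample of m infectious
    individuals, scaled up by n/m (the plain SRS scheme)."""
    n = inf_xy.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[idx][None, :, :], axis=2)
    return kernel(d).sum(axis=1) * (n / m)

sus = rng.uniform(0, 10, size=(50, 2))     # susceptible coordinates
inf = rng.uniform(0, 10, size=(2000, 2))   # infectious coordinates
exact = pressure_full(sus, inf)
approx = pressure_sampled(sus, inf, m=200, rng=rng)
print(np.mean(np.abs(approx - exact) / exact))  # typical relative error
```

    Each likelihood evaluation then touches m rather than n infectious individuals, which is the source of the computational savings, at the cost of the information loss the authors quantify.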

  15. Development of a Quantitative Model Incorporating Key Events in a Hepatoxic Mode of Action to Predict Tumor Incidence

    EPA Science Inventory

    Biologically-Based Dose Response (BBDR) modeling of environmental pollutants can be utilized to inform the mode of action (MOA) by which compounds elicit adverse health effects. Chemicals that produce tumors are typically described as either genotoxic or non-genotoxic. One common...

  16. Implication of remotely sensed data to incorporate land cover effect into a linear reservoir-based rainfall-runoff model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study investigates the effect of land use on the Geomorphological Cascade of unequal Linear Reservoirs (GCUR) model. We use the Normalized Difference Vegetation Index (NDVI) derived from remotely sensed data as a measure of land use. Our approach has two important aspects: (i) it considers the ...

  18. Incorporating Modeling and Simulations in Undergraduate Biophysical Chemistry Course to Promote Understanding of Structure-Dynamics-Function Relationships in Proteins

    ERIC Educational Resources Information Center

    Hati, Sanchita; Bhattacharyya, Sudeep

    2016-01-01

    A project-based biophysical chemistry laboratory course, which is offered to the biochemistry and molecular biology majors in their senior year, is described. In this course, the classroom study of the structure-function of biomolecules is integrated with the discovery-guided laboratory study of these molecules using computer modeling and…

  19. Incorporating human activities into an earth system model of the Northeastern United States: socio-hydrology at the regional scale

    NASA Astrophysics Data System (ADS)

    Rosenzweig, B.; Vorosmarty, C. J.; Miara, A.; Stewart, R.; Wollheim, W. M.; Lu, X.; Kicklighter, D. W.; Ehsani, N.; Shikhmacheva, K.; Yang, P.

    2013-12-01

    The Northeastern United States is one of the most urbanized regions of the world, and its 70 million residents will be challenged by climate change as well as competing demands for land and water through the remainder of the 21st Century. The strategic management decisions made in the next few years will have major impacts on the region's future water resources, but planners have had limited quantitative information to support their decision-making. We have developed a Northeast Regional Earth System Model (NE-RESM), which allows for the testing of future scenarios of climate change, land use change and infrastructure management to better understand their implications for the region's water resources and ecosystem services. Human features of the water cycle - including thermoelectric power plants, wastewater treatment plants, interbasin transfers and changes in impervious cover with different patterns of urban development - are explicitly represented in our modeling. We are currently engaged in a novel, participatory scenario design process with regional stakeholders to ensure the policy relevancy of our modeling experiments. [Figure: The NE-RESM hydrologic modeling domain. Figure by Stanley Glidden and Rob Stewart.]

  20. Incorporation of Predictive Population Modeling into the AOP Famework: A Case Study with White Suckers Exposed to Pulp Effluent

    EPA Science Inventory

    A need in ecological risk assessment is the ability to create linkages between chemically-induced alterations at molecular and biochemical levels of organization with adverse outcomes in whole organisms and populations. A predictive model was developed to translate changes in th...

  1. Learning-Goals-Driven Design Model: Developing Curriculum Materials that Align with National Standards and Incorporate Project-Based Pedagogy

    ERIC Educational Resources Information Center

    Krajcik, Joseph; McNeill, Katherine L.; Reiser, Brian J.

    2008-01-01

    Reform efforts in science education emphasize the importance of rigorous treatment of science standards and use of innovative pedagogical approaches to make science more meaningful and successful. In this paper, we present a learning-goals-driven design model for developing curriculum materials, which combines national standards and a…

  2. Incorporating spatial patterns into a state and transition model for arid grasslands and shrublands in southern New Mexico

    Technology Transfer Automated Retrieval System (TEKTRAN)

    State and transition models synthesize and communicate information about alternative states in arid rangelands and other ecosystems but often do not adequately account for processes interacting across a range of temporal and spatial scales. Grassland to shrubland transitions have occurred as patchy ...

  3. Analysis of the gait generation principle by a simulated quadruped model with a CPG incorporating vestibular modulation.

    PubMed

    Fukuoka, Yasuhiro; Habu, Yasushi; Fukui, Takahiro

    2013-12-01

    This study aims to understand the principles of gait generation in a quadrupedal model. It is difficult to determine the essence of gait generation simply by observation of the movement of complicated animals composed of brains, nerves, muscles, etc. Therefore, we build a planar quadruped model with a simplified nervous system and mechanisms, in order to observe its gaits in simulation. The model is equipped with a mathematical central pattern generator (CPG), consisting of four coupled neural oscillators, basically producing a trot pattern. The model also contains sensory feedback to the CPG, measuring the body tilt (vestibular modulation). This spontaneously gives rise to an unprogrammed lateral walk at low speeds and a transverse gallop while running, in addition to trotting at a medium speed. This is because the body oscillation exhibits a double peak per leg frequency at low speeds, no peak (little oscillation) at medium speeds, and a single peak while running. The body oscillation autonomously adjusts the phase differences between the neural oscillators via the feedback. We assume that the oscillations of the four legs produced by the CPG and the body oscillation varying according to the current speed are synchronized along with the varied phase differences to keep balance during locomotion through postural adaptation via the vestibular modulation, resulting in each gait. We succeeded in determining a single simple principle that accounts for gait transition from walking to trotting to galloping, even without brain control, complicated leg mechanisms, or a flexible trunk. PMID:24132783
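    The CPG core, four coupled oscillators settling into a trot pattern, can be sketched with Kuramoto-style phase oscillators (a simplified stand-in for the authors' neural-oscillator CPG; the scalar tilt term only gestures at vestibular modulation):

```python
import numpy as np

# Leg order: LF, RF, LH, RH. Trot target: diagonal pairs in phase,
# left-right pairs in antiphase.
TROT = np.array([0.0, np.pi, np.pi, 0.0])

def step_cpg(phases, omega, k, target, dt, tilt_feedback=0.0):
    """One Euler step of four coupled phase oscillators. Each oscillator
    is pulled toward the target phase offsets; the scalar tilt_feedback
    stands in for the body-tilt (vestibular) signal."""
    dphi = np.full(4, omega + tilt_feedback)
    for i in range(4):
        for j in range(4):
            dphi[i] += k * np.sin(phases[j] - phases[i] - (target[j] - target[i]))
    return phases + dt * dphi

phases = np.array([0.1, 0.5, 2.0, 5.0])   # arbitrary initial phases
for _ in range(5000):                     # 5 s at dt = 1 ms
    phases = step_cpg(phases, omega=2 * np.pi, k=2.0, target=TROT, dt=0.001)
# Cosine of each leg's phase relative to LF: +1 = in phase, -1 = antiphase.
print(np.round(np.cos(phases - phases[0]), 3))
```

    From an arbitrary start the relative phases relax to the trot pattern; in the paper, the body-tilt feedback reshapes these phase relations with speed, which is what produces the walk and gallop.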

  4. Incorporating Cancer Stem Cells in Radiation Therapy Treatment Response Modeling and the Implication in Glioblastoma Multiforme Treatment Resistance

    SciTech Connect

    Yu, Victoria Y.; Nguyen, Dan; Pajonk, Frank; Kupelian, Patrick; Kaprealian, Tania; Selch, Michael; Low, Daniel A.; Sheng, Ke

    2015-03-15

    Purpose: To perform a preliminary exploration with a simplistic mathematical cancer stem cell (CSC) interaction model to determine whether the tumor-intrinsic heterogeneity and dynamic equilibrium between CSCs and differentiated cancer cells (DCCs) can better explain radiation therapy treatment response with a dual-compartment linear-quadratic (DLQ) model. Methods and Materials: The radiosensitivity parameters of CSCs and DCCs for cancer cell lines including glioblastoma multiforme (GBM), non–small cell lung cancer, melanoma, osteosarcoma, and prostate, cervical, and breast cancer were determined by performing robust least-square fitting using the DLQ model on published clonogenic survival data. Fitting performance was compared with the single-compartment LQ (SLQ) and universal survival curve models. The fitting results were then used in an ordinary differential equation describing the kinetics of DCCs and CSCs in response to 2- to 14.3-Gy fractionated treatments. The total dose to achieve tumor control and the fraction size that achieved the least normal biological equivalent dose were calculated. Results: Smaller cell survival fitting errors were observed using DLQ, with the exception of melanoma, which had a low α/β = 0.16 in SLQ. Ordinary differential equation simulation indicated lower normal tissue biological equivalent dose to achieve the same tumor control with a hypofractionated approach for 4 cell lines for the DLQ model, in contrast to SLQ, which favored 2 Gy per fraction for all cells except melanoma. The DLQ model indicated greater tumor radioresistance than SLQ, but the radioresistance was overcome by hypofractionation, other than the GBM cells, which responded poorly to all fractionations. Conclusion: The distinct radiosensitivity and dynamics between CSCs and DCCs in radiation therapy response could perhaps be one possible explanation for the heterogeneous intertumor response to hypofractionation and in some cases superior outcome from
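    The dual-compartment idea reduces to a weighted sum of two LQ survival curves, one for each cell population. A minimal sketch (illustrative radiosensitivity parameters, not the paper's fitted values):

```python
import numpy as np

def surv_slq(D, alpha, beta):
    """Single-compartment linear-quadratic (LQ) survival fraction."""
    return np.exp(-alpha * D - beta * D ** 2)

def surv_dlq(D, f_csc, a_csc, b_csc, a_dcc, b_dcc):
    """Dual-compartment LQ: population survival is the content-weighted
    sum of a radioresistant CSC compartment and a sensitive DCC
    compartment. Parameter values below are illustrative, not fits."""
    return (f_csc * surv_slq(D, a_csc, b_csc)
            + (1.0 - f_csc) * surv_slq(D, a_dcc, b_dcc))

doses = np.array([2.0, 6.0, 10.0])
s = surv_dlq(doses, f_csc=0.01, a_csc=0.05, b_csc=0.005, a_dcc=0.3, b_dcc=0.03)
print(s)  # survival falls with dose; the resistant CSC tail dominates at 10 Gy
```

    The characteristic DLQ shape is the flattened high-dose tail: once the sensitive DCCs are depleted, the small resistant CSC fraction controls survival, which is why large fractions can pay off for some cell lines.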

  5. Imaging the lithosphere beneath NE Tibet: teleseismic P and S body wave tomography incorporating surface wave starting models

    NASA Astrophysics Data System (ADS)

    Nunn, Ceri; Roecker, Steven W.; Tilmann, Frederik J.; Priestley, Keith F.; Heyburn, Ross; Sandvol, Eric A.; Ni, James F.; Chen, Yongshun John; Zhao, Wenjin; Team, The Indepth

    2014-03-01

    The northeastern margin of the Tibetan Plateau, which includes the Qiangtang and Songpan-Ganzi terranes as well as the Kunlun Shan and the Qaidam Basin, continues to deform in response to the ongoing India-Eurasia collision. To test competing hypotheses concerning the mechanisms for this deformation, we assembled a high-quality data set of approximately 14 000 P- and 4000 S-wave arrival times from earthquakes at teleseismic distances from the International Deep Profiling of Tibet and the Himalaya, Phase IV broad-band seismometer deployments. We analyse these arrival times to determine tomographic images of P- and S-wave velocities in the upper mantle beneath this part of the plateau. To account for the effects of major heterogeneity in crustal and uppermost mantle wave velocities in Tibet, we use recent surface wave models to construct a starting model for our teleseismic body wave inversion. We compare the results from our model with those from simpler starting models, and find that while the reduction in residuals and results for deep structure are similar between models, the results for shallow structure are different. Checkerboard tests indicate that features of ˜125 km length scale are reliably imaged throughout the study region. Using synthetic tests, we show that the best recovery is below ˜300 km, and that broad variations in shallow structure can also be recovered. We also find that significant smearing can occur, especially at the edges of the model. We observe a shallow dipping seismically fast structure at depths of ˜140-240 km, which dies out gradually between 33°N and 35°N. Based on the lateral continuity of this structure (from the surface waves) we interpret it as Indian lithosphere. Alternatively, the entire area could be thickened by pure shear, or the northern part could be an underthrust Lhasa Terrane lithospheric slab with only the southern part from India. We see a deep fast wave velocity anomaly (below 300 km), that is consistent with

  6. Incorporating the Role of Nitrogen in the Noah-MP Land Surface Model for Climate and Environmental Studies

    NASA Astrophysics Data System (ADS)

    Cai, X.; Yang, Z. L.; Fisher, J. B.

    2014-12-01

    Noah-MP (Niu et al., 2011; Yang et al., 2011) is the next generation land surface model for the Weather Research and Forecasting (WRF) meteorological model and the Climate Forecast Systems in the National Centers for Environmental Prediction. While Noah-MP does not currently contain a dynamic nitrogen cycle, this can readily be updated with the interactive vegetation canopy option. In this study, Noah-MP is coupled with the Fixation & Uptake of Nitrogen (FUN) model (Fisher et al., 2010) for the above ground processes and the soil nitrogen model from the Soil and Water Assessment Tool (SWAT) for the below ground processes. This combines FUN's state-of-the-art concept of the carbon cost theory and SWAT's strength in representing the anthropogenic effects on the nitrogen cycle. The processes employed from FUN are the nitrogen uptake and fixation of plants, both of which are directly linked to the plant productivity. If passive nitrogen uptake cannot meet the nitrogen demand, plants have to spend part of the photosynthesized carbon production on nitrogen acquisition. The processes employed from SWAT are nitrogen mineralization, nitrification, immobilization, volatilization, atmospheric deposition, and leaching. In addition, the modified universal soil loss equation is used to more accurately account for the nitrogen removal in sediment caused by surface runoff. Because human input of nitrogen greatly changes the nitrogen cycle, a simple nitrogen fertilization approach is also applied to crops. Preliminary results show that Noah-MP is capable of simulating the dynamics of the major nitrogen pools. Further comprehensive evaluation of the new model will be conducted at one or more experimental sites.
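    The FUN-style accounting described above can be caricatured in a few lines: a carbon price per gram of actively acquired nitrogen, paid out of potential NPP. The linear cost and all parameter values are illustrative assumptions, not FUN's actual cost functions:

```python
def allocate_n(npp_potential, cn_ratio, passive_n, cost_per_n=20.0):
    """Toy version of FUN's carbon-cost idea: the plant needs
    npp_potential / cn_ratio grams of N; any demand not met by passive
    uptake is acquired actively at a fixed carbon cost (gC per gN),
    which is paid out of NPP. All numbers are illustrative."""
    demand = npp_potential / cn_ratio
    deficit = max(demand - passive_n, 0.0)
    carbon_spent = cost_per_n * deficit
    return npp_potential - carbon_spent, carbon_spent

# 1000 gC potential NPP at C:N = 50 -> 20 gN demand; 15 gN arrive passively.
npp, cost = allocate_n(1000.0, 50.0, 15.0)
print(npp, cost)  # -> 900.0 100.0
```

    In FUN proper the cost varies with soil N, root biomass and acquisition pathway (uptake, fixation, retranslocation), but the budget logic, carbon spent on N reducing realized productivity, is the same.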

  7. Modeling the contribution of individual proteins to mixed skeletal muscle protein synthetic rates over increasing periods of label incorporation

    PubMed Central

    Wolff, Christopher A.; Peelor, Fredrick F.; Shipman, Patrick D.; Hamilton, Karyn L.

    2015-01-01

    Advances in stable isotope approaches, primarily the use of deuterium oxide (2H2O), allow for long-term measurements of protein synthesis, as well as the contribution of individual proteins to tissue measured protein synthesis rates. Here, we determined the influence of individual protein synthetic rates, individual protein content, and time of isotopic labeling on the measured synthesis rate of skeletal muscle proteins. To this end, we developed a mathematical model, applied the model to an established data set collected in vivo, and, to experimentally test the impact of different isotopic labeling periods, used 2H2O to measure protein synthesis in cultured myotubes over periods of 2, 4, and 7 days. We first demonstrated the influence of both relative protein content and individual protein synthesis rates on measured synthesis rates over time. When expanded to include 286 individual proteins, the model closely approximated protein synthetic rates measured in vivo. The model revealed a 29% difference in measured synthesis rates from the slowest period of measurement (20 min) to the longest period of measurement (6 wk). In support of these findings, culturing of C2C12 myotubes with isotopic labeling periods of 2, 4, or 7 days revealed up to a doubling of the measured synthesis rate in the shorter labeling period compared with the longer period of labeling. From our model, we conclude that a 4-wk period of labeling is ideal for considering all proteins in a mixed-tissue fraction, while minimizing the slowing effect of fully turned-over proteins. In addition, we advocate that careful consideration must be paid to the period of isotopic labeling when comparing mixed protein synthetic rates between studies. PMID:25593288
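    The model's central effect, that the measured mixed-tissue rate depends on the labeling period, can be reproduced with a two-protein toy pool: the apparent single-pool rate declines as fast-turnover proteins saturate (a sketch under stated assumptions, not the authors' 286-protein model):

```python
import numpy as np

def mixed_fraction_new(t, weights, rates):
    """Fraction of label incorporated in a mixed protein pool: the
    content-weighted sum of first-order incorporation curves."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * (1.0 - np.exp(-np.asarray(rates) * t))))

def apparent_rate(t, weights, rates):
    """Single-pool rate constant that reproduces the mixed signal at
    labeling time t, i.e. the 'measured' mixed synthesis rate."""
    f = mixed_fraction_new(t, weights, rates)
    return -np.log(1.0 - f) / t

# Toy pool: a fast protein (k = 0.5/day) and a slow one (k = 0.02/day),
# in equal amounts.
w, k = [0.5, 0.5], [0.5, 0.02]
for days in (2, 4, 7):
    print(days, round(apparent_rate(days, w, k), 4))
```

    With these (illustrative) rate constants the 2-day apparent rate is nearly double the 7-day rate, mirroring the C2C12 labeling-period comparison in the abstract.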

  8. Incorporation of crop phenology in Simple Biosphere Model (SiBcrop) to improve land-atmosphere carbon exchanges from croplands

    NASA Astrophysics Data System (ADS)

    Lokupitiya, E.; Denning, S.; Paustian, K.; Baker, I.; Schaefer, K.; Verma, S.; Meyers, T.; Bernacchi, C. J.; Suyker, A.; Fischer, M.

    2009-06-01

    Croplands are man-made ecosystems that have high net primary productivity during the growing season of crops, thus impacting carbon and other exchanges with the atmosphere. These exchanges play a major role in nutrient cycling and climate change related issues. An accurate representation of crop phenology and physiology is important in land-atmosphere carbon models being used to predict these exchanges. To better estimate time-varying exchanges of carbon, water, and energy of croplands using the Simple Biosphere (SiB) model, we developed crop-specific phenology models and coupled them to SiB. The coupled SiB-phenology model (SiBcrop) replaces remotely-sensed NDVI information, on which SiB originally relied for deriving Leaf Area Index (LAI) and the fraction of Photosynthetically Active Radiation (fPAR) for estimating carbon dynamics. The use of the new phenology scheme within SiB substantially improved the prediction of LAI and carbon fluxes for maize, soybean, and wheat crops, as compared with the observed data at several AmeriFlux eddy covariance flux tower sites in the US mid continent region. SiBcrop better predicted the onset and end of the growing season, harvest, interannual variability associated with crop rotation, day time carbon uptake (especially for maize) and day to day variability in carbon exchange. Biomass predicted by SiBcrop had good agreement with the observed biomass at field sites. In the future, we will predict fine resolution regional scale carbon and other exchanges by coupling SiBcrop with RAMS (the Regional Atmospheric Modeling System).
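    As an illustration of what a prognostic crop phenology (in place of prescribed NDVI-derived LAI) can look like, here is a toy growing-degree-day scheme; the thresholds and the triangular LAI ramp are illustrative assumptions, not SiBcrop's calibrated phenology:

```python
def gdd_lai(daily_tmean, t_base=10.0, gdd_emerge=100.0,
            gdd_peak=900.0, gdd_mature=1600.0, lai_max=5.0):
    """Toy growing-degree-day (GDD) phenology: LAI ramps up from
    emergence to a peak, then senesces to zero at maturity.
    Thresholds are illustrative, not SiBcrop's calibrated values."""
    gdd, lai_series = 0.0, []
    for t in daily_tmean:
        gdd += max(t - t_base, 0.0)       # accumulate heat units
        if gdd < gdd_emerge:
            lai = 0.0                      # pre-emergence
        elif gdd < gdd_peak:
            lai = lai_max * (gdd - gdd_emerge) / (gdd_peak - gdd_emerge)
        elif gdd < gdd_mature:
            lai = lai_max * (gdd_mature - gdd) / (gdd_mature - gdd_peak)
        else:
            lai = 0.0                      # post-harvest
        lai_series.append(lai)
    return lai_series

# A 150-day season with a constant 22 C daily mean temperature.
lai = gdd_lai([22.0] * 150)
print(max(lai))
```

    The predicted LAI then drives fPAR and photosynthesis inside the land surface model, rather than being read from a satellite NDVI climatology.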

  10. Implications of incorporating N cycling and N limitations on primary production in an individual-based dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Smith, B.; Wårlind, D.; Arneth, A.; Hickler, T.; Leadley, P.; Siltberg, J.; Zaehle, S.

    2013-11-01

    The LPJ-GUESS dynamic vegetation model uniquely combines an individual- and patch-based representation of vegetation dynamics with ecosystem biogeochemical cycling from regional to global scales. We present an updated version that includes plant and soil N dynamics, analysing the implications of accounting for C-N interactions on predictions and performance of the model. Stand structural dynamics and allometric scaling of tree growth suggested by global databases of forest stand structure and development were well-reproduced by the model in comparison to an earlier multi-model study. Accounting for N cycle dynamics improved the goodness-of-fit for broadleaved forests. N limitation associated with low N mineralisation rates reduces productivity of cold-climate and dry-climate ecosystems relative to mesic temperate and tropical ecosystems. In a model experiment emulating free-air CO2 enrichment (FACE) treatment for forests globally, N-limitation associated with low N mineralisation rates of colder soils reduces CO2-enhancement of NPP for boreal forests, while some temperate and tropical forests exhibit increased NPP enhancement. Under a business-as-usual future climate and emissions scenario, ecosystem C storage globally was projected to increase by c. 10%; additional N requirements to match this increasing ecosystem C were within the high N supply limit estimated on stoichiometric grounds in an earlier study. Our results highlight the importance of accounting for C-N interactions not only in studies of global terrestrial C cycling, but also as a basis for understanding underlying mechanisms on local scales and in different regional contexts.

  12. A multi-scale model of dislocation plasticity in α-Fe: Incorporating temperature, strain rate and non-Schmid effects

    SciTech Connect

    Lim, H.; Hale, L. M.; Zimmerman, J. A.; Battaile, C. C.; Weinberger, C. R.

    2015-01-05

    In this study, we develop an atomistically informed crystal plasticity finite