Using Models that Incorporate Uncertainty
ERIC Educational Resources Information Center
Caulkins, Jonathan P.
2002-01-01
In this article, the author discusses the use in policy analysis of models that incorporate uncertainty. He believes that all models should consider incorporating uncertainty, but that at the same time it is important to understand that sampling variability is not usually the dominant driver of uncertainty in policy analyses. He also argues that…
Incorporating opponent models into adversary search
Carmel, D.; Markovitch, S.
1996-12-31
This work presents a generalized theoretical framework that allows incorporation of opponent models into adversary search. We present the M* algorithm, a generalization of minimax that uses an arbitrary opponent model to simulate the opponent's search. The opponent model is a recursive structure consisting of the opponent's evaluation function and its model of the player. We demonstrate experimentally the potential benefit of using an opponent model. Pruning in M* is impossible in the general case. We prove a sufficient condition for pruning and present the αβ* algorithm which returns the M* value of a tree while searching only necessary branches.
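The core idea of simulating the opponent's search with its own evaluation function can be sketched in a few lines. The toy two-ply game tree and both evaluation tables below are illustrative assumptions, not the paper's experimental setup, and the sketch omits the αβ*-style pruning:

```python
# Toy game tree: internal nodes map to child lists; leaves have no children.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
    "a1": [], "a2": [], "b1": [], "b2": [],
}
MY_EVAL = {"a1": 3, "a2": -1, "b1": 0, "b2": 5}   # leaf values from our view
OPP_EVAL = {"a1": 5, "a2": 1, "b1": 6, "b2": 1}   # ...from the modelled opponent's view

def minimax(node, eval_fn, maximizing):
    """Plain minimax; also used to simulate the opponent's own search."""
    children = TREE[node]
    if not children:
        return eval_fn[node], None
    best = None
    for child in children:
        value, _ = minimax(child, eval_fn, not maximizing)
        if best is None or (maximizing and value > best[0]) \
                or (not maximizing and value < best[0]):
            best = (value, child)
    return best

def m_star(node, my_turn):
    """Opponent-model search: at opponent nodes, follow the move the
    opponent model (OPP_EVAL) predicts rather than the minimax worst case."""
    children = TREE[node]
    if not children:
        return MY_EVAL[node]
    if my_turn:
        return max(m_star(child, False) for child in children)
    _, predicted = minimax(node, OPP_EVAL, True)   # opponent maximises its own view
    return m_star(predicted, True)
```

On this tree, exploiting the opponent model yields a better guaranteed value than the pessimistic minimax assumption that the opponent minimises our evaluation.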
Incorporating interfacial phenomena in solidification models
NASA Technical Reports Server (NTRS)
Beckermann, Christoph; Wang, Chao Yang
1994-01-01
A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.
Incorporation of RAM techniques into simulation modeling
NASA Astrophysics Data System (ADS)
Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.
1995-01-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have an impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
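The task-based structure described above can be sketched as a small Monte Carlo simulation: each task carries a nominal duration, a failure probability, and a repair delay, and mission statistics are collected over many replications. Task names echo the mission steps in the abstract, but every duration and rate below is an invented illustrative number, not AFAS/FARV data:

```python
import random

# Illustrative task list: (name, duration_h, failure_probability, repair_h).
TASKS = [
    ("upload",              1.0, 0.02, 2.0),
    ("travel_to_AFAS",      0.5, 0.05, 1.5),
    ("refuel",              0.3, 0.01, 1.0),
    ("tactical_move",       0.4, 0.03, 1.2),
    ("return_to_resupply",  0.5, 0.05, 1.5),
]

def run_mission(rng):
    """Walk the task sequence once, drawing stochastic failures."""
    time, failures = 0.0, 0
    for _name, duration, p_fail, repair in TASKS:
        time += duration
        if rng.random() < p_fail:   # task fails on this pass
            failures += 1
            time += repair          # repair before continuing the mission
    return time, failures

def simulate(n=10_000, seed=0):
    """Monte Carlo RAM statistics over n mission replications."""
    rng = random.Random(seed)
    runs = [run_mission(rng) for _ in range(n)]
    mean_time = sum(t for t, _ in runs) / n
    p_any_failure = sum(1 for _, f in runs if f > 0) / n
    return mean_time, p_any_failure

mean_time, p_any_failure = simulate()
```

'What if' questions are then sensitivity studies on the task table: change a failure rate or repair time and rerun.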
Incorporating neurophysiological concepts in mathematical thermoregulation models
NASA Astrophysics Data System (ADS)
Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.
2014-01-01
Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist. However, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified by the root-mean-squared residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core and skin temperature recordings of young males during three transient thermal challenges (1 development and 2 validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF with an accuracy of RMSR < 0.27. Skin temperature simulation results were within 0.37 °C of the measured mean skin temperature. This study shows that (1) thermal reception and neurophysiological pathways involved in thermoregulatory SBF control can be captured in a mathematical model, and (2) human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach to modelling thermoregulation is preferable to engineering approaches because it is more in line with the underlying physiology.
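The RMSR fit metric used above is straightforward to compute; the temperature values below are illustrative, not the study's data:

```python
import math

def rmsr(simulated, measured):
    """Root-mean-squared residual between paired simulated and measured values."""
    res = [s - m for s, m in zip(simulated, measured)]
    return math.sqrt(sum(r * r for r in res) / len(res))

# Illustrative mean skin temperatures (deg C).
t_sim = [33.9, 34.2, 33.5, 32.8]
t_obs = [34.1, 34.0, 33.8, 33.0]
fit = rmsr(t_sim, t_obs)
```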
Incorporation of salinity in Water Availability Modeling
NASA Astrophysics Data System (ADS)
Wurbs, Ralph A.; Lee, Chihun
2011-10-01
Natural salt pollution from geologic formations in the upper watersheds of several large river basins in the Southwestern United States severely constrains the use of otherwise available major water supply sources. The Water Rights Analysis Package modeling system has been routinely applied in Texas since the late 1990s in regional and statewide planning studies and administration of the state's water rights permit system, but without consideration of water quality. The modeling system was recently expanded to incorporate salinity considerations in assessments of river/reservoir system capabilities for supplying water for environmental, municipal, agricultural, and industrial needs. Salinity loads and concentrations are tracked through systems of river reaches and reservoirs to develop concentration frequency statistics that augment flow frequency and water supply reliability metrics at pertinent locations for alternative water management strategies. Flexible generalized capabilities are developed for using limited observed salinity data to model highly variable concentrations imposed upon complex river regulation infrastructure and institutional water allocation/management practices.
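At its core, tracking salinity loads down a river system is a mass balance: salt mass and water volume accumulate reach by reach, and concentration is their ratio. This toy routine is a sketch of that bookkeeping, not the Water Rights Analysis Package itself; reach inputs and units are illustrative:

```python
def route_salinity(reaches, inflow_load=0.0, inflow_flow=0.0):
    """Track salt load and flow down a chain of reaches; each reach contributes
    (local_load, local_flow). With load in kg/day and flow in ML/day the
    resulting concentration is in mg/L."""
    load, flow = inflow_load, inflow_flow
    concs = []
    for local_load, local_flow in reaches:
        load += local_load            # salt mass accumulates downstream
        flow += local_flow            # as does water volume
        concs.append(load / flow)
    return concs

# Headwater brine reach, then a fresh tributary, then a saline tributary.
concs = route_salinity([(800.0, 40.0), (0.0, 40.0), (200.0, 20.0)])
```

Repeating such a pass over many hydrologic sequences is what yields the concentration frequency statistics mentioned above.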
Incorporating process variability into stormwater quality modelling.
Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha
2015-11-15
Process variability in pollutant build-up and wash-off generates inherent uncertainty that affects the outcomes of stormwater quality models. Poor characterisation of process variability constrains the accurate accounting of the uncertainty associated with pollutant processes. This acts as a significant limitation to effective decision making in relation to stormwater pollution mitigation. This study developed three theoretical scenarios based on research findings that variations in particle size fractions <150 μm and >150 μm during pollutant build-up and wash-off primarily determine the variability associated with these processes. These scenarios, which combine pollutant build-up and wash-off processes that take place on a continuous timeline, are able to explain process variability under different field conditions. Given the variability characteristics of a specific build-up or wash-off event, the theoretical scenarios help to infer the variability characteristics of the associated pollutant process that follows. Mathematical formulation of the theoretical scenarios enables the incorporation of variability characteristics of pollutant build-up and wash-off processes in stormwater quality models. The research study outcomes will contribute to the quantitative assessment of uncertainty as an integral part of the interpretation of stormwater quality modelling outcomes. PMID:26179783
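For context, the scenarios above build on the standard exponential build-up and wash-off formulations common in stormwater quality modelling; in sketch form they look like the following, with invented parameter values (the study's particle-size treatment would give each <150 μm and >150 μm fraction its own parameters):

```python
import math

def buildup(t_days, b_max=10.0, k=0.4):
    """Exponential pollutant build-up toward an asymptote b_max (e.g. g/m^2)."""
    return b_max * (1.0 - math.exp(-k * t_days))

def washoff(b0, intensity_mm_h, duration_h, c=0.02):
    """Exponential wash-off of an initial surface load b0 under a storm."""
    return b0 * (1.0 - math.exp(-c * intensity_mm_h * duration_h))
```

Process variability then amounts to treating parameters such as k and c as random rather than fixed across events.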
Incorporating uncertainty in predictive species distribution modelling
Beale, Colin M.; Lennon, Jack J.
2012-01-01
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), researchers have shown increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387
SAI (SYSTEMS APPLICATIONS, INCORPORATED) URBAN AIRSHED MODEL
The magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air quality simulation model that is well suited for predicting the spatial and temporal distribution of photoch...
A Financial Market Model Incorporating Herd Behaviour.
Wray, Christopher M; Bishop, Steven R
2016-01-01
Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market
Incorporating model uncertainty into spatial predictions
Handcock, M.S.
1996-12-31
We consider a modeling approach for spatially distributed data. We are concerned with aspects of statistical inference for Gaussian random fields when the ultimate objective is to predict the value of the random field at unobserved locations. However, the exact statistical model is seldom known beforehand and is usually estimated from the very same data relative to which the predictions are made. Our objective is to assess the effect of the fact that the model is estimated, rather than known, on the prediction and the associated prediction uncertainty. We describe a method for achieving this objective. In essence, we consider the best linear unbiased prediction procedure based on the model within a Bayesian framework. These ideas are implemented for spring temperature over a region in the northern United States, based on the stations in the United States Historical Climatology Network reported in Karl, Williams, Quinlan & Boden.
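A toy illustration of the central point (not the paper's method): treating an estimated covariance model as known understates prediction uncertainty. Here the plug-in simple-kriging variance at a point is compared with the variance obtained by mixing over several plausible correlation-range values via the law of total variance; the single observation, exponential covariance, and range values are all assumptions for illustration:

```python
import math

def krige_one(obs_dist, obs_val, corr_range):
    """Simple kriging (known zero mean, unit sill) from one observation,
    with exponential correlation rho = exp(-d / range)."""
    rho = math.exp(-obs_dist / corr_range)
    return rho * obs_val, 1.0 - rho * rho     # BLUP and plug-in variance

obs_dist, obs_val = 1.0, 2.0
plug_in_pred, plug_in_var = krige_one(obs_dist, obs_val, corr_range=1.0)

# Parameter uncertainty: mix over plausible ranges (stand-in for posterior draws).
ranges = [0.5, 1.0, 2.0]
pairs = [krige_one(obs_dist, obs_val, r) for r in ranges]
mean_pred = sum(p for p, _ in pairs) / len(pairs)
within = sum(v for _, v in pairs) / len(pairs)
between = sum((p - mean_pred) ** 2 for p, _ in pairs) / len(pairs)
total_var = within + between       # law of total variance
```

The between-model spread is exactly the component a plug-in analysis discards.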
Incorporating evolutionary processes into population viability models.
Pierson, Jennifer C; Beissinger, Steven R; Bragg, Jason G; Coates, David J; Oostermeijer, J Gerard B; Sunnucks, Paul; Schumaker, Nathan H; Trotter, Meredith V; Young, Andrew G
2015-06-01
We examined how ecological and evolutionary (eco-evo) processes in population dynamics could be better integrated into population viability analysis (PVA). Complementary advances in computation and population genomics can be combined into an eco-evo PVA to offer powerful new approaches to understand the influence of evolutionary processes on population persistence. We developed the mechanistic basis of an eco-evo PVA using individual-based models with individual-level genotype tracking and dynamic genotype-phenotype mapping to model emergent population-level effects, such as local adaptation and genetic rescue. We then outline how genomics can allow or improve parameter estimation for PVA models by providing genotypic information at large numbers of loci for neutral and functional genome regions. As climate change and other threatening processes increase in rate and scale, eco-evo PVAs will become essential research tools to evaluate the effects of adaptive potential, evolutionary rescue, and locally adapted traits on persistence. PMID:25494697
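A toy individual-based sketch of the eco-evo idea above: each individual carries a one-locus genotype, a crude genotype-phenotype map sets survival, and allele frequencies shift under selection across generations. All rates, the single locus, and the mating scheme are illustrative assumptions, not the authors' framework:

```python
import random

def generation(pop, survival, fecundity, rng):
    """One generation: viability selection by genotype, then random mating
    (selfing allowed for simplicity)."""
    survivors = [g for g in pop if rng.random() < survival[g]]
    if len(survivors) < 2:
        return []                                  # demographic collapse
    offspring = []
    for _ in range(int(len(survivors) * fecundity)):
        p1, p2 = rng.choice(survivors), rng.choice(survivors)
        child = rng.choice(p1) + rng.choice(p2)    # one allele from each parent
        offspring.append("Aa" if child in ("Aa", "aA") else child)
    return offspring

rng = random.Random(7)
SURVIVAL = {"AA": 0.9, "Aa": 0.8, "aa": 0.4}       # crude genotype-phenotype map
pop = ["Aa"] * 200
for _ in range(10):
    pop = generation(pop, SURVIVAL, fecundity=1.5, rng=rng)

# Frequency of the deleterious 'a' allele after selection.
freq_a = sum(g.count("a") for g in pop) / (2 * len(pop)) if pop else 0.0
```

Scaling this bookkeeping to many loci with genomically informed parameters is what an eco-evo PVA adds over a purely demographic one.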
Incorporating 3-dimensional models in online articles
Cevidanes, Lucia H. S.; Ruellasa, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz
2015-01-01
Introduction: The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods: Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results: All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions: When submitting manuscripts, authors can
Incorporating RTI in a Hybrid Model of Reading Disability.
Spencer, Mercedes; Wagner, Richard K; Schatschneider, Christopher; Quinn, Jamie; Lopez, Danielle; Petscher, Yaacov
2014-08-01
The present study seeks to evaluate a hybrid model of identification that incorporates response-to-intervention (RTI) as one of the key symptoms of reading disability. The one-year stability of alternative operational definitions of reading disability was examined in a large-scale sample of students who were followed longitudinally from first to second grade. The results confirmed previous findings of limited stability for single-criterion operational definitions of reading disability. However, substantially greater stability was obtained for a hybrid model of reading disability that incorporates RTI with other common symptoms of reading disability. PMID:25422531
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
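The two-loop structure described above is easy to make concrete: the uncertain parameter is drawn once per replicate in the outer loop, while temporal (environmental) variance is drawn at every time step in the inner loop. The exponential-growth model and all rate values below are illustrative assumptions, not piping plover estimates:

```python
import math
import random

def pva(n_reps=2000, n_years=50, n0=100.0, mean_r=0.0, se_r=0.05,
        sd_temporal=0.10, quasi_ext=10.0, seed=1, parametric=True):
    """Stochastic exponential-growth PVA with a two-loop uncertainty structure."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(n_reps):
        # Outer (replication) loop: draw the uncertain mean growth rate once.
        r_rep = rng.gauss(mean_r, se_r) if parametric else mean_r
        n = n0
        for _ in range(n_years):
            # Inner (time-step) loop: temporal variance drawn every year.
            n *= math.exp(rng.gauss(r_rep, sd_temporal))
        if n < quasi_ext:
            extinct += 1
    return extinct / n_reps

risk_with = pva(parametric=True)
risk_without = pva(parametric=False)
```

As in the study, the quasi-extinction risk is far higher once parametric uncertainty is propagated through the replication loop.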
Incorporating transient storage in conjunctive stream-aquifer modeling
NASA Astrophysics Data System (ADS)
Lin, Yi-Chang; Medina, Miguel A.
2003-09-01
There has been growing interest in incorporating the transient storage effect into modeling solute transport in streams. In particular, for a smaller mountain stream where flow is fast and the flow field is irregular (a favorable environment to induce dead zones along the stream), long tails are normally observed in the stream tracer data, and adding transient storage terms in the advection-dispersion transport equation can result in more accurate simulation. While previous studies on transient storage modeling account for temporary, localized exchange between the stream and the shallow groundwater in the hyporheic zone, larger-scale exchange with the groundwater in the underlying aquifer has rarely been included or properly coupled to surface water modeling. In this paper, we complement previous modeling efforts by incorporating the transient storage concept in a conjunctive stream-aquifer model. Three well-documented and widely used USGS models have been coupled to form the core of this conjunctive model: MODFLOW handles the groundwater flow in the aquifer; DAFLOW accurately computes unsteady streamflow by means of the diffusive wave routing technique, as well as stream-aquifer exchange simulated as streambed leakage; and MOC3D computes solute transport in the groundwater zone. In addition, an explicit finite difference package was developed to incorporate the one-dimensional transient storage equations for solute transport in streams. The quadratic upstream interpolation (QUICK) algorithm is employed to improve the accuracy of spatial differencing. An adaptive stepsize control algorithm for the Runge-Kutta method is incorporated to increase overall model efficiency. Results show that the conjunctive stream-aquifer model with transient storage can handle well the bank storage effect under a flooding event. When it is applied over a stream network, the results also show that the stream-aquifer interaction acts as a strong source or sink along the stream and is too
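The one-dimensional transient storage equations referred to above can be sketched with an explicit finite-difference step; first-order upwind advection stands in here for the QUICK scheme the package uses, and all coefficients and grid settings are illustrative:

```python
def step(c, cs, dt, dx, u, d, alpha, ratio):
    """One explicit step of the paired stream / storage-zone equations:
       dC/dt  = -u dC/dx + D d2C/dx2 + alpha (Cs - C)
       dCs/dt = alpha * ratio * (C - Cs),   ratio = A / As."""
    n = len(c)
    new_c = c[:]
    for i in range(1, n - 1):
        adv = -u * (c[i] - c[i - 1]) / dx
        dif = d * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        new_c[i] = c[i] + dt * (adv + dif + alpha * (cs[i] - c[i]))
    new_cs = [s + dt * alpha * ratio * (ci - s) for ci, s in zip(c, cs)]
    return new_c, new_cs

nx, dt, dx = 60, 0.2, 1.0
c = [0.0] * nx
c[5] = 1.0                 # tracer pulse injected near the upstream end
cs = [0.0] * nx            # storage zone starts tracer-free
for _ in range(100):
    c, cs = step(c, cs, dt, dx, u=1.0, d=0.3, alpha=0.05, ratio=0.2)
```

Tracer held in the storage zone re-enters the stream slowly, which is what produces the long tails seen in mountain-stream tracer data.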
Incorporating RTI in a Hybrid Model of Reading Disability
ERIC Educational Resources Information Center
Spencer, Mercedes; Wagner, Richard K.; Schatschneider, Christopher; Quinn, Jamie M.; Lopez, Danielle; Petscher, Yaacov
2014-01-01
The present study seeks to evaluate a hybrid model of identification that incorporates response to instruction and intervention (RTI) as one of the key symptoms of reading disability. The 1-year stability of alternative operational definitions of reading disability was examined in a large-scale sample of students who were followed longitudinally…
Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process
Sanfilippo, Antonio P.; Nibbs, Faith G.
2007-08-24
While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the relevant subcultures, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood that their constituents will engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.
How to incorporate generic refraction models into multistatic tracking algorithms
NASA Astrophysics Data System (ADS)
Crouse, D. F.
The vast majority of literature published on target tracking ignores the effects of atmospheric refraction. When refraction is considered, the solutions are generally tailored to a simple exponential atmospheric refraction model. This paper discusses how arbitrary refraction models can be incorporated into tracking algorithms. Attention is paid to multistatic tracking problems, where uncorrected refractive effects can worsen track accuracy and consistency in centralized tracking algorithms, and can lead to difficulties in track-to-track association in distributed tracking filters. Monostatic and bistatic track initialization using refraction-corrupted measurements is discussed. The results are demonstrated using an exponential refractive model, though an arbitrary refraction profile can be substituted.
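The paper's architectural point — make the tracker's measurement model accept an arbitrary refractivity profile rather than hard-coding one law — can be sketched as a profile callable plus a correction that consumes it. The exponential profile is a common choice; the single-term small-angle elevation correction below is a crude illustrative stand-in (real trackers integrate refraction along the ray path):

```python
import math

def exponential_refractivity(h_m, n0=315.0, h_scale=7350.0):
    """Refractivity in N-units: N(h) = N0 * exp(-h / h_scale)."""
    return n0 * math.exp(-h_m / h_scale)

def apparent_elevation(true_elev_deg, profile):
    """Crude bending correction ~ N_s * 1e-6 / tan(E), taking the refraction
    model as a plug-in callable; illustrative only."""
    n_surface = profile(0.0)
    e = max(true_elev_deg, 1.0)       # keep tan() away from zero
    bend_rad = (n_surface * 1.0e-6) / math.tan(math.radians(e))
    return true_elev_deg + math.degrees(bend_rad)

corr_10 = apparent_elevation(10.0, exponential_refractivity) - 10.0
corr_45 = apparent_elevation(45.0, exponential_refractivity) - 45.0
```

Because the profile is a parameter, swapping in a different atmosphere changes no tracker code, which is the substitution property the paper exploits.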
Incorporating Field Intelligence Into Conceptual Rainfall-runoff Models
NASA Astrophysics Data System (ADS)
Vache, K.; McDonnell, J.; McGuire, K.
2003-12-01
A major challenge in the hydrological sciences is to incorporate observed physical processes into general hydrological models with minimal data requirements and limited model complexity. One approach is to move away from discharge-based calibration schemes, which often assume model structures to be correct, and allow field observations to inform and test new model structures. The use of this knowledge will contribute to (1) the development of an expanded set of variables to verify hydrological model performance and reflect overall watershed function and (2) the provision of useful information regarding the development of model structures and landscape discretizations. We identify a set of three variables that focus on the composition of stream water, using artificial hydrograph separations to provide estimates of the time source (e.g., event vs. pre-event) and the geographic source (e.g., hillslope vs. riparian) of streamflow, and explicitly accounting for mass transfer to provide estimates of residence time. In addition to these variables, we present a set of methods and data designed to incorporate experimental understanding directly into the model structure and catchment discretization. These ideas are illustrated through application at the H.J. Andrews Experimental Forest's Lookout Creek watershed in the western Cascades of Oregon.
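The time-source separation mentioned above rests on a two-component tracer mass balance: streamflow is a mixture of event water (new rain) and pre-event water (catchment storage), each with a distinct tracer signature. The concentrations below are illustrative (e.g. delta-18O in permil):

```python
def event_fraction(c_stream, c_pre, c_event):
    """Two-component hydrograph separation from tracer mass balance:
       Qs*Cs = Qe*Ce + Qp*Cp  with  Qs = Qe + Qp
       => Qe/Qs = (Cs - Cp) / (Ce - Cp)."""
    return (c_stream - c_pre) / (c_event - c_pre)

# Illustrative isotope values: stream, pre-event, and event water.
f_event = event_fraction(-8.0, -10.0, -5.0)
```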
Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC
NASA Astrophysics Data System (ADS)
Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina
2015-04-01
Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of the diazotrophic growth is thereby based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered as the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.
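A heavily simplified sketch of the kind of rule involved in an explicit diazotroph parameterization — not HAMOCC's actual formulation: growth is temperature-gated (Trichodesmium favours warm water), dissolved nitrate is taken up preferentially, and the remaining nitrogen demand is met by N2 fixation. Every parameter value here is an invented illustration:

```python
def diazotroph_growth(no3, temp_c, mu_max=0.2, k_no3=0.5, t_min=20.0):
    """Return (growth_rate_per_day, fraction_of_N_demand_met_by_fixation).
    Diazotrophs are not nitrogen-limited: N2 is effectively unlimited, so
    growth proceeds at mu_max whenever it is warm enough."""
    if temp_c < t_min:
        return 0.0, 0.0                     # too cold for the diazotroph
    no3_share = no3 / (k_no3 + no3)         # Michaelis-Menten NO3 uptake share
    return mu_max, 1.0 - no3_share
```

The emergent pattern such a rule produces — fixation concentrated in warm, nitrate-poor waters — is what the model evaluation above tests against observations.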
Methods improvements incorporated into the SAPHIRE ASP models
Sattison, M.B.; Blackman, H.S.; Novack, S.D.
1995-04-01
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.
Importance of incorporating agriculture in conceptual rainfall-runoff models
NASA Astrophysics Data System (ADS)
de Boer-Euser, Tanja; Hrachowitz, Markus; Winsemius, Hessel; Savenije, Hubert
2016-04-01
Incorporating spatially variable information is a frequently discussed option to increase the performance of (semi-)distributed conceptual rainfall-runoff models. One of the methods to do this is to use this spatially variable information to delineate Hydrological Response Units (HRUs) within a catchment. In large parts of Europe the original forested land cover has been replaced by an agricultural land cover. This change in land cover probably affects the dominant runoff processes in the area, for example by increasing the Hortonian overland flow component, especially on the flatter and higher elevated parts of the catchment. A change in runoff processes implies a change in HRUs as well. A previous version of our model distinguished wetlands (areas close to the stream) from the remainder of the catchment. However, this configuration was not able to reproduce all fast runoff processes, both in summer and in winter. Therefore, this study tests whether the reproduction of fast runoff processes can be improved by incorporating an HRU which explicitly accounts for the effect of agriculture. A case study is carried out in the Ourthe catchment in Belgium. For this case study the relevance of different process conceptualisations is tested stepwise. Among the conceptualisations are Hortonian overland flow in summer and winter, reduced infiltration capacity due to a partly frozen soil and the relative effect of rainfall and snowmelt in case of this frozen soil. The results show that the named processes can make a large difference on an event basis, especially the Hortonian overland flow in summer and the combination of rainfall and snowmelt on (partly) frozen soil in winter. However, differences diminish when the modelled period of several years is evaluated based on standard metrics like Nash-Sutcliffe Efficiency. These results emphasise on the one hand the importance of incorporating the effects of agriculture in conceptual models and on the other hand the importance of more event
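The conceptualisations tested above combine into a simple rule for an agricultural HRU: infiltration-excess (Hortonian) overland flow occurs when rain plus snowmelt exceed an infiltration capacity that shrinks on (partly) frozen soil. This is a generic sketch of that idea, not the study's calibrated model; all numbers are illustrative:

```python
def hru_overland_flow(rain_mm_h, melt_mm_h, infil_cap_mm_h, frozen_frac):
    """Hortonian overland flow (mm/h) for an agricultural HRU: effective
    infiltration capacity is reduced by the frozen-soil fraction, and rain
    and snowmelt jointly compete for it."""
    effective_cap = infil_cap_mm_h * (1.0 - frozen_frac)
    supply = rain_mm_h + melt_mm_h
    return max(0.0, supply - effective_cap)
```

The same rain-plus-melt event that infiltrates entirely on unfrozen soil can generate fast runoff once half the soil is frozen, which is exactly the winter-event behaviour described above.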
Incorporation of Hysteresis Effects into Magnetic Finite Element Modeling
NASA Astrophysics Data System (ADS)
Lee, J. Y.; Lee, S. J.; Melikhov, Y.; Jiles, D. C.; Garton, M.; Lopez, R.; Brasche, L.
2004-02-01
Hysteresis effects have usually been ignored in magnetic modeling because their multi-valued nature makes them difficult to incorporate into numerical calculations such as those based on finite elements. A linear approximation of magnetic permeability, or a nonlinear B-H curve formed by connecting the tips of the hysteresis loops, has been widely used in magnetic modeling for these types of calculations. We have employed the Jiles-Atherton (J-A) hysteresis model to develop a finite element method algorithm that incorporates hysteresis effects. The J-A model is well suited to numerical analysis such as finite element modeling because of its small number of degrees of freedom and the simple form of its equations. A finite element method algorithm for hysteretic materials has been developed for estimation of the volume and the distribution of retained magnetic particles around a defect site. The volume of retained magnetic particles was found to depend not only on the existing current source strength but also on the remaining magnetization of a hysteretic material. The detailed algorithm and simulation results are presented.
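The abstract's claim that the J-A model is convenient for numerical work can be illustrated with a minimal scalar implementation, integrated with explicit Euler along a prescribed field sweep. All parameter values below are illustrative placeholders, not fitted material data from the study.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a small-x series."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def ja_sweep(h_values, Ms=1.6e6, a=1100.0, alpha=1e-5, k=400.0, c=0.2):
    """Scalar Jiles-Atherton hysteresis model, integrated with explicit
    Euler along an applied-field sweep. Parameters are illustrative
    placeholders, not fitted data. Returns a list of (H, M) pairs."""
    Mirr, M = 0.0, 0.0
    curve = [(h_values[0], 0.0)]
    for H_prev, H in zip(h_values, h_values[1:]):
        dH = H - H_prev
        delta = 1.0 if dH >= 0 else -1.0        # sweep direction
        He = H + alpha * M                      # effective field
        Man = Ms * langevin(He / a)             # anhysteretic magnetization
        denom = delta * k - alpha * (Man - Mirr)
        dMirr = (Man - Mirr) / denom * dH if abs(denom) > 1e-9 else 0.0
        Mirr += dMirr                           # irreversible component
        M = c * Man + (1.0 - c) * Mirr          # add reversible part
        curve.append((H, M))
    return curve
```

Sweeping the field up, down, and back up traces a closed loop with a positive remanent magnetization at H = 0, which is exactly the multi-valued behavior that a single B-H curve cannot represent.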
Stochastic Human Exposure and Dose Simulation Model for Wood Preservatives
SHEDS-Wood (Stochastic Human Exposure and Dose Simulation Model for Wood Preservatives) is a physically-based stochastic model that was developed to quantify exposure and dose of children to wood preservatives on treated playsets and residential decks. Probabilistic inputs are co...
USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES
A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both through the introduction of new particles and through the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze droplets, cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. One example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an illustration of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations not only point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, which includes cloud chemistry, can be incorporated into cirrus cloud growth models.
Geomagnetic field models incorporating physical constraints on the secular variation
NASA Technical Reports Server (NTRS)
Constable, Catherine; Parker, Robert L.
1993-01-01
This proposal has been concerned with methods for constructing geomagnetic field models that incorporate physical constraints on the secular variation. The principal goal that has been accomplished is the development of flexible algorithms designed to test whether the frozen flux approximation is adequate to describe the available geomagnetic data and their secular variation throughout this century. These have been applied to geomagnetic data from both the early and middle part of this century and convincingly demonstrate that there is no need to invoke violations of the frozen flux hypothesis in order to satisfy the available geomagnetic data.
Incorporating Statistical Topic Models in the Retrieval of Healthcare Documents
Caballero, Karla; Akella, Ram
2015-01-01
Patients often search the web for information about treatments and diseases after they are discharged from the hospital. However, searching for medical information on the web poses challenges due to related terms and synonymy for the same disease and treatment. In this paper, we present a method that combines Statistical Topic Models, Language Models and Natural Language Processing to retrieve healthcare-related documents. In addition, we test whether the incorporation of terms extracted from the patient's discharge summary improves the retrieval performance. We show that the proposed framework outperformed the winner of the CLEF eHealth 2013 retrieval challenge by 68% in the MAP measure (0.5226 vs 0.3108), and by 13% in NDCG (0.5202 vs 0.3637). Compared with standard language models, we obtain an improvement of 92% in MAP (0.2666) and 45% in NDCG (0.3637). PMID:26306280
Incorporating groundwater-surface water interaction into river management models.
Valerio, Allison; Rajaram, Harihar; Zagona, Edith
2010-01-01
Accurate representation of groundwater-surface water interactions is critical to modeling low river flows in the semi-arid southwestern United States. Although a number of groundwater-surface water models exist, they are seldom integrated with river operation/management models. A link between the object-oriented river and reservoir operations model, RiverWare, and the groundwater model, MODFLOW, was developed to incorporate groundwater-surface water interaction processes, such as river seepage/gains, riparian evapotranspiration, and irrigation return flows, into a rule-based water allocations model. An explicit approach is used in which the two models run in tandem, exchanging data once in each computational time step. Because the MODFLOW grid is typically at a finer resolution than RiverWare objects, the linked model employs spatial interpolation and summation for compatible communication of exchanged variables. The performance of the linked model is illustrated through two applications in the Middle Rio Grande Basin in New Mexico where overappropriation impacts endangered species habitats. In one application, the linked model results are compared with historical data; the other illustrates use of the linked model for determining management strategies needed to attain an in-stream flow target. The flows predicted by the linked model at gauge locations are reasonably accurate except during a few very low flow periods when discrepancies may be attributable to stream gaging uncertainties or inaccurate documentation of diversions. The linked model accounted for complex diversions, releases, groundwater pumpage, irrigation return flows, and seepage between the groundwater system and canals/drains to achieve a schedule of releases that satisfied the in-stream target flow. PMID:20412319
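The explicit tandem exchange between a coarse river model and a finer groundwater grid, with interpolation in one direction and summation in the other, can be sketched as follows. The data structures and the Darcy-type conductance flux are simplified stand-ins, not the RiverWare or MODFLOW APIs.

```python
def run_coupled_step(reach_stages, cell_to_reach, cell_heads, conductance):
    """One explicit coupling exchange between a coarse river model and a
    finer groundwater grid (a toy stand-in for the RiverWare-MODFLOW link).

    reach_stages:  river stage per coarse reach
    cell_to_reach: for each fine grid cell, the index of its parent reach
    cell_heads:    groundwater head per fine cell
    conductance:   streambed conductance per fine cell

    Returns (seepage per fine cell, net seepage aggregated per reach).
    """
    # 1) Spatial interpolation: broadcast each reach stage to its cells.
    cell_stages = [reach_stages[r] for r in cell_to_reach]
    # 2) Groundwater side: Darcy-type exchange flux per fine cell
    #    (positive = the river loses water to the aquifer).
    seepage = [c * (s - h)
               for c, s, h in zip(conductance, cell_stages, cell_heads)]
    # 3) Summation: aggregate fine-cell fluxes back to each coarse reach.
    reach_seepage = [0.0] * len(reach_stages)
    for q, r in zip(seepage, cell_to_reach):
        reach_seepage[r] += q
    return seepage, reach_seepage
```

In the real linked model this exchange happens once per computational time step, with each side holding the other's values fixed for the duration of the step.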
SAI (Systems Applications, Incorporated) Urban Airshed Model. Model
Schere, K.L.
1985-06-01
This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest included in the UAM simulations are O3, NO, NO2 and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program that accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system level 38R5A. Memory requirement is 80K.
A mathematical model for incorporating biofeedback into human postural control
2013-01-01
Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model's degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional torque to the postural controller torque.
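The additive-torque formulation can be sketched on a one-link inverted pendulum; the study itself uses a two-link model, and the gains, anthropometric values, and display threshold below are invented for illustration only.

```python
import math

def simulate_sway(t_end=5.0, dt=0.001, theta0=0.04,
                  kp=900.0, kd=300.0, fb_gain=200.0, fb_threshold=0.01):
    """Single-link inverted-pendulum stance with biofeedback modeled as a
    torque added to the postural-controller torque. Returns the sway trace
    (rad) at each time step; all parameter values are hypothetical."""
    m, L, g = 76.0, 0.9, 9.81          # body mass, CoM height, gravity
    I = m * L * L                      # point-mass moment of inertia
    theta, omega = theta0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        tau_ctrl = -kp * theta - kd * omega   # full-state-feedback controller
        # Biofeedback torque engages only once sway exceeds the display
        # threshold, mimicking a finite vibrotactile display resolution.
        tau_fb = -fb_gain * theta if abs(theta) > fb_threshold else 0.0
        tau = tau_ctrl + tau_fb               # additive biofeedback torque
        alpha = (m * g * L * math.sin(theta) + tau) / I   # angular accel.
        omega += alpha * dt
        theta += omega * dt
        trace.append(theta)
    return trace
```

The key design point mirrored from the paper is that the biofeedback enters as a torque summed with the controller torque, so the same formulation scales to any number of links and any sensory-integration scheme.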
Active shape models incorporating isolated landmarks for medical image annotation
NASA Astrophysics Data System (ADS)
Norajitra, Tobias; Meinzer, Hans-Peter; Stieltjes, Bram; Maier-Hein, Klaus H.
2014-03-01
Apart from their robustness in anatomic surface segmentation, purely surface-based 3D Active Shape Models lack the ability to automatically detect and annotate non-surface key points of interest. However, annotation of anatomic landmarks is desirable, as it yields additional anatomic and functional information. Moreover, landmark detection might help to further improve accuracy during ASM segmentation. We present an extension of surface-based 3D Active Shape Models incorporating isolated non-surface landmarks. Positions of isolated and surface landmarks are modeled jointly within a point distribution model (PDM). Isolated landmark appearance is described by a set of Haar-like features, supporting local landmark detection on the PDM estimates using a kNN classifier. Landmark detection was evaluated in a leave-one-out cross validation on a reference dataset comprising 45 CT volumes of the human liver after shape space projection. Depending on the anatomical landmark to be detected, our experiments showed a significant improvement in detection accuracy, compared to the position estimates delivered by the PDM, in about 1/4 to more than 1/2 of all test cases. Our results encourage further research with regard to the combination of shape priors and machine learning for landmark detection within the Active Shape Model framework.
Incorporation of shuttle CCT parameters in computer simulation models
NASA Technical Reports Server (NTRS)
Huntsberger, Terry
1990-01-01
Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations which usually involve EVA has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific for certain camera/lens combinations and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.
Incorporation of multiple cloud layers for ultraviolet radiation modeling studies
NASA Technical Reports Server (NTRS)
Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.
1994-01-01
Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure by approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and a multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of biological effects of UV exposure increase, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
Tantalum strength model incorporating temperature, strain rate and pressure
NASA Astrophysics Data System (ADS)
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Incorporating Plant Phenology Dynamics in a Biophysical Canopy Model
NASA Technical Reports Server (NTRS)
Barata, Raquel A.; Drewry, Darren
2012-01-01
The Multi-Layer Canopy Model (MLCan) is a vegetation model created to capture plant responses to environmental change. The model vertically resolves carbon uptake, water vapor and energy exchange at each canopy level by coupling photosynthesis, stomatal conductance and leaf energy balance. The model is forced by incoming shortwave and longwave radiation, as well as near-surface meteorological conditions. The original formulation of MLCan utilized canopy structural traits derived from observations. This project aims to incorporate a plant phenology scheme within MLCan, allowing these structural traits to vary dynamically. In the plant phenology scheme implemented here, plant growth is dependent on environmental conditions such as air temperature and soil moisture. The scheme includes functionality that models plant germination, growth, and senescence. These growth stages dictate the variation in six different vegetative carbon pools: storage, leaves, stem, coarse roots, fine roots, and reproductive. The magnitudes of these carbon pools determine land surface parameters such as leaf area index, canopy height, rooting depth and root water uptake capacity. Coupling this phenology scheme with MLCan allows for a more flexible representation of the structure and function of vegetation as it responds to changing environmental conditions.
Incorporation of chemical kinetic models into process control
Herget, C.J.; Frazer, J.W.
1981-07-08
An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor.
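The Smith predictor loop described above can be sketched for a discrete first-order plant: the controller feeds back the undelayed internal model plus the mismatch between the real plant and a delayed copy of the model, which removes the dead time from the feedback path. The plant coefficients and PI gains below are illustrative, and the internal model is assumed perfect.

```python
def smith_predictor_run(setpoint=1.0, a=0.9, b=0.1, delay=20,
                        kp=2.0, ki=0.1, steps=300):
    """Discrete first-order process y[t+1] = a*y[t] + b*u[t-delay],
    controlled by a PI controller wrapped in a Smith predictor.
    Returns the plant output trajectory."""
    y = 0.0                      # real (delayed) process output
    ym = 0.0                     # internal model output, no delay
    ymd = 0.0                    # internal model output, with delay
    u_hist = [0.0] * (delay + 1) # buffer realizing the transport delay
    integ = 0.0
    out = []
    for _ in range(steps):
        # Smith predictor feedback: undelayed model plus the mismatch
        # between the real plant and the delayed model.
        feedback = ym + (y - ymd)
        err = setpoint - feedback
        integ += err
        u = kp * err + ki * integ
        u_hist.append(u)
        u_delayed = u_hist.pop(0)
        y = a * y + b * u_delayed      # real plant sees the delayed input
        ym = a * ym + b * u            # model without delay
        ymd = a * ymd + b * u_delayed  # model with delay
        out.append(y)
    return out
```

With a perfect internal model the mismatch term vanishes and the controller effectively regulates the delay-free model, which is why large dead times (mixing-to-measurement lag) become tolerable.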
A dengue model incorporating saturation incidence and human migration
NASA Astrophysics Data System (ADS)
Gakkhar, S.; Mishra, A.
2015-03-01
In this paper, a non-linear model has been proposed to investigate the effects of human migration on dengue dynamics. Human migration has been considered between two patches having different dengue strains. Due to migration, secondary infection is possible. Further, the secondary infection is considered in patch-2 only, as strain-2 in patch-2 is considered to be more severe than strain-1 in patch-1. The saturation incidence rate has been considered to incorporate behavioral changes towards the epidemic in the human population. The basic reproduction number has been computed. Four equilibrium states have been found and analyzed. Increasing the saturation rate decreases the threshold, thereby enhancing the stability of the disease-free state in both patches. Control on migration may lead to a change in the infection level of the patches.
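The damping effect of a saturation incidence rate, beta*S*I/(1 + alpha*I), can be illustrated on a single-patch SIR sketch. The paper's model is a two-patch system with migration and two strains; the rates here are arbitrary and the sketch only shows how the saturation term suppresses incidence as infection (and the behavioral response to it) grows.

```python
def sir_saturated(beta=0.4, gamma=0.1, alpha=50.0, i0=1e-3, days=400, dt=0.1):
    """SIR dynamics with saturation incidence beta*S*I/(1 + alpha*I),
    integrated with explicit Euler. Returns final (S, I, R) fractions
    and the peak infected fraction; parameters are illustrative."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / (1.0 + alpha * i)   # saturation incidence
        rec = gamma * i                              # recovery
        s -= new_inf * dt
        i += (new_inf - rec) * dt
        r += rec * dt
        peak = max(peak, i)
    return s, i, r, peak
```

Setting alpha to zero recovers the classical bilinear incidence; a positive alpha lowers the epidemic peak, consistent with the abstract's observation that increasing the saturation rate stabilizes the disease-free state.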
Incorporating Functional Gene Quantification into Traditional Decomposition Models
NASA Astrophysics Data System (ADS)
Todd-Brown, K. E.; Zhou, J.; Yin, H.; Wu, L.; Tiedje, J. M.; Schuur, E. A. G.; Konstantinidis, K.; Luo, Y.
2014-12-01
Incorporating new genetic quantification measurements into traditional substrate pool models represents a substantial challenge. These decomposition models are built around the idea that substrate availability, together with environmental drivers, limits carbon dioxide respiration rates. In this paradigm, microbial communities optimally adapt to a given substrate and environment on much shorter time scales than the carbon flux of interest. By characterizing the relative shift in biomass of these microbial communities, we informed previously poorly constrained parameters in traditional decomposition models. In this study we coupled a 9-month laboratory incubation with quantitative gene measurements, traditional CO2 flux measurements, and initial soil organic carbon quantification. GeoChip 5.0 was used to quantify the functional genes associated with carbon cycling at 2 weeks, 3 months and 9 months. We then combined the genes which 'collapsed' over the experiment and assumed that this tracked the relative change in the biomass associated with the 'fast' pool. We further assumed that this biomass was proportional to the 'fast' SOC pool and were thus able to constrain the relative change in the fast SOC pool in our 3-pool decomposition model. We found that the biomass quantification described above, combined with traditional CO2 flux and SOC measurements, improves transfer coefficient estimation in traditional decomposition models. Transfer coefficients are very difficult to characterize using traditional CO2 flux measurements alone, so DNA quantification provides new and significant information about the system. Over a 100-year simulation, these new biologically informed parameters resulted in an additional 10% of SOC loss relative to the traditionally informed parameters.
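A generic three-pool decomposition step of the kind the study constrains can be sketched as below; the decay rates and transfer fractions are schematic, not the study's calibrated values. The transfer coefficients route a fraction of each pool's decayed carbon to the other pools, and the remainder is respired as CO2.

```python
def three_pool_step(pools, k, transfer, dt=1.0):
    """One explicit time step of a three-pool soil-carbon decomposition
    model.

    pools:    carbon stocks [fast, slow, passive]
    k:        first-order decay rates per pool (1/time)
    transfer: transfer[i][j] = fraction of carbon decayed from pool i
              that enters pool j (the rest is respired as CO2)

    Returns (new pool stocks, CO2 respired during this step).
    """
    decayed = [k[i] * pools[i] * dt for i in range(3)]
    new = list(pools)
    co2 = 0.0
    for i in range(3):
        new[i] -= decayed[i]
        routed = 0.0
        for j in range(3):
            if i != j:
                new[j] += transfer[i][j] * decayed[i]
                routed += transfer[i][j] * decayed[i]
        co2 += decayed[i] - routed    # unrouted carbon leaves as CO2
    return new, co2
```

Because observed CO2 flux is the sum over all pools, many transfer/decay combinations fit it equally well; an independent constraint on the fast pool's relative change (here, from gene-based biomass tracking) is what breaks that degeneracy.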
Incorporation of Helium Demixing in Interior Structure Models of Saturn
NASA Astrophysics Data System (ADS)
Tian, Bob; Stanley, Sabine; Valencia, Diana
2015-04-01
Experiments and ab initio calculations of hydrogen-helium mixtures predict a phase separation at pressure-temperature conditions relevant to Saturn's interior. At depths where this occurs, droplets of helium form out of the mixture and sink towards the deep interior, where they re-mix, thereby depleting the helium above the layer over time while enriching the concentration below it. In dynamo modelling, the axisymmetric nature of Saturn's magnetic field is so far best explained by the inclusion of a stably stratified layer just below the depth at which hydrogen metallizes (approximately 0.65RS). Stable stratification at that depth could occur if the compositional gradients produced by the helium rain process described above are great enough to suppress convection in the de-mixing layers. Thus, we first developed a range of interior structure models consistent with available constraints from the gravity field and atmospheric composition. The hydrogen-helium de-mixing curve was then incorporated in calculations of some of these models to assess its feasibility in compositionally stratifying the top of the dynamo source region. We found that when helium rain is taken into account, a stably stratified layer approximately 0.1 - 0.15RS in thickness can exist atop the dynamo source region, consistent with the thicknesses needed in dynamo models to axisymmetrize the observable magnetic field. Furthermore, inertial gravity waves could be excited in such thick stably stratified regions. These may be detectable by asteroseismology techniques, or by analysis of the wave modes' gravitational interaction with Saturn's ring particles. Thus, profiles of sound speed and Brunt-Vaisala frequency were also calculated for all of the interior structure models studied, to be used for comparison with possible seismic studies in the future.
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information, that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains more of a black-box procedure, not capable of coping with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased"-stratified to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final...
4-D Subduction Models Incorporating an Upper Plate
NASA Astrophysics Data System (ADS)
Stegman, D.; Capitanio, F. A.; Moresi, L.; Mueller, D.; Clark, S.
2007-12-01
Thus far, relatively simplistic models of free subduction have been employed in which the trench and plate kinematics are emergent features completely driven by the negative buoyancy of the slab. This has allowed us to build a fundamental understanding of subduction processes such as the kinematics of subduction zones, the strength of slabs, and mantle flow-plate coupling. Additionally, these efforts have helped to develop appreciable insight into subduction processes when considering the energetics of subduction, in particular how energy is dissipated in various parts of the system, such as generating mantle flow and bending the plate. We are now in a position to build upon this knowledge and shift our focus towards the dynamic controls of deformation in the upper plate (vertical motions, extension, shortening, and dynamic topography). Here, the state of stress in the overriding plate is the product of the delicate balance of large tectonic forces in a highly coupled system, and must therefore include all components of the system: the subducting plate, the overriding plate, and the underlying mantle flow which couples everything together. We will present some initial results of the fully dynamic 3-D models of free subduction which incorporate an overriding plate, and systematically investigate how variations in the style and strength of subduction are expressed by the tectonics of the overriding plate. Deformation is driven in the overriding plate by the forces generated from the subducting plate and the type of boundary condition on the non-subducting side of the overriding plate (either fixed or free). Ultimately, these new models will help to address a range of issues: how the overriding plate influences the plate and trench kinematics; the formation and evolution of back-arc basins; the variation of tractions on the base of the overriding plate; the nature of forces which drive plates; and the dynamic controls on seismic coupling at the plate boundary.
Incorporating spatial correlations into multispecies mean-field models
NASA Astrophysics Data System (ADS)
Markham, Deborah C.; Simpson, Matthew J.; Maini, Philip K.; Gaffney, Eamonn A.; Baker, Ruth E.
2013-11-01
In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modeling interactions between such species, we often make use of the mean-field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean-field approximation is only used in appropriate settings. In circumstances where the mean-field approximation is unsuitable, we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper, we provide a method that overcomes many of the failures of the mean-field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multispecies case and show results specific to a two-species problem. We compare averaged discrete results to both the mean-field approximation and our improved method, which incorporates spatial correlations. We note that the mean-field approximation fails dramatically in some cases, predicting very different behavior from that seen upon averaging multiple realizations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behavior in all cases, thus providing a more reliable modeling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
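For a single species, the mean-field approximation the paper improves upon collapses the on-lattice birth-death-movement process to one occupancy ODE, with all spatial correlations between lattice sites neglected. The rates below are arbitrary; this sketch only shows the baseline approximation, not the paper's correlation-equation correction.

```python
def mean_field_occupancy(lam=1.0, mu=0.2, c0=0.01, t_end=80.0, dt=0.01):
    """Mean-field occupancy ODE for a volume-excluding birth-death process:
    dC/dt = lam*C*(1 - C) - mu*C. The (1 - C) factor encodes volume
    exclusion; neglecting site-site correlations is the mean-field step.
    Integrated with explicit Euler; returns the final occupancy."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += (lam * c * (1.0 - c) - mu * c) * dt
    return c
```

The steady state of this ODE is C* = 1 - mu/lam. The paper's point is that averaged realizations of the discrete lattice process can deviate strongly from this prediction when spatial clustering matters, which is what the additional pairwise-correlation PDEs capture.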
Incorporating seepage processes into a streambank stability model
Technology Transfer Automated Retrieval System (TEKTRAN)
Seepage processes are usually neglected in bank stability analyses although they can become a prominent failure mechanism under certain field conditions. This study incorporated the effects of seepage (i.e., seepage gradient forces and seepage erosion undercutting) into the Bank Stability and Toe Er...
The U.S. Environmental Protection Agency has conducted a probabilistic exposure and dose assessment on the arsenic (As) and chromium (Cr) components of Chromated Copper Arsenate (CCA) using the Stochastic Human Exposure and Dose Simulation model for wood preservatives (SHEDS-Wood...
Implementing the Standards: Incorporating Mathematical Modeling into the Curriculum.
ERIC Educational Resources Information Center
Swetz, Frank
1991-01-01
Following a brief historical review of the mechanism of mathematical modeling, examples are included that associate a mathematical model with given data (changes in sea level) and that model a real-life situation (process of parallel parking). Also provided is the rationale for the curricular implementation of mathematical modeling. (JJK)
A Measurement Model for Likert Responses that Incorporates Response Time
ERIC Educational Resources Information Center
Ferrando, Pere J.; Lorenzo-Seva, Urbano
2007-01-01
This article describes a model for response times that is proposed as a supplement to the usual factor-analytic model for responses to graded or more continuous typical-response items. The use of the proposed model together with the factor model provides additional information about the respondent and can potentially increase the accuracy of the…
A new nonlinear Muskingum flood routing model incorporating lateral flow
NASA Astrophysics Data System (ADS)
Karahan, Halil; Gurarslan, Gurhan; Geem, Zong Woo
2015-06-01
A new nonlinear Muskingum flood routing model that takes the contribution of lateral flow into consideration was developed in the present study. The cuckoo search algorithm, a novel and robust metaheuristic, was used in the calibration and verification of the model parameters. The success and dependability of the proposed model were tested on five different sets of synthetic and real flood data. For the test cases, the proposed model produced better solutions than alternative models taken from the literature, indicating that it could be suitable for use in flood routing problems.
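A common way to fold lateral flow into the nonlinear Muskingum model (and, as far as one can tell from the abstract, the family this model belongs to) is to scale inflow by a factor (1+β) in both the storage and continuity equations. A routing sketch under that assumption, with invented rather than calibrated parameters and a synthetic hydrograph:

```python
import numpy as np

def route_muskingum(inflow, K=10.0, X=0.2, m=1.2, beta=0.05, dt=1.0):
    """Nonlinear Muskingum routing with lateral flow folded into the inflow term:
    storage S = K*(X*(1+beta)*I + (1-X)*O)**m, continuity dS/dt = (1+beta)*I - O."""
    S = K * (X*(1 + beta)*inflow[0] + (1 - X)*inflow[0])**m   # consistent with O(0) = I(0)
    out = []
    for I in inflow:
        O = max(((S/K)**(1.0/m) - X*(1 + beta)*I) / (1 - X), 0.0)
        out.append(O)
        S += dt*((1 + beta)*I - O)                             # explicit continuity update
    return np.array(out)

t = np.arange(50.0)
inflow = 20 + 80*np.exp(-((t - 10)/4.0)**2)   # synthetic single-peak hydrograph
out = route_muskingum(inflow)
```

A metaheuristic such as cuckoo search would then search over (K, X, m, β) to minimise the error between routed and observed outflows; the routing loop above is the objective function's inner kernel.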
Seasonal variation in survival and reproduction can be a large source of prediction uncertainty in models used for conservation and management. A seasonally varying matrix population model is developed that incorporates temperature-driven differences in mortality and reproduction...
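A seasonally varying matrix population model of the kind described reduces to a product of season-specific projection matrices; the annual growth rate is the dominant eigenvalue of that product. All vital rates below are hypothetical illustrations (e.g., temperature-reduced winter survival), not values from the study:

```python
import numpy as np

# stage-structured (juvenile, adult) projection matrices for two seasons;
# every entry here is an invented illustration
A_summer = np.array([[0.0, 2.0],    # reproduction concentrated in summer
                     [0.4, 0.9]])   # juvenile transition / adult survival
A_winter = np.array([[0.0, 0.0],    # no winter reproduction
                     [0.2, 0.6]])   # temperature-reduced winter survival

A_year = A_winter @ A_summer                 # one annual cycle (summer applied first)
lam = max(abs(np.linalg.eigvals(A_year)))    # asymptotic annual growth rate
```

With these numbers the annual matrix is [[0, 0], [0.24, 0.94]], so lam = 0.94: a declining population, even though a single averaged annual matrix could mask which season drives the decline.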
Incorporating uncertainty into high-resolution groundwater supply models
Rahman, A.; Hartono, S.; Carlson, D.; Willson, C.S.
2003-01-01
Groundwater modeling is a useful tool for evaluating whether an aquifer system is capable of supporting groundwater withdrawals over long periods of time and what effect, if any, such activity will have on the regional flow dynamics as well as on specific public water, agricultural and industrial supplies. An overview is given of an ongoing groundwater modeling study of the Chicot Aquifer in southwestern Louisiana where a low-resolution groundwater model is being used to study the regional flow in the Chicot Aquifer and to provide boundary conditions for higher-resolution inset models created using telescopic mesh refinement (TMR).
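The TMR step described, passing heads from the coarse regional model to the boundary of a fine inset model, is at heart an interpolation of the coarse solution onto the fine-grid boundary nodes. A generic bilinear sketch (hypothetical grids and head field, not the Chicot model):

```python
import numpy as np

def tmr_boundary_heads(coarse_heads, coarse_x, coarse_y, fine_boundary_xy):
    """Bilinear interpolation of coarse-model heads onto fine inset-model boundary nodes."""
    xs, ys = coarse_x, coarse_y
    out = []
    for x, y in fine_boundary_xy:
        i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
        j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
        tx = (x - xs[i]) / (xs[i+1] - xs[i])
        ty = (y - ys[j]) / (ys[j+1] - ys[j])
        h = (coarse_heads[j, i]     * (1-tx)*(1-ty) + coarse_heads[j, i+1]   * tx*(1-ty)
           + coarse_heads[j+1, i]   * (1-tx)*ty     + coarse_heads[j+1, i+1] * tx*ty)
        out.append(h)
    return np.array(out)

# invented regional grid (m) with a linear head field, and three inset boundary nodes
xs = np.linspace(0, 10000, 11)
ys = np.linspace(0, 8000, 9)
H = 50 - 0.001*xs[None, :] + 0.0005*ys[:, None]
boundary = [(1234.0, 567.0), (9000.0, 7999.0), (50.0, 4000.0)]
heads = tmr_boundary_heads(H, xs, ys, boundary)
```

In a real TMR workflow these interpolated heads become specified-head (or specified-flux) boundary conditions for the inset model at each stress period.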
Incorporation of the planetary boundary layer in atmospheric models
NASA Technical Reports Server (NTRS)
Moeng, Chin-Hoh; Wyngaard, John; Pielke, Roger; Krueger, Steve
1993-01-01
The topics discussed include the following: perspectives on planetary boundary layer (PBL) measurements; current problems of PBL parameterization in mesoscale models; and convective cloud-PBL interactions.
Progressive evaluation of incorporating information into a model building process
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Gupta, Hoshin; Savenije, Huub
2014-05-01
Catchments are open systems, meaning that the exact boundary conditions of the real system can never be determined fully in space and time. Models are therefore essential tools for capturing system behaviour spatially and extrapolating it temporally for prediction. In recent years conceptual models have received more attention than so-called physically based models, which are often over-parameterized and face difficulties in up-scaling small-scale processes. Conceptual models, however, depend heavily on calibration, since one or more of their parameter values typically cannot be measured physically at the catchment scale. It is generally understood that increasing the complexity of a conceptual model to better represent the heterogeneity of hydrological processes makes parameter identification more difficult; what remains largely unknown is how much information is contributed by each of the model's elements: its control volumes (so-called buckets), interconnecting fluxes, parameterizations (constitutive functions), and parameter values. Each of these components carries information on the transformation of forcing (precipitation) into runoff, but their effects, individually and in combination, are not well understood. In this study we follow hierarchical steps for model building. First, the model structure is assembled from its building blocks (control volumes) and interconnecting fluxes; at this level the effect of adding each control volume, and of the model architecture (the configuration of control volumes and fluxes), can be evaluated. Second, the parameterization of the model is evaluated; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. Finally, the information contributed by the parameter values is quantified, so that the value of the information added at each development level can be assessed.
A quantum model of exaptation: incorporating potentiality into evolutionary theory.
Gabora, Liane; Scott, Eric O; Kauffman, Stuart
2013-09-01
The phenomenon of preadaptation, or exaptation (wherein a trait that originally evolved to solve one problem is co-opted to solve a new problem) presents a formidable challenge to efforts to describe biological phenomena using a classical (Kolmogorovian) mathematical framework. We develop a quantum framework for exaptation with examples from both biological and cultural evolution. The state of a trait is written as a linear superposition of a set of basis states, or possible forms the trait could evolve into, in a complex Hilbert space. These basis states are represented by mutually orthogonal unit vectors, each weighted by an amplitude term. The choice of possible forms (basis states) depends on the adaptive function of interest (e.g., ability to metabolize lactose or thermoregulate), which plays the role of the observable. Observables are represented by self-adjoint operators on the Hilbert space. The possible forms (basis states) corresponding to this adaptive function (observable) are called eigenstates. The framework incorporates key features of exaptation: potentiality, contextuality, nonseparability, and emergence of new features. However, since it requires that one enumerate all possible contexts, its predictive value is limited, consistent with the assertion that there exists no biological equivalent to "laws of motion" by which we can predict the evolution of the biosphere. PMID:23567156
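In finite dimension, the Hilbert-space bookkeeping the abstract describes is ordinary linear algebra: a normalized amplitude vector for the trait's state, a Hermitian operator for the adaptive function, and the Born rule for the probability of resolving into each possible form. A two-form toy sketch (all numbers invented):

```python
import numpy as np

# trait state as a superposition over two possible forms (basis states)
psi = np.array([1.0, 1.0j]) / np.sqrt(2)       # unit-norm amplitude vector

# an "adaptive function" observable: a self-adjoint operator whose eigenvectors
# are the forms the trait can be resolved into in that context
M = np.array([[1.0, 0.5],
              [0.5, -1.0]])                     # Hermitian (real symmetric here)
vals, vecs = np.linalg.eigh(M)                  # eigenstates = possible forms

# Born rule: probability of collapsing onto each eigenstate
probs = np.abs(vecs.conj().T @ psi) ** 2
```

Changing the observable M (i.e., asking about a different adaptive function) changes the eigenbasis and hence the set of "possible forms" and their probabilities, which is the formal sense in which the framework captures contextuality.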
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
A Model for Library Book Circulations Incorporating Loan Periods.
ERIC Educational Resources Information Center
Burrell, Quentin L.; Fenton, Michael R.
1994-01-01
Proposes and explains a modification of the mixed Poisson model for library circulations which takes into account the periods when a book is out on loan and therefore unavailable for borrowing. Highlights include frequency of circulation distributions; negative binomial distribution; and examples of the model at two universities. (Contains 34…
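The two ingredients of such a model can be illustrated with a small simulation. A gamma-mixed Poisson demand process yields the negative binomial circulation distribution; and if a book demanded at rate λ is unavailable for a loan period ℓ after each borrowing, the mean gap between circulations becomes 1/λ + ℓ, so the realized rate is thinned to λ/(1+λℓ). Parameters below are illustrative, not fitted to either university's data:

```python
import numpy as np

rng = np.random.default_rng(42)

n_books, shape, scale = 50_000, 1.2, 2.0
lam = rng.gamma(shape, scale, n_books)        # per-book annual demand rates (heterogeneity)

# ignoring loan periods: gamma-mixed Poisson -> negative binomial circulation counts
circ_nb = rng.poisson(lam)

# with loan period ell (years): the book is unavailable while out, thinning the rate
ell = 1/12                                     # one-month loans
circ_loan = rng.poisson(lam / (1 + lam*ell))
```

The thinning hits high-demand books hardest (λ/(1+λℓ) saturates at 1/ℓ), which is why ignoring loan periods overstates circulation precisely for the most popular titles.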
Incorporating model uncertainty into attribution of observed temperature change
NASA Astrophysics Data System (ADS)
Huntingford, Chris; Stott, Peter A.; Allen, Myles R.; Lambert, F. Hugo
2006-03-01
Optimal detection analyses have been used to determine the causes of past global warming, leading to the conclusion by the Third Assessment Report of the IPCC that "most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations". To date, however, these analyses have not taken full account of uncertainty in the modelled patterns of climate response due to differences in basic model formulation. To address this current "perfect model" assumption, we extend the optimal detection method to include, simultaneously, output from more than one GCM by introducing inter-model variance as an extra uncertainty. Applying the new analysis to three climate models, we find that the effects of both anthropogenic and natural factors are detected. We find that greenhouse gas forcing would very likely have resulted in greater warming than observed during the past half century had there not been an offsetting cooling from aerosols and other forcings.
A transient stochastic weather generator incorporating climate model uncertainty
NASA Astrophysics Data System (ADS)
Glenis, Vassilis; Pinamonti, Valentina; Hall, Jim W.; Kilsby, Chris G.
2015-11-01
Stochastic weather generators (WGs), which provide long synthetic time series of weather variables such as rainfall and potential evapotranspiration (PET), have found widespread use in water resources modelling. When conditioned upon the changes in climatic statistics (change factors, CFs) predicted by climate models, WGs provide a useful tool for climate impacts assessment and adaptation planning. The latest climate modelling exercises have involved large numbers of global and regional climate model integrations, designed to explore the implications of uncertainties in the climate model formulation and parameter settings: so called 'perturbed physics ensembles' (PPEs). In this paper we show how these climate model uncertainties can be propagated through to impact studies by testing multiple vectors of CFs, each vector derived from a different sample from a PPE. We combine this with a new methodology to parameterise the projected time-evolution of CFs. We demonstrate how, when conditioned upon these time-dependent CFs, an existing, well validated and widely used WG can be used to generate non-stationary simulations of future climate that are consistent with probabilistic outputs from the Met Office Hadley Centre's Perturbed Physics Ensemble. The WG enables extensive sampling of natural variability and climate model uncertainty, providing the basis for development of robust water resources management strategies in the context of a non-stationary climate.
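The propagation step described, sampling one CF vector per ensemble member and interpolating its evolution in time before conditioning the WG, can be sketched generically. All numbers below are invented stand-ins, not the Hadley Centre ensemble:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical PPE: each member supplies monthly rainfall change factors at two horizons
n_members, months = 100, 12
cf_2030 = 1 + rng.normal(0.05, 0.04, (n_members, months))
cf_2080 = 1 + rng.normal(0.15, 0.08, (n_members, months))

def cf_at(year, cf_a, cf_b, ya=2030, yb=2080):
    """Linearly interpolate a member's CF vector to an intermediate year."""
    w = np.clip((year - ya) / (yb - ya), 0.0, 1.0)
    return (1 - w)*cf_a + w*cf_b

member = rng.integers(n_members)               # one sample from the ensemble
baseline_mm = np.array([90, 70, 65, 55, 50, 45, 40, 45, 55, 75, 85, 95.0])
target = baseline_mm * cf_at(2055, cf_2030[member], cf_2080[member])
# `target` would then condition the weather generator's monthly rainfall statistics
```

Repeating this over many sampled members and target years gives the non-stationary, uncertainty-spanning set of WG parameterisations the paper describes.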
Incorporating tissue absorption and scattering in rapid ultrasound beam modeling
NASA Astrophysics Data System (ADS)
Christensen, Douglas; Almquist, Scott
2013-02-01
We have developed a new approach for modeling the propagation of an ultrasound beam in inhomogeneous tissues such as encountered with high-intensity focused ultrasound (HIFU) for treatment of various diseases. This method, called the hybrid angular spectrum (HAS) approach, alternates propagation steps between the space and the spatial frequency domains throughout the inhomogeneous regions of the body; the use of spatial Fourier transforms makes this technique considerably faster than other modeling approaches (about 10 sec for a 141 x 141 x 121 model). In HIFU thermal treatments, the acoustic absorption property of the tissues is of prime importance since it leads to temperature rise and the achievement of desired thermal dose at the treatment site. We have recently added to the HAS method the capability of independently modeling tissue absorption and scattering, the two components of acoustic attenuation. These additions improve the predictive value of the beam modeling and more accurately describe the thermal conditions expected during a therapeutic ultrasound exposure. Two approaches to explicitly model scattering were developed: one for scattering sizes smaller than a voxel, and one when the scattering scale is several voxels wide. Some anatomically realistic examples that demonstrate the importance of independently modeling absorption and scattering are given, including propagation through the human skull for noninvasive brain therapy and in the human breast for treatment of breast lesions.
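The alternation the abstract describes can be sketched for a single z-step in one transverse dimension: diffraction handled in k-space against a homogeneous background, then the voxel-wise phase and attenuation of the inhomogeneous tissue applied in the space domain. This is a simplified illustration of the angular-spectrum idea, not the authors' implementation, and all tissue numbers are invented; the key physical point from the abstract is that absorption and scattering both attenuate the forward beam, but only the absorbed fraction deposits heat:

```python
import numpy as np

def has_step(u, dx, dz, k0, n, alpha_abs, alpha_scat):
    """One hybrid angular-spectrum z-step (one transverse dimension): diffract in
    k-space for a homogeneous background, then apply per-voxel phase and attenuation."""
    kx = 2*np.pi*np.fft.fftfreq(u.size, d=dx)
    kz = np.sqrt(np.maximum(k0**2 - kx**2, 0.0))
    prop = np.where(kx**2 <= k0**2, np.exp(1j*kz*dz), 0.0)   # evanescent part zeroed
    u = np.fft.ifft(np.fft.fft(u) * prop)
    # space-domain substep: refractive-index phase + absorption + scattering losses
    u = u * np.exp(1j*k0*(n - 1.0)*dz) * np.exp(-(alpha_abs + alpha_scat)*dz)
    heating = 2.0*alpha_abs*np.abs(u)**2       # only the absorbed fraction heats tissue
    return u, heating

N, dx, dz = 256, 0.1e-3, 0.5e-3
k0 = 2*np.pi*1.5e6/1540.0                      # 1.5 MHz beam, c ~ 1540 m/s in soft tissue
x = (np.arange(N) - N/2)*dx
u0 = np.exp(-(x/3e-3)**2).astype(complex)      # Gaussian transverse profile
u1, q = has_step(u0, dx, dz, k0, np.full(N, 1.02), np.full(N, 5.0), np.full(N, 3.0))
```

Iterating such steps through a 3D map of (n, alpha_abs, alpha_scat) and feeding the accumulated `heating` field into a bioheat solver is the overall workflow the abstract implies.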
Incorporating Uncoupled Stress Effects into FEHM Modeling of HDR Reservoirs
Birdsell, Stephen A.
1988-07-01
Thermal and pressure-induced stress effects are extremely important aspects of modeling HDR reservoirs because these effects will control the transient behavior of reservoir flow impedance, water loss and flow distribution. Uncoupled stress effects will be added to the existing three-dimensional Finite Element Heat and Mass Transfer (FEHM) model (Birdsell, 1988) in order to more realistically simulate HDR reservoirs. Stress effects will be uncoupled in the new model since a fully-coupled code will not be available for some time.
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
Viral dynamics model with CTL immune response incorporating antiretroviral therapy.
Wang, Yan; Zhou, Yicang; Brauer, Fred; Heffernan, Jane M
2013-10-01
We present two HIV models that include the CTL immune response, antiretroviral therapy and a full logistic growth term for uninfected CD4+ T-cells. The difference between the two models lies in the inclusion or omission of a loss term in the free virus equation. We obtain critical conditions for the existence of one, two or three steady states, and analyze the stability of these steady states. Through numerical simulation we find substantial differences in the reproduction numbers and the behaviour at the infected steady state between the two models, for certain parameter sets. We explore the effect of varying the combination drug efficacy on model behaviour, and the possibility of reconstituting the CTL immune response through antiretroviral therapy. Furthermore, we employ Latin hypercube sampling to investigate the existence of multiple infected equilibria. PMID:22930342
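One plausible concrete form of such a model, with the free-virus loss term included and a simple mass-action CTL compartment, can be integrated with an explicit scheme. The parameter values below are purely illustrative, not the paper's calibrated set:

```python
import numpy as np

# illustrative parameters: logistic T-cell growth, therapy efficacy eps,
# CTL (Z) killing of infected cells; not the paper's values
r, Tmax = 0.03, 1500.0      # T-cell growth rate, carrying capacity
k, delta = 1e-4, 0.7        # infection rate, infected-cell death rate
NN, c = 100.0, 3.0          # burst size, viral clearance rate
p, b, d = 1e-3, 5e-4, 0.05  # CTL killing, expansion, death rates
eps = 0.6                   # combined antiretroviral drug efficacy

def rhs(y):
    T, I, V, Z = y
    dT = r*T*(1 - (T + I)/Tmax) - (1 - eps)*k*V*T
    dI = (1 - eps)*k*V*T - delta*I - p*I*Z
    dV = NN*delta*I - c*V - (1 - eps)*k*V*T    # includes the free-virus loss term
    dZ = b*I*Z - d*Z
    return np.array([dT, dI, dV, dZ])

y = np.array([1000.0, 0.0, 1e-3, 1.0])          # initial (T, I, V, Z)
dt = 0.01
for _ in range(int(200/dt)):                    # 200 days, explicit Euler
    y = y + dt*rhs(y)
```

Dropping the `- (1 - eps)*k*V*T` term from `dV` gives the second model variant the abstract contrasts; with everything else fixed, that omission inflates the effective reproduction number, which is the kind of between-model difference the authors quantify.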
NEXT-GENERATION NUMERICAL MODELING: INCORPORATING ELASTICITY, ANISOTROPY AND ATTENUATION
S. LARSEN; ET AL
2001-03-01
A new effort has been initiated between the Department of Energy (DOE) and the Society of Exploration Geophysicists (SEG) to investigate what features the next generation of numerical seismic models should contain that will best address current technical problems encountered during exploration in increasingly complex geologies. This collaborative work is focused on designing and building these new models, generating synthetic seismic data through simulated surveys of various geometries, and using these data to test and validate new and improved seismic imaging algorithms. The new models will be both 2- and 3-dimensional and will include complex velocity structures as well as anisotropy and attenuation. Considerable attention is being focused on multi-component acoustic and elastic effects because it is now widely recognized that converted phases could play a vital role in improving the quality of seismic images. An existing, validated 3-D elastic modeling code is being used to generate the synthetic data. Preliminary elastic modeling results using this code are presented here along with a description of the proposed new models that will be built and tested.
Zacharof, A I; Butler, A P
2004-01-01
A mathematical model simulating the hydrological and biochemical processes occurring in landfilled waste is presented and demonstrated. The model combines biochemical and hydrological models into an integrated representation of the landfill environment. Waste decomposition is modelled using traditional biochemical waste decomposition pathways combined with a simplified methodology for representing the rate of decomposition. Water flow through the waste is represented using a statistical velocity model capable of representing the effects of waste heterogeneity on leachate flow through the waste. Given the limitations in data capture from landfill sites, significant emphasis is placed on improving parameter identification and reducing parameter requirements. A sensitivity analysis is performed, highlighting the model's response to changes in input variables, and a model test run is presented, demonstrating the model's capabilities. A parameter perturbation sensitivity analysis shows that although the model is sensitive to certain key parameters, its overall intuitive response provides a good basis for making reasonable predictions of the future state of the landfill system. Finally, due to the high uncertainty associated with landfill data, a tool for handling input data uncertainty is incorporated in the model's structure. It is concluded that the model can be used as a reasonable tool for modelling landfill processes and that further work should be undertaken to assess the model's performance. PMID:15120429
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
UV communications channel modeling incorporating multiple scattering interactions.
Drost, Robert J; Moore, Terrence J; Sadler, Brian M
2011-04-01
In large part because of advancements in the design and fabrication of UV LEDs, photodetectors, and filters, significant research interest has recently been focused on non-line-of-sight UV communication systems. This research in, for example, system design and performance prediction, can be greatly aided by accurate channel models that allow for the reproducibility of results, thus facilitating the fair and consistent comparison of different communication approaches. In this paper, we provide a comprehensive derivation of a multiple-scattering Monte Carlo UV channel model, addressing weaknesses in previous treatments. The resulting model can be used to study the contribution of different orders of scattering to the path loss and impulse response functions associated with general UV communication system geometries. Simulation results are provided that demonstrate the benefit of this approach. PMID:21478967
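The multiple-scattering Monte Carlo idea can be sketched in a deliberately simplified form: photons take exponentially distributed free paths, survive each scattering event with probability equal to the single-scatter albedo, and are tallied by scattering order when they reach a coarse spherical "receiver". This is an isotropic-scattering toy with invented coefficients and a crude detection test at scatter points only, not the paper's model (which would use a realistic phase function, solar-blind wavelengths, and a proper Tx/Rx field-of-view geometry):

```python
import numpy as np

rng = np.random.default_rng(3)

k_abs, k_scat = 0.1, 0.4                 # absorption / scattering coefficients (1/m), invented
k_ext = k_abs + k_scat
albedo = k_scat / k_ext
rx_pos, rx_radius = np.array([5.0, 0.0, 0.0]), 1.5
c_light = 3e8

arrivals = []                             # (arrival time, weight, scattering order)
for _ in range(2000):
    pos = np.zeros(3)
    while True:                           # Tx: random direction in an upward cone (NLOS)
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        if v[2] > 0.5:
            break
    path, weight = 0.0, 1.0
    for order in range(1, 6):             # follow up to 5 scattering orders
        step = rng.exponential(1.0/k_ext)
        pos = pos + step*v
        path += step
        if np.linalg.norm(pos - rx_pos) < rx_radius:   # crude detection test
            arrivals.append((path/c_light, weight, order))
            break
        weight *= albedo                  # survival probability at each scatter
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)            # isotropic re-direction
```

Binning `weight` by `order` and by arrival time yields, respectively, the per-order contribution to path loss and a coarse impulse response, which are the two quantities the abstract says the full model is used to study.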
Francis, Royce A; Vanbriesen, Jeanne M; Small, Mitchell J
2010-02-15
Statistical models are developed for bromine incorporation in the trihalomethane (THM), trihaloacetic acids (THAA), dihaloacetic acid (DHAA), and dihaloacetonitrile (DHAN) subclasses of disinfection byproducts (DBPs) using distribution system samples from plants applying only free chlorine as a primary or residual disinfectant in the Information Collection Rule (ICR) database. The objective of this study is to characterize the effect of water quality conditions before, during, and post-treatment on distribution system bromine incorporation into DBP mixtures. Bayesian Markov Chain Monte Carlo (MCMC) methods are used to model individual DBP concentrations and estimate the coefficients of the linear models used to predict the bromine incorporation fraction for distribution system DBP mixtures in each of the four priority DBP classes. The bromine incorporation models achieve good agreement with the data. The most important predictors of bromine incorporation fraction across DBP classes are alkalinity, specific UV absorption (SUVA), and the bromide to total organic carbon ratio (Br:TOC) at the first point of chlorine addition. Free chlorine residual in the distribution system, distribution system residence time, distribution system pH, turbidity, and temperature only slightly influence bromine incorporation. The bromide to applied chlorine (Br:Cl) ratio is not a significant predictor of the bromine incorporation fraction (BIF) in any of the four classes studied. These results indicate that removal of natural organic matter and the location of chlorine addition are important treatment decisions that have substantial implications for bromine incorporation into disinfection byproducts in drinking waters. PMID:20095529
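The quantity being modelled has a standard definition: for the THM class, the bromine incorporation fraction is the molar-weighted mean number of bromine atoms per molecule divided by the maximum of three. A sketch with invented molar concentrations:

```python
# molar concentrations (nmol/L, invented) of the four THM species,
# indexed by number of bromine atoms 0..3: CHCl3, CHBrCl2, CHBr2Cl, CHBr3
thm_nM = {0: 120.0, 1: 40.0, 2: 15.0, 3: 5.0}

total = sum(thm_nM.values())
n_bar = sum(n*conc for n, conc in thm_nM.items()) / total  # mean Br atoms per molecule
bif = n_bar / 3                                            # bromine incorporation fraction
```

BIF runs from 0 (all-chlorine species) to 1 (all-bromine species); the study's linear models predict this fraction, per DBP class, from the water quality covariates listed above.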
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
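The standard dual-isotope, three-source linear mixing model the passage refers to is a 3x3 linear system: one mass balance per isotope plus the constraint that the source fractions sum to one. A sketch with invented signatures:

```python
import numpy as np

# hypothetical source signatures (per mil) for three food sources
d13C = np.array([-28.0, -20.0, -12.0])
d15N = np.array([2.0, 10.0, 4.0])
mix = (-22.4, 4.8)                        # observed consumer (mixture) signature

A = np.vstack([np.ones(3), d13C, d15N])   # rows: sum-to-one, C balance, N balance
b = np.array([1.0, mix[0], mix[1]])
fractions = np.linalg.solve(A, b)         # diet fractions f1, f2, f3
```

The concentration-dependent extension the abstract alludes to replaces the simple balance rows with terms weighted by each source's elemental concentration, so that sources rich in C or N contribute more to the corresponding isotope balance; the system then becomes nonlinear in the fractions and is typically solved iteratively.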
Incorporating Satellite Time-Series Data into Modeling
NASA Technical Reports Server (NTRS)
Gregg, Watson
2008-01-01
In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer the drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitations of long-term satellite data sets are also discussed.
The incorporation of geomorphic information in storage-zone models.
NASA Astrophysics Data System (ADS)
Boufadel, M. C.; Gabriel, M.
2001-12-01
Three stream-tracer studies were conducted in a 190-m reach of an urban stream in Philadelphia to investigate the interactions between the main channel and transverse storage zones. Sodium chloride was used as a conservative tracer and was monitored at two downstream locations using electric conductivity measurements. The experiments were simulated using the advection-dispersion equation with additional terms that account for the transverse exchange. The fit of the model to the data was good when all the parameters were assumed to be sub-reach-averaged. When measurements of the cross sectional area at various downstream distances were introduced into the model, the remaining reach-averaged parameters had to take extreme values to achieve agreement with the experimental breakthrough curve. This indicates that additional but incomplete geomorphic information does not necessarily improve the understanding of a particular stream system. The variation of the parameters with scale was also explored.
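The governing equations described, advection-dispersion in the main channel plus first-order exchange with a transverse storage zone, can be sketched with an explicit upwind scheme. The parameters below are illustrative, not the Philadelphia reach's fitted values:

```python
import numpy as np

# 1D advection-dispersion with a transverse storage zone, explicit upwind scheme
L_reach, dx, dt = 190.0, 1.0, 0.5
u, D = 0.3, 0.5                 # velocity (m/s), dispersion (m^2/s); invented
alpha = 2e-3                    # exchange rate with the storage zone (1/s)
A_over_As = 2.0                 # main-channel to storage-zone area ratio

n = int(L_reach/dx)
C = np.zeros(n)                 # main-channel concentration
Cs = np.zeros(n)                # storage-zone concentration
C[0:5] = 10.0                   # slug of conservative tracer (e.g. NaCl) at the head

for _ in range(600):            # 300 s of simulated transport
    adv = -u*np.diff(np.concatenate(([0.0], C)))/dx               # upwind, clean inflow
    disp = D*np.diff(np.concatenate(([C[0]], C, [C[-1]])), 2)/dx**2
    ex = alpha*(Cs - C)                                           # exchange with storage
    C = C + dt*(adv + disp + ex)
    Cs = Cs + dt*alpha*A_over_As*(C - Cs)
```

Fitting (u, D, alpha, A_over_As) as reach-averaged constants against the measured breakthrough curves is the exercise the abstract describes; its point is that substituting measured cross-sectional areas for the averaged ones forced the remaining parameters to unrealistic values.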
Incorporating Spatial Models in Visual Field Test Procedures
Rubinstein, Nikki J.; McKendrick, Allison M.; Turpin, Andrew
2016-01-01
Purpose: To introduce a perimetric algorithm (Spatially Weighted Likelihoods in Zippy Estimation by Sequential Testing [ZEST] [SWeLZ]) that uses spatial information on every presentation to alter visual field (VF) estimates, to reduce test times without affecting output precision and accuracy. Methods: SWeLZ is a maximum likelihood Bayesian procedure, which updates probability mass functions at VF locations using a spatial model. Spatial models were created from empirical data, computational models, nearest neighbor, random relationships, and interconnecting all locations. SWeLZ was compared to an implementation of the ZEST algorithm for perimetry using computer simulations on 163 glaucomatous and 233 normal VFs (Humphrey Field Analyzer 24-2). Output measures included number of presentations and visual sensitivity estimates. Results: There was no significant difference in accuracy or precision of SWeLZ for the different spatial models relative to ZEST, either when collated across whole fields or when split by input sensitivity. Inspection of VF maps showed that SWeLZ was able to detect localized VF loss. SWeLZ was faster than ZEST for normal VFs: median number of presentations reduced by 20% to 38%. The number of presentations was equivalent for SWeLZ and ZEST when simulated on glaucomatous VFs. Conclusions: SWeLZ has the potential to reduce VF test times in people with normal VFs, without detriment to output precision and accuracy in glaucomatous VFs. Translational Relevance: SWeLZ is a novel perimetric algorithm. Simulations show that SWeLZ can reduce the number of test presentations for people with normal VFs. Since many patients have normal fields, this has the potential for significant time savings in clinical settings. PMID:26981329
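The core ZEST update that SWeLZ builds on is a one-line Bayesian step: maintain a probability mass function over candidate sensitivities, multiply it by the likelihood of each seen/not-seen response, and present the next stimulus at the posterior mean. A single-location sketch with an invented frequency-of-seeing curve (SWeLZ's extension, per the abstract, additionally spreads each update across spatially related locations with model-derived weights):

```python
import numpy as np

rng = np.random.default_rng(11)

domain = np.arange(0, 41)                          # candidate sensitivities (dB)
pmf = np.ones(domain.size) / domain.size           # flat prior

def p_seen(stim, sens, fp=0.05, fn=0.05, slope=1.0):
    """Frequency-of-seeing curve: probability of a 'seen' response to stimulus `stim`."""
    core = 1.0/(1.0 + np.exp((stim - sens)/slope))  # dimmer (higher-dB) stimuli seen less
    return fp + (1 - fp - fn)*core

true_sens = 24                                     # simulated observer
for _ in range(30):
    stim = int(round(np.sum(domain*pmf)))          # present at the posterior mean
    seen = rng.random() < p_seen(stim, true_sens)
    like = p_seen(stim, domain) if seen else 1 - p_seen(stim, domain)
    pmf = pmf*like
    pmf /= pmf.sum()                               # Bayes update

estimate = float(np.sum(domain*pmf))
```

In SWeLZ, each response would update not just this location's pmf but also its neighbors' pmfs, weighted by the spatial model, which is what lets prior structure shorten tests on normal fields.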
Incorporating organic soil into a global climate model
NASA Astrophysics Data System (ADS)
Lawrence, David M.; Slater, Andrew G.
2008-02-01
Organic matter significantly alters a soil's thermal and hydraulic properties but is not typically included in land-surface schemes used in global climate models. This omission has consequences for ground thermal and moisture regimes, particularly in the high latitudes where soil carbon content is generally high. Global soil carbon data is used to build a geographically distributed, profiled soil carbon density dataset for the Community Land Model (CLM). CLM parameterizations for soil thermal and hydraulic properties are modified to accommodate both mineral and organic soil matter. Offline simulations including organic soil are characterized by cooler annual mean soil temperatures (up to ~2.5°C cooler for regions of high soil carbon content). Cooling is strong in summer due to modulation of early and mid-summer soil heat flux. Winter temperatures are slightly warmer as organic soils do not cool as efficiently during fall and winter. High porosity and hydraulic conductivity of organic soil leads to a wetter soil column but with comparatively low surface layer saturation levels and correspondingly low soil evaporation. When CLM is coupled to the Community Atmosphere Model, the reduced latent heat flux drives deeper boundary layers, associated reductions in low cloud fraction, and warmer summer air temperatures in the Arctic. Lastly, the insulative properties of organic soil reduce interannual soil temperature variability, but only marginally. This result suggests that, although the mean soil temperature cooling will delay the simulated date at which frozen soil begins to thaw, organic matter may provide only limited insulation from surface warming.
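At its core, the parameterization change described is a blend of mineral and organic end-member properties for each soil layer, weighted by the layer's organic-matter fraction. A generic sketch; the end-member values are typical textbook-scale numbers, not CLM's exact parameters:

```python
import numpy as np

def soil_properties(f_om):
    """Blend mineral and organic end-member properties by organic fraction f_om (0..1).
    End-member values are illustrative: dry thermal conductivity (W/m/K),
    porosity (-), and saturated hydraulic conductivity (m/s)."""
    k_dry = (1 - f_om)*0.25 + f_om*0.05            # organic soil insulates
    porosity = (1 - f_om)*0.45 + f_om*0.90          # organic soil holds more water
    k_sat = np.exp((1 - f_om)*np.log(5e-6) + f_om*np.log(1e-4))  # geometric blend
    return k_dry, porosity, k_sat

# organic-rich surface layer vs largely mineral subsoil
top = soil_properties(0.8)
deep = soil_properties(0.05)
```

A profiled carbon-density dataset supplies f_om per layer and grid cell; the low conductivity and high porosity of the organic-rich surface layers are what produce the cooler, wetter soil columns the abstract reports.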
Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.
1997-01-01
Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is born increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni
Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.
1997-01-01
Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal, and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective, the new generation of ceramic composites and intermetallics offers significant potential for raising the thrust/weight ratio and reducing NO(sub x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for 'graceful' rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking, despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics, the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni
Incorporating tracer-tracee differences into models to improve accuracy
Schoeller, D.A.
1991-05-01
The ideal tracer for metabolic studies is one that behaves exactly like the tracee. Compounds labeled with isotopes come closest to this ideal because they are chemically identical to the tracee except for the substitution of a stable isotope or radioisotope at one or more positions. Even this substitution, however, can introduce a difference in metabolism that may be quantitatively important with regard to the development of the mathematical model used to interpret the kinetic data. The doubly labeled water method for the measurement of carbon dioxide production, and hence energy expenditure, in free-living subjects is a good example of how differences between the metabolism of the tracers and the tracee can influence the accuracy of the carbon dioxide production rate determined from the kinetic data.
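The doubly labeled water calculation described above can be sketched with the textbook Lifson-McClintock simplification: the oxygen tracer leaves the body as both water and CO2, the hydrogen tracer as water only, so the difference in their elimination rates isolates CO2 production. The function name and all numeric values below are illustrative, and the formula deliberately omits the fractionation and dilution-space corrections that are precisely the tracer-tracee differences the abstract concerns.

```python
def co2_production(n_body_water_mol, k_oxygen, k_hydrogen):
    """CO2 production rate (mol/day) from the body water pool N (mol)
    and tracer elimination rates (1/day). 18O leaves as water and CO2,
    2H as water only, so their rate difference isolates CO2. Ignores
    isotope fractionation, the tracer-tracee difference whose
    corrections the abstract discusses."""
    return 0.5 * n_body_water_mol * (k_oxygen - k_hydrogen)

# Illustrative numbers (~40 L of body water, typical rate constants):
print(round(co2_production(2220.0, 0.12, 0.10), 1))  # 22.2 mol CO2/day
```

In practice the correction terms change this estimate by several percent, which is why the abstract emphasizes modeling the tracer-tracee differences explicitly.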
Incorporating affective bias in models of human decision making
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.
1991-01-01
Research on human decision making has traditionally focused on how people actually make decisions, how good their decisions are, and how their decisions can be improved. Recent research suggests that this model is inadequate. Affective as well as cognitive components drive the way information about relevant outcomes and events is perceived, integrated, and used in the decision making process. The affective components include how the individual frames outcomes as good or bad, whether the individual anticipates regret in a decision situation, the affective mood state of the individual, and the psychological stress level anticipated or experienced in the decision situation. A focus of the current work has been to propose empirical studies that will attempt to examine in more detail the relationships between the latter two critical affective influences (mood state and stress) on decision making behavior.
Incorporating flood event analyses and catchment structures into model development
NASA Astrophysics Data System (ADS)
Oppel, Henning; Schumann, Andreas
2016-04-01
The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterise this variance consists of comparisons between flood events in a catchment and between the flood responses of several sub-basins within such an event. In analytical frameworks, the impact of space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify the runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and the distribution of catchment characteristics in their spatial variability and interaction with the space-time variability of rainfall provides additional knowledge about hydrological processes at the basin scale. For this purpose a new procedure to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes) was developed. It was applied to a study of flood responses at a set of nested catchments in a river basin in eastern Germany. In this study the highest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new
Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...
Crowther, Michael J; Andersson, Therese M-L; Lambert, Paul C; Abrams, Keith R; Humphreys, Keith
2016-03-30
A now common goal in medical research is to investigate the inter-relationships between a repeatedly measured biomarker, measured with error, and the time to an event of interest. This form of question can be tackled with a joint longitudinal-survival model, with the most common approach combining a longitudinal mixed effects model with a proportional hazards survival model, where the models are linked through shared random effects. In this article, we look at incorporating delayed entry (left truncation), which has received relatively little attention. The extension to delayed entry requires a second set of numerical integration, beyond that required in a standard joint model. We therefore implement two sets of fully adaptive Gauss-Hermite quadrature with nested Gauss-Kronrod quadrature (to allow time-dependent association structures), conducted simultaneously, to evaluate the likelihood. We evaluate fully adaptive quadrature compared with previously proposed non-adaptive quadrature through a simulation study, showing substantial improvements, both in terms of minimising bias and reducing computation time. We further investigate, through simulation, the consequences of misspecifying the longitudinal trajectory and its impact on estimates of association. Our scenarios showed the current value association structure to be very robust, compared with the rate of change that we found to be highly sensitive showing that assuming a simpler trend when the truth is more complex can lead to substantial bias. With emphasis on flexible parametric approaches, we generalise previous models by proposing the use of polynomials or splines to capture the longitudinal trend and restricted cubic splines to model the baseline log hazard function. The methods are illustrated on a dataset of breast cancer patients, modelling mammographic density jointly with survival, where we show how to incorporate density measurements prior to the at-risk period, to make use of all the available
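The Gauss-Hermite quadrature at the heart of the joint-model likelihood above can be illustrated in a few lines: the expectation of a function of a normally distributed random effect is replaced by a weighted sum over quadrature nodes. The sketch below shows only the plain (non-adaptive) rule; in the adaptive scheme the authors advocate, `mu` and `sigma` would be recentred on each subject's posterior mode and curvature rather than supplied directly.

```python
import numpy as np

def gh_expectation(f, mu, sigma, n=15):
    """Approximate E[f(b)] for b ~ N(mu, sigma^2) using n-point
    Gauss-Hermite quadrature. The change of variable b = mu +
    sqrt(2)*sigma*x maps the N(mu, sigma^2) density onto the
    exp(-x^2) weight that the Hermite rule integrates exactly."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    values = f(mu + np.sqrt(2.0) * sigma * nodes)
    return np.sum(weights * values) / np.sqrt(np.pi)

# Sanity check against the closed form E[b^2] = mu^2 + sigma^2:
print(gh_expectation(lambda b: b ** 2, mu=1.0, sigma=0.5))  # ~1.25
```

For a random effect far from zero with small variance, the fixed nodes of this plain rule miss the mass of the integrand, which is the bias the adaptive recentring corrects.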
Incorporating swarm data into plasma models and plasma surface interactions
NASA Astrophysics Data System (ADS)
Makabe, Toshiaki
2009-10-01
Since the mid-1980s, modeling of non-equilibrium plasmas in the collisional regime driven at radio frequency has been developed at pressures greater than ~Pa. The collisional plasma has distinct characteristics induced by the quantum properties of each feed gas molecule through collisions with electrons or heavy particles. That is, there exists a proper function caused by chemically active radicals, negative ions, and radiation based on the molecular quantum structure through short-range interactions, mainly with electrons. This differs from high-density, collisionless plasma controlled by the long-range Coulomb interaction. The quantum property in the form of the collision cross section is the first essential, through swarm parameters, for investigating the collisional plasma structure and predicting its function. This structure and function, of course, appear under a self-organized spatiotemporal distribution of electrons and positive ions subject to electromagnetic theory, i.e., bulk plasma and ion sheath. In a plasma interacting with a surface, the flux, energy and angle of particles incident on the surface are basic quantities. It is helpful to learn the limits of the swarm data in a quasi-equilibrium situation, and to find a way around the difficulty, when predicting the collisional plasma, its function, and the related surface processes. In this talk we will discuss some of these experiences in the case of space- and time-varying radiofrequency plasma and micro/nano-surface processes. This work is partly supported by the Global-COE program at Keio University, granted by MEXT Japan.
Incorporating the life course model into MCH nutrition leadership education and training programs.
Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S
2013-01-01
Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development. PMID:22350632
Incorporating phosphorus cycling into global modeling efforts: a worthwhile, tractable endeavor
Reed, Sasha C.; Yang, Xiaojuan; Thornton, Peter E.
2015-01-01
Myriad field, laboratory, and modeling studies show that nutrient availability plays a fundamental role in regulating CO2 exchange between the Earth's biosphere and atmosphere, and in determining how carbon pools and fluxes respond to climatic change. Accordingly, global models that incorporate coupled climate–carbon cycle feedbacks made a significant advance with the introduction of a prognostic nitrogen cycle. Here we propose that incorporating phosphorus cycling represents an important next step in coupled climate–carbon cycling model development, particularly for lowland tropical forests where phosphorus availability is often presumed to limit primary production. We highlight challenges to including phosphorus in modeling efforts and provide suggestions for how to move forward.
Cooley, R.L.
1983-01-01
Investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. -from Author
NASA Astrophysics Data System (ADS)
Peng, Guanghan; He, Hongdi; Lu, Wei-Zhen
2016-01-01
In this paper, a new car-following model is proposed that incorporates timid and aggressive driving behaviors on a single lane. The linear stability condition including the timid and aggressive behavior term is obtained. Numerical simulation indicates that the new car-following model can estimate the proper delay time of car motion and the kinematic wave speed at jam density by taking timid and aggressive behaviors into account. The results also show that aggressive behavior can improve traffic flow while timid behavior deteriorates traffic stability; aggressive behavior outperforms timid behavior because the aggressive driver responds rapidly to variations in the velocity of the leading car. Snapshots of the velocities also show that the new model gives a good approximation to a wide moving jam.
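The family of models the abstract builds on can be sketched with a baseline optimal-velocity (Bando-type) car-following update: each car accelerates toward an optimal velocity determined by its headway. This is only the unmodified baseline under illustrative parameters; the paper's timid/aggressive response terms are deliberately omitted.

```python
import math

def ov(headway, v_max=2.0, h_c=4.0):
    """Optimal-velocity function (Bando-type); parameters illustrative."""
    return v_max * (math.tanh(headway - h_c) + math.tanh(h_c)) / 2.0

def step(x, v, a=1.0, dt=0.1, length=100.0):
    """One explicit update of the baseline OV car-following model on a
    ring road of given length. The cited paper adds timid/aggressive
    behavior terms to a model of this family; they are omitted here."""
    n = len(x)
    acc = [a * (ov((x[(i + 1) % n] - x[i]) % length) - v[i]) for i in range(n)]
    v_new = [v[i] + acc[i] * dt for i in range(n)]
    x_new = [(x[i] + v_new[i] * dt) % length for i in range(n)]
    return x_new, v_new

# Uniform headway with the matching optimal velocity is a steady state:
x0 = [i * 4.0 for i in range(25)]
v0 = [ov(4.0)] * 25
x1, v1 = step(x0, v0)
print(v1 == v0)  # True
```

Linear stability analyses of this update, of the kind the abstract reports, perturb exactly this uniform steady state and ask whether the perturbation grows into a wide moving jam.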
A Bayesian Model for Pooling Gene Expression Studies That Incorporates Co-Regulation Information
Conlon, Erin M.; Postier, Bradley L.; Methé, Barbara A.; Nevin, Kelly P.; Lovley, Derek R.
2012-01-01
Current Bayesian microarray models that pool multiple studies assume gene expression is independent of other genes. However, in prokaryotic organisms, genes are arranged in units that are co-regulated (called operons). Here, we introduce a new Bayesian model for pooling gene expression studies that incorporates operon information into the model. Our Bayesian model borrows information from other genes within the same operon to improve estimation of gene expression. The model produces the gene-specific posterior probability of differential expression, which is the basis for inference. We found in simulations and in biological studies that incorporating co-regulation information improves upon the independence model. We assume that each study contains two experimental conditions: a treatment and control. We note that there exist environmental conditions for which genes that are supposed to be transcribed together lose their operon structure, and that our model is best carried out for known operon structures. PMID:23284902
NASA Astrophysics Data System (ADS)
Hill, D.; Bell, K. R. W.; McMillan, D.; Infield, D.
2014-05-01
Wind power production in the electricity portfolio is growing to meet ambitious targets, set for example by the EU, to reduce greenhouse gas emissions by 20% by 2020. Huge investments are now being made in new offshore wind farms around UK coastal waters that will have a major impact on the GB electrical supply. Syntheses of the UK wind field are required that capture its inherent structure and the correlations between different locations, including offshore sites. Here, Vector Auto-Regressive (VAR) models are presented and extended in a novel way to incorporate offshore time series from a pan-European meteorological model called COSMO with onshore wind speeds from the MIDAS dataset provided by the British Atmospheric Data Centre. Forecasting ability onshore is shown to be improved by the inclusion of the offshore sites, with improvements of up to 25% in RMS error at 6 h ahead. In addition, the VAR model is used to synthesise time series of wind at each offshore site, which are then used to estimate wind farm capacity factors at the sites in question. These are then compared with estimates of capacity factors derived from the work of Hawkins et al. (2011). A good degree of agreement is established, indicating that this synthesis tool should be useful in power system impact studies.
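The VAR approach described above reduces, in its simplest first-order form, to a multivariate least-squares regression of each site's wind speed on the previous time step at all sites. The sketch below fits a VAR(1) on synthetic zero-mean data and iterates it forward 6 steps, mirroring the 6 h ahead forecasts in the abstract; the coefficient matrix and noise level are invented for the demonstration.

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of Y_t = A @ Y_{t-1} + e_t for a zero-mean
    multivariate series Y of shape (T, k): T time steps, k sites
    (e.g. onshore plus offshore wind speeds)."""
    X, Z = Y[:-1], Y[1:]
    A_ls, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return A_ls.T  # (k, k) coefficient matrix

def forecast(A, y_last, steps=6):
    """Iterate the fitted model forward, e.g. 6 h ahead."""
    y = y_last.copy()
    for _ in range(steps):
        y = A @ y
    return y

# Synthetic demo: recover a known (stationary) coefficient matrix.
rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.1], [0.2, 0.7]])
Y = np.zeros((2000, 2))
for t in range(1, 2000):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)
A_hat = fit_var1(Y)
print(np.round(A_hat, 2))
```

The off-diagonal entries of `A_hat` are what carry the cross-site information; adding offshore columns enlarges that matrix, which is how the extended model improves onshore forecasts.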
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Holmes, D. R., III; Packer, D. L.; Robb, R. A.
2011-03-01
Atrial fibrillation is a common cardiac arrhythmia in which aberrant electrical activity causes the atria to quiver, resulting in irregular beating of the heart. Catheter ablation therapy is becoming increasingly popular in treating atrial fibrillation; in this procedure an electrophysiologist guides a catheter into the left atrium and creates radiofrequency lesions to stop the arrhythmia. Typical visualization tools include bi-plane fluoroscopy, 2-D ultrasound, and electroanatomic maps; however, recently there has been increased interest in incorporating preoperative surface models into the procedure. Typical strategies for registration include landmark-based and surface-based methods. Drawbacks of these approaches include difficulty in accurately locating corresponding landmark pairs and the time required to sample surface points with a catheter. In this paper, we describe a new approach which models the catheter tip as a Gaussian kernel and eliminates the need to collect surface points by instead using the stream of continuously tracked catheter points. We demonstrate the feasibility of this technique with a left atrial phantom model and compare the results with a standard surface-based approach.
NASA Astrophysics Data System (ADS)
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.
2016-06-01
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
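The structure of such a strength model can be illustrated with a generic thermally-activated flow-stress form: a Peierls (kink-pair) term that shrinks with temperature and grows with strain rate, plus an athermal term scaled by a pressure-dependent shear modulus. Every constant below is an illustrative placeholder, not a value fitted in the paper; only the qualitative trends are meant to match.

```python
import math

def flow_stress(temp_k, strain_rate, pressure_gpa):
    """Generic kink-pair-style flow stress sketch (GPa). The thermal
    term vanishes above a critical temperature; the athermal term
    scales with a linearly pressure-dependent shear modulus.
    All constants are illustrative, not the paper's parameterization."""
    mu = 69.0 + 1.5 * pressure_gpa       # shear modulus mu(p), GPa
    sigma_athermal = 0.004 * mu          # athermal part scales with mu(p)
    sigma_peierls = 1.0                  # 0 K Peierls stress, GPa
    k_over_h = 1.0 / 9000.0              # k_B / kink-pair enthalpy, 1/K
    s = 1.0 - math.sqrt(k_over_h * temp_k * math.log(1e7 / strain_rate))
    return sigma_athermal + sigma_peierls * max(0.0, s) ** 2
```

The three trends the abstract validates against experiment correspond to three monotonicities of this function: stress falls with temperature, rises with strain rate, and rises with pressure.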
The Forced Choice Dilemma: A Model Incorporating Idiocentric/Allocentric Cultural Orientation
ERIC Educational Resources Information Center
Jung, Jae Yup; McCormick, John; Gross, Miraca U. M.
2012-01-01
This study developed and tested a new model of the forced choice dilemma (i.e., the belief held by some intellectually gifted students that they must choose between academic achievement and peer acceptance) that incorporates individual-level cultural orientation variables (i.e., vertical allocentrism and vertical idiocentrism). A survey that had…
SPARC Groups: A Model for Incorporating Spiritual Psychoeducation into Group Work
ERIC Educational Resources Information Center
Christmas, Christopher; Van Horn, Stacy M.
2012-01-01
The use of spirituality as a resource for clients within the counseling field is growing; however, the primary focus has been on individual therapy. The purpose of this article is to provide counseling practitioners, administrators, and researchers with an approach for incorporating spiritual psychoeducation into group work. The proposed model can…
Controllability and Optimal Harvesting of a Prey-Predator Model Incorporating a Prey Refuge
ERIC Educational Resources Information Center
Kar, Tapan Kumar
2006-01-01
This paper deals with a prey-predator model incorporating a prey refuge and harvesting of the predator species. A mathematical analysis shows that prey refuge plays a crucial role for the survival of the species and that the harvesting effort on the predator may be used as a control to prevent the cyclic behaviour of the system. The optimal…
Incorporating Eco-Evolutionary Processes into Population Models:Design and Applications
Eco-evolutionary population models are powerful new tools for exploring howevolutionary processes influence plant and animal population dynamics andvice-versa. The need to manage for climate change and other dynamicdisturbance regimes is creating a demand for the incorporation of...
Incorporating 4MAT Model in Distance Instructional Material--An Innovative Design
ERIC Educational Resources Information Center
Nikolaou, Alexandra; Koutsouba, Maria
2012-01-01
In an attempt to improve the effectiveness of distance learning, the present study aims to introduce an innovative way of creating and designing distance learning instructional material incorporating Bernice McCarthy's 4MAT Model based on learning styles. According to McCarthy's theory, all students can learn effectively in a cycle of learning…
Nine challenges in incorporating the dynamics of behaviour in infectious diseases models.
Funk, Sebastian; Bansal, Shweta; Bauch, Chris T; Eames, Ken T D; Edmunds, W John; Galvani, Alison P; Klepac, Petra
2015-03-01
Traditionally, the spread of infectious diseases in human populations has been modelled with static parameters. These parameters, however, can change when individuals change their behaviour. If these changes are themselves influenced by the disease dynamics, there is scope for mechanistic models of behaviour to improve our understanding of this interaction. Here, we present challenges in modelling changes in behaviour relating to disease dynamics, specifically: how to incorporate behavioural changes in models of infectious disease dynamics, how to inform measurement of relevant behaviour to parameterise such models, and how to determine the impact of behavioural changes on observed disease dynamics. PMID:25843377
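The contrast between static parameters and behaviour-coupled dynamics described above can be made concrete with a minimal SIR model in which the contact rate falls as prevalence rises. The feedback form `beta0 / (1 + alpha * I)` and all parameter values are illustrative stand-ins, not a model from the paper; setting `alpha = 0` recovers the traditional static-parameter case.

```python
def epidemic_peak(beta0=0.5, gamma=0.1, alpha=50.0, days=300, dt=0.05):
    """Peak prevalence of an SIR epidemic (Euler integration) where the
    contact rate responds to prevalence: beta(t) = beta0 / (1 + alpha*I).
    A minimal stand-in for behaviour-disease coupling; alpha = 0 gives
    the static-parameter model. All values are illustrative."""
    s, i, r = 0.999, 0.001, 0.0
    peak = i
    for _ in range(int(days / dt)):
        beta = beta0 / (1.0 + alpha * i)
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Behavioural response flattens the epidemic curve:
print(epidemic_peak(alpha=50.0) < epidemic_peak(alpha=0.0))  # True
```

The three challenges quoted in the abstract map onto this sketch directly: choosing the feedback form, measuring behaviour well enough to set `alpha`, and attributing observed flattening to the feedback rather than to other causes.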
Incorporating preferential flow into a 3D model of a forested headwater catchment
NASA Astrophysics Data System (ADS)
Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Pfister, Laurent; Klaus, Julian
2016-04-01
Preferential flow plays an important role for water flow and solute transport. The inclusion of preferential flow, for example with dual porosity or dual permeability approaches, is a common feature in transport simulations at the plot scale. But at hillslope and catchment scales, incorporation of macropore and fracture flow into distributed hydrologic 3D models is rare, often due to limited data availability for model parameterisation. In this study, we incorporated preferential flow into an existing 3D integrated surface subsurface hydrologic model (HydroGeoSphere) of a headwater region (6 ha) of the forested Weierbach catchment in western Luxembourg. Our model philosophy was a strong link between measured data and the model setup. The model setup we used previously had been parameterised and validated based on various field data. But existing macropores and fractures had not been considered in this initial model setup. The multi-criteria validation revealed a good model performance but also suggested potential for further improvement by incorporating preferential flow as additional process. In order to pursue the data driven model philosophy for the implementation of preferential flow, we analysed the results of plot scale bromide sprinkling and infiltration experiments carried out in the vicinity of the Weierbach catchment. Three 1 sqm plots were sprinkled for one hour and excavated one day later for bromide depth profile sampling. We simulated these sprinkling experiments at the soil column scale, using the parameterisation of the base headwater model extended by a second permeability domain. Representing the bromide depth profiles was successful without changing this initial parameterisation. Moreover, to explain the variability between the three bromide depth profiles it was sufficient to adapt the dual permeability properties, indicating the spatial heterogeneity of preferential flow. Subsequently, we incorporated the dual permeability simulation in the
Effect of incorporation of uncertainty in PCB bioaccumulation factors on modeled receptor doses
Welsh, C.; Duncan, J.; Purucker, S.; Richardson, N.; Redfearn, A.
1995-12-31
Bioaccumulation factors (BAFs) are regularly employed in ecological risk assessments to model contaminant transfer through ecological food chains. The authors compiled data on bioaccumulation of PCBs in plants, invertebrates, birds, and mammals from published literature and used these data to develop regression equations relating soil or food concentrations to bioaccumulation. They then used Latin Hypercube simulation techniques and simple food chain models to incorporate uncertainty in the BAF regressions into the derivation of exposure dose estimates for selected wildlife receptors. The authors present their preliminary results in this paper. Dose estimates ranged over several orders of magnitude for herbivorous, insectivorous, and carnivorous receptors. These results suggest incorporating the uncertainty in BAF values into food chain exposure models could provide risk assessors and risk managers with information on the probability of a given outcome that can be used in interpreting the potential risks at hazardous waste sites.
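The Latin Hypercube technique the authors use can be sketched briefly: each uncertain input is split into equal-probability strata, one sample is drawn per stratum, and the strata are randomly paired across inputs before being pushed through the food-chain dose equation. The lognormal BAF, the uniform intake rate, and the dose equation below are all illustrative placeholders, not the study's fitted regressions.

```python
import math
import random
from statistics import NormalDist

def latin_hypercube(n, dims, rng):
    """n stratified samples in (0, 1)^dims: one point per equal-width
    stratum along each axis, with strata randomly paired across axes."""
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def dose_samples(soil_conc, n=1000, seed=1):
    """Toy soil -> food -> receptor dose chain with an uncertain
    (lognormal) bioaccumulation factor and a uniform intake rate.
    Distributions and the dose equation are illustrative only."""
    rng = random.Random(seed)
    normal = NormalDist()
    doses = []
    for u_baf, u_intake in latin_hypercube(n, 2, rng):
        baf = math.exp(normal.inv_cdf(u_baf))   # median 1, GSD = e
        intake = 0.05 + 0.10 * u_intake         # kg food / kg bw / day
        doses.append(soil_conc * baf * intake)
    return doses

doses = sorted(dose_samples(10.0))
print(round(doses[len(doses) // 2], 2))  # median dose
```

Because every stratum of each input is sampled exactly once, the resulting dose distribution covers the tails far more reliably than simple random sampling at the same sample size, which is what lets the authors report outcome probabilities spanning several orders of magnitude.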
NASA Astrophysics Data System (ADS)
Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; Hanks, Byron W.; Foulk, James W.; Battaile, Corbett C.
2016-05-01
The mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.
NASA Astrophysics Data System (ADS)
Cantrell, Donald R.; Inayat, Samsoon; Taflove, Allen; Ruoff, Rodney S.; Troy, John B.
2008-03-01
An accurate description of the electrode-electrolyte interfacial impedance is critical to the development of computational models of neural recording and stimulation that aim to improve understanding of neuro-electric interfaces and to expedite electrode design. This work examines the effect that the electrode-electrolyte interfacial impedance has upon the solutions generated from time-harmonic finite-element models of cone- and disk-shaped platinum microelectrodes submerged in physiological saline. A thin-layer approximation is utilized to incorporate a platinum-saline interfacial impedance into the finite-element models. This approximation is easy to implement and is not computationally costly. Using an iterative nonlinear solver, solutions were obtained for systems in which the electrode was driven at ac potentials with amplitudes from 10 mV to 500 mV and frequencies from 100 Hz to 100 kHz. The results of these simulations indicate that, under certain conditions, incorporation of the interface may strongly affect the solutions obtained. This effect, however, is dependent upon the amplitude of the driving potential and, to a lesser extent, its frequency. The solutions are most strongly affected at low amplitudes where the impedance of the interface is large. Here, the current density distribution that is calculated from models incorporating the interface is much more uniform than the current density distribution generated by models that neglect the interface. At higher potential amplitudes, however, the impedance of the interface decreases, and its effect on the solutions obtained is attenuated.
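The frequency dependence the abstract reports can be reproduced with the simplest lumped description of such an interface: a double-layer capacitance in parallel with a charge-transfer resistance. The per-area values below are textbook orders of magnitude for platinum in saline, not the paper's fitted interface parameters, and the sketch is linear, ignoring the amplitude dependence the finite-element study captures.

```python
import math

def interface_impedance(freq_hz, area_cm2, c_dl=20e-6, r_ct=1e4):
    """Lumped electrode-electrolyte interface: double-layer capacitance
    c_dl (F/cm^2) in parallel with charge-transfer resistance r_ct
    (ohm*cm^2). Values are illustrative orders of magnitude."""
    omega = 2.0 * math.pi * freq_hz
    z_cap = 1.0 / (1j * omega * c_dl * area_cm2)  # capacitive branch
    z_far = r_ct / area_cm2                       # faradaic branch
    return (z_cap * z_far) / (z_cap + z_far)

# The interface impedance is largest at low frequency, where the
# abstract reports the strongest effect on the computed currents:
z_lo = abs(interface_impedance(100.0, 1e-4))
z_hi = abs(interface_impedance(1.0e5, 1e-4))
print(z_lo > z_hi)  # True
```

A thin-layer approximation of the kind the authors use effectively inserts this per-area impedance as a boundary condition between the metal and the electrolyte mesh, avoiding the need to resolve the nanometre-scale double layer explicitly.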
Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; Hanks, Byron W.; Foulk, James W.; Battaile, Corbett C.
2016-04-25
The mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.
Adeniran, Ismail; Hancox, Jules C.; Zhang, Henggui
2013-01-01
Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarization, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favorable to re-entry. Potential electromechanical consequences of the SQTS are less well-understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and Results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on [Ca2+]i transients, sarcomere length shortening and contractile force at the single cell level were evaluated with and without the consideration of stretch-activated channel current (Isac). Without Isac, at a stimulation frequency of 1 Hz, the SQTS mutations produced dramatic reductions in the amplitude of [Ca2+]i transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was a considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modeling functional effects of SQT 1 and 3 mutations on cardiac electro-mechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole. PMID
Sun, Jiping; Deng, Li
2002-02-01
Modeling phonological units of speech is a critical issue in speech recognition. In this paper, our recent development of an overlapping-feature-based phonological model that represents long-span contextual dependency in speech acoustics is reported. In this model, high-level linguistic constraints are incorporated in automatic construction of the patterns of feature-overlapping and of the hidden Markov model (HMM) states induced by such patterns. The main linguistic information explored includes word and phrase boundaries, morpheme, syllable, syllable constituent categories, and word stress. A consistent computational framework developed for the construction of the feature-based model and the major components of the model are described. Experimental results on the use of the overlapping-feature model in an HMM-based system for speech recognition show improvements over the conventional triphone-based phonological model. PMID:11863165
NASA Astrophysics Data System (ADS)
Nam, Jong-Hoon; Fettiplace, Robert
2011-11-01
The organ of Corti (OC) is believed to optimize the force transmission from the outer hair cell (OHC) to the basilar membrane and inner hair cell. Recent studies showed that the OC has complex modes of deformation. In an effort to understand the consequence of the OC deformation, we developed a fully deformable 3D finite element model of the OC. It incorporates hair bundle's mechano-transduction and the OHC electrical circuit. Geometric information was taken from the gerbil cochlea at locations with 18 and 0.7 kHz characteristic frequencies. Cochlear partitions of several hundred micrometers long were simulated. The model describes the signature 3D structural arrangement in the OC, especially the tilt of OHC and Deiters cell process. Transduction channel kinetics contributed to the system's mechanics through the hair bundle. The OHC electrical model incorporated the transduction channel conductance, nonlinear capacitance and piezoelectric properties. It also incorporated recent data on the voltage-dependent potassium conductance and membrane time constant. With the model we simulated (1) the limiting frequencies of mechano-transduction and OHC somatic motility and (2) OC transient response to impulse stimuli.
Dynamic modelling of a double-pendulum gantry crane system incorporating payload
Ismail, R. M. T. Raja; Ahmad, M. A.; Ramli, M. S.; Ishak, R.; Zawawi, M. A.
2011-06-20
The natural sway of crane payloads is detrimental to safe and efficient operation. Under certain conditions, the problem is complicated when the payloads create a double pendulum effect. This paper presents dynamic modelling of a double-pendulum gantry crane system based on closed-form equations of motion. The Lagrangian method is used to derive the dynamic model of the system. A dynamic model of the system incorporating payload is developed and the effects of payload on the response of the system are discussed. Extensive results that validate the theoretical derivation are presented in the time and frequency domains.
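The flavor of the Lagrangian model above can be illustrated with a hedged sketch: a small-angle (linearized) double-pendulum payload with the trolley held fixed, integrated by classical RK4. The masses and lengths below are illustrative placeholders, not the paper's parameters, and the full model in the paper also includes trolley dynamics.

```python
# Hedged sketch: linearized double-pendulum sway (hook mass m1, payload mass m2)
# with a fixed suspension point. Parameters are hypothetical.
m1, m2 = 2.0, 10.0      # hook and payload mass (kg)
L1, L2 = 2.0, 1.0       # cable and rigging length (m)
g = 9.81

def accel(th1, th2):
    """Solve the 2x2 linearized system for angular accelerations:
    (m1+m2)*L1*th1'' + m2*L2*th2'' = -(m1+m2)*g*th1
           L1*th1'' +     L2*th2'' = -g*th2
    """
    a11, a12, b1 = (m1 + m2) * L1, m2 * L2, -(m1 + m2) * g * th1
    a21, a22, b2 = L1, L2, -g * th2
    det = a11 * a22 - a12 * a21   # = m1*L1*L2 > 0, always solvable
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def step_rk4(state, dt):
    """One RK4 step on the state (th1, w1, th2, w2)."""
    def f(s):
        th1, w1, th2, w2 = s
        a1, a2 = accel(th1, th2)
        return (w1, a1, w2, a2)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Release the hook at 0.1 rad and record the hook sway angle for 20 s.
state, dt = (0.1, 0.0, 0.0, 0.0), 0.001
history = [state[0]]
for _ in range(20000):
    state = step_rk4(state, dt)
    history.append(state[0])
```

Because the linearized system is conservative, the simulated sway stays bounded and oscillates about zero, exhibiting the two-frequency beating characteristic of the double-pendulum effect.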
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Halford, Gary R.
1993-01-01
The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.
Incorporating Mobility in Growth Modeling for Multilevel and Longitudinal Item Response Data.
Choi, In-Hee; Wilson, Mark
2016-01-01
Multilevel data often cannot be represented by the strict form of hierarchy typically assumed in multilevel modeling. A common example is the case in which subjects change their group membership in longitudinal studies (e.g., students transfer schools; employees transition between different departments). In this study, cross-classified and multiple membership models for multilevel and longitudinal item response data (CCMM-MLIRD) are developed to incorporate such mobility, focusing on students' school change in large-scale longitudinal studies. Furthermore, we investigate the effect of incorrectly modeling school membership in the analysis of multilevel and longitudinal item response data. Two types of school mobility are described, and corresponding models are specified. Results of the simulation studies suggested that appropriate modeling of the two types of school mobility using the CCMM-MLIRD yielded good recovery of the parameters and improvement over models that did not incorporate mobility properly. In addition, the consequences of incorrectly modeling the school effects on the variance estimates of the random effects and the standard errors of the fixed effects depended upon mobility patterns and model specifications. Two sets of large-scale longitudinal data are analyzed to illustrate applications of the CCMM-MLIRD for each type of school mobility. PMID:26881961
NASA Astrophysics Data System (ADS)
Kock, B. E.
2008-12-01
The increased availability and understanding of agent-based modeling technology and techniques provides a unique opportunity for water resources modelers, allowing them to go beyond traditional behavioral approaches from neoclassical economics, and add rich cognition to social-hydrological models. Agent-based models provide for an individual focus, and the easier and more realistic incorporation of learning, memory and other mechanisms for increased cognitive sophistication. We are in an age of global change impacting complex water resources systems, and social responses are increasingly recognized as fundamentally adaptive and emergent. In consideration of this, water resources models and modelers need to better address social dynamics in a manner beyond the capabilities of neoclassical economics theory and practice. However, going beyond the unitary curve requires unique levels of engagement with stakeholders, both to elicit the richer knowledge necessary for structuring and parameterizing agent-based models and to ensure that such models are appropriately used. With the aim of encouraging epistemological and methodological convergence in the agent-based modeling of water resources, we have developed a water resources-specific cognitive model and an associated collaborative modeling process. Our cognitive model emphasizes efficiency in architecture and operation, and capacity to adapt to different application contexts. We describe a current application of this cognitive model and modeling process in the Arkansas Basin of Colorado. In particular, we highlight the potential benefits of, and challenges to, using more sophisticated cognitive models in agent-based water resources models.
Sunil Sekhar, Anandakumari Chandrasekharan; Vinod, Chathakudath Prabhakaran
2016-01-01
Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin coating technique is applied here to make homogeneous thin films. The surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts as well as a robust SERS-active substrate for model catalysis studies is tested. Compared to the bare silica film, our gold-incorporated silica, GSM-23F, gave an enhancement factor of 10³ for RhB with a 633 nm laser source. The reduction reaction of p-nitrophenol with sodium borohydride on our thin films shows a decrease in the peak intensity corresponding to the -NO₂ group as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies. PMID:27213321
A new technique for the incorporation of seafloor topography in electromagnetic modelling
NASA Astrophysics Data System (ADS)
Baba, Kiyoshi; Seama, Nobukazu
2002-08-01
We describe a new technique for incorporating seafloor topography in electromagnetic modelling. It is based on a transformation of the topographic relief into a change in electrical conductivity and magnetic permeability within a flat seafloor. Since the technique allows us to model arbitrary topographic changes without extra grid cells and any restriction by vertical discretization, we can model very precise topographic changes easily without an extra burden in terms of computer memory or calculation time. The reliability and stability of the technique are tested by comparing the magnetotelluric responses from two synthetic seafloor topography models with three different approaches; these incorporate the topographic change in terms of (1) the change in conductance, using the thin-sheet approximation; (2) a series of rectangular block-like steps; and (3) triangular finite elements. The technique is easily applied to any modelling method including 3D modelling, allowing us to model complex structure in the Earth while taking full account of the 3D seafloor topography.
Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.
2016-01-01
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no additional free parameters.
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Pei, Daniel; Cohn, Timothy A.
1985-05-01
A condensed version of the Valencia-Schaake disaggregation model is developed which describes the distribution of monthly streamflow sequences using a set of coupled univariate regression models rather than a multivariate time series formulation. The condensed model has fewer parameters and is convenient for generating flow sequences which incorporate the intrinsic variability of streamflows and the uncertainty in the parameters of the annual and monthly streamflow models. The impact of parameter uncertainty on derived relationships between reservoir capacity and reservoir performance statistics is illustrated using required reservoir capacity (calculated with the sequent peak algorithm), system reliability, and the average total shortfall. Modeled sequences describe flows in the Rappahannock River in Virginia and the Boise River in Idaho. For high-reliability systems the results show that streamflow generation procedures which ignore model parameter uncertainty can grossly underestimate reservoir system failure rates and the severity of likely shortages, even if based on a 50-year record.
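The sequent peak algorithm mentioned above computes required reservoir capacity as the running maximum of accumulated deficits, K[t+1] = max(0, K[t] + demand − inflow[t]). A minimal sketch with hypothetical flow volumes (not the Rappahannock or Boise data):

```python
def sequent_peak(inflows, demand):
    """Required reservoir capacity via the sequent peak algorithm:
    K[t+1] = max(0, K[t] + demand - inflow[t]); capacity = max K.
    The record is cycled twice so a critical (drawdown) period that
    wraps around the end of the record is still captured. Assumes the
    mean inflow meets the demand; otherwise no finite capacity suffices."""
    k, k_max = 0.0, 0.0
    for q in list(inflows) * 2:
        k = max(0.0, k + demand - q)
        k_max = max(k_max, k)
    return k_max

# Hypothetical annual flow volumes against a firm yield of 3 per year.
capacity = sequent_peak([5, 1, 1, 5, 5], demand=3)  # capacity == 4.0
```

The two low-flow years back to back accumulate a deficit of 4 volume units, which is exactly the storage needed to sustain the yield through that critical period.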
Dose convolution filter: Incorporating spatial dose information into tissue response modeling
Huang Yimei; Joiner, Michael; Zhao Bo; Liao Yixiang; Burmeister, Jay
2010-03-15
Purpose: A model is introduced to integrate biological factors such as cell migration and bystander effects into physical dose distributions, and to incorporate spatial dose information in plan analysis and optimization. Methods: The model consists of a dose convolution filter (DCF) with single parameter σ. Tissue response is calculated by an existing NTCP model with DCF-applied dose distribution as input. The authors determined σ of rat spinal cord from published data. The authors also simulated the GRID technique, in which an open field is collimated into many pencil beams. Results: After applying the DCF, the NTCP model successfully fits the rat spinal cord data with a predicted value of σ = 2.6 ± 0.5 mm, consistent with 2 mm migration distances of remyelinating cells. Moreover, it enables the appropriate prediction of a high relative seriality for spinal cord. The model also predicts the sparing of normal tissues by the GRID technique when the size of each pencil beam becomes comparable to σ. Conclusions: The DCF model incorporates spatial dose information and offers an improved way to estimate tissue response from complex radiotherapy dose distributions. It does not alter the prediction of tissue response in large homogenous fields, but successfully predicts increased tissue tolerance in small or highly nonuniform fields.
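The DCF idea can be sketched in one dimension: convolve the physical dose profile with a normalized Gaussian of width σ before feeding it to a response model. The abstract does not specify the kernel discretization or edge handling; the voxel size, GRID geometry, and edge clamping below are assumptions for illustration, using the paper's fitted σ = 2.6 mm.

```python
import math

def dose_convolution_filter(dose, sigma, dx, halfwidth=4.0):
    """Convolve a 1D dose profile (one value per voxel of size dx, in mm)
    with a normalized Gaussian of width sigma (mm). Edge voxels are
    clamped so a uniform field passes through unchanged, mirroring the
    DCF property that large homogeneous fields are unaltered."""
    n = int(round(halfwidth * sigma / dx))
    kernel = [math.exp(-0.5 * (i * dx / sigma) ** 2) for i in range(-n, n + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    out, last = [], len(dose) - 1
    for i in range(len(dose)):
        acc = 0.0
        for j, w in enumerate(kernel):
            acc += w * dose[min(max(i + j - n, 0), last)]  # clamp at edges
        out.append(acc)
    return out

# GRID-like comb: 5 mm beams separated by 5 mm gaps, 1 mm voxels.
grid_dose = [1.0 if (i // 5) % 2 == 0 else 0.0 for i in range(60)]
filtered = dose_convolution_filter(grid_dose, sigma=2.6, dx=1.0)
```

After filtering, the peak "effective dose" in each pencil beam drops below the physical dose while the valleys rise above zero, which is how the model predicts normal-tissue sparing when the beam size is comparable to σ.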
A new Bernoulli-Euler beam model incorporating microstructure and surface energy effects
NASA Astrophysics Data System (ADS)
Gao, X.-L.; Mahmoud, F. F.
2014-04-01
A new Bernoulli-Euler beam model is developed using a modified couple stress theory and a surface elasticity theory. A variational formulation based on the principle of minimum total potential energy is employed, which leads to the simultaneous determination of the equilibrium equation and complete boundary conditions for a Bernoulli-Euler beam. The new model contains a material length scale parameter accounting for the microstructure effect in the bulk of the beam and three surface elasticity constants describing the mechanical behavior of the beam surface layer. The inclusion of these additional material constants enables the new model to capture the microstructure- and surface energy-dependent size effect. In addition, Poisson's effect is incorporated in the current model, unlike existing beam models. The new beam model includes the models considering only the microstructure dependence or the surface energy effect as special cases. The current model reduces to the classical Bernoulli-Euler beam model when the microstructure dependence, surface energy, and Poisson's effect are all suppressed. To demonstrate the new model, a cantilever beam problem is solved by directly applying the general formulas derived. Numerical results reveal that the beam deflection predicted by the new model is smaller than that by the classical beam model. Also, it is found that the difference between the deflections predicted by the two models is very significant when the beam thickness is small but diminishes as the beam thickness increases.
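The thickness-dependent stiffening can be illustrated with the couple-stress part of such models alone, where the bending rigidity EI is commonly stiffened to EI + GAl² (l the material length scale). This sketch omits the paper's surface elasticity and Poisson terms, and the material values are illustrative, not taken from the paper.

```python
def tip_deflection(F, L, E, nu, b, h, ell):
    """Cantilever tip deflection under an end load F.
    The classical bending rigidity E*I is stiffened to E*I + G*A*ell**2
    by the couple-stress term (ell = material length scale parameter);
    ell = 0 recovers the classical Bernoulli-Euler result. Surface
    energy terms are omitted in this sketch."""
    I = b * h ** 3 / 12.0          # second moment of area
    A = b * h                      # cross-sectional area
    G = E / (2.0 * (1.0 + nu))     # shear modulus
    return F * L ** 3 / (3.0 * (E * I + G * A * ell ** 2))

# Illustrative epoxy-like properties and a hypothetical load.
E, nu, ell, F = 1.44e9, 0.3, 17.6e-6, 1e-7
h_thin, h_thick = 2.0 * ell, 20.0 * ell

# Size effect: ratio of couple-stress to classical deflection for each thickness.
ratio_thin = (tip_deflection(F, 20 * h_thin, E, nu, 2 * h_thin, h_thin, ell)
              / tip_deflection(F, 20 * h_thin, E, nu, 2 * h_thin, h_thin, 0.0))
ratio_thick = (tip_deflection(F, 20 * h_thick, E, nu, 2 * h_thick, h_thick, ell)
               / tip_deflection(F, 20 * h_thick, E, nu, 2 * h_thick, h_thick, 0.0))
```

The ratio reduces analytically to 1 / (1 + 6l²/((1+ν)h²)): roughly half the classical deflection when h = 2l, but within about 1% of classical when h = 20l, matching the abstract's statement that the size effect fades with increasing thickness.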
Stephenson, William J.
2007-01-01
INTRODUCTION In support of earthquake hazards and ground motion studies in the Pacific Northwest, three-dimensional P- and S-wave velocity (3D Vp and Vs) and density (3D rho) models incorporating the Cascadia subduction zone have been developed for the region encompassed from about 40.2°N to 50°N latitude, and from about -122°W to -129°W longitude. The model volume includes elevations from 0 km to 60 km (elevation is opposite of depth in model coordinates). Stephenson and Frankel (2003) presented preliminary ground motion simulations valid up to 0.1 Hz using an earlier version of these models. The version of the model volume described here includes more structural and geophysical detail, particularly in the Puget Lowland as required for scenario earthquake simulations in the development of the Seattle Urban Hazards Maps (Frankel and others, 2007). Olsen and others (in press) used the model volume discussed here to perform a Cascadia simulation up to 0.5 Hz using a Sumatra-Andaman Islands rupture history. As research from the EarthScope Program (http://www.earthscope.org) is published, a wealth of important detail can be added to these model volumes, particularly to depths of the upper mantle. However, at the time of development for this model version, no EarthScope-specific results were incorporated. This report is intended to be a reference for colleagues and associates who have used or are planning to use this preliminary model in their research. To this end, it is intended that these models will be considered a beginning template for a community velocity model of the Cascadia region as more data and results become available.
Transient Thermohydraulic Heat Pipe Modeling: Incorporating THROHPUT into the CAESAR Environment
NASA Astrophysics Data System (ADS)
Hall, Michael L.
2003-01-01
The THROHPUT code, which models transient thermohydraulic heat pipe behavior, is being incorporated into the CAESAR computational physics development environment. The CAESAR environment provides many beneficial features for enhanced model development, including levelized design, unit testing, Design by Contract™ (Meyer, 1997), and literate programming (Knuth, 1992), in a parallel, object-based manner. The original THROHPUT code was developed as a doctoral thesis research code; the current emphasis is on making a robust, verifiable, documented, component-based production package. Results from the original code are included.
Incorporating many-body effects into modeling of semiconductor lasers and amplifiers
Ning, C.Z.; Moloney, J.V.; Indik, R.A.
1997-06-01
Major many-body effects that are important for semiconductor laser modeling are summarized. The authors adopt a bottom-up approach to incorporate these many-body effects into a model for semiconductor lasers and amplifiers. The optical susceptibility function (χ) computed from the semiconductor Bloch equations (SBEs) is approximated by a single Lorentzian, or a superposition of a few Lorentzians in the frequency domain. Their approach leads to a set of effective Bloch equations (EBEs). The authors compare this approach with the full microscopic SBEs for the case of pulse propagation. Good agreement between the two is obtained for pulse widths longer than tens of picoseconds.
Incorporating Micro-Mechanics Based Damage Models into Earthquake Rupture Simulations
NASA Astrophysics Data System (ADS)
Bhat, H.; Rosakis, A.; Sammis, C. G.
2012-12-01
The micromechanical damage mechanics formulated by Ashby and Sammis, 1990 and generalized by Deshpande and Evans 2008 has been extended to allow for a more generalized stress state and to incorporate an experimentally motivated new crack growth (damage evolution) law that is valid over a wide range of loading rates. This law is sensitive to both the crack tip stress field and its time derivative. Incorporating this feature produces additional strain-rate sensitivity in the constitutive response. The model is also experimentally verified by predicting the failure strength of Dionysus-Pentelicon marble over a wide range of strain rates. Model parameters determined from quasi-static experiments were used to predict the failure strength at higher loading rates. Agreement with experimental results was excellent. After this verification step the constitutive law was incorporated into a Finite Element Code focused on simulating dynamic earthquake ruptures with specific focus on the ends of the fault (fault tip process zone) and the resulting strong ground motion radiation was studied.
Marcus, Michael W; Raji, Olaide Y; Duffy, Stephen W; Young, Robert P; Hopkins, Raewyn J; Field, John K
2016-07-01
Incorporation of genetic variants such as single nucleotide polymorphisms (SNPs) into risk prediction models may account for a substantial fraction of attributable disease risk. Genetic data, from 2385 subjects recruited into the Liverpool Lung Project (LLP) between 2000 and 2008, consisting of 20 SNPs independently validated in a candidate-gene discovery study was used. Multifactor dimensionality reduction (MDR) and random forest (RF) were used to explore evidence of epistasis among 20 replicated SNPs. Multivariable logistic regression was used to identify similar risk predictors for lung cancer in the LLP risk model for the epidemiological model and extended model with SNPs. Both models were internally validated using the bootstrap method and model performance was assessed using area under the curve (AUC) and net reclassification improvement (NRI). Using MDR and RF, the overall best classifier of lung cancer status were SNPs rs1799732 (DRD2), rs5744256 (IL-18), rs2306022 (ITGA11) with training accuracy of 0.6592 and a testing accuracy of 0.6572 and a cross-validation consistency of 10/10 with permutation testing P<0.0001. The apparent AUC of the epidemiological model was 0.75 (95% CI 0.73-0.77). When epistatic data were incorporated in the extended model, the AUC increased to 0.81 (95% CI 0.79-0.83) which corresponds to 8% increase in AUC (DeLong's test P=2.2e-16); 17.5% by NRI. After correction for optimism, the AUC was 0.73 for the epidemiological model and 0.79 for the extended model. Our results showed modest improvement in lung cancer risk prediction when the SNP epistasis factor was added. PMID:27121382
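The AUC comparison reported above is, in rank terms, the probability that a randomly chosen case outscores a randomly chosen control (the Mann-Whitney estimator). A hedged sketch with hypothetical risk scores (the LLP data and SNP terms are not reproduced here):

```python
def auc(scores, labels):
    """Empirical AUC as the Mann-Whitney statistic scaled to [0, 1]:
    P(score_case > score_control), with ties counted as 1/2."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical risk scores: a baseline epidemiological model vs. an
# extended model whose added term re-ranks one previously missed case.
labels          = [1, 1, 1, 0, 0, 0]
base_scores     = [0.8, 0.4, 0.6, 0.5, 0.3, 0.2]
extended_scores = [0.8, 0.55, 0.6, 0.5, 0.3, 0.2]
```

Here the extended scores lift the one case that the baseline ranked below a control, raising the AUC from 8/9 to 1.0; the same mechanism, at scale, underlies the modest AUC gain the study reports when SNP epistasis is added.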
Simulations of chlorophyll fluorescence incorporated into the Community Land Model version 4.
Lee, Jung-Eun; Berry, Joseph A; van der Tol, Christiaan; Yang, Xi; Guanter, Luis; Damm, Alexander; Baker, Ian; Frankenberg, Christian
2015-09-01
Several studies have shown that satellite retrievals of solar-induced chlorophyll fluorescence (SIF) provide useful information on terrestrial photosynthesis or gross primary production (GPP). Here, we have incorporated equations coupling SIF to photosynthesis in a land surface model, the National Center for Atmospheric Research Community Land Model version 4 (NCAR CLM4), and have demonstrated its use as a diagnostic tool for evaluating the calculation of photosynthesis, a key process in a land surface model that strongly influences the carbon, water, and energy cycles. By comparing forward simulations of SIF, essentially as a byproduct of photosynthesis, in CLM4 with observations of actual SIF, it is possible to check whether the model is accurately representing photosynthesis and the processes coupled to it. We provide some background on how SIF is coupled to photosynthesis, describe how SIF was incorporated into CLM4, and demonstrate that our simulated relationship between SIF and GPP values are reasonable when compared with satellite (Greenhouse gases Observing SATellite; GOSAT) and in situ flux-tower measurements. CLM4 overestimates SIF in tropical forests, and we show that this error can be corrected by adjusting the maximum carboxylation rate (Vmax ) specified for tropical forests in CLM4. Our study confirms that SIF has the potential to improve photosynthesis simulation and thereby can play a critical role in improving land surface and carbon cycle models. PMID:25881891
NASA Astrophysics Data System (ADS)
Vigeant, Michelle C.
Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different
Incorporation of an evaporative cooling scheme into a dynamic model of orographic precipitation
NASA Technical Reports Server (NTRS)
Barros, Ana Paula; Lettenmaier, Dennis P.
1994-01-01
A simple evaporative cooling scheme was incorporated into a dynamic model to estimate orographic precipitation in mountainous regions. The orographic precipitation model is based on the transport of atmospheric moisture and the quantification of precipitable water across a 3D representation of the terrain from the surface up to 250 hPa. Advective wind fields are computed independently, and boundary conditions are extracted from radiosonde data. Precipitation rates are obtained through calibration of a spatially distributed precipitation efficiency parameter. The model was applied to the central Sierra Nevada. Results show a gain of the order of 20% in threat-score coefficients designed to measure the forecast ability of the model. Accuracy gains are largest at high elevations and during intense storms associated with warm air masses.
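The threat score (also called the critical success index) used to evaluate the model is a standard categorical forecast-skill measure computed from a hit/miss/false-alarm contingency table. A minimal sketch follows; the precipitation values and the 5 mm/day event threshold are illustrative, not taken from the study.

```python
# Threat score (critical success index, CSI) for precipitation forecasts.
# The ~20% gain reported above refers to scores of this kind.

def threat_score(forecast, observed, threshold):
    """CSI = hits / (hits + misses + false alarms) for events >= threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif o_event:
            misses += 1
        elif f_event:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

forecast = [12.0, 3.0, 25.0, 0.0, 18.0]   # mm/day, hypothetical
observed = [10.0, 8.0, 30.0, 1.0, 2.0]
print(threat_score(forecast, observed, threshold=5.0))  # 2 hits, 1 miss, 1 false alarm -> 0.5
```

Unlike simple accuracy, the CSI ignores correct negatives, so it is not inflated by the many dry grid cells typical of precipitation fields.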
Modelling long-term deformation of granular soils incorporating the concept of fractional calculus
NASA Astrophysics Data System (ADS)
Sun, Yifei; Xiao, Yang; Zheng, Changjie; Hanif, Khairul Fikry
2016-02-01
Many constitutive models exist to characterise the cyclic behaviour of granular soils, but most can simulate deformations for only a very limited number of cycles. Fractional derivatives have been regarded as one potential instrument for modelling memory-dependent phenomena. In this paper, the physical connection between the fractional derivative order and the fractal dimension of granular soils is investigated in detail. Then a modified elasto-plastic constitutive model is proposed for evaluating the long-term deformation of granular soils under cyclic loading by incorporating the concept of fractional calculus. To describe the flow direction of granular soils under cyclic loading, a cyclic flow potential considering particle breakage is used. Test results of several types of granular soils are used to validate the model performance.
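The "memory" property that makes fractional derivatives attractive for long-term cyclic loading can be seen in the Grünwald-Letnikov discretization, where every past sample of the history contributes to the derivative at the current time. The sketch below shows that discretization in its generic form; it is not the paper's specific constitutive equations.

```python
# Grünwald-Letnikov (GL) approximation of a fractional derivative of order
# alpha. The weights w_k = (-1)^k * C(alpha, k) follow the recurrence
# w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k, and decay slowly for
# non-integer alpha, so the whole loading history matters ("memory").

def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f_hist, alpha, dt):
    """Approximate the alpha-order derivative at the latest sample of f_hist."""
    n = len(f_hist)
    w = gl_weights(alpha, n)
    return sum(w[k] * f_hist[-1 - k] for k in range(n)) / dt**alpha

# Sanity check: alpha = 1 recovers a backward difference (weights 1, -1, 0, ...)
hist = [0.0, 1.0, 4.0, 9.0]   # f(t) = t^2 sampled at dt = 1
print(gl_derivative(hist, 1.0, 1.0))  # 9 - 4 = 5.0
```

For 0 < alpha < 1 the weights never vanish, which is exactly the accumulated-history effect a model of long-term cyclic deformation needs to capture.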
NASA Astrophysics Data System (ADS)
Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.
2013-04-01
Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulate the natural hydrological cycle and neglect the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia, where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (the Indus, Ganges, and Brahmaputra basins and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7), with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation arises because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Liu, M.; Tague, C.; Choate, J. S.; Evans, R. D.; Johnson, K. A.; Adam, J. C.
2013-12-01
Rangelands provide an opportunity to investigate the coupled feedbacks between human activities and natural ecosystems. These areas comprise at least one-third of the Earth's surface and provide ecological support for birds, insects, wildlife and agricultural animals including grazing lands for livestock. Capturing the interactions among water, carbon, and nitrogen cycles within the context of regional scale patterns of climate and management is important to understand interactions, responses, and feedbacks between rangeland systems and humans, as well as provide relevant information to stakeholders and policymakers. The overarching objective of this research is to understand the full consequences, intended and unintended, of human activities and climate over time in rangelands by incorporating dynamics related to rangeland management into an eco-hydrologic model that also incorporates biogeochemical and soil processes. Here we evaluate our model over ungrazed and grazed sites for different rangeland ecosystems. The Regional Hydro-ecologic Simulation System (RHESSys) is a process-based, watershed-scale model that couples water with carbon and nitrogen cycles. Climate, soil, vegetation, and management effects within the watershed are represented in a nested landscape hierarchy to account for heterogeneity and the lateral movement of water and nutrients. We incorporated a daily time-series of plant biomass loss from rangeland to represent grazing. The TRY Plant Trait Database was used to parameterize genera of shrubs and grasses in different rangeland types, such as tallgrass prairie, Intermountain West cold desert, and shortgrass steppe. In addition, other model parameters captured the reallocation of carbon and nutrients after grass defoliation. Initial simulations were conducted at the Curlew Valley site in northern Utah, a former International Geosphere-Biosphere Programme Desert Biome site. We found that grasses were most sensitive to model parameters affecting
NASA Astrophysics Data System (ADS)
Ferrier, K.; Mitrovica, J. X.
2015-12-01
In sedimentary deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. The deposition of sediment and incorporation of water into the sedimentary pore space reduces sea level by increasing the elevation of the seafloor, which reduces the thickness of sea water above the bed. In a similar manner, the compaction of sediment and purging of water out of the sedimentary pore space increases sea level by reducing the elevation of the seafloor, which increases the thickness of sea water above the bed. Here we show how one can incorporate the effects of sediment deposition and compaction into the global, gravitationally self-consistent sea-level model of Dalca et al. (2013). Incorporating sediment compaction requires accounting for only one additional quantity that had not been accounted for in Dalca et al. (2013): the mean porosity in the sediment column. We provide a general analytic framework for global sea-level changes including sediment deposition and compaction, and we demonstrate how sea level responds to deposition and compaction under one simple parameterization for compaction. The compaction of sediment generates changes in sea level only by changing the elevation of the seafloor. That is, sediment compaction does not affect the mass load on the crust, and therefore does not generate perturbations in crustal elevation or the gravity field that would further perturb sea level. These results have implications for understanding sedimentary effects on sea-level changes and thus for disentangling the various drivers of sea-level change. Reference: Dalca, A.V., Ferrier, K.L., Mitrovica, J.X., Perron, J.T., Milne, G.A., Creveling, J.R., 2013. On postglacial sea level - III. Incorporating sediment redistribution. Geophysical Journal International, doi: 10.1093/gji/ggt089.
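The porosity bookkeeping described above can be illustrated with a purely local, first-order sketch: fresh deposition thickens the column (grains plus pore water), while a drop in the column's mean porosity thins it. This toy calculation ignores the gravitational, rotational, and crustal-loading effects of the full Dalca et al. (2013) framework; the function name and numbers are illustrative.

```python
# Local seafloor-elevation bookkeeping under sediment deposition and
# compaction. Water-depth (local sea-level) change is the negative of the
# seafloor-elevation change when the sea surface is held fixed.

def seafloor_change(solid_dep, phi_new, d_phi_mean, column_height):
    """
    solid_dep     : thickness of newly deposited solid grains (m)
    phi_new       : porosity of the fresh deposit
    d_phi_mean    : change in mean porosity of the pre-existing column
                    (negative when compacting), first-order approximation
    column_height : thickness of the pre-existing sediment column (m)
    Returns the change in seafloor elevation (m).
    """
    deposit_thickness = solid_dep / (1.0 - phi_new)  # grains + pore water
    compaction = d_phi_mean * column_height          # < 0 when water is purged
    return deposit_thickness + compaction

dz = seafloor_change(solid_dep=0.8, phi_new=0.5, d_phi_mean=-0.05, column_height=10.0)
print(dz)   # 1.6 m of new deposit minus 0.5 m of compaction: seafloor rises 1.1 m
print(-dz)  # local water depth falls by the same amount
```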
NASA Astrophysics Data System (ADS)
Wotton, Mike; Gibos, Kelsy
2010-05-01
The Canadian Forest Fire Danger Rating System (CFFDRS) is used throughout Canada, and in a number of countries around the world, for estimating fire potential in wildland fuels. The standard fuel moisture models in the CFFDRS are representative of moisture in closed-canopy jack pine or lodgepole pine stands. Because these models assume full canopy closure, they do not account for the influence of solar radiation and thus cannot readily be adapted to more open environments. Recent research has seen the adaptation of the CFFDRS's hourly Fine Fuel Moisture Code (FFMC) model (which represents litter moisture) to open grasslands, through the incorporation of an explicit solar radiation term. The current study describes a more recent extension of this modelling effort to forested stand situations. The development and structure of this new model are described, and its outputs, along with outputs from the existing FFMC model, are compared with field observations. Results show that the new model tracks the diurnal variation in actual litter moisture content more accurately than the existing model for diurnal calculation of the FFMC in the CFFDRS. Practical examples of the application of this system for operational estimation of litter moisture are provided for stands of varying densities and types.
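Fuel-moisture models of this family relax the litter moisture exponentially toward an equilibrium value with a characteristic time lag; an explicit solar radiation term, attenuated by canopy closure, shifts that equilibrium downward as the fuel warms. The sketch below shows only this generic structure. It is NOT the actual FFMC formulation: the equilibrium function, the `k_solar` coefficient, and all parameter values are invented for illustration.

```python
import math

def step_moisture(m, m_eq, tau_hours, dt_hours):
    """One time step of the time-lag drying law dm/dt = -(m - m_eq) / tau."""
    return m_eq + (m - m_eq) * math.exp(-dt_hours / tau_hours)

def equilibrium(rh_pct, solar_wm2, canopy_closure, k_solar=0.01):
    """Illustrative equilibrium moisture (% dry weight): rises with relative
    humidity, falls with the radiation actually reaching the litter layer."""
    absorbed = solar_wm2 * (1.0 - canopy_closure)   # canopy blocks a fraction
    return max(0.0, 0.25 * rh_pct - k_solar * absorbed)

m = 15.0  # % moisture content at dawn
for hour in range(6):
    m_eq = equilibrium(rh_pct=40.0, solar_wm2=600.0, canopy_closure=0.3)
    m = step_moisture(m, m_eq, tau_hours=4.0, dt_hours=1.0)
print(round(m, 2))  # litter has dried most of the way toward equilibrium
```

The key design point is that a denser canopy (larger `canopy_closure`) absorbs more radiation before it reaches the litter, so the same model structure can span open grassland through closed stands by varying one parameter.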
NASA Astrophysics Data System (ADS)
Liang, Binbin; Zhang, Long; Wang, Binglei; Zhou, Shenjie
2015-07-01
A size-dependent model for electrostatically actuated Nano-Electro-Mechanical Systems (NEMS) incorporating nonlinearities and the Casimir force is presented using a variational method. The governing equation and boundary conditions are derived with the help of strain gradient elasticity theory and the Hamilton principle. The generalized differential quadrature (GDQ) method is employed to solve the problem numerically. The pull-in instability with the Casimir force included is then studied. The results reveal that the Casimir force, which is a spontaneous force between the two electrodes, can reduce the external applied voltage needed for pull-in. With the Casimir force incorporated, the pull-in instability can occur without any applied voltage when the beam size is at the nanoscale. The minimum gap and detachment length can be calculated from the present model for different beam sizes, which is important for NEMS design. Finally, discussion of the size effect induced by the strain gradient terms reveals that the present model is more accurate, since the size effect plays an important role when the beam is at the nanoscale.
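The effect of the Casimir force on pull-in can be illustrated with a lumped single-degree-of-freedom parallel-plate model: at equilibrium the spring force balances the electrostatic plus Casimir attractions, and the pull-in voltage is the largest voltage for which an equilibrium deflection still exists. This is a far simpler model than the strain gradient beam of the paper, and the spring constant, gap, and area below are hypothetical.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
HBAR = 1.0546e-34     # reduced Planck constant, J s
C = 2.998e8           # speed of light, m/s

def pull_in_voltage(k, gap, area, casimir=True):
    """At equilibrium  k*x = eps0*A*V^2 / (2*(g-x)^2) + F_casimir(g-x).
    Solving for V(x) and maximizing over the deflection x gives the pull-in
    voltage; above it no equilibrium exists and the beam collapses."""
    best = 0.0
    n = 2000
    for i in range(1, n):
        x = gap * i / n
        d = gap - x
        f_cas = math.pi**2 * HBAR * C * area / (240.0 * d**4) if casimir else 0.0
        rhs = k * x - f_cas
        if rhs <= 0.0:
            continue  # Casimir attraction alone already exceeds the spring force
        best = max(best, math.sqrt(2.0 * d**2 * rhs / (EPS0 * area)))
    return best

k, gap, area = 0.5, 500e-9, (10e-6) ** 2   # N/m, m, m^2 (hypothetical device)
v_with = pull_in_voltage(k, gap, area, casimir=True)
v_without = pull_in_voltage(k, gap, area, casimir=False)
print(v_with < v_without)  # True: the Casimir force lowers the pull-in voltage
```

Shrinking the gap further makes `rhs` negative at every deflection, reproducing the zero-voltage collapse the abstract describes for nanoscale beams.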
Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms
NASA Astrophysics Data System (ADS)
Tat Nguyen, Thang; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E.; Lee, Choonsik; Chung, Beom Sun
2015-11-01
The dose coefficients for the eye lens reported in ICRP Publication 116 (2010) were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with the ICRP-116 data. The results showed generally good agreement between the newly calculated lens dose coefficients and the values of ICRP Publication 116. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.
NASA Astrophysics Data System (ADS)
Papageorgiou, L.; Metaxas, A. C.; Georghiou, G. E.
2011-02-01
A three-dimensional (3D) numerical model for the characterization of gas discharges in air at atmospheric pressure incorporating photoionization through the solution of the Helmholtz equation is presented. Initially, comparisons with a two-dimensional (2D) axi-symmetric model are performed in order to assess the validity of the model. Subsequently several discharge instabilities (plasma spots and low pressure inhomogeneities) are considered in order to study their effect on streamer branching and off-axis propagation. Depending on the magnitude and position of the plasma spot, deformations and off-axis propagation of the main discharge channel were obtained. No tendency for branching in small (of the order of 0.1 cm) overvolted discharge gaps was observed.
NASA Astrophysics Data System (ADS)
Tkacenko, A.
2013-05-01
In this article, we present a complex baseband model for a wideband power amplifier that incorporates carrier frequency dependent amplitude modulation (AM) and phase modulation (PM) (i.e., AM/AM and AM/PM) characteristics in the design process. The structure used to implement the amplifier model is a Wiener system which accounts for memory effects caused by the frequency selective nature of the amplifier, in addition to the nonlinearities caused by gain compression and saturation. By utilizing piecewise polynomial nonlinearities in the structure, it is shown how to construct the Wiener model to exactly accommodate all given AM/AM and AM/PM measurement constraints. Simulation results using data from a 50 W 32-way Ka-band solid-state power amplifier (SSPA) are provided, highlighting the differences in degradation incurred for a wideband input signal as compared with a narrowband input.
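A Wiener system is a linear dynamic block (here a complex FIR filter, supplying memory and frequency selectivity) followed by a static nonlinearity that applies the AM/AM gain compression and AM/PM phase shift to the filtered envelope. A minimal complex-baseband sketch is below; the filter taps and the saturating AM/AM and quadratic AM/PM curves are illustrative placeholders, not the piecewise-polynomial fits to the Ka-band SSPA data described above.

```python
import cmath

def fir(x, taps):
    """Complex FIR filter with zero initial state (the 'memory' block)."""
    y = []
    for n in range(len(x)):
        acc = 0j
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

def am_am(r):                      # saturating gain compression (illustrative)
    return r / (1.0 + 0.5 * r * r) ** 0.5

def am_pm(r):                      # phase shift grows with drive level (rad)
    return 0.2 * r * r

def wiener_pa(x, taps):
    """Wiener model: FIR filter, then the static AM/AM + AM/PM nonlinearity."""
    out = []
    for s in fir(x, taps):
        r, phi = abs(s), cmath.phase(s)
        out.append(cmath.rect(am_am(r), phi + am_pm(r)))
    return out

x = [cmath.rect(1.0, 0.3 * n) for n in range(8)]     # constant-envelope input
y = wiener_pa(x, taps=[0.9, 0.15 - 0.05j])           # mild memory effect
print([round(abs(v), 3) for v in y])
```

Because the nonlinearity acts after the filter, a wideband signal whose spectrum is shaped by the taps sees a different effective compression than a narrowband one, which is the degradation difference the simulations above highlight.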
Singh, Anima; Nadkarni, Girish; Gottesman, Omri; Ellis, Stephen B; Bottinger, Erwin P; Guttag, John V
2015-02-01
Predictive models built using temporal data in electronic health records (EHRs) can potentially play a major role in improving management of chronic diseases. However, these data present a multitude of technical challenges, including irregular sampling of data and varying length of available patient history. In this paper, we describe and evaluate three different approaches that use machine learning to build predictive models using temporal EHR data of a patient. The first approach is a commonly used non-temporal approach that aggregates values of the predictors in the patient's medical history. The other two approaches exploit the temporal dynamics of the data. The two temporal approaches vary in how they model temporal information and handle missing data. Using data from the EHR of Mount Sinai Medical Center, we learned and evaluated the models in the context of predicting loss of estimated glomerular filtration rate (eGFR), the most common assessment of kidney function. Our results show that incorporating temporal information in a patient's medical history can lead to better prediction of loss of kidney function. They also demonstrate that exactly how this information is incorporated is important. In particular, our results demonstrate that the relative importance of different predictors varies over time, and that using multi-task learning to account for this is an appropriate way to robustly capture the temporal dynamics in EHR data. Using a case study, we also demonstrate how the multi-task learning based model can yield predictive models with better performance for identifying patients at high risk of short-term loss of kidney function. PMID:25460205
A Direct Method for Incorporating Experimental Data into Multiscale Coarse-Grained Models.
Dannenhoffer-Lafage, Thomas; White, Andrew D; Voth, Gregory A
2016-05-10
To extract meaningful data from molecular simulations, it is necessary to incorporate new experimental observations as they become available. Recently, a new method was developed for incorporating experimental observations into molecular simulations, called experiment directed simulation (EDS), which utilizes a maximum entropy argument to bias an existing model to agree with experimental observations while changing the original model by a minimal amount. However, there is no discussion in the literature of whether or not the minimal bias systematically and generally improves the model by creating agreement with the experiment. In this work, we show that the relative entropy of the biased system with respect to an ideal target is always reduced by the application of a minimal bias, such as the one utilized by EDS. Using all-atom simulations that have been biased with EDS, one can then easily and rapidly improve a bottom-up multiscale coarse-grained (MS-CG) model without the need for a time-consuming reparametrization of the underlying atomistic force field. Furthermore, the improvement given by the many-body interactions introduced by the EDS bias can be maintained after being projected down to effective two-body MS-CG interactions. The result of this analysis is a new paradigm in coarse-grained modeling and simulation in which the "bottom-up" and "top-down" approaches are combined within a single, rigorous formalism based on statistical mechanics. The utility of building the resulting EDS-MS-CG models is demonstrated on two molecular systems: liquid methanol and ethylene carbonate. PMID:27045328
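The maximum-entropy argument behind EDS yields a minimal *linear* bias on the targeted observable. The post-hoc analogue, sketched below, reweights existing samples by exp(-lambda*f(x)) and tunes lambda until the reweighted average of f matches the experimental target; EDS itself applies the equivalent bias on the fly during simulation. The Gaussian toy observable and target value here are invented for illustration.

```python
import math, random

def reweight(samples, target, lam_lo=-50.0, lam_hi=50.0, iters=80):
    """Bisect on lam so that sum(w*f)/sum(w) = target with w = exp(-lam*f).
    The biased mean is monotonically decreasing in lam, so bisection works."""
    def biased_mean(lam):
        w = [math.exp(-lam * f) for f in samples]
        return sum(wi * fi for wi, fi in zip(w, samples)) / sum(w)
    for _ in range(iters):
        mid = 0.5 * (lam_lo + lam_hi)
        if biased_mean(mid) > target:
            lam_lo = mid        # need a stronger bias
        else:
            lam_hi = mid
    return 0.5 * (lam_lo + lam_hi)

random.seed(0)
f = [random.gauss(1.0, 0.3) for _ in range(5000)]   # model predicts <f> ~ 1.0
lam = reweight(f, target=0.9)                        # experiment says 0.9
w = [math.exp(-lam * fi) for fi in f]
mean = sum(wi * fi for wi, fi in zip(w, f)) / sum(w)
print(round(mean, 3))  # 0.9: the observable now matches the experimental value
```

Because the bias is linear in f, it is the minimal perturbation in the relative-entropy sense, which is the property the paper shows guarantees systematic improvement toward the target ensemble.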
Seifzadeh, A; Wang, J; Oguamanam, D C D; Papini, M
2011-08-01
A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%. PMID:21950897
Dynamic modeling of the outlet of a pulsatile pump incorporating a flow-dependent resistance.
Huang, Huan; Yang, Ming; Wu, Shunjie; Liao, Huogen
2013-08-01
Outlet tube models incorporating a linearly flow-dependent resistance are widely used in pulsatile and rotary pump studies. The resistance is made up of a flow-proportional term and a constant term. Previous studies often focused on the steady state properties of the model. In this paper, a dynamic modeling procedure was presented. Model parameters were estimated by an unscented Kalman filter (UKF). The subspace model identification (SMI) algorithm was proposed to initialize the UKF. Model order and structure were also validated by SMI. A mock circulatory loop driven by a pneumatic pulsatile pump was developed to produce pulsatile pressure and flow. Hydraulic parameters of the outlet tube were adjusted manually by a clamp. Seven groups of steady state experiments were carried out to calibrate the flow-dependent resistance as reference values. Dynamic estimation results showed that the inertance estimates are insensitive to model structures. If the constant term was ignored, estimation errors for the flow-proportional term were limited within 16% of the reference values. Compared with the constant resistance, a time-varying one improves model accuracy in terms of root mean square error. The maximum improvement is up to 35%. However, including the constant term in the time-varying resistance will lead to serious estimation errors. PMID:23253954
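The outlet-tube model described above combines a linearly flow-dependent resistance (a constant term plus a flow-proportional term) with an inertance acting on the flow derivative. A discrete sketch of that pressure-flow relation follows; the parameter values and waveform are illustrative, not the values identified from the mock circulatory loop.

```python
import math

def outlet_pressure_drop(q, q_prev, dt, R0, k, L):
    """dp = (R0 + k*|q|) * q + L * dq/dt :
    constant resistance R0, flow-proportional resistance k*|q|, inertance L."""
    dqdt = (q - q_prev) / dt
    return (R0 + k * abs(q)) * q + L * dqdt

# A half-rectified sinusoidal pulsatile flow waveform, sampled at dt = 0.01 s:
dt, R0, k, L = 0.01, 0.4, 1.2, 0.02   # hypothetical, consistent units
q = [0.1 * max(math.sin(2 * math.pi * i * dt / 0.8), 0.0) for i in range(80)]
dp = [outlet_pressure_drop(q[i], q[i - 1] if i else 0.0, dt, R0, k, L)
      for i in range(len(q))]
print(round(max(dp), 3))
```

At steady flow the derivative term vanishes and the model reduces to dp = (R0 + k*q)*q, which is the relation calibrated in the steady-state experiments; the dynamic estimation problem is identifying R0, k, and L from the pulsatile data.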
NASA Astrophysics Data System (ADS)
Saksala, Timo
2015-07-01
In this paper, the embedded discontinuity approach is applied in finite element modeling of rock in compression and tension. To this end, a rate-dependent constitutive model based on a (strong) embedded displacement discontinuity model is developed to describe the mode I, mode II and mixed-mode fracture of rock. The constitutive model describes the bulk material as linear elastic until reaching the elastic limit. Beyond the elastic limit, a rate-dependent exponential softening law governs the evolution of the displacement jump. Rock heterogeneity is incorporated in the present approach by a random description of the mineral texture of rock. Moreover, the initial microcrack population always present in natural rocks is accounted for as randomly oriented embedded discontinuities. In the numerical examples, the model properties are extensively studied in uniaxial compression. The effect of loading rate and confining pressure is also tested in 2D (plane strain) numerical simulations. These simulations demonstrate that the model captures the salient features of rock in confined compression and uniaxial tension. The developed method has the computational efficiency of continuum plasticity models. However, it also has the advantage over these models of accounting for the orientation of introduced microcracks. This feature is crucial with respect to the fracture behavior of rock in compression, as shown in this paper.
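A common form of the exponential softening law in embedded-discontinuity models has the traction decay from the tensile strength sigma_t as the displacement jump grows, with the area under the curve equal to the fracture energy G_f. The sketch below checks that property numerically; it shows only this rate-independent mode I skeleton (the paper's model also includes rate dependence and mode mixity), and the material values are merely typical orders for rock.

```python
import math

def traction(jump, sigma_t, G_f):
    """Exponential softening: t(jump) = sigma_t * exp(-(sigma_t / G_f) * jump).
    Integrating t over the jump from 0 to infinity recovers G_f exactly."""
    return sigma_t * math.exp(-(sigma_t / G_f) * jump)

sigma_t, G_f = 10e6, 100.0    # Pa, J/m^2 (illustrative orders for rock)

# Midpoint-rule integration of the traction over the jump, out to 50 decay
# lengths, should dissipate (almost exactly) the fracture energy G_f:
n, jmax = 200000, 50 * G_f / sigma_t
dj = jmax / n
energy = sum(traction((i + 0.5) * dj, sigma_t, G_f) * dj for i in range(n))
print(round(energy, 3))  # ~100.0: dissipated energy equals G_f
```

Tying the softening slope to G_f in this way is what makes the element response objective with respect to mesh size in embedded-discontinuity formulations.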
Incorporation of mantle effects in lithospheric stress modeling: the Eurasian plate
NASA Astrophysics Data System (ADS)
Ruckstuhl, K.; Wortel, M. J. R.; Govers, R.; Meijer, P.
2009-04-01
The intraplate stress field is the result of forces acting on the lithosphere and as such contains valuable information on the dynamics of plate tectonics. Studies modeling the intraplate stress field have followed two different approaches, with the emphasis either on the lithosphere itself or on the underlying convecting mantle. For most tectonic plates on Earth, one or both methods have been quite successful in reproducing the large-scale stress field. The Eurasian plate, however, has remained a challenge. A probable cause is that, due to the complexity of the plate, successful models require both an active mantle and well-defined boundary forces. We therefore construct a model for the Eurasian plate in which we combine both modeling approaches by incorporating the effects of an active mantle in a model based on a lithospheric approach, where boundary forces are modeled explicitly. The assumption that the whole plate is in dynamical equilibrium allows for imposing a torque balance on the plate, which provides extra constraints on the forces that cannot be calculated a priori. Mantle interaction is modeled as a shear at the base of the plate obtained from global mantle flow models from the literature. A first-order approximation of the increased excess pressure of the anomalous ridge near the Iceland hotspot is incorporated. Results are evaluated by comparison with World Stress Map data. Direct incorporation of the sublithospheric stresses from mantle flow modeling in our force model is not possible, due to a discrepancy of around one order of magnitude between the integrated mantle shear and the lithospheric forces, prohibiting balance of the torques. This magnitude discrepancy is a well-known fundamental problem in geodynamics, and we choose to close the gap between the two approaches by scaling down the absolute magnitude of the sublithospheric stresses. Becker and O'Connell (G3, 2, 2001) showed that various mantle flow models show a considerable spread in
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
Incorporating disease and population structure into models of SIR disease in contact networks.
Miller, Joel C; Volz, Erik M
2013-01-01
We consider the recently introduced edge-based compartmental models (EBCM) for the spread of susceptible-infected-recovered (SIR) diseases in networks. These models differ from standard infectious disease models by focusing on the status of a random partner in the population, rather than a random individual. This change in focus leads to simple analytic models for the spread of SIR diseases in random networks with heterogeneous degree. In this paper we extend this approach to handle deviations of the disease or population from the simplistic assumptions of earlier work. We allow the population to have structure due to effects such as demographic features or multiple types of risk behavior. We allow the disease to have more complicated natural history. Although we introduce these modifications in the static network context, it is straightforward to incorporate them into dynamic network models. We also consider serosorting, which requires using dynamic network models. The basic methods we use to derive these generalizations are widely applicable, and so it is straightforward to introduce many other generalizations not considered here. Our goal is twofold: to provide a number of examples generalizing the EBCM method for various different population or disease structures and to provide insight into how to derive such a model under new sets of assumptions. PMID:23990880
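In the EBCM, the epidemic on a configuration-model network is tracked through a single variable theta(t), the probability that a random partner has not yet transmitted; the susceptible fraction is then S = psi(theta), where psi is the probability generating function of the degree distribution. The sketch below integrates the standard base EBCM equations (Miller and Volz) with forward Euler, using a Poisson degree distribution as an example; the parameter values are illustrative, and the paper's extensions (population structure, serosorting) are not included.

```python
import math

def ebcm_sir(mean_k, beta, gamma, t_max=40.0, dt=0.01):
    """Base EBCM for SIR on a Poisson(mean_k) configuration-model network:
       dtheta/dt = -beta*theta + beta*psi'(theta)/psi'(1) + gamma*(1 - theta)
       S = psi(theta),  dR/dt = gamma*I,  I = 1 - S - R."""
    psi = lambda x: math.exp(mean_k * (x - 1.0))      # Poisson PGF
    dpsi = lambda x: mean_k * math.exp(mean_k * (x - 1.0))
    theta, R = 1.0 - 1e-4, 0.0                        # small initial infection
    S = psi(theta)
    I = 1.0 - S - R
    for _ in range(int(t_max / dt)):                  # forward Euler steps
        dtheta = (-beta * theta + beta * dpsi(theta) / dpsi(1.0)
                  + gamma * (1.0 - theta))
        theta += dt * dtheta
        R += dt * gamma * I
        S = psi(theta)
        I = 1.0 - S - R
    return S, I, R

S, I, R = ebcm_sir(mean_k=5.0, beta=0.6, gamma=1.0)
print(round(R, 3))  # final epidemic size (fraction recovered)
```

The appeal of the method is visible here: a full heterogeneous-network epidemic reduces to one ODE for theta plus the PGF, and the generalizations in the paper modify theta's dynamics rather than reverting to per-node bookkeeping.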
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
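The core idea above, maximizing the probability that a management outcome exceeds a threshold of acceptability rather than maximizing the expected outcome, can be shown with a two-action toy problem: when per-unit effects are independent normal variables, the outcome of any allocation is normal, and the exceedance probability can be scanned over allocations. The means, standard deviations, and thresholds below are hypothetical, and this grid search stands in for the paper's analytical derivations.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_allocation(B, T, mu1, sd1, mu2, sd2, steps=200):
    """Split budget B as a and B-a between two actions with independent
    normal per-unit effects; maximize P(a*X1 + (B-a)*X2 >= T) by grid search."""
    best_a, best_p = 0.0, -1.0
    for i in range(steps + 1):
        a = B * i / steps
        mean = a * mu1 + (B - a) * mu2
        sd = math.hypot(a * sd1, (B - a) * sd2)
        p = phi((mean - T) / sd) if sd > 0 else float(mean >= T)
        if p > best_p:
            best_a, best_p = a, p
    return best_a, best_p

# Action 1: high mean, high variance. Action 2: lower mean, low variance.
lo = best_allocation(B=1.0, T=0.8, mu1=1.5, sd1=1.0, mu2=1.0, sd2=0.2)
hi = best_allocation(B=1.0, T=1.4, mu1=1.5, sd1=1.0, mu2=1.0, sd2=0.2)
print(round(lo[0], 2), round(hi[0], 2))
```

Running this reproduces the qualitative finding of the paper: for a low threshold the optimum diversifies funds across actions, while for a high threshold the budget concentrates on the high-potential (but risky) action.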
NASA Astrophysics Data System (ADS)
Paul, Pijush Kanti
In the fault damage zone modeling study for a field in the Timor Sea, I present a methodology to incorporate geomechanically-based fault damage zones into reservoir simulation. In the studied field, production history suggests that the mismatch between actual production and model prediction is due to preferential fluid flow through the damage zones associated with the reservoir scale faults, which is not included in the baseline petrophysical model. I analyzed well data to estimate stress heterogeneity and fracture distributions in the reservoir. Image logs show that stress orientations are homogenous at the field scale with a strike-slip/normal faulting stress regime and maximum horizontal stress oriented in NE-SW direction. Observed fracture zones in wells are mostly associated with well scale fault and bed boundaries. These zones do not show any anomalies in production logs or well test data, because most of the fractures are not optimally oriented to the present day stress state, and matrix permeability is high enough to mask any small anomalies from the fracture zones. However, I found that fracture density increases towards the reservoir scale faults, indicating high fracture density zones or damage zones close to these faults, which is consistent with the preferred flow direction indicated by interference and tracer test done between the wells. It is well known from geologic studies that there is a concentration of secondary fractures and faults in a damage zone adjacent to larger faults. Because there is usually inadequate data to incorporate damage zone fractures and faults into reservoir simulation models, in this study I utilized the principles of dynamic rupture propagation from earthquake seismology to predict the nature of fractured/damage zones associated with reservoir scale faults. The implemented workflow can be used to more routinely incorporate damage zones into reservoir simulation models. Applying this methodology to a real reservoir utilizing
Stenroos, Matti; Nummenmaa, Aapo
2016-01-01
MEG/EEG source imaging is usually done using a three-shell (3-S) or a simpler head model. Such models omit cerebrospinal fluid (CSF) that strongly affects the volume currents. We present a four-compartment (4-C) boundary-element (BEM) model that incorporates the CSF and is computationally efficient and straightforward to build using freely available software. We propose a way for compensating the omission of CSF by decreasing the skull conductivity of the 3-S model, and study the robustness of the 4-C and 3-S models to errors in skull conductivity. We generated dense boundary meshes using MRI datasets and the automated SimNIBS pipeline. Then, we built a dense 4-C reference model using Galerkin BEM, and 4-C and 3-S test models using coarser meshes and both Galerkin and collocation BEMs. We compared field topographies of cortical sources, applying various skull conductivities and fitting conductivities that minimized the relative error in 4-C and 3-S models. When the CSF was left out from the EEG model, our compensated, unbiased approach improved the accuracy of the 3-S model considerably compared to the conventional approach, where CSF is neglected without any compensation (mean relative error < 20% vs. > 40%). The error due to the omission of CSF was of the same order in MEG and compensated EEG. EEG has, however, a large overall error due to uncertain skull conductivity. Our results show that a realistic 4-C MEG/EEG model can be implemented using standard tools and basic BEM, without excessive workload or computational burden. If the CSF is omitted, compensated skull conductivity should be used in EEG. PMID:27472278
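The conductivity-fitting step above minimizes a relative error between field topographies; a minimal version of that metric is sketched below (the vectors are made-up stand-ins for simulated topographies, not the paper's data):

```python
import math

# Relative error between a test-model topography and a reference-model
# topography, as commonly used to score approximate head models.
def relative_error(test, ref):
    """||test - ref|| / ||ref|| for two field vectors of equal length."""
    num = math.sqrt(sum((t - r) ** 2 for t, r in zip(test, ref)))
    den = math.sqrt(sum(r ** 2 for r in ref))
    return num / den

# Hypothetical topographies: the test model is close to, but not
# identical with, the dense reference model.
ref = [1.0, 2.0, 3.0]
test = [1.1, 1.9, 3.2]
print(round(relative_error(test, ref), 3))
```

In the study's setting, a skull conductivity would be chosen to minimize this quantity over many cortical sources.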
Incorporating Social Anxiety Into a Model of College Problem Drinking: Replication and Extension
Ham, Lindsay S.; Hope, Debra A.
2009-01-01
Although research has found an association between social anxiety and alcohol use in noncollege samples, results have been mixed for college samples. College students face many novel social situations in which they may drink to reduce social anxiety. In the current study, the authors tested a model of college problem drinking, incorporating social anxiety and related psychosocial variables among 228 undergraduate volunteers. According to structural equation modeling (SEM) results, social anxiety was unrelated to alcohol use and was negatively related to drinking consequences. Perceived drinking norms mediated the social anxiety–alcohol use relation and was the variable most strongly associated with problem drinking. College students appear to be unique with respect to drinking and social anxiety. Although the notion of social anxiety alone as a risk factor for problem drinking was unsupported, additional research is necessary to determine whether there is a subset of socially anxious students who have high drinking norms and are in need of intervention. PMID:16938075
Runoff Modelling of the Khumbu Glacier, Nepal: Incorporating Debris Cover and Retreat Dynamics.
NASA Astrophysics Data System (ADS)
Douglas, James; Huss, Matthias; Jones, Julie; Swift, Darrel; Salerno, Franco
2016-04-01
Detailed studies on the future evolution and runoff of glaciers in high mountain Asia are scarce considering the region is so reliant on this essential water source. This study adapts a model well-proven in the European Alps, the Glacier Evolution and Runoff Model (GERM), to simulate the behaviour of the Khumbu glacier, Nepal. GERM calculates glacier mass balance and runoff using a distributed temperature index model which has been modified such that the unique dynamics of debris-covered glaciers, namely stagnation, thinning, and melt-inhibiting debris surfaces, are incorporated. Debris thickness is derived from both remote sensing and model-based approaches, allowing a suite of experiments to be conducted using various levels of debris cover. The model is driven by CORDEX-South Asia regional climate model (RCM) simulations, bias corrected using a quantile mapping technique based on in-situ data from the Pyramid meteorological station. Here, results are presented showing the retreat of the Khumbu glacier and the corresponding changes for annual and seasonal discharge until 2100, using varying melt parameters and debris thicknesses to assess the impact of debris cover on glacier evolution and runoff.
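The temperature-index core of such a melt model can be sketched in a few lines; the form below is an assumed minimal version (the degree-day factor and debris scaling are hypothetical placeholders, not calibrated GERM parameters):

```python
# Minimal distributed temperature-index melt rule with a multiplicative
# debris-cover factor. Parameter values are illustrative only.
def daily_melt(temp_c: float, ddf_ice: float = 7.0, debris_factor: float = 1.0,
               t_threshold: float = 0.0) -> float:
    """Melt (mm w.e./day) = DDF * max(T - T0, 0), scaled for debris cover.

    debris_factor < 1 mimics melt inhibition under a thick debris mantle;
    no melt occurs below the temperature threshold t_threshold.
    """
    positive_degrees = max(temp_c - t_threshold, 0.0)
    return ddf_ice * positive_degrees * debris_factor

clean_ice = daily_melt(5.0)                        # clean-ice melt at +5 C
thick_debris = daily_melt(5.0, debris_factor=0.4)  # same day, 60% melt reduction
print(clean_ice, thick_debris)
```

In a distributed model this rule would be evaluated per grid cell, with the debris factor varying with mapped debris thickness.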
Incorporation of 3D Shortwave Radiative Effects within the Weather Research and Forecasting Model
O'Hirok, W.; Ricchiazzi, P.; Gautier, C.
2005-03-18
A principal goal of the Atmospheric Radiation Measurement (ARM) Program is to understand the 3D cloud-radiation problem from scales ranging from the local to the size of global climate model (GCM) grid squares. For climate models using typical cloud overlap schemes, 3D radiative effects are minimal for all but the most complicated cloud fields. However, with the introduction of "superparameterization" methods, where sub-grid cloud processes are accounted for by embedding high resolution 2D cloud system resolving models within a GCM grid cell, the impact of 3D radiative effects on the local scale becomes increasingly relevant (Randall et al. 2003). In a recent study, we examined this issue by comparing the heating rates produced from a 3D and 1D shortwave radiative transfer model for a variety of radar derived cloud fields (O'Hirok and Gautier 2005). As demonstrated in Figure 1, the heating rate differences for a large convective field can be significant where 3D effects produce areas of intense local heating. This finding, however, does not address the more important question of whether 3D radiative effects can alter the dynamics and structure of a cloud field. To investigate that issue we have incorporated a 3D radiative transfer algorithm into the Weather Research and Forecasting (WRF) model. Here, we present very preliminary findings of a comparison between cloud fields generated from a high resolution non-hydrostatic mesoscale numerical weather model using 1D and 3D radiative transfer codes.
Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter
2010-01-01
In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. To include queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and its strength. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities, to incorporate waiting lines and queues in the decision-analytic modeling example. PMID:20345550
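For the simplest single-server (M/M/1) case, the performance measures such a queuing tutorial derives have closed forms; a sketch with hypothetical arrival and service rates (not taken from the stent-placement example):

```python
# Closed-form performance measures of an M/M/1 queue (illustrative only).
def mm1_measures(lam: float, mu: float) -> dict:
    """Standard M/M/1 metrics for arrival rate lam and service rate mu."""
    if lam >= mu:
        raise ValueError("Queue is unstable unless lam < mu")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number in system (L = lam * W, Little's law)
    W = 1 / (mu - lam)             # mean time in system
    Lq = rho ** 2 / (1 - rho)      # mean number waiting in queue
    Wq = rho / (mu - lam)          # mean waiting time before service
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Hypothetical example: 4 patients/hour arrive, capacity 5 procedures/hour.
print(mm1_measures(4.0, 5.0))
```

When no closed form exists (priority rules, limited capacity, correlated arrivals), discrete event simulation takes over, as the tutorial describes.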
An agent-based model of stock markets incorporating momentum investors
NASA Astrophysics Data System (ADS)
Wei, J. R.; Huang, J. P.; Hui, P. M.
2013-06-01
It has been widely accepted that there exist investors who adopt momentum strategies in real stock markets. Understanding the momentum behavior is of both academic and practical importance. For this purpose, we propose and study a simple agent-based model of trading incorporating momentum investors and random investors. The random investors trade randomly all the time. The momentum investors could be idle, buying or selling, and they decide on their action by implementing an action threshold that assesses the most recent price movement. The model is able to reproduce some of the stylized facts observed in real markets, including the fat-tails in returns, weak long-term correlation and scaling behavior in the kurtosis of returns. An analytic treatment of the model relates the model parameters to several quantities that can be extracted from real data sets. To illustrate how the model can be applied, we show that real market data can be used to constrain the model parameters, which in turn provide information on the behavior of momentum investors in different markets.
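The momentum investors' action-threshold rule described above can be sketched as follows (the threshold and prices are illustrative, not the paper's calibrated values):

```python
import random

# Toy decision rules for the two investor types in the agent-based model.
def momentum_action(prices: list, threshold: float = 0.01) -> int:
    """Return +1 (buy), -1 (sell) or 0 (idle) from the most recent return."""
    r = (prices[-1] - prices[-2]) / prices[-2]   # most recent price movement
    if r > threshold:
        return +1        # momentum investor chases the rise
    if r < -threshold:
        return -1        # ...and sells into the fall
    return 0             # movement below threshold: stay idle

def random_action() -> int:
    return random.choice([+1, -1])               # random investors always trade

print(momentum_action([100.0, 102.0]))  # 2% rise exceeds 1% threshold
print(momentum_action([100.0, 100.5]))  # 0.5% move: idle
```

In the full model, the fraction of momentum investors and the threshold would be the parameters constrained by real market data.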
NASA Astrophysics Data System (ADS)
Ryves, David B.; Battarbee, Richard W.; Fritz, Sherilyn C.
2009-01-01
Taphonomic issues pose fundamental challenges for Quaternary scientists to recover environmental signals from biological proxies and make accurate inferences of past environments. The problem of microfossil preservation, specifically diatom dissolution, remains an important, but often overlooked, source of error in both qualitative and quantitative reconstructions of key variables from fossil samples, especially those using relative abundance data. A first step to tackling this complex issue is establishing an objective method of assessing preservation (here, diatom dissolution) that can be applied by different analysts and incorporated into routine counting strategies. Here, we establish a methodology for assessment of diatom dissolution under standard light microscopy (LM) illustrated with morphological criteria for a range of major diatom valve shapes. Dissolution data can be applied to numerical models (transfer functions) from contemporary samples, and to fossil material to aid interpretation of stratigraphic profiles and taphonomic pathways of individual taxa. Using a surface sediment diatom-salinity training set from the Northern Great Plains (NGP) as an example, we explore a variety of approaches to include dissolution data in salinity inference models indirectly and directly. Results show that dissolution data can improve models, with apparent dissolution-adjusted error (RMSE) up to 15% lower than their unadjusted counterparts. Internal validation suggests improvements are more modest, with bootstrapped prediction errors (RMSEP) up to 10% lower. When tested on a short core from Devils Lake, North Dakota, which has a historical record of salinity, dissolution-adjusted models infer higher values compared to unadjusted models during peak salinity of the 1930s-1940s Dust Bowl but nonetheless significantly underestimate peak values. Site-specific factors at Devils Lake associated with effects of lake level change on taphonomy (preservation and re
NASA Astrophysics Data System (ADS)
Gharari, S.; Hrachowitz, M.; Fenicia, F.; Gao, H.; Gupta, H. V.; Savenije, H.
2014-12-01
Although different strategies have demonstrated that incorporation of expert and a priori knowledge can help to improve the realism of models, no systematic strategy has been presented in the literature for constraining the model parameters to be consistent with the (sometimes) patchy understanding of a modeler regarding how the real system might work. Part of the difficulty in doing this is that expert knowledge may not always consist of explicitly quantifiable relationships between physical system characteristics and model parameters; rather, it may consist of conceptual understanding about consistency relationships that must exist between various model parameters or behavioral relationships that must exist among model state variables and/or fluxes. Apart from the aforementioned constraints, a unified strategy for measurement of information content in hierarchical model building seems lacking. First, the model structure is built from its building blocks (control volumes or state variables) and their interconnecting fluxes. Second, the model parameterizations are designed; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. At the final stage the parameter values are quantified. In each step, and based on the assumptions made, more and more information is added to the model. In this study, we construct (based on the hierarchical model-building scheme) and constrain the parameters of different conceptual models built on landscape units classified according to their hydrological functions, based on our logical considerations and general lessons from previous studies across the globe, for a Luxembourgish catchment. Based on the results, including our basic understanding of how a system may work into hydrological models appears to be a powerful tool to achieve higher model realism, as it leads to models with higher performance. Progressive measurement of performance and uncertainty
Incorporating advanced combustion models to study power density in diesel engines
NASA Astrophysics Data System (ADS)
Lee, Daniel Michael
A new combustion model is presented that can be used to simulate the diesel combustion process. This combustion process is broken into three phases: low temperature ignition kinetics, premixed burn and high temperature diffusion burn. The low temperature ignition kinetics are modeled using the Shell model. For combustion limited by diffusion, a probability density function (PDF) combustion model is utilized. In this model, the turbulent reacting flow is assumed to be an ensemble of locally laminar flamelets. With this methodology, species mass fractions obtained from the solution of laminar flamelet equations can be conditioned to generate a flamelet library. For kinetically limited (premixed) combustion, an Arrhenius rate is used. To transition between the premixed and diffusion burning modes, a transport equation for premixed fuel was implemented. The ratio of fuel in a computational cell that is premixed is used to determine the contribution of each combustion mode. Results show that this combustion model accurately simulates the diesel combustion process. Furthermore, the simulated results are in agreement with the recent conceptual picture of diesel combustion based upon experimental observations. Large eddy simulation (LES) models for momentum exchange and scalar flux were incorporated into the KIVA solver. In this formulation, the turbulent viscosity, μt, is determined as a function of the sub-grid turbulent kinetic energy, which is in turn determined from a one-equation model. The formulation for the scalar transfer coefficient, μs, is similar to that of the turbulent viscosity, yet is made to be consistent with scalar transport. Test cases were run verifying that both momentum and scalar flux can be accurately predicted using LES. Once verified, these LES models were used to simulate the diesel combustion process for a Caterpillar 3400 series engine. Results for the engine simulations were in good agreement with experimental data.
Sutheerawatthana, Pitch; Minato, Takayuki
2010-02-15
The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).
Yamamoto, Takashi; Watanuki, Yutaka; Hazen, Elliott L; Nishizawa, Bungo; Sasaki, Hiroko; Takahashi, Akinori
2015-12-01
Habitat use is often examined at a species or population level, but patterns likely differ within a species, as a function of the sex, breeding colony, and current breeding status of individuals. Hence, within-species differences should be considered in habitat models when analyzing and predicting species distributions, such as predicted responses to expected climate change scenarios. Also, species' distribution data obtained by different methods (vessel-survey and individual tracking) are often analyzed separately rather than integrated to improve predictions. Here, we fit generalized additive models for Streaked Shearwaters (Calonectris leucomelas) using tracking data from two different breeding colonies in the Northwestern Pacific and visual observer data collected during a research cruise off the coast of western Japan. The tracking-based models showed differences among patterns of relative density distribution as a function of life history category (colony, sex, and breeding conditions). The integrated tracking-based and vessel-based bird count model incorporated ecological states rather than predicting a single surface for the entire species. This study highlights both the importance of including ecological and life history data and integrating multiple data types (tag-based tracking and vessel count) when examining species-environment relationships, ultimately advancing the capabilities of species distribution models. PMID:26910963
Incorporation of a Chemical Kinetics Model for Composition B in a Parallel Finite-Element Algorithm
NASA Astrophysics Data System (ADS)
Kallman, Elizabeth; Pauler, Denise
2009-06-01
A thermal degradation model for Composition B (Comp B) explosive is being evaluated for incorporation into a finite-element algorithm [1]. The RDX component of Comp B dominates the thermal degradation since its decomposition process occurs at lower temperatures than TNT. The model assumes that solid and liquid RDX decompose by the same mechanisms, but along different reaction pathways [2, 3]. A steady-state approximation is applied to the gaseous intermediates and is compared to the full transient analysis for the entire reaction scheme. The parallel finite-element algorithm is used to predict the pressure increase on the interior of the metal casing of confined Comp B due to the production of gases during thermal decomposition. References: [1] E. M. Kallman, "Scalable Cluster-Based Galerkin Analysis for Kinetics Models of Energetic Materials," SIAM CSE, March 2-6, 2009. [2] D. K. Zerkle, "Composition B Decomposition and Ignition Model," 13th International Detonation Symposium, July 23-28, 2006. [3] J. M. Zucker, A. J. Barra, D. K. Zerkle, M. J. Kaneshige and P. M. Dickson, "Thermal Decomposition Models for High Explosive Compositions," 14th APS Topical Conference on Shock Compression of Condensed Matter, July 31-August 5, 2005.
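The decomposition rates in such kinetics models are Arrhenius-type; a generic rate-constant helper is sketched below (the pre-exponential factor and activation energy are placeholders, not Comp B parameters):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

# Arrhenius rate constant k(T) = A * exp(-Ea / (R T)); the strong
# temperature sensitivity is what drives thermal-runaway behavior.
def arrhenius_k(A: float, Ea_j_per_mol: float, T_kelvin: float) -> float:
    return A * math.exp(-Ea_j_per_mol / (R_GAS * T_kelvin))

# Hypothetical parameters: a modest temperature rise multiplies the rate.
k_cool = arrhenius_k(1e13, 1.6e5, 480.0)
k_warm = arrhenius_k(1e13, 1.6e5, 500.0)
print(k_warm / k_cool)  # several-fold acceleration over 20 K
```

In the full model, one such rate would exist per reaction step, with the steady-state approximation eliminating the fast gaseous intermediates.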
NASA Astrophysics Data System (ADS)
Lei, Y.; Zhang, B. W.; Bai, B. F.; Zhao, T. S.
2015-12-01
In a typical all-vanadium redox flow battery (VRFB), the ion exchange membrane is directly exposed in the bulk electrolyte. Consequently, the Donnan effect occurs at the membrane/electrolyte (M/E) interfaces, which is critical for modeling of ion transport through the membrane and the prediction of cell performance. However, unrealistic assumptions in previous VRFB models, such as electroneutrality and discontinuities of ionic potential and ion concentrations at the M/E interfaces, lead to simulated results inconsistent with the theoretical analysis of ion adsorption in the membrane. To address this issue, this work proposes a continuous Donnan-effect model using the Poisson equation coupled with the Nernst-Planck equation to describe variable distributions at the M/E interfaces. A one-dimensional transient VRFB model incorporating the Donnan effect is developed. It is demonstrated that the present model enables (i) a more realistic simulation of continuous distributions of ion concentrations and ionic potential throughout the membrane and (ii) a more comprehensive estimation for the effect of the fixed charge concentration on species crossover across the membrane and cell performance.
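At equilibrium, the Donnan effect fixes a concentration jump across the M/E interface governed by a Boltzmann factor in the Donnan potential; a minimal illustration (the potential value below is hypothetical, not from the paper's simulation):

```python
import math

F_CONST = 96485.0  # Faraday constant, C/mol
R_GAS = 8.314      # gas constant, J/(mol K)

# Equilibrium Donnan partitioning: c_membrane / c_bulk = exp(-z F phi / (R T))
# for an ion of valence z across a potential step phi (volts).
def donnan_ratio(z: int, phi_donnan_v: float, temp_k: float = 298.15) -> float:
    return math.exp(-z * F_CONST * phi_donnan_v / (R_GAS * temp_k))

# A hypothetical -30 mV Donnan potential: counter-ions (z = +1) are
# enriched in the membrane, co-ions (z = -1) are excluded.
print(donnan_ratio(+1, -0.030))  # > 1: enrichment
print(donnan_ratio(-1, -0.030))  # < 1: exclusion
```

The continuous model in the abstract replaces this sharp jump with a Poisson/Nernst-Planck solution that resolves the interface region explicitly.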
NASA Astrophysics Data System (ADS)
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events, at lead times on the order of weeks to months, may enable reservoir operators to make more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
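The climate-state idea can be sketched as a Markov chain over the three regimes, each carrying its own AR(1) inflow model; all transition probabilities and AR parameters below are invented for illustration, not fitted to the Australian case study:

```python
import random

STATES = ["dry", "normal", "wet"]
# Hypothetical state-transition probabilities (rows follow STATES order).
P = {
    "dry":    [0.6, 0.3, 0.1],
    "normal": [0.2, 0.6, 0.2],
    "wet":    [0.1, 0.3, 0.6],
}
# Hypothetical (mean inflow, AR coefficient) per climate state.
AR = {"dry": (20.0, 0.5), "normal": (50.0, 0.6), "wet": (90.0, 0.7)}

def simulate_inflows(n: int, seed: int = 1):
    """Simulate n steps of (climate state, inflow) for the toy model."""
    rng = random.Random(seed)
    state, inflow, series = "normal", 50.0, []
    for _ in range(n):
        state = rng.choices(STATES, weights=P[state])[0]
        mean, phi = AR[state]
        # AR(1) fluctuation around the state-dependent mean
        inflow = mean + phi * (inflow - mean) + rng.gauss(0.0, 5.0)
        series.append((state, round(inflow, 1)))
    return series

print(simulate_inflows(5))
```

In the SDP itself, these state-conditional inflow distributions define the transition probabilities over which the release policy is optimized.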
Gripp, A.E.; Gordon, R.G.
1990-07-01
NUVEL-1 is a new global model of current relative plate velocities which differ significantly from those of prior models. Here the authors incorporate NUVEL-1 into HS2-NUVEL1, a new global model of plate velocities relative to the hotspots. HS2-NUVEL1 was determined from the hotspot data and errors used by Minster and Jordan (1978) to determine AM1-2, which is their model of plate velocities relative to the hotspots. AM1-2 is consistent with Minster and Jordan's relative plate velocity model RM2. Here the authors compare HS2-NUVEL1 with AM1-2 and examine how their differences relate to differences between NUVEL-1 and RM2. HS2-NUVEL1 plate velocities relative to the hotspots are mainly similar to those of AM1-2. Minor differences between the two models include the following: (1) in HS2-NUVEL1 the speed of the partly continental, apparently non-subducting Indian plate is greater than that of the purely oceanic, subducting Nazca plate; (2) in places the direction of motion of the African, Antarctic, Arabian, Australian, Caribbean, Cocos, Eurasian, North American, and South American plates differs between models by more than 10°; (3) in places the speed of the Australian, Caribbean, Cocos, Indian, and Nazca plates differs between models by more than 8 mm/yr. Although 27 of the 30 RM2 Euler vectors differ with 95% confidence from those of NUVEL-1, only the AM1-2 Arabia-hotspot and India-hotspot Euler vectors differ with 95% confidence from those of HS2-NUVEL1. Thus, substituting NUVEL-1 for RM2 in the inversion for plate velocities relative to the hotspots changes few Euler vectors significantly, presumably because the uncertainty in the velocity of a plate relative to the hotspots is much greater than the uncertainty in its velocity relative to other plates.
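Plate speeds of the kind compared above follow from rigid rotations, v = ω × r; a sketch with a hypothetical Euler vector (not the HS2-NUVEL1 values):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def to_cartesian(lat_deg: float, lon_deg: float):
    """Unit vector for a point given in geographic coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def plate_speed_mm_per_yr(pole_lat, pole_lon, omega_deg_per_myr,
                          site_lat, site_lon):
    """Surface speed |omega x r| at a site on a rigidly rotating plate."""
    w = [omega_deg_per_myr * math.pi / 180.0 * c
         for c in to_cartesian(pole_lat, pole_lon)]      # rad/Myr
    r = [R_EARTH_KM * c for c in to_cartesian(site_lat, site_lon)]
    v = [w[1] * r[2] - w[2] * r[1],                      # cross product
         w[2] * r[0] - w[0] * r[2],
         w[0] * r[1] - w[1] * r[0]]
    return math.sqrt(sum(c * c for c in v))  # km/Myr, numerically = mm/yr

# A site 90 degrees from the pole moves fastest: ~111 mm/yr per deg/Myr.
print(round(plate_speed_mm_per_yr(90.0, 0.0, 1.0, 0.0, 0.0), 1))
```

Model differences like the 8 mm/yr figures quoted above come from evaluating such velocities under two different sets of Euler vectors at the same sites.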
Tucker, Susan L.; Li, Minghuan; Xu, Ting; Gomez, Daniel; Yuan, Xianglin; Yu, Jinming; Liu, Zhensheng; Yin, Ming; Guan, Xiaoxiang; Wang, Li-E; Wei, Qingyi; Mohan, Radhe; Vinogradskiy, Yevgeniy; Martel, Mary; Liao, Zhongxing
2012-01-01
Purpose To determine whether single nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor beta, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiotherapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (Grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate (MV) analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the MV model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk versus <10% risk of RP when exposed to high MLDs. PMID:22541966
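The LKB dose-response with a multiplicative dose-modifying factor can be sketched as follows (TD50, m, and the DMF value are hypothetical, not the study's fitted parameters):

```python
import math

# Lyman-Kutcher-Burman NTCP as a function of mean lung dose (MLD),
# with a dose-modifying factor (DMF) for a hypothetical risk genotype.
def lkb_ntcp(mld_gy: float, td50: float = 30.0, m: float = 0.4,
             dmf: float = 1.0) -> float:
    """NTCP = Phi(t), t = (DMF * MLD - TD50) / (m * TD50); Phi = normal CDF."""
    t = (dmf * mld_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

base = lkb_ntcp(20.0)              # risk at MLD = 20 Gy, no modifying factor
carrier = lkb_ntcp(20.0, dmf=1.4)  # same MLD, carrier "feels" 40% higher dose
print(round(base, 3), round(carrier, 3))
```

Incorporating SNPs as dose-modifying factors, as in the study, amounts to fitting one such DMF per significant genotype alongside the shared TD50 and m.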
Simonyan, Kristina
2014-01-01
Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number of extensions of
NASA Astrophysics Data System (ADS)
Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.
2012-12-01
Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
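Gassmann's equation itself is compact enough to state directly. A sketch of fluid substitution under its homogeneous-frame, fully-communicating-pore assumptions, with illustrative clean-sand inputs:

```python
def gassmann_ksat(k_dry, k_min, k_fluid, phi):
    """Saturated-rock bulk modulus via Gassmann (1951) fluid substitution.

    Assumes a homogeneous mineral frame and a pore space in full pressure
    communication with a movable fluid -- exactly the assumption the
    abstract notes is often violated in heterolithic sandstones.
    Moduli in GPa, porosity phi as a fraction.
    """
    numerator = (1.0 - k_dry / k_min) ** 2
    denominator = phi / k_fluid + (1.0 - phi) / k_min - k_dry / k_min**2
    return k_dry + numerator / denominator

# Illustrative clean-sand inputs: quartz mineral, brine-filled pores.
k_sat = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fluid=2.2, phi=0.25)
```

The fluid stiffens the rock (k_sat exceeds k_dry but stays below the mineral modulus); the drained frame modulus k_dry is the input the authors argue must be constrained sedimentologically.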
NASA Astrophysics Data System (ADS)
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei
2016-04-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat
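The interval/inequality rejection scheme the framework describes can be sketched as a simple acceptance test over simulated states and fluxes; the quantity names and bounds below are hypothetical illustrations, not the Plynlimon constraints:

```python
def is_behavioural(simulated, constraints):
    """Accept a model run only if every simulated quantity falls inside
    the interval (or satisfies the inequality) defined for it; the
    intersection of all constraints is the behavioural hyper-volume.

    `constraints` maps a quantity name to (lower, upper); None leaves a
    bound open, turning the interval into a one-sided inequality.
    """
    for name, (lo, hi) in constraints.items():
        value = simulated[name]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

constraints = {
    "outlet_discharge_mm_day": (1.2, 2.8),        # interval from gauged flow
    "gw_level_peat_minus_upland_m": (0.0, None),   # inequality: lowland wetter
}
run = {"outlet_discharge_mm_day": 2.1, "gw_level_peat_minus_upland_m": 0.4}
accepted = is_behavioural(run, constraints)
```

Qualitative stakeholder knowledge enters through the one-sided inequalities: an ordering such as "lowland peat is wetter than upland peat" needs no measured value, only a sign.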
Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism
Li, Zhen; Bian, Xin; Karniadakis, George Em; Li, Xiantao
2015-12-28
The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
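The non-Markovian structure described above amounts to replacing an instantaneous friction with a convolution of the velocity history against a memory kernel. A toy 1D generalized-Langevin integrator along these lines (illustrative parameters; a faithful NM-DPD scheme would also tie the noise to the kernel via fluctuation-dissipation):

```python
import math
import random

def gle_step(x, v, v_hist, dt, kernel, mass=1.0, k_spring=1.0):
    """One explicit Euler step of a 1D generalized Langevin equation,
    m dv/dt = -k x - sum_s K(s dt) v(t - s dt) dt + R(t),
    where the friction is a discrete convolution of the velocity history
    with a memory kernel K -- the non-Markovian structure produced by
    the Mori-Zwanzig formalism. The noise amplitude is illustrative.
    """
    friction = sum(kernel(s * dt) * v_hist[-1 - s] * dt
                   for s in range(len(v_hist)))
    noise = random.gauss(0.0, 0.1) * math.sqrt(dt)
    accel = (-k_spring * x - friction + noise) / mass
    return x + v * dt, v + accel * dt

# Exponential kernel gamma * exp(-t / tau); tau -> 0 recovers the
# Markovian (delta-correlated) limit assumed by standard DPD.
kernel = lambda t: 2.0 * math.exp(-t / 0.5)
random.seed(0)
x, v, v_hist = 1.0, 0.0, [0.0]
for _ in range(1000):
    x, v = gle_step(x, v, v_hist, dt=0.01, kernel=kernel)
    v_hist.append(v)
```

The cost of the history sum is why the Markovian approximation is attractive; it is only when the kernel decays on the same time scale as the resolved dynamics that the convolution matters, which is the regime the paper targets.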
Van Breukelen, Boris M; Hunkeler, Daniel; Volkering, Frank
2005-06-01
Compound-specific isotope analysis (CSIA) enables quantification of biodegradation by use of the Rayleigh equation. The Rayleigh equation fails, however, to describe the sequential degradation of chlorinated aliphatic hydrocarbons (CAHs) involving various intermediates that are controlled by simultaneous degradation and production. This paper shows how isotope fractionation during sequential degradation can be simulated in a 1D reactive transport code (PHREEQC-2). 12C and 13C isotopes of each CAH were simulated as separate species, and the ratio of the rate constants of the heavy to light isotope equaled the kinetic isotope fractionation factor for each degradation step. The developed multistep isotope fractionation reactive transport model (IF-RTM) adequately simulated reductive dechlorination of tetrachloroethene (PCE) to ethene in a microcosm experiment. Transport scenarios were performed to evaluate the effect of sorption and of different degradation rate constant ratios among CAH species on the downgradient isotope evolution. The power of the model to quantify degradation is illustrated for situations where mixed sources degrade and for situations where daughter products are removed by oxidative processes. Finally, the model was used to interpret the occurrence of reductive dechlorination at a field site. The developed methodology can easily be incorporated in 3D solute transport models to enable quantification of sequential CAH degradation in the field by CSIA. PMID:15984799
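The paper's device of carrying 12C and 13C as separate species, with rate constants in the ratio of the kinetic fractionation factor, is easy to reproduce in miniature for a single degradation step (illustrative rate constants):

```python
def degrade_with_fractionation(c12, c13, k_light, alpha, dt, steps):
    """Treat the 12C and 13C pools of a compound as separate species,
    with the heavy-isotope rate constant equal to alpha * k_light --
    the scheme the paper implements per degradation step in PHREEQC-2
    (alpha < 1 is the kinetic isotope fractionation factor).
    Returns the pools plus delta13C (per mil) vs the initial ratio.
    """
    r0 = c13 / c12
    for _ in range(steps):
        c12 -= k_light * c12 * dt
        c13 -= alpha * k_light * c13 * dt
    delta = ((c13 / c12) / r0 - 1.0) * 1000.0
    return c12, c13, delta

# The light pool degrades faster, so the residual compound is enriched
# in 13C and its delta13C rises (illustrative parameters).
c12, c13, delta = degrade_with_fractionation(
    c12=1.0, c13=0.011, k_light=0.1, alpha=0.995, dt=0.1, steps=200)
```

For a single compound this reduces to the Rayleigh equation; the value of the explicit-species formulation is that intermediates like cis-DCE, which are produced and consumed simultaneously, fall out of the same bookkeeping with no closed-form counterpart.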
Incorporating seismic phase correlations into a probabilistic model of global-scale seismology
NASA Astrophysics Data System (ADS)
Arora, Nimar
2013-04-01
We present a probabilistic model of seismic phases whereby the attributes of the body-wave phases are correlated to those of the first-arriving P phase. This model has been incorporated into NET-VISA (Network processing Vertically Integrated Seismic Analysis), a probabilistic generative model of seismic events, their transmission, and detection on a global seismic network. In the earlier version of NET-VISA, seismic phases were assumed to be independent of each other. Although this did not, for the most part, affect the quality of the inferred seismic bulletin, it did result in a few instances of anomalous phase association, for example an S phase with a smaller slowness than the corresponding P phase. We demonstrate that the phase attributes are indeed highly correlated; for example, the uncertainty in the S phase travel time is significantly reduced given the P phase travel time. Our new model exploits these correlations to produce better-calibrated probabilities for the events, as well as fewer anomalous associations.
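The gain from correlating phase attributes can be illustrated with a bivariate Gaussian: conditioning the S travel-time residual on the P residual shrinks its spread by a factor of sqrt(1 - rho^2). A sketch with illustrative numbers, not NET-VISA's calibrated values:

```python
import math

def conditional_s_residual(p_residual, rho=0.7, sigma_p=1.5, sigma_s=2.5):
    """Bivariate-Gaussian sketch of the correlated-phase idea: given the
    P travel-time residual, the S residual has conditional mean
    rho * (sigma_s / sigma_p) * p_residual and a standard deviation
    reduced by sqrt(1 - rho^2). All numbers are illustrative.
    """
    mean = rho * (sigma_s / sigma_p) * p_residual
    std = sigma_s * math.sqrt(1.0 - rho * rho)
    return mean, std

# A late P arrival (+2 s residual) predicts a late S arrival, with a
# tighter spread than the unconditional S distribution.
mean, std = conditional_s_residual(2.0)
```

It is this tightened conditional distribution that both raises the probability of correct associations and rules out physically anomalous ones, such as an S slowness below the P slowness.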
Incorporation of parametric factors into multilinear receptor model studies of Atlanta aerosol
NASA Astrophysics Data System (ADS)
Kim, Eugene; Hopke, Philip K.; Paatero, Pentti; Edgerton, Eric S.
In prior work with simulated data, ancillary variables including time resolved wind data were utilized in a multilinear model to successfully reduce rotational ambiguity and increase the number of resolved sources. In this study, time resolved wind and other data were incorporated into a model for the analysis of real measurement data. Twenty-four hour integrated PM 2.5 (particulate matter ⩽2.5 μm in aerodynamic diameter) compositional data were measured in Atlanta, GA between August 1998 and August 2000 (662 samples). A two-stage model that utilized 22 elemental species, two wind variables, and three time variables was used for this analysis. The model identified nine sources: sulfate-rich secondary aerosol I (54%), gasoline exhaust (15%), diesel exhaust (11%), nitrate-rich secondary aerosol (9%), metal processing (3%), wood smoke (3%), airborne soil (2%), sulfate-rich secondary aerosol II (2%), and the mixture of a cement kiln with a carbon-rich source (0.9%). The results of this study indicate that utilizing time resolved wind measurements helps to separate diesel exhaust from gasoline vehicle exhaust. For most of the sources, well-defined directional profiles, seasonal trends, and weekend effects were obtained.
English, Sinéad; Bateman, Andrew W; Clutton-Brock, Tim H
2012-05-01
Lifetime records of changes in individual size or mass in wild animals are scarce and, as such, few studies have attempted to model variation in these traits across the lifespan or to assess the factors that affect them. However, quantifying lifetime growth is essential for understanding trade-offs between growth and other life history parameters, such as reproductive performance or survival. Here, we used model selection based on information theory to measure changes in body mass over the lifespan of wild meerkats, and compared the relative fits of several standard growth models (monomolecular, von Bertalanffy, Gompertz, logistic and Richards). We found that meerkats exhibit monomolecular growth, with the best model incorporating separate growth rates before and after nutritional independence, as well as effects of season and total rainfall in the previous nine months. Our study demonstrates how simple growth curves may be improved by considering life history and environmental factors, which may be particularly relevant when quantifying growth patterns in wild populations. PMID:22108854
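The best-supported model above, monomolecular growth with a rate switch at nutritional independence, can be written down directly. A sketch with illustrative parameters (the fitted seasonal and rainfall terms are omitted):

```python
import math

def monomolecular_mass(t, m0, asymptote, k_dep, k_indep, t_indep):
    """Monomolecular growth m(t) = A - (A - m0) * exp(-k t), with
    separate rate constants before and after nutritional independence,
    as in the best-supported meerkat model. Parameter values used below
    are illustrative, not the fitted estimates.
    """
    if t <= t_indep:
        return asymptote - (asymptote - m0) * math.exp(-k_dep * t)
    m_switch = asymptote - (asymptote - m0) * math.exp(-k_dep * t_indep)
    return asymptote - (asymptote - m_switch) * math.exp(-k_indep * (t - t_indep))

# Mass rises smoothly through the switch at day 90 toward the asymptote.
masses = [monomolecular_mass(t, m0=100.0, asymptote=700.0,
                             k_dep=0.03, k_indep=0.008, t_indep=90.0)
          for t in (0, 90, 365, 1500)]
```

Anchoring the post-independence curve at the mass reached at the switch keeps the trajectory continuous, which is what lets the two phases share a single asymptote.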
Berg, Larry K.; Allwine, K Jerry; Rutz, Frederick C.
2004-08-23
A new modeling system has been developed to provide a non-meteorologist with tools to predict air pollution transport in regions of complex terrain. This system couples the Penn State/NCAR Mesoscale Model 5 (MM5) with Earth Tech’s CALMET-CALPUFF system using a unique Graphical User Interface (GUI) developed at Pacific Northwest National Laboratory. This system is most useful in data-sparse regions, where there are limited observations to initialize the CALMET model. The user is able to define the domain of interest, provide details about the source term, and enter a surface weather observation through the GUI. The system then generates initial conditions and time-constant boundary conditions for use by MM5. MM5 is run and the results are piped to CALPUFF for the dispersion calculations. Contour plots of pollutant concentration are prepared for the user. The primary advantages of the system are the streamlined application of MM5 and CALMET, limited data requirements, and the ability to run the coupled system on a desktop or laptop computer. In comparison with data collected as part of a field campaign, the new modeling system shows promise that a full-physics mesoscale model can be used in an applied modeling system to effectively simulate locally thermally-driven winds with minimal observations as input. An unexpected outcome of this research was how well CALMET represented the locally thermally-driven flows.
NASA Astrophysics Data System (ADS)
Gao, X.-L.; Zhang, G. Y.
2016-03-01
A new non-classical Kirchhoff plate model is developed using a modified couple stress theory, a surface elasticity theory and a two-parameter elastic foundation model. A variational formulation based on Hamilton's principle is employed, which leads to the simultaneous determination of the equations of motion and the complete boundary conditions and provides a unified treatment of the microstructure, surface energy and foundation effects. The new plate model contains a material length scale parameter to account for the microstructure effect, three surface elastic constants to describe the surface energy effect, and two foundation moduli to represent the foundation effect. The current non-classical plate model reduces to its classical elasticity-based counterpart when the microstructure, surface energy and foundation effects are all suppressed. In addition, the newly developed plate model includes the models considering the microstructure dependence or the surface energy effect or the foundation influence alone as special cases and recovers the Bernoulli-Euler beam model incorporating the microstructure, surface energy and foundation effects. To illustrate the new model, the static bending and free vibration problems of a simply supported rectangular plate are analytically solved by directly applying the general formulas derived. For the static bending problem, the numerical results reveal that the deflection of the simply supported plate with or without the elastic foundation predicted by the current model is smaller than that predicted by the classical model. Also, it is observed that the difference in the deflection predicted by the new and classical plate models is very large when the plate thickness is sufficiently small, but it is diminishing with the increase of the plate thickness. For the free vibration problem, it is found that the natural frequency predicted by the new plate model with or without the elastic foundation is higher than that predicted by the
Improving consumption rate estimates by incorporating wild activity into a bioenergetics model.
Brodie, Stephanie; Taylor, Matthew D; Smith, James A; Suthers, Iain M; Gray, Charles A; Payne, Nicholas L
2016-04-01
Consumption is the basis of metabolic and trophic ecology and is used to assess an animal's trophic impact. The contribution of activity to an animal's energy budget is an important parameter when estimating consumption, yet activity is usually measured in captive animals. Developments in telemetry have allowed the energetic costs of activity to be measured for wild animals; however, wild activity is seldom incorporated into estimates of consumption rates. We calculated the consumption rate of a free-ranging marine predator (yellowtail kingfish, Seriola lalandi) by integrating the energetic cost of free-ranging activity into a bioenergetics model. Accelerometry transmitters were used in conjunction with laboratory respirometry trials to estimate kingfish active metabolic rate in the wild. These field-derived consumption rate estimates were compared with those estimated by two traditional bioenergetics methods. The first method derived routine swimming speed from fish morphology as an index of activity (a "morphometric" method), and the second considered activity as a fixed proportion of standard metabolic rate (a "physiological" method). The mean consumption rate for free-ranging kingfish measured by accelerometry was 152 J·g(-1)·day(-1), which lay between the estimates from the morphometric method (μ = 134 J·g(-1)·day(-1)) and the physiological method (μ = 181 J·g(-1)·day(-1)). Incorporating field-derived activity values resulted in the smallest variance in log-normally distributed consumption rates (σ = 0.31), compared with the morphometric (σ = 0.57) and physiological (σ = 0.78) methods. Incorporating field-derived activity into bioenergetics models probably provided more realistic estimates of consumption rate compared with the traditional methods, which may further our understanding of trophic interactions that underpin ecosystem-based fisheries management. The general methods used to estimate active metabolic rates of free-ranging fish
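The three consumption methods differ only in where the activity term comes from. An energy-balance sketch of the shared bioenergetics skeleton, with illustrative fractions and a hypothetical field-derived activity value (not the kingfish estimates):

```python
def daily_consumption(smr, amr, sda_frac=0.15, excretion_frac=0.10,
                      assimilation=0.85):
    """Energy-balance sketch of a fish bioenergetics model: consumption
    must cover standard metabolism plus activity, inflated for specific
    dynamic action, excretion, and incomplete assimilation. Rates in
    J/g/day; the fractions here are illustrative placeholders. The
    abstract's three methods differ only in how the active metabolic
    rate (amr) is obtained: accelerometry, morphometry, or a fixed
    proportion of smr.
    """
    metabolic_demand = smr + amr
    return metabolic_demand / (assimilation * (1.0 - sda_frac - excretion_frac))

# "Physiological" method: activity as a fixed proportion of SMR.
physiological = daily_consumption(smr=60.0, amr=0.5 * 60.0)
# Field method: amr from accelerometry (hypothetical measured value).
field_based = daily_consumption(smr=60.0, amr=22.0)
```

Because the activity term is the only thing that changes, the spread among the three published estimates (134-181 J/g/day) is a direct picture of how uncertain captive-derived activity assumptions are.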
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
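The difference between a first-nearest-neighbor treatment and a longer-range cluster expansion is easy to see on a toy lattice. A sketch of a pairwise adlayer-energy evaluation with selectable neighbor shells (interaction values are illustrative, and real cluster expansions also carry many-body terms):

```python
def adlayer_energy(occ, pair_terms):
    """Energy of an adsorbate layer on a periodic square lattice under a
    pairwise cluster expansion: sum J(shell) over occupied pairs in each
    included neighbor shell. Restricting `pair_terms` to shell 1
    reproduces the pairwise-additive 1NN treatment the abstract says can
    mispredict rates; adding shells refines it. Values in arbitrary eV.
    """
    n = len(occ)
    shells = {1: [(0, 1), (1, 0)],     # first nearest neighbors
              2: [(1, 1), (1, -1)]}    # second (diagonal) shell
    energy = 0.0
    for i in range(n):
        for j in range(n):
            if not occ[i][j]:
                continue
            for shell, offsets in shells.items():
                if shell not in pair_terms:
                    continue
                for di, dj in offsets:
                    if occ[(i + di) % n][(j + dj) % n]:
                        energy += pair_terms[shell]
    return energy

# c(2x2)-like half coverage: no 1NN pairs exist, so a first-shell-only
# model sees zero interaction energy, while a second-shell term does not.
occ = [[(i + j) % 2 == 0 for j in range(4)] for i in range(4)]
e_1nn = adlayer_energy(occ, {1: 0.30})
e_ce = adlayer_energy(occ, {1: 0.30, 2: 0.05})
```

The checkerboard case is the cleanest failure mode of the 1NN model: an ordered adlayer whose entire interaction energy lives in shells the model ignores, which is the kind of error the benchmark against the NO oxidation system quantifies for rates.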
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
Ruggieri, Ruggero; Stavreva, Nadejda; Naccarato, Stefania; Stavrev, Pavel
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
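The compartment idea behind the model, separate well-oxygenated and hypoxic clonogen pools with reduced hypoxic radiosensitivity, can be sketched in a static Poisson-TCP form. This omits the repopulation, reoxygenation, and fluctuating acute hypoxia the paper actually fits, and all parameter values are illustrative:

```python
import math

def tcp_two_compartment(n_clonogens, hypoxic_frac, d_per_fx, n_fx,
                        alpha=0.3, beta=0.03, omf=2.5):
    """Static Poisson TCP for a tumor split into well-oxygenated and
    hypoxic clonogen pools under linear-quadratic cell kill, with the
    hypoxic radiosensitivities reduced by an oxygen modification factor
    (alpha/omf, beta/omf^2). This keeps only the compartment idea; the
    paper's model also tracks repopulation and reoxygenation dynamics.
    """
    def survivors(n0, a, b):
        return n0 * math.exp(-n_fx * (a * d_per_fx + b * d_per_fx**2))
    s_oxic = survivors(n_clonogens * (1.0 - hypoxic_frac), alpha, beta)
    s_hyp = survivors(n_clonogens * hypoxic_frac,
                      alpha / omf, beta / omf**2)
    return math.exp(-(s_oxic + s_hyp))

# At a fixed 30 x 2 Gy schedule, adding a hypoxic compartment lowers TCP.
tcp_hypoxic = tcp_two_compartment(1e4, hypoxic_frac=0.2, d_per_fx=2.0, n_fx=30)
tcp_oxic = tcp_two_compartment(1e4, hypoxic_frac=0.0, d_per_fx=2.0, n_fx=30)
```

In this static picture the hypoxic pool simply dominates the survivor count; it is the time-dependent reoxygenation terms, absent here, that produce the inverse dose behavior and the abrupt TCP drop at small fraction numbers the study reports.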
An integrative model of the cardiac ventricular myocyte incorporating local control of Ca2+ release.
Greenstein, Joseph L; Winslow, Raimond L
2002-01-01
The local control theory of excitation-contraction (EC) coupling in cardiac muscle asserts that L-type Ca(2+) current tightly controls Ca(2+) release from the sarcoplasmic reticulum (SR) via local interaction of closely apposed L-type Ca(2+) channels (LCCs) and ryanodine receptors (RyRs). These local interactions give rise to smoothly graded Ca(2+)-induced Ca(2+) release (CICR), which exhibits high gain. In this study we present a biophysically detailed model of the normal canine ventricular myocyte that conforms to local control theory. The model formulation incorporates details of microscopic EC coupling properties in the form of Ca(2+) release units (CaRUs) in which individual sarcolemmal LCCs interact in a stochastic manner with nearby RyRs in localized regions where junctional SR membrane and transverse-tubular membrane are in close proximity. The CaRUs are embedded within and interact with the global systems of the myocyte describing ionic and membrane pump/exchanger currents, SR Ca(2+) uptake, and time-varying cytosolic ion concentrations to form a model of the cardiac action potential (AP). The model can reproduce both the detailed properties of EC coupling, such as variable gain and graded SR Ca(2+) release, and whole-cell phenomena, such as modulation of AP duration by SR Ca(2+) release. Simulations indicate that the local control paradigm predicts stable APs when the L-type Ca(2+) current is adjusted in accord with the balance between voltage- and Ca(2+)-dependent inactivation processes as measured experimentally, a scenario where common pool models become unstable. The local control myocyte model provides a means for studying the interrelationship between microscopic and macroscopic behaviors in a manner that would not be possible in experiments. PMID:12496068
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir
2014-11-01
Passive systems, structures, and components (SSCs) degrade over their operational life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs within the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where the sequencing of events is conditioned on the physical conditions predicted in a simulation
ERIC Educational Resources Information Center
Grady, Matthew W.; Beretvas, S. Natasha
2010-01-01
Multiple membership random effects models (MMREMs) have been developed for use in situations where individuals are members of multiple higher level organizational units. Despite their availability and the frequency with which multiple membership structures are encountered, no studies have extended the MMREM approach to hierarchical growth curve…
Candy, J V; Chambers, D H; Breitfeller, E F; Guidry, B L; Verbeke, J M; Axelrod, M A; Sale, K E; Meyer, A M
2010-03-02
The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding, which complicates the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.
Evolutionary Models of Super-Earths and Mini-Neptunes Incorporating Cooling and Mass Loss
NASA Astrophysics Data System (ADS)
Howe, Alex R.; Burrows, Adam
2015-08-01
We construct models of the structural evolution of super-Earth- and mini-Neptune-type exoplanets with H2-He envelopes, incorporating radiative cooling and XUV-driven mass loss. We conduct a parameter study of these models, focusing on initial mass, radius, and envelope mass fractions, as well as orbital distance, metallicity, and the specific prescription for mass loss. From these calculations, we investigate how the observed masses and radii of exoplanets today relate to the distribution of their initial conditions. Orbital distance and the initial envelope mass fraction are the most important factors determining planetary evolution, particularly radius evolution. Initial mass also becomes important below a “turnoff mass,” which varies with orbital distance, with mass-radius curves being approximately flat for higher masses. Initial radius is the least important parameter we study, with very little difference between the hot start and cold start limits after an age of 100 Myr. Model sets with no mass loss fail to produce results consistent with observations, but a plausible range of mass-loss scenarios is allowed. In addition, we present scenarios for the formation of the Kepler-11 planets. Our best fit to observations of Kepler-11b and Kepler-11c involves formation beyond the snow line, after which they moved inward, circularized, and underwent a reduced degree of mass loss.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite-difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry by means of source terms in the equations. The source terms also provide a mechanism for incorporating, along with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance maps, which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
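The performance-map idea, tabulated component data interpolated at the local flow state to produce source terms, can be sketched as follows. The map below is hypothetical; LAPIN's actual tables and variables are not shown:

```python
def interp_map(table, x):
    """Piecewise-linear lookup in a 1-D performance map given as (x, y)
    pairs, e.g. pressure ratio vs. corrected mass flow (hypothetical
    map; the resulting source terms would then be distributed axially
    over the grid points spanned by the component)."""
    pts = sorted(table)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```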
Incorporating spatially explicit crown light competition into a model of canopy transpiration
NASA Astrophysics Data System (ADS)
Loranty, M. M.; Mackay, D. S.; Roberts, D. E.; Ewers, B. E.; Kruger, E. L.; Traver, E.
2006-12-01
Stomatal conductance parameterized in a transpiration model has been shown to vary spatially for aspen (Populus tremuloides) and alder (Alnus incana) growing along a moisture gradient. We hypothesized that competition for light within the canopy would explain some of this variation. Sap flux data were collected over 10 days in 2004 and 30 days in 2005 at a 1.5 ha site near the WLEF AmeriFlux tower in the Chequamegon National Forest near Park Falls, Wisconsin. We used inverse modeling with the Terrestrial Regional Ecosystem Exchange Simulator (TREES) to estimate values of GSref for individual trees. Competition data for the individual aspen sampled for sap flux were collected in August 2006. The number, height, DBH, and location of all competitors within 5 meters of each flux tree were recorded. Preliminary geostatistical analysis indicates that the number of competitor trees varies spatially for aspen. We hypothesize that the height and species-specific crown characteristics of competitor trees will have a spatially variable effect on transpiration via light attenuation. Furthermore, a simple light competition term will be able to incorporate this variability into the TREES transpiration model.
A Model for Incorporating Chemical Reactions in Mesoscale Modeling of Laser Ablation of Polymers
NASA Astrophysics Data System (ADS)
Garrison, Barbara J.; Yingling, Yaroslava G.
2004-03-01
We have developed a methodology for including the effects of chemical reactions in coarse-grained computer simulations such as those that use the united-atom or bead-and-spring approximations. The new coarse-grained chemical reaction model (CGCRM) adopts the philosophy of kinetic Monte Carlo approaches and includes a probabilistic element in predicting when reactions occur, thus obviating the need for a chemically correct interaction potential. The CGCRM uses known chemical reactions along with their probabilities and exothermicities for a specific material in order to assess the effect of chemical reactions on a physical process of interest. A reaction event in the simulation is implemented by removing the reactant molecules from the simulation and replacing them with product molecules. The position of the product molecules is carefully adjusted to make sure that the total energy change of the system corresponds to the reaction exothermicity. The CGCRM was initially implemented in simulations of laser irradiation at fluences such that there is ablation, or massive removal of material. The initial reaction is photon-induced cleavage of a chemical bond, creating two radicals that can undergo subsequent abstraction and radical-radical recombination reactions. The talk will discuss application of the model to photoablation of PMMA. Y. G. Yingling, L. V. Zhigilei and B. J. Garrison, J. Photochemistry and Photobiology A: Chemistry, 145, 173-181 (2001); Y. G. Yingling and B. J. Garrison, Chem. Phys. Lett., 364, 237-243 (2002).
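The probabilistic reaction step of a CGCRM-style scheme can be sketched as follows. The function, the species names, and the product-naming rule are illustrative placeholders, not the authors' code:

```python
import random

def attempt_reaction(p_react, exothermicity, reactants, rng):
    """One probabilistic reaction event in the spirit of the CGCRM:
    with probability p_react the reactants are replaced by products,
    and the reaction exothermicity is returned for deposition as
    kinetic energy of the products. Names are illustrative."""
    if rng.random() < p_react:
        products = ["product_of_" + r for r in reactants]
        return products, exothermicity
    return list(reactants), 0.0
```

Because the decision is a random draw against a known reaction probability, no chemically accurate reactive potential is needed, which is the key simplification the abstract describes.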
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
The Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about the details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be simply accessed from within a simple loop traversing all image pixels). This facility hides the details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
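The loop-to-whole-array translation MIST performs can be illustrated in NumPy (Python stands in here for MIST, which is not shown). Both functions compute the 3x3 averaging filter mentioned above; the second expresses it purely as whole-array operations, the form MIST would derive automatically from loop-style source:

```python
import numpy as np

def mean_filter_loops(img):
    """3x3 averaging filter written as explicit nested loops
    (interior pixels only)."""
    h, w = img.shape
    out = img.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

def mean_filter_arrays(img):
    """The same filter as whole-array (vectorized) operations: sum the
    nine shifted copies of the array, then divide by nine."""
    a = img.astype(float)
    s = sum(np.roll(np.roll(a, di, 0), dj, 1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1))
    out = a.copy()
    out[1:-1, 1:-1] = (s / 9.0)[1:-1, 1:-1]  # borders left untouched
    return out
```

The two versions agree on interior pixels; the array version exposes the data parallelism that a runtime can distribute across nodes without the programmer writing any parallel code.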
A land use regression model incorporating data on industrial point source pollution.
Chen, Li; Wang, Yuming; Li, Peiwu; Ji, Yaqin; Kong, Shaofei; Li, Zhiyong; Bai, Zhipeng
2012-01-01
Advancing the understanding of the spatial aspects of air pollution in the city-regional environment is an area where improved methods can be of great benefit to exposure assessment and policy support. We created land use regression (LUR) models for SO2, NO2 and PM10 for Tianjin, China. Traffic volumes, road networks, land use data, population density, meteorological conditions, physical conditions and satellite-derived greenness, brightness and wetness were used for predicting SO2, NO2 and PM10 concentrations. We incorporated data on industrial point sources to improve LUR model performance. In order to consider the impact of different sources, we calculated the PSIndex, LSIndex and the area of different land use types (agricultural land, industrial land, commercial land, residential land, green space and water area) within different buffer radii (1 to 20 km). This method compensates for the lack of consideration of source impacts in previous LUR models. Remote sensing-derived variables were significantly correlated with gaseous pollutant concentrations such as SO2 and NO2. R2 values of the multiple linear regression equations for SO2, NO2 and PM10 were 0.78, 0.89 and 0.84, respectively, and the RMSE values were 0.32, 0.18 and 0.21, respectively. Model predictions agreed well with measurements at validation monitoring sites, with predictions generally within 15% of measured values. Compared to the relationship between the dependent variables and simple variables (such as traffic or meteorological variables), the relationship between the dependent variables and the integrated variables was more consistent with a linear relationship. Such integration has a discernible influence on both the overall model prediction and the health effects assessment of the spatial distribution of air pollution in the city region. PMID:23513446
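The core of a LUR model is an ordinary least-squares fit of measured concentrations on buffer-derived predictors. A minimal sketch with synthetic data follows; the predictor names, coefficients, and noise level are invented for illustration, not taken from the Tianjin study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Hypothetical standardized predictors: traffic volume, industrial-land
# area within a buffer, and a satellite-derived greenness index.
X = rng.standard_normal((n, 3))
true_beta = np.array([0.6, 0.9, -0.4])
# Synthetic "NO2" response (arbitrary units) with measurement noise.
y = 2.0 + X @ true_beta + 0.1 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])           # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # OLS fit
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()               # fraction of variance explained
```

Model selection in a real LUR study then iterates over candidate buffer radii and predictor sets, retaining the combination with the best cross-validated fit.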
View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition.
Muramatsu, Daigo; Makihara, Yasushi; Yagi, Yasushi
2016-07-01
Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference causes degradation of gait recognition accuracy, and so several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from that with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on a given gait feature pair, and causes an inhomogeneously biased dissimilarity score. Because it is well known that normalization of such inhomogeneously biased scores improves recognition accuracy in general, we therefore propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures, and use them to calculate the posterior probability that both gait features originate from the same subjects together with the biased dissimilarity score. The proposed method was evaluated against two gait datasets, a large population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings. PMID:26259209
Verhaegen, F; Liu, H H
2001-02-01
In radiation therapy, new treatment modalities employing dynamic collimation and intensity modulation increase the complexity of dose calculation because a new dimension, time, has to be incorporated into the traditional three-dimensional problem. In this work, we investigated two classes of sampling technique to incorporate dynamic collimator motion in Monte Carlo simulation. The methods were initially evaluated for modelling enhanced dynamic wedges (EDWs) from Varian accelerators (Varian Medical Systems, Palo Alto, USA). In the position-probability-sampling (PPS) method, a cumulative probability distribution function (CPDF) was computed for the collimator position, which could then be sampled during simulations. In the static-component-simulation (SCS) method, a dynamic field is approximated by multiple static fields in a step-and-shoot fashion. The weights of the particles, or the number of particles simulated for each component field, are computed from the probability distribution function (PDF) of the collimator position. The CPDF and PDF were computed from the segmented treatment tables (STTs) for the EDWs. An output correction factor had to be applied in this calculation to account for the backscattered radiation affecting monitor chamber readings. Comparison of the phase-space data from the PPS method (with the step-and-shoot motion) with those from the SCS method showed excellent agreement. The accuracy of the PPS method was further verified by the agreement between the measured and calculated dose distributions. Compared to the SCS method, the PPS method is more automated and efficient from an operational point of view. The principle of the PPS method can be extended to simulate other dynamic motions and, in particular, intensity-modulated beams using multileaf collimators. PMID:11229715
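The PPS idea, building a CPDF of collimator position and inverting it with uniform random numbers during simulation, can be sketched as follows. The discrete positions and weights stand in for an STT; this is the general inverse-transform technique, not the authors' code:

```python
import bisect
import random

def build_cpdf(positions, weights):
    """Cumulative probability distribution over discrete collimator
    positions, with weights standing in for dwell fractions derived
    from a segmented treatment table (STT)."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return positions, cdf

def sample_position(positions, cdf, rng):
    """PPS-style draw: invert the CPDF with a uniform random number."""
    u = rng.random()
    return positions[bisect.bisect_left(cdf, u)]
```

Over many histories, each position is visited in proportion to its weight, so the simulated beam reproduces the dynamic motion without explicitly stepping through static component fields.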
Bias in Diet Determination: Incorporating Traditional Methods in Bayesian Mixing Models
Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G.; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo
2013-01-01
There are no “universal methods” to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotopes and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors in studying predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs with uninformative priors (SIMM-UP), and assess whether informative priors (SIMM-IP) from the scat analysis improved the estimated diet composition compared to SIMM-UP. According to the SIMM-UP, pelagic species dominated the fur seal’s diet, whereas the sea lion’s diet showed no clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMM estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimated diet compositions. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet compositions obtained by SIMM-IP. The best approach to obtain a near-complete view of predators’ diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species so as to reliably
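The mass-balance idea underlying isotope mixing models reduces, for one isotope and two sources, to a closed form. This is a deliberately minimal sketch: real SIMMs handle multiple isotopes and sources, trophic fractionation, and full posterior uncertainty, all of which are omitted here:

```python
def two_source_mixing(mix, s1, s2):
    """Proportion p of source 1 from a single-isotope, two-source mass
    balance: mix = p*s1 + (1-p)*s2 (fractionation ignored; the result
    is clamped to the feasible [0, 1] range)."""
    if s1 == s2:
        raise ValueError("sources are isotopically indistinguishable")
    p = (mix - s2) / (s1 - s2)
    return min(1.0, max(0.0, p))
```

For example, a consumer delta-13C of -16 permil between hypothetical prey signatures of -14 and -20 permil implies about two-thirds of the diet from the first prey.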
Zhang, Yang; Zhang, Xin; Wang, Kai; He, Jian; Leung, Lai-Yung R.; Fan, Jiwen; Nenes, Athanasios
2015-07-22
Aerosol activation into cloud droplets is an important process that governs aerosol indirect effects. The advanced treatment of aerosol activation by Fountoukis and Nenes (2005) and its recent updates, collectively called the FN series, have been incorporated into a newly developed regional coupled climate-air quality model based on the Weather Research and Forecasting model with the physics package of the Community Atmosphere Model version 5 (WRF-CAM5) to simulate aerosol-cloud interactions in both resolved and convective clouds. The model is applied to East Asia for two full years, 2005 and 2010. A comprehensive model evaluation is performed for model predictions of meteorological, radiative, and cloud variables, chemical concentrations, and column mass abundances against satellite data and surface observations from air quality monitoring sites across East Asia. The model performs overall well for major meteorological variables, including near-surface temperature, specific humidity, wind speed, precipitation, cloud fraction, precipitable water, downward shortwave and longwave radiation, and column mass abundances of CO, SO2, NO2, HCHO, and O3, in terms of both magnitudes and spatial distributions. Larger biases exist in the predictions of surface concentrations of CO and NOx at all sites and of SO2, O3, PM2.5, and PM10 concentrations at some sites, as well as in aerosol optical depth, cloud condensation nuclei over ocean, cloud droplet number concentration (CDNC), cloud liquid and ice water path, and cloud optical thickness. Compared with the default Abdul-Razzak and Ghan (2000) parameterization, simulations with the FN series produce ~107–113% higher CDNC, with half of the difference attributable to the higher aerosol activation fraction of the FN series and the remaining half due to feedbacks in subsequent cloud microphysical processes. With the higher CDNC, the FN series are more skillful in simulating cloud water path, cloud optical thickness, downward shortwave radiation
NASA Astrophysics Data System (ADS)
Pal, David; Jaffe, Peter
2015-04-01
Estimates of global CH4 emissions from wetlands indicate that wetlands are the largest natural source of CH4 to the atmosphere. In this paper, we propose that there is a missing component of these models that should be addressed. CH4 is produced in wetland sediments by the microbial degradation of organic carbon through multiple fermentation steps and methanogenesis pathways. There are multiple sources of carbon for methanogenesis; in vegetated wetland sediments, microbial communities consume root exudates as a major source of organic carbon. In many methane models, propionate is used as a model carbon molecule. Propionate is fermented into acetate and H2; the acetate is transformed to methane and CO2, while the H2 and CO2 are used to form an additional CH4 molecule. The hydrogenotrophic pathway involves the equilibrium of two dissolved gases, CH4 and H2. In an effort to limit CH4 emissions from wetlands, there has been growing interest in finding ways to limit plant transport of soil gases through root systems. Changing planted species, or genetically modifying new species of plants, may control this transport of soil gases. While this may decrease the direct emissions of methane, there is little understanding of how H2 dynamics may feed back into overall methane production. The results of an incubation study were combined with a new model of propionate degradation for methanogenesis that also examines other natural parameters (i.e. gas transport through plants). This presentation examines how we would expect this model to behave in a natural field setting with changing sulfate and carbon loading schemes. These changes can be controlled through new plant species and other management practices. Next, we compare the behavior of two variations of this model, with and without the incorporation of H2 interactions, under changing sulfate, carbon loading and root volatilization. Results show that while the models behave similarly there may be a discrepancy of nearly
No-net-rotation model of current plate velocities incorporating plate motion model NUVEL-1
NASA Technical Reports Server (NTRS)
Argus, Donald F.; Gordon, Richard G.
1991-01-01
NNR-NUVEL1 is presented, a model of plate velocities relative to the unique reference frame defined by requiring no net rotation of the lithosphere while constraining relative plate velocities to equal those in the global plate motion model NUVEL-1 (DeMets et al., 1990). In NNR-NUVEL1, the Pacific plate rotates in a right-handed sense relative to the no-net-rotation reference frame at 0.67 deg/m.y. about 63 deg S, 107 deg E. At Hawaii the Pacific plate moves relative to the no-net-rotation reference frame at 70 mm/yr, which is 25 mm/yr slower than the Pacific plate moves relative to the hotspots. Differences between NNR-NUVEL1 and HS2-NUVEL1 are described. The no-net-rotation reference frame differs significantly from the hotspot reference frame. If the difference between reference frames is caused by motion of the hotspots relative to a mean-mantle reference frame, then hotspots beneath the Pacific plate move with coherent motion towards the east-southeast. Alternatively, the difference between reference frames may indicate that the uniform-drag, no-net-torque reference frame, which is kinematically equivalent to the no-net-rotation reference frame, is based on a dynamically incorrect premise.
Pike, Ivy L; Williams, Sharon R
2006-01-01
This paper investigates the potential benefits and limitations of including psychosocial stress data in a biocultural framework of human adaptability. Building on arguments within human biology on the importance of political economic perspectives for examining patterns of biological variation, this paper suggests that psychosocial perspectives may further refine our understanding of the mechanisms through which social distress yields differences in health and well-being. To assess a model that integrates psychosocial experiences, we conducted a preliminary study among nomadic pastoralist women from northern Kenya. We interviewed 45 women about current and past stressful experiences, and collected anthropometric data and salivary cortisol measures. Focus group and key informant interviews were conducted to refine our understanding of how the Turkana discuss and experience distress. The results suggest that the most sensitive indicators of Turkana women's psychosocial experiences were the culturally defined idioms of distress, which showed high concordance with measures of first-day salivary cortisol. Other differences in stress reactivity were associated with the frequent movement of encampments, major herd losses, and direct experiences of livestock raiding. Despite the preliminary nature of these data, we believe that the results offer important lessons and insights into the longer-term process of incorporating psychosocial models into human adaptability studies. PMID:17039478
Ma, Songyun; Scheider, Ingo; Bargmann, Swantje
2016-09-01
An anisotropic constitutive model is proposed in the framework of finite deformation to capture several damage mechanisms occurring in the microstructure of dental enamel, a hierarchical bio-composite. It provides the basis for a homogenization approach for an efficient multiscale (in this case: multiple hierarchy levels) investigation of the deformation and damage behavior. The influence of tension-compression asymmetry and fiber-matrix interaction on the nonlinear deformation behavior of dental enamel is studied by 3D micromechanical simulations under different loading conditions and fiber lengths. The complex deformation behavior and the characteristics and interaction of three damage mechanisms in the damage process of enamel are well captured. The proposed constitutive model incorporating anisotropic damage is applied to the first hierarchical level of dental enamel and validated by experimental results. The effect of the fiber orientation on the damage behavior and compressive strength is studied by comparing micro-pillar experiments of dental enamel at the first hierarchical level in multiple directions of fiber orientation. A very good agreement between computational and experimental results is found for the damage evolution process of dental enamel. PMID:27294283
Conroy, M.J.; Senar, J.C.; Hines, J.E.; Domenech, J.
1999-01-01
We developed an extension of Cormack-Jolly-Seber models to handle a complex mark-recapture problem in which (a) the sex of birds cannot be determined prior to first moult, but can be predicted on the basis of body measurements, and (b) a significant portion of captured birds appear to be transients (i.e. are captured once but leave the area or otherwise become 'untrappable'). We applied this methodology to a data set of 4184 serins (Serinus serinus) trapped in northeastern Spain during 1985-96, in order to investigate age-, sex-, and time-specific variation in survival rates. Using this approach, we were able to successfully incorporate the majority of ringings of serins. Had we eliminated birds not previously captured (as has been advocated to avoid the problem of transience) we would have reduced our sample sizes by >2000 releases. In addition, we were able to include 1610 releases of birds of unknown (but predicted) sex; these data contributed to the precision of our estimates and the power of statistical tests. We discuss problems with data structure, encoding of the algorithms to compute parameter estimates, model selection, identifiability of parameters, and goodness-of-fit, and make recommendations for the design and analysis of future studies facing similar problems.
Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye
2014-01-01
This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are then used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
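The hypothesize-and-verify RANSAC loop the algorithm relies on can be illustrated with a minimal line-fitting example. This shows only the generic scheme (sample a minimal set, generate a hypothesis, count inliers, keep the best), not the paper's vehicle-model hypothesis generator:

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b from pairs of points, scoring
    each hypothesis by its inlier count. The minimal-set hypothesis
    here is a 2-point line; in the odometry algorithm it is the
    closed-form motion solution instead."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal set
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

As in the paper, a final refinement step (here it would be a least-squares fit, there a reprojection-error minimization) is then run on the inliers of the winning hypothesis.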
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
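Combining multiple causal earthquakes and GMPMs amounts to computing mixture statistics over deaggregation weights. A minimal sketch of the mean and standard deviation of such a mixture follows (illustrative of the weighting idea only; the paper's exact CS formulas condition on spectral acceleration and are not reproduced here). The between-scenario variance term is what a single-scenario approximation omits:

```python
import math

def mixture_mean_sd(weights, means, sds):
    """Mean and standard deviation of a mixture of per-scenario (log)
    spectral values, weighted by deaggregation weights. The variance
    combines within-scenario variance with the spread of the
    per-scenario means about the mixture mean."""
    assert abs(sum(weights) - 1.0) < 1e-9
    mu = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (s * s + (m - mu) ** 2)
              for w, m, s in zip(weights, means, sds))
    return mu, math.sqrt(var)
```

When the per-scenario means disagree, the mixture standard deviation exceeds any single scenario's, which is why approximate single-scenario targets can understate the spread.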
A large-scale methane model by incorporating the surface water transport
NASA Astrophysics Data System (ADS)
Lu, Xiaoliang; Zhuang, Qianlai; Liu, Yaling; Zhou, Yuyu; Aghakouchak, Amir
2016-06-01
The effect of surface water movement on methane emissions is not explicitly considered in most current methane models. In this study, a surface water routing scheme was coupled into our previously developed large-scale methane model. The revised methane model was then used to simulate global methane emissions during 2006-2010. From our simulations, the global mean annual maximum inundation extent is 10.6 ± 1.9 × 10^6 km2 and the methane emission is 297 ± 11 Tg C/yr over the study period. In comparison to the currently used TOPMODEL-based approach, we found that the incorporation of surface water routing leads to a 24.7% increase in the annual maximum inundation extent and a 30.8% increase in methane emissions at the global scale for the study period. The effect of surface water transport on methane emissions varies by region: (1) the largest difference occurs in flat and moist regions, such as Eastern China; (2) high-latitude regions, hot spots of methane emissions, show a small increase in both inundation extent and methane emissions when surface water movement is considered; and (3) in arid regions, the new model yields significantly larger maximum flooded areas and a relatively small increase in methane emissions. Although surface water is a small component of the terrestrial water balance, it plays an important role in determining inundation extent and methane emissions, especially in flat regions. This study indicates that future quantifications of methane emissions should consider the effects of surface water transport.
NASA Astrophysics Data System (ADS)
Andre, B. J.; Rajaram, H.; Silverstein, J.
2010-12-01
A diffusion model at the scale of a single rock is developed, incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the role of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control the generation of acid rock drainage as porosity increases.
Incorporation of in vitro drug metabolism data into physiologically-based pharmacokinetic models.
Houston, J B; Carlile, D J
1997-10-01
The liver poses particular problems in constructing physiologically-based pharmacokinetic models since this organ is not only a distribution site for drugs/chemicals but frequently the major site of metabolism. The impact of hepatic drug metabolism in modelling is substantial, and it is crucial to the success of the model that in vitro data on biotransformation be incorporated in a judicious manner. The value of in vitro/in vivo extrapolation is clearly demonstrated by considering kinetic data from incubations with freshly isolated hepatocytes. The determination of easily measurable in vitro parameters, such as Vmax and Km, from initial-rate studies, and scaling to the in vivo situation by accounting for hepatocellularity, provides intrinsic clearance estimates. A scaling factor of 1200 x 10^6 cells per 250 g rat has proved to be a robust parameter, independent of laboratory technique and insensitive to animal pretreatment. Similar procedures can also be adopted for other in vitro systems such as hepatic microsomes and liver slices. An appropriate scaling factor for microsomal studies is the microsomal recovery index, which allows for the incomplete recovery of cytochrome P-450 with standard differential centrifugation of liver homogenates. The hepatocellularity of a liver slice has proved unsatisfactory for scaling kinetic parameters from liver slices: the level of success varies from drug to drug, and substrate diffusion competes with metabolism within the slice incubation system, so low-clearance drugs are better predicted than high-clearance drugs. Three liver models (the venous-equilibration, undistributed sinusoidal and dispersion models) have been compared for predicting hepatic clearance from in vitro intrinsic clearance values. As no consistent advantage of one model over the others could be demonstrated, the simplest, the venous-equilibration model, is adequate for the currently available hepatocyte data. While these successes are
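The scaling chain described in this abstract (Vmax/Km per 10^6 cells, scaled by hepatocellularity to whole-liver intrinsic clearance, then mapped to hepatic clearance via the venous-equilibration, i.e. "well-stirred", liver model) can be sketched as follows. The numerical inputs are illustrative assumptions, not values from the paper.

```python
def intrinsic_clearance(vmax, km):
    """CL_int = Vmax / Km (e.g. uL/min per 10^6 hepatocytes)."""
    return vmax / km

def scale_to_whole_liver(cl_int_per_1e6_cells, hepatocellularity=1200.0):
    """Scale by hepatocellularity: ~1200 x 10^6 cells per 250 g rat,
    the robust factor quoted in the text. Result: uL/min per rat."""
    return cl_int_per_1e6_cells * hepatocellularity

def hepatic_clearance_well_stirred(q_h, fu, cl_int):
    """Venous-equilibration ('well-stirred') liver model:
    CL_H = Q_H * fu * CL_int / (Q_H + fu * CL_int).
    CL_H approaches hepatic blood flow Q_H as CL_int grows."""
    return q_h * fu * cl_int / (q_h + fu * cl_int)

# Illustrative (assumed) inputs: Vmax = 2 nmol/min/10^6 cells, Km = 10 uM.
cl_int = scale_to_whole_liver(intrinsic_clearance(2.0, 10.0))  # 240 uL/min per rat
```

The well-stirred form makes the abstract's observation concrete: for high-clearance drugs, predicted hepatic clearance saturates at hepatic blood flow, so errors in the in vitro CL_int estimate matter less than for low-clearance drugs.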
ERIC Educational Resources Information Center
Jung, Jae Yup
2013-01-01
This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…
Litman, Heather J; Horton, Nicholas J; Hernández, Bernardo; Laird, Nan M
2007-02-28
Multiple informant data refers to information obtained from different individuals or sources used to measure the same construct; for example, researchers might collect information regarding child psychopathology from the child's teacher and the child's parent. Frequently, studies with multiple informants have incomplete observations; in some cases the missingness of informants is substantial. We introduce a Maximum Likelihood (ML) technique to fit models with multiple informants as predictors that permits missingness in the predictors as well as the response. We provide closed form solutions when possible and analytically compare the ML technique to the existing Generalized Estimating Equations (GEE) approach. We demonstrate that the ML approach can be used to compare the effect of the informants on response without standardizing the data. Simulations incorporating missingness show that ML is more efficient than the existing GEE method. In the presence of MCAR missing data, we find through a simulation study that the ML approach is robust to a relatively extreme departure from the normality assumption. We implement both methods in a study investigating the association between physical activity and obesity with activity measured using multiple informants (children and their mothers). PMID:16755531
Incorporation of GRACE Data into a Bayesian Model for Groundwater Drought Monitoring
NASA Astrophysics Data System (ADS)
Slinski, K.; Hogue, T. S.; McCray, J. E.; Porter, A.
2015-12-01
Groundwater drought, defined as the sustained occurrence of below average availability of groundwater, is marked by below average water levels in aquifers and reduced flows to groundwater-fed rivers and wetlands. The impact of groundwater drought on ecosystems, agriculture, municipal water supply, and the energy sector is an increasingly important global issue. However, current drought monitors heavily rely on precipitation and vegetative stress indices to characterize the timing, duration, and severity of drought events. The paucity of in situ observations of aquifer levels is a substantial obstacle to the development of systems to monitor groundwater drought in drought-prone areas, particularly in developing countries. Observations from the NASA/German Space Agency's Gravity Recovery and Climate Experiment (GRACE) have been used to estimate changes in groundwater storage over areas with sparse point measurements. This study incorporates GRACE total water storage observations into a Bayesian framework to assess the performance of a probabilistic model for monitoring groundwater drought based on remote sensing data. Overall, it is hoped that these methods will improve global drought preparedness and risk reduction by providing information on groundwater drought necessary to manage its impacts on ecosystems, as well as on the agricultural, municipal, and energy sectors.
Sandmeier, Franziska C; Tracy, Richard C
2014-09-01
We propose a new heuristic model that incorporates metabolic rate and pace of life to predict a vertebrate species' investment in adaptive immune function. Using reptiles as an example, we hypothesize that animals with low metabolic rates will invest more in innate immunity compared with adaptive immunity. High metabolic rates and body temperatures should logically optimize the efficacy of the adaptive immune system through rapid replication of T and B cells, prolific production of induced antibodies, and the kinetics of antibody-antigen interactions. In current theory, the precise mechanisms of vertebrate immune function are often inadequately considered as diverse selective pressures on the evolution of pathogens. We propose that the strength of adaptive immune function and pace of life together determine many of the important dynamics of host-pathogen evolution; namely, hosts with a short lifespan and innate immunity, or with a long lifespan and strong adaptive immunity, are expected to drive the rapid evolution of their populations of pathogens. Long-lived hosts that rely primarily on innate immune functions are more likely to use defense mechanisms of tolerance (instead of resistance), which are not expected to act as a selection pressure for the rapid evolution of pathogens' virulence. PMID:24760792
Sensitivity studies for incorporating the direct effect of sulfate aerosols into climate models
NASA Astrophysics Data System (ADS)
Miller, Mary Rawlings Lamberton
2000-09-01
Aerosols have been identified as a major element of the climate system, known to scatter and absorb solar and infrared radiation, but the development of procedures for representing them is still rudimentary. This study addresses the need to improve the treatment of sulfate aerosols in climate models by investigating how sensitive their radiative properties are to variations in specific sulfate aerosol properties. The degree to which sulfate particles absorb or scatter radiation, termed the direct effect, varies with the size distribution of the particles, the aerosol mass density, the aerosol refractive indices, the relative humidity and the concentration of the aerosol. This study develops 504 case studies varying sulfate aerosol chemistry, size distributions, refractive indices and densities under various ambient relative humidity conditions. Ammonium sulfate and sulfuric acid aerosols are studied with seven distinct size distributions at a given mode radius, with three corresponding standard deviations taken from field measurements. These test cases are evaluated for increasing relative humidity; as the relative humidity increases, the complex index of refraction and the mode radius of each distribution change correspondingly. Mie theory is employed to obtain the radiative properties for each case study. The case studies are then incorporated into a box model, the National Center for Atmospheric Research's (NCAR) column radiation model (CRM), and NCAR's community climate model version 3 (CCM3) to determine how sensitive the radiative properties and potential climatic effects are to altered sulfate properties. This study found that the spatial variability of the sulfate aerosol leads to regional areas of intense aerosol forcing (W/m2). These areas are particularly sensitive to altered sulfate properties. Changes in the sulfate lognormal distribution's standard deviation can lead to substantial regional differences in the annual aerosol forcing, greater than 2 W/m2. Changes in the
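A minimal sketch of the size-distribution bookkeeping this abstract describes, assuming a single-mode lognormal number distribution (the parameter names and values are illustrative, not from the study): each (median radius, geometric standard deviation, relative humidity) case would then feed a Mie code to obtain scattering and absorption properties.

```python
import math

def lognormal_dn_dlnr(r, n_total, r_g, sigma_g):
    """Lognormal number size distribution dN/dln r, with median radius
    r_g and geometric standard deviation sigma_g."""
    ln_sg = math.log(sigma_g)
    z = math.log(r / r_g) / ln_sg
    return n_total / (math.sqrt(2.0 * math.pi) * ln_sg) * math.exp(-0.5 * z * z)

def hygroscopic_radius(r_dry, growth_factor):
    """Wet-particle radius under humid conditions; the growth factor
    itself depends on composition and relative humidity (assumed input)."""
    return r_dry * growth_factor
```

Integrating dN/dln r over ln r recovers the total number concentration n_total, which is a convenient sanity check when sweeping the 504-case parameter grid described above.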
NexGen PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models
We examine how the integration of evolutionary and ecological processes in population dynamics – an emerging framework in ecology – could be incorporated into population viability analysis (PVA). Driven by parallel, complementary advances in population genomics and computational ...
NASA Technical Reports Server (NTRS)
Bartos, R. D.
1992-01-01
As the pointing accuracy and service life requirements of the DSN 70 meter antenna increase, it is necessary to gain a more complete understanding of the servo hydraulic system in order to improve system designs to meet the new requirements. A mathematical model is developed for the servovalve incorporated into the hydraulic system of the 70 meter antenna, and experimental data are used to verify the validity of the model and to identify the model parameters.
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, the septa must be as thin as possible, resulting in a significant amount of septal penetration for high-energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table contains the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations have been compared to experimental data for different radionuclides and collimators. Results of the RT technique match experimental collimator-response data very well, producing correlation coefficients higher than 0.995. Reasonable values of the look-up table parameters and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
NASA Astrophysics Data System (ADS)
Guo, Jianmao; Lu, Weisong; Zhang, Guoping; Qian, Yonglan; Yu, Qiang; Zhang, Jiahua
2006-12-01
Accurate crop growth monitoring and yield prediction are very important to food security and sustainable agricultural development. Crop models can be powerful tools for monitoring crop growth status and predicting yield over homogeneous areas; however, their application to larger spatial domains is hampered by a lack of sufficient spatial information about model inputs, such as the values of some parameters and initial conditions, which may differ greatly between regions and even between fields. The use of remote sensing data helps to overcome this problem. By incorporating remote sensing data into the WOFOST crop model (through LAI), it is possible to use a remote sensing variable (a vegetation index) at each point of the spatial domain and, for that point, to re-estimate new values of the parameters or initial conditions to which the model is particularly sensitive. This paper describes the use of such a method on a local scale for winter wheat, focusing on the parameters describing emergence and early crop growth. These processes vary greatly depending on the soil, climate and seedbed preparation, and affect yield significantly. The WOFOST crop model is calibrated under standard conditions and then evaluated under test conditions in which the emergence and early growth parameters of the WOFOST model are adjusted by incorporating remote sensing data. The inversion of the combined model allows us to accurately monitor crop growth status and predict yield on a regional scale.
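The re-estimation step, driving a crop model's emergence and early-growth parameters toward remotely sensed LAI, can be sketched with a toy logistic LAI curve standing in for WOFOST. All function names, parameters and numbers here are illustrative assumptions, not the paper's actual model.

```python
import math

def lai_curve(doy, emergence_doy, growth_rate, lai_max=6.0):
    """Toy logistic LAI trajectory standing in for the crop model's
    simulated LAI; the inflection is assumed ~40 days after emergence."""
    return lai_max / (1.0 + math.exp(-growth_rate * (doy - emergence_doy - 40.0)))

def reestimate_emergence(observations, growth_rate, candidates):
    """Choose the emergence day-of-year minimising the squared mismatch
    between modelled LAI and remote-sensing (doy, lai) observations."""
    def sse(e):
        return sum((lai_curve(d, e, growth_rate) - lai) ** 2 for d, lai in observations)
    return min(candidates, key=sse)

# Synthetic 'remote sensing' LAI generated with emergence on day 100:
obs = [(d, lai_curve(d, 100, 0.15)) for d in range(110, 200, 10)]
best = reestimate_emergence(obs, 0.15, range(80, 121))
print(best)  # -> 100
```

In the real workflow, a gradient-based inversion over several sensitive WOFOST parameters would replace this one-dimensional grid search, but the model-to-observation least-squares objective is the same idea.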
Chatterjee, Bishu; Sharp, Peter A.
2006-07-15
Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. Such a model cannot incorporate information about the appropriate time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
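A hedged sketch of the non-constant-growth idea: a generic two-stage dividend-discount model, with near-term analyst growth over an explicit horizon and a Gordon terminal value thereafter, solved for the implied return by bisection. This is an illustration of the class of model the author advocates, not the specific formulation used in rate cases.

```python
def two_stage_price(d1, r, g_near, g_long, horizon):
    """PV of dividends growing at g_near for `horizon` years, then at
    g_long forever (Gordon terminal value). d1 = next year's dividend."""
    price, d = 0.0, d1
    for t in range(1, horizon + 1):
        price += d / (1.0 + r) ** t
        if t < horizon:
            d *= 1.0 + g_near
    terminal = d * (1.0 + g_long) / (r - g_long)  # value at year `horizon`
    return price + terminal / (1.0 + r) ** horizon

def implied_roe(price, d1, g_near, g_long, horizon, lo=0.02, hi=0.50):
    """Bisect for the return r that equates model price to market price
    (price is strictly decreasing in r for r > g_long)."""
    lo = max(lo, g_long + 1e-4)   # keep r above the terminal growth rate
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if two_stage_price(d1, mid, g_near, g_long, horizon) > price:
            lo = mid              # model price too high -> rate too low
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

When g_near equals g_long the formula collapses to the constant-growth Gordon model d1/(r - g), which is a useful consistency check; the horizon parameter is exactly the quantity the single-growth-rate model cannot express.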
T.D. Sisk; N.M. Haddad
2002-01-01
Sisk, T.D., and N.M. Haddad. 2002. Incorporating the effects of habitat edges into landscape models: Effective area models for cross-boundary management. Chapter 8, Pp. 208-240 in J. Liu and W.W. Taylor, Integrating landscape ecology into natural resource management, Cambridge University Press, Cambridge, UK. Abstract: Natural resource managers are increasingly charged with meeting multiple, often conflicting goals in landscapes undergoing significant change due to shifts in land use. Conversion of native to anthropogenic habitats typically fragments the landscape, reducing the size and increasing the isolation of the resulting patches, with profound ecological impacts. These impacts occur both within and adjacent to areas under active management, creating extensive edges between habitat types. Boundaries established between management areas, for example between timber harvest units or between reserves and adjacent agricultural fields, inevitably lead to differences in the quality of habitats on either side of the boundary, and a habitat edge results. Although edges are common components of undisturbed landscapes, the amount of edge proliferates rapidly as landscapes are fragmented. Insightful analysis of the complex issues associated with cross-boundary management necessitates an explicit focus on habitat quality in the boundary regions.
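A minimal sketch of the effective-area idea: discount each habitat cell by an edge-response function of its distance to the nearest edge, so that a fragmented patch contributes less than its nominal area. The response curve, e-folding distance and toy grid geometry below are hypothetical, not taken from the chapter.

```python
import math

def effective_area(distances_to_edge, cell_area, response):
    """Sum habitat-cell areas weighted by an edge response in [0, 1]."""
    return sum(response(d) * cell_area for d in distances_to_edge)

# Hypothetical saturating response: quality ~0 at the edge, approaching 1
# in the interior, with an assumed 50 m e-folding distance.
def interior_response(d, d0=50.0):
    return 1.0 - math.exp(-d / d0)

# A 10 x 10 grid of 1-ha cells; distance to edge is the distance to the
# grid boundary (toy geometry; cells are 100 m apart).
dists = [min(i, 9 - i, j, 9 - j) * 100.0 for i in range(10) for j in range(10)]
ea = effective_area(dists, 1.0, interior_response)
print(ea)  # well below the nominal 100 ha, because edge cells count for little
```

For an edge-sensitive species the same bookkeeping with a steeper response makes the management point quantitative: two landscapes with equal total habitat can differ sharply in effective area once edges proliferate.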
NASA Astrophysics Data System (ADS)
Ooka, Ryozo; Sato, Taiki; Harayama, Kazuya; Murakami, Shuzo; Kawamoto, Yoichi
2011-01-01
The summer climate around the Tokyo metropolitan area has been analysed on an urban scale, and the regional characteristics of the thermal energy balance of a bayside business district in the centre of Tokyo (Otemachi) have been compared with an inland residential district (Nerima), using a mesoscale meteorological model incorporating an urban canopy model. From the results of the analysis, the mechanism of diurnal change in air temperature and absolute humidity in these areas is quantitatively demonstrated, with a focus on the thermal energy balance. Moreover, effective countermeasures against urban heat-islands are considered from the viewpoint of each region's thermal energy balance characteristics. In addition to thermal energy outflux by turbulent diffusion, advection by sea-breezes from Tokyo Bay discharges sensible heat in Otemachi. This mitigates temperature increases during the day. On the other hand, because all sea-breezes must first cross the centre of Tokyo, it has less of a cooling effect in Nerima. As a result, the air temperature during the day in Nerima is higher than that in Otemachi.
Anticipating and Incorporating Stakeholder Feedback When Developing Value-Added Models
ERIC Educational Resources Information Center
Balch, Ryan; Koedel, Cory
2014-01-01
State and local education agencies across the United States are increasingly adopting rigorous teacher evaluation systems. Most systems formally incorporate teacher performance as measured by student test-score growth, sometimes by state mandate. An important consideration that will influence the long-term persistence and efficacy of these systems…
Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models
ERIC Educational Resources Information Center
James, Raven
2007-01-01
This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…
NASA Astrophysics Data System (ADS)
Wythers, Kirk R.; Reich, Peter B.; Bradford, John B.
2013-03-01
Evidence suggests that respiration acclimation (RA) to temperature in plants can have a substantial influence on ecosystem carbon balance. To assess the influence of RA on ecosystem response variables in the presence of global change drivers, we incorporated a temperature-sensitive Q10 of respiration and foliar basal RA into the ecosystem model PnET-CN. We examined the new algorithms' effects on modeled net primary production (NPP), total canopy foliage mass, foliar nitrogen concentration, net ecosystem exchange (NEE), and ecosystem respiration/gross primary production ratios. This latter ratio more closely matched eddy covariance long-term data when RA was incorporated in the model than when not. Averaged across four boreal ecotone sites and three forest types at year 2100, the enhancement of NPP in response to the combination of rising [CO2] and warming was 9% greater when RA algorithms were used, relative to responses using fixed respiration parameters. The enhancement of NPP response to global change was associated with concomitant changes in foliar nitrogen and foliage mass. In addition, impacts of RA algorithms on modeled responses of NEE closely paralleled impacts on NPP. These results underscore the importance of incorporating temperature-sensitive Q10 and basal RA algorithms into ecosystem models. Given the current evidence that atmospheric [CO2] and surface temperature will continue to rise, and that ecosystem responses to those changes appear to be modified by RA, which is a common phenotypic adjustment, the potential for misleading results increases if models fail to incorporate RA into their carbon balance calculations.
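The temperature-sensitive Q10 described in this abstract can be sketched as below. The linear Q10(T) form follows a commonly used empirical relation (Q10 of roughly 3.22 - 0.046*T, after Tjoelker and colleagues); whether PnET-CN uses these exact coefficients is not stated in the abstract, so treat them as an assumption.

```python
def q10_temperature_sensitive(t_c):
    """Empirical temperature-dependent Q10 (assumed form: 3.22 - 0.046*T)."""
    return 3.22 - 0.046 * t_c

def respiration(t_c, r_ref, t_ref=20.0, q10=None):
    """Respiration scaled from a reference rate r_ref at t_ref; if q10 is
    None, use the temperature-sensitive Q10 evaluated at t_c."""
    q = q10 if q10 is not None else q10_temperature_sensitive(t_c)
    return r_ref * q ** ((t_c - t_ref) / 10.0)

# At 30 C the temperature-sensitive Q10 has fallen to 1.84, so warming
# raises respiration less than a fixed Q10 = 2 would predict:
print(respiration(30.0, 1.0))            # ~1.84 (acclimating Q10)
print(respiration(30.0, 1.0, q10=2.0))   # 2.0 (fixed Q10)
```

This dampening of the respiration response to warming is exactly why omitting RA biases modeled NEE and the respiration/GPP ratio, as the abstract argues.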
Steinlin, Christine; Bogdal, Christian; Lüthi, Martin P; Pavlova, Pavlina A; Schwikowski, Margit; Zennegg, Markus; Schmid, Peter; Scheringer, Martin; Hungerbühler, Konrad
2016-06-01
In previous studies, the incorporation of polychlorinated biphenyls (PCBs) has been quantified in the accumulation areas of Alpine glaciers. Here, we introduce a model framework that quantifies mass fluxes of PCBs in glaciers and apply it to the Silvretta glacier (Switzerland). The models include PCB incorporation into the entire surface of the glacier, downhill transport with the flow of the glacier ice, and chemical fate in the glacial lake. The models are run for the years 1900-2100 and validated by comparing modeled and measured PCB concentrations in an ice core, a lake sediment core, and the glacial streamwater. The incorporation and release fluxes, as well as the storage of PCBs in the glacier increase until the 1980s and decrease thereafter. After a temporary increase in the 2000s, the future PCB release and the PCB concentrations in the glacial stream are estimated to be small but persistent throughout the 21st century. This study quantifies all relevant PCB fluxes in and from a temperate Alpine glacier over two centuries, and concludes that Alpine glaciers are a small secondary source of PCBs, but that the aftermath of environmental pollution by persistent and toxic chemicals can endure for decades. PMID:27164482
Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya
2014-01-01
Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid
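A hedged sketch of the core estimation idea in this abstract: per-gene L1-regularised regression of x_t on x_{t-1}. This is a plain VAR(1) lasso via iterative soft-thresholding (ISTA), without the authors' state-space machinery or prior-knowledge terms; the simulated "gene" network is invented for illustration.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=1000):
    """Minimise 0.5*||y - X w||^2 + lam*||w||_1 by iterative
    soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - y))       # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # shrink
    return w

def sparse_var1(series, lam):
    """Sparse VAR(1): x_t ~ A @ x_{t-1}; one lasso per response gene."""
    X, Y = series[:-1], series[1:]
    return np.vstack([lasso_ista(X, Y[:, j], lam) for j in range(series.shape[1])])

# Simulate 3 'genes' with a known sparse regulatory matrix A:
rng = np.random.default_rng(0)
A = np.array([[0.8, 0.0, 0.0],
              [0.5, 0.3, 0.0],
              [0.0, 0.0, 0.9]])
x, series = np.zeros(3), []
for _ in range(500):
    x = A @ x + 0.1 * rng.standard_normal(3)
    series.append(x)
A_hat = sparse_var1(np.array(series), lam=0.5)
```

The L1 penalty shrinks the spurious couplings toward zero while retaining the true ones, which is the mechanism the paper relies on to suppress biologically impermissible regulations at the scale of tens of genes.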
Hoyos, David; Mariel, Petr; Hess, Stephane
2015-02-01
Environmental economists are increasingly interested in better understanding how people cognitively organise their beliefs and attitudes towards environmental change in order to identify key motives and barriers that stimulate or prevent action. In this paper, we explore the utility of a commonly used psychometric scale, the awareness of consequences (AC) scale, in order to better understand stated choices. The main contribution of the paper is that it provides a novel approach to incorporate attitudinal information into discrete choice models for environmental valuation: firstly, environmental attitudes are incorporated using a reinterpretation of the classical AC scale recently proposed by Ryan and Spash (2012); and, secondly, attitudinal data is incorporated as latent variables under a hybrid choice modelling framework. This novel approach is applied to data from a survey conducted in the Basque Country (Spain) in 2008 aimed at valuing land-use policies in a Natura 2000 Network site. The results are relevant to policy-making because choice models that are able to accommodate underlying environmental attitudes may help in designing more effective environmental policies. PMID:25461111
Priano, Lorenzo; Esposti, Daniele; Esposti, Roberto; Castagna, Giovanna; De Medici, Clotilde; Fraschini, Franco; Gasco, Maria Rosa; Mauro, Alessandro
2007-10-01
Melatonin (MT) is a hormone produced by the pineal gland at night, involved in the regulation of circadian rhythms. For clinical purposes, exogenous MT administration should mimic the typical nocturnal endogenous MT levels, but its pharmacokinetics is unfavourable owing to a short elimination half-life. The aim of this study is to examine the pharmacokinetics of MT incorporated into solid lipid nanoparticles (SLN), administered by the oral and transdermal routes. The peculiarity of SLN is that they can act as a reservoir, permitting a constant and prolonged release of the drugs they contain. In 7 healthy subjects, SLN incorporating MT 3 mg (MT-SLN-O) were orally administered at 8.30 a.m. MT 3 mg in standard formulation (MT-S) was then administered to the same subjects one week later at 8.30 a.m. as a control. In 10 healthy subjects, SLN incorporating MT were administered transdermally (MT-SLN-TD) by applying a patch at 8.30 a.m. for 24 hours. Compared with MT-S, Tmax after MT-SLN-O administration was delayed by about 20 minutes, while mean AUC and mean elimination half-life were significantly higher (169944.7 +/- 64954.4 pg/ml x hour vs. 85148.4 +/- 50642.6 pg/ml x hour, p = 0.018, and 93.1 +/- 37.1 min vs. 48.2 +/- 8.9 min, p = 0.009, respectively). MT absorption and elimination after MT-SLN-TD proved slow (mean absorption half-life: 5.3 +/- 1.3 hours; mean elimination half-life: 24.6 +/- 12.0 hours), so MT plasma levels above 50 pg/ml were maintained for at least 24 hours. This study demonstrates significant absorption of MT incorporated in SLN, with detectable plasma levels maintained for several hours, particularly after transdermal administration. As the dosages and concentrations of drugs included in SLN can be varied, different plasma-level profiles could be obtained, disclosing new possibilities for sustained delivery systems. PMID:18330178
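The pharmacokinetic quantities compared in this study (Tmax, AUC, elimination half-life) can be illustrated with a one-compartment, first-order absorption model (the Bateman equation). The rate constants below are arbitrary assumptions chosen only to show that slower absorption and elimination prolong plasma levels, as the SLN formulations did here; they are not fitted to the study's data.

```python
import math

def bateman_conc(t, dose_over_vd, ka, ke):
    """C(t) for first-order absorption and elimination (assumes ka != ke):
    C = (D/Vd) * ka/(ka-ke) * (exp(-ke t) - exp(-ka t))."""
    return dose_over_vd * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

def half_life(k):
    """Half-life from a first-order rate constant: t_1/2 = ln 2 / k."""
    return math.log(2.0) / k

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (concs[i] + concs[i - 1]) * (times[i] - times[i - 1])
    return auc

# 'Standard' vs 'slow-release' (SLN-like) parameter sets, in 1/h (assumed):
ts = [i * 0.1 for i in range(501)]                        # 0 .. 50 h
fast = [bateman_conc(t, 100.0, ka=2.0, ke=0.86) for t in ts]
slow = [bateman_conc(t, 100.0, ka=0.13, ke=0.06) for t in ts]
```

With ke = 0.86/h the elimination half-life is about 48 min, close to the standard-formulation value reported above; lowering ka and ke stretches the curve and raises AUC, which is the qualitative effect the SLN reservoir provides.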
NASA Astrophysics Data System (ADS)
BABA, T.; Takahashi, N.; Kaneda, Y.; Inazawa, Y.; Kikkojin, M.
2012-12-01
The tsunami caused by the 2011 Tohoku-oki earthquake inundated wide areas, destroying or flowing past the buildings and structures on land. In conventional modeling, which solves the non-linear shallow water equations with a finite difference scheme, the effect of buildings and structures on tsunami inundation is represented by a bottom friction term; their 3D shapes are not included. Large, strong buildings, however, can directly block an incoming tsunami much like seawalls, rather than merely adding friction. LiDAR measurements are now being carried out along the Japanese coast by the Geospatial Information Authority of Japan; these collect pulses reflected from the tops of surfaces such as building roofs, roads, bridges, and tree canopies, yielding what is called a digital surface model (DSM). We extracted from the DSM the buildings and structures that could affect tsunami inundation by comparison with the Fundamental Geospatial Data, which records the locations of buildings and structures in the city; this step is needed to remove trees and river bridges from the DSM, since a tsunami can pass beneath or through them. The 3D building data were incorporated as topography in a tsunami computation of the 2011 Tohoku-oki earthquake (hereafter, the incorporated model) for comparison with the conventional model. The URSGA tsunami code (Jakeman et al. 2010) was used for its variable nesting capability. The finest topographic grid interval was 0.22 arc-seconds (about 5 m) in both the longitude and latitude directions in the coastal area. The initial sea-surface deformation was derived from the finite fault model version 1.1 provided by Tohoku University. In the incorporated model, the maximum inundation height at the front of coastal buildings and structures is higher than in the conventional model, by up to 63% (4.8 m). In the area behind the coastal buildings, the inundation height is conversely smaller than in the conventional model. The tsunami inundation area becomes smaller in the
Sullivan, T.J.
1992-09-01
A project was initiated in March 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months of performance, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling of the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy previously observed between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically selected Adirondack lakes. Additional work is ongoing on model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.
Computer simulation incorporating a helicopter model for evaluation of aircraft avionics systems
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Wood, R. B.
1977-01-01
A computer program was developed to integrate avionics research in navigation, guidance, controls, and displays with a realistic aircraft model. A user-oriented program is described that allows a flexible combination of user-supplied models to support research in any avionics area. A preprocessor technique for selecting various models without significantly changing the memory storage is included, as are mathematical models for several avionics error sources and for the CH-47 helicopter used in this program.
Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P
2015-05-01
This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. PMID:25746499
Hrib, Jakub; Hobzova, Radka; Hampejsova, Zuzana; Bosakova, Zuzana; Munzarova, Marcela; Michalek, Jiri
2015-01-01
Nanofibers were prepared from polycaprolactone, polylactide and polyvinyl alcohol using Nanospider™ technology. Polyethylene glycols with molecular weights of 2 000, 6 000, 10 000 and 20 000 g/mol, which can be used to moderate the release profile of incorporated pharmacologically active compounds, served as model molecules. They were terminated by aromatic isocyanate and incorporated into the nanofibers. The release of these molecules into an aqueous environment was investigated. The influences of the molecular length and chemical composition of the nanofibers on the release rate and the amount of released polyethylene glycols were evaluated. Longer molecules released faster, as evidenced by a significantly higher amount of released molecules after 72 hours. However, the influence of the chemical composition of the nanofibers was even more distinct: the highest amount of polyethylene glycol molecules was released from polyvinyl alcohol nanofibers, the lowest amount from polylactide nanofibers. PMID:26665065
Zhang, Pengpeng; Wu, Leester; Liu, Tian; Kutcher, Gerald J.; Isaacson, Steven
2008-04-01
Purpose: To integrate imaging performance characteristics, specifically sensitivity and specificity, of magnetic resonance angiography (MRA) and digital subtraction angiography (DSA) into arteriovenous malformation (AVM) radiosurgery planning and evaluation. Methods and Materials: Images of 10 patients with AVMs located in critical brain areas were analyzed in this retrospective planning study. The image findings were first used to estimate the sensitivity and specificity of MRA and DSA. Instead of accepting the imaging observation as a binary (yes or no) mapping of AVM location, our alternative is to translate the image into an AVM probability distribution map by incorporating the imagers' sensitivity and specificity, and to use this map as a basis for planning and evaluation. Three sets of radiosurgery plans, targeting the MRA and DSA positive overlap, MRA positive, and DSA positive, were optimized for best conformality. The AVM obliteration rate (ORAVM) and brain complication rate served as endpoints for plan comparison. Results: In our 10-patient study, the specificities and sensitivities of MRA and DSA were estimated to be (0.95, 0.74) and (0.71, 0.95), respectively. The positive overlap of MRA and DSA accounted for 67.8% ± 4.9% of the estimated true AVM volume. Compared with plans targeting the MRA- and DSA-positive overlap, plans targeting MRA-positive or DSA-positive volumes improved ORAVM by 4.1% ± 1.9% and 15.7% ± 8.3%, while also increasing the complication rate by 1.0% ± 0.8% and 4.4% ± 2.3%, respectively. Conclusions: The impact of imager quality should be quantified and incorporated in AVM radiosurgery planning and evaluation to facilitate clinical decision making.
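The translation from binary imaging findings to an AVM probability map can be illustrated with a per-voxel Bayesian update. The sketch below is hypothetical, not the authors' planning code; the uniform 0.5 prior is an assumption, while the sensitivity/specificity values are those quoted in the abstract.

```python
def posterior_avm(prior, reading, sensitivity, specificity):
    """Update P(AVM) at a voxel from one imager's binary reading via Bayes' rule."""
    if reading:  # imager flagged the voxel as AVM-positive
        num = sensitivity * prior
        den = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = (1.0 - sensitivity) * prior + specificity * (1.0 - prior)
    return num / den

# Fuse MRA then DSA readings for a voxel both imagers call positive,
# using the (specificity, sensitivity) estimates from the abstract.
p = 0.5  # hypothetical non-informative prior
p = posterior_avm(p, True, sensitivity=0.74, specificity=0.95)  # MRA positive
p = posterior_avm(p, True, sensitivity=0.95, specificity=0.71)  # DSA positive
```

A voxel positive on both imagers ends up with a high AVM probability, while one positive on a single imager receives an intermediate value, which is what turns the binary maps into a graded probability distribution.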
Improving weed germination models by incorporating seed microclimate and translocation by tillage
Technology Transfer Automated Retrieval System (TEKTRAN)
Weed emergence models are of critical importance in deciding the timing of field weed control measures (tillage or chemical). However, the state of weed germination modeling is still in its infancy. Existing models do provide a baseline picture of emergence patterns, but improvements are needed to m...
Incorporating Multi-model Ensemble Techniques Into a Probabilistic Hydrologic Forecasting System
NASA Astrophysics Data System (ADS)
Sonessa, M. Y.; Bohn, T. J.; Lettenmaier, D. P.
2008-12-01
Multi-model ensemble techniques have been shown to reduce bias and to aid in quantification of the effects of model uncertainty in hydrologic modeling. However, these techniques are only beginning to be applied in operational hydrologic forecast systems. To investigate the performance of a multi-model ensemble in the context of probabilistic hydrologic forecasting, we have extended the University of Washington's West-wide Seasonal Hydrologic Forecasting System to use an ensemble of three models: the Variable Infiltration Capacity (VIC) model version 4.0.6, the NCEP NOAH model version 2.7.1, and the NWS grid-based Sacramento/Snow-17 model (SAC). The objective of this presentation is to assess the performance of the ensemble of the three models as compared to the performance of the models individually. Three forecast points within the West-wide forecast system domain were used for this research: the Feather River at Oroville, CA; the Salmon River at Whitehorse, ID; and the Colorado River at Grand Junction. The forcing and observed streamflow data are for years 1951-2005 for the Feather and Salmon Rivers, and 1951-2003 for the Colorado. The models were first run for the retrospective period, then bias-corrected, and model weights were then determined using multiple linear regression. We assessed the performance of the ensemble in comparison with the individual models in terms of correlation with observed flows, root mean square error, and Nash-Sutcliffe efficiency. We found that for evaluations of retrospective simulations in comparison with observations, the ensemble performed better overall than any of the models individually, even though in a few individual months individual models performed slightly better than the ensemble. To test forecast skill, we performed Ensemble Streamflow Prediction (ESP) forecasts for each year of the retrospective period, using forcings from all other years, for individual models and for the multi-model ensemble. To form the ensemble for the ESP
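Determining multi-model weights by multiple linear regression, as described above, amounts to solving the normal equations for the weights that best reproduce observed flows from the bias-corrected simulations. A minimal pure-Python sketch under that assumption (illustrative, not the West-wide system's implementation):

```python
def regression_weights(model_sims, obs):
    """Least-squares weights w minimizing ||obs - sum_k w_k * sim_k||^2
    (no intercept), obtained by solving the normal equations A^T A w = A^T y."""
    k, n = len(model_sims), len(obs)
    ata = [[sum(model_sims[i][t] * model_sims[j][t] for t in range(n))
            for j in range(k)] for i in range(k)]
    aty = [sum(model_sims[i][t] * obs[t] for t in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (aty[r] - sum(ata[r][c] * w[c] for c in range(r + 1, k))) / ata[r][r]
    return w
```

With three models this is a 3×3 solve; an intercept term or a non-negativity constraint on the weights could be added, but neither is assumed here.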
NASA Astrophysics Data System (ADS)
Kapasi, Sanjay; Robertson, Stewart; Biafore, John; Smith, Mark D.
2009-12-01
Recent publications have emphasized the criticality of computational lithography in source-mask selection for the 32 and 22 nm technology nodes. Lithographers often select illuminator geometries based on analyzing aerial images for a limited set of structures using computational lithography tools. Last year, Biafore et al. [1] demonstrated the divergence between aerial image models and resist models in computational lithography. In a follow-up study [2], it was illustrated that the optimal illuminator differs when selected based on a resist model rather than an aerial image model. In that study, optimal source shapes were evaluated for 1D logic patterns using an aerial image model and two distinct commercial resist models. A physics-based lumped parameter resist model (LPM) was used. Accurately calibrated full physical models are portable across imaging conditions compared to the lumped models. This study is an extension of the previous work. Full physical resist models (FPM) with calibrated resist parameters [3-6] will be used in selecting optimum illumination geometries for 1D logic patterns. Several imaging parameters, such as numerical aperture (NA), source geometries (annular, quadrupole, etc.), and illumination configurations for different sizes and pitches, will be explored in the study. Our goal is to compare and analyze the optimal source shapes across various imaging conditions. In the end, the optimal source-mask solution for a given set of designs based on all the models will be recommended.
Incorporating Cold Cap Behavior in a Joule-heated Waste Glass Melter Model
Varija Agarwal; Donna Post Guillen
2013-08-01
In this paper, an overview of Joule-heated waste glass melters used in the vitrification of high level waste (HLW) is presented, with a focus on the cold cap region. This region, in which feed-to-glass conversion reactions occur, is critical in determining the melting properties of any given glass melter. An existing 1D computer model of the cold cap, implemented in MATLAB, is described in detail. This model is a standalone model that calculates cold cap properties based on boundary conditions at the top and bottom of the cold cap. Efforts to couple this cold cap model with a 3D STAR-CCM+ model of a Joule-heated melter are then described. The coupling is being implemented in ModelCenter, a software integration tool. The ultimate goal of this model is to guide the specification of melter parameters that optimize glass quality and production rate.
Grucza, Richard A.; Johnson, Eric O.; Krueger, Robert F.; Breslau, Naomi; Saccone, Nancy L.; Chen, Li-Shiun; Derringer, Jaime; Agrawal, Arpana; Lynskey, Micheal; Bierut, Laura J.
2011-01-01
Nicotine dependence is moderately heritable, but identified genetic associations explain only modest portions of this heritability. We analyzed 3,369 SNPs from 349 candidate genes, and investigated whether incorporation of SNP-by-environment interaction into association analyses might bolster gene discovery efforts and prediction of nicotine dependence. Specifically, we incorporated the interaction between allele count and age-at-onset of regular smoking (AOS) into association analyses of nicotine dependence. Subjects were from the Collaborative Genetic Study of Nicotine Dependence, and included 797 cases ascertained for Fagerström nicotine dependence, and 811 non-nicotine-dependent smokers as controls, all of European descent. Compared with main-effect models, SNP × AOS interaction models resulted in higher numbers of nominally significant tests, increased predictive utility at individual SNPs, and higher predictive utility in a multi-locus model. Some SNPs previously documented in main-effect analyses exhibited improved fits in the joint analysis, including rs16969968 from CHRNA5 and rs2314379 from MAP3K4. CHRNA5 exhibited larger effects in later-onset smokers, in contrast with a previous report that suggested the opposite interaction (Weiss et al., PLOS Genetics, 4: e1000125, 2008). However, a number of SNPs that did not emerge in main-effect analyses were among the strongest findings in the interaction analyses. These include SNPs located in GRIN2B (p = 1.5 × 10⁻⁵), which encodes a subunit of the NMDA receptor channel, a key molecule in mediating age-dependent synaptic plasticity. Incorporation of logically chosen interaction parameters, such as AOS, into genetic models of substance-use disorders may increase the degree of explained phenotypic variation, and constitutes a promising avenue for gene discovery. PMID:20624154
Anand, M.; Rajagopal, K.; Rajagopal, K. R.
2003-01-01
Multiple interacting mechanisms control the formation and dissolution of clots to maintain blood in a state of delicate balance. In addition to a myriad of biochemical reactions, rheological factors also play a crucial role in modulating the response of blood to external stimuli. To date, a comprehensive model for clot formation and dissolution that takes into account the biochemical, medical and rheological factors has not been put into place, the existing models emphasizing one or the other of the factors. In this paper, after discussing the various biochemical, physiologic and rheological factors at some length, we develop a model for clot formation and dissolution that incorporates many of the relevant crucial factors that have a bearing on the problem. The model, though just a first step towards understanding a complex phenomenon, goes further than previous models in integrating the biochemical, physiologic and rheological factors that come into play.
Stewart, P.C.
1992-09-01
This paper describes the incorporation of the Harshvardhan et al. (1987) radiation parameterization into the Naval Research Laboratory Limited Area Dynamical Weather Prediction Model. A comparison between model runs with the radiation scheme and runs without the scheme was made to examine three mesoscale phenomena along the west coast of the United States during the period 0000 UTC 02 May 1990 - 1200 UTC 03 May 1990: the land and sea breeze, the southerly surge and the Catalina eddy. In general, the updated model with the radiation parameterization yielded a more accurate simulation of the layer temperatures, geopotential heights, cloud cover, and radiative processes as verified from synoptic, mesoscale, and satellite observations. Subsequently, the updated model also forecast a more realistic diurnal evolution of the sea and land breeze, the southerly surge and the Catalina eddy.
Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît
2013-01-01
A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems. PMID:23135676
Dosne, Anne-Gaëlle; Bergstrand, Martin; Karlsson, Mats O
2016-04-01
Nonlinear mixed effects model parameters are commonly estimated using maximum likelihood. The properties of these estimators depend on the assumption that residual errors are independent and normally distributed with mean zero and correctly defined variance. Violations of this assumption can cause bias in parameter estimates, invalidate the likelihood ratio test and preclude simulation of realistic data. The choice of error model is mostly made on a case-by-case basis from a limited set of commonly used models. In this work, two strategies are proposed to extend and unify residual error modeling: a dynamic transform-both-sides approach combined with a power error model (dTBS), capable of handling skewed and/or heteroscedastic residuals, and a t-distributed residual error model allowing for symmetric heavy tails. Ten published pharmacokinetic and pharmacodynamic models as well as stochastic simulation and estimation were used to evaluate the two approaches. dTBS always led to significant improvements in objective function value, with most examples displaying some degree of right-skewness and variances proportional to predictions raised to powers between 0 and 1. The t-distribution led to significant improvement for 5 out of 10 models, with degrees of freedom between 3 and 9. Six models were most improved by the t-distribution while four models benefited more from dTBS. Changes in other model parameter estimates were observed. In conclusion, the use of dTBS and/or t-distribution models provides a flexible and easy-to-use framework capable of characterizing all commonly encountered residual error distributions. PMID:26679003
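The two proposed components can be sketched as simple density and variance functions. The code below is an illustrative reconstruction (function names and parameterization are assumptions; the actual work was carried out in a nonlinear mixed effects estimation framework): a scaled Student-t log-density for heavy-tailed residuals, and a power error model giving a residual SD proportional to the prediction raised to a power.

```python
import math

def t_logpdf(res, sigma, df):
    """Log-density of a scaled Student-t residual: res/sigma ~ t(df)."""
    z = res / sigma
    return (math.lgamma((df + 1.0) / 2.0) - math.lgamma(df / 2.0)
            - 0.5 * math.log(df * math.pi) - math.log(sigma)
            - (df + 1.0) / 2.0 * math.log1p(z * z / df))

def power_error_sd(pred, sigma0, power):
    """Power error model: residual SD proportional to prediction**power
    (power 0 gives additive error, power 1 gives proportional error)."""
    return sigma0 * pred ** power
```

As df grows the t-density approaches the normal, so the normal error model is nested as a limiting case; powers between 0 and 1, as found for most of the ten examples, interpolate between additive and proportional error.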
A Hall Thruster Performance Model Incorporating the Effects of a Multiply-Charged Plasma
NASA Technical Reports Server (NTRS)
Hofer, Richard R.; Jankovsky, Robert S.
2002-01-01
A Hall thruster performance model that predicts anode specific impulse, anode efficiency, and thrust is discussed. The model is derived as a function of a voltage loss parameter, an electron loss parameter, and the charge state of the plasma. Experimental data from SPT and TAL type thrusters up to discharge powers of 21.6 kW are used to determine the best fit for model parameters. General values for the model parameters are found, applicable to high power thrusters and irrespective of thruster type. Performance of a 50 kW thruster is calculated for an anode specific impulse of 2500 seconds or a discharge current of 100 A.
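The performance quantities named above are linked by standard electric-propulsion definitions; the sketch below shows those textbook relations only (not the paper's fitted model, which additionally involves voltage-loss, electron-loss, and charge-state parameters):

```python
G0 = 9.80665  # standard gravity, m/s^2

def anode_isp(thrust_n, mdot_anode_kg_s):
    """Anode specific impulse (s) from thrust (N) and anode mass flow rate (kg/s)."""
    return thrust_n / (mdot_anode_kg_s * G0)

def anode_efficiency(thrust_n, mdot_anode_kg_s, discharge_power_w):
    """Anode efficiency: jet kinetic power T^2 / (2 * mdot) over discharge power."""
    return thrust_n ** 2 / (2.0 * mdot_anode_kg_s * discharge_power_w)
```

For example, 1 N of thrust at a hypothetical anode flow of 40 mg/s corresponds to an anode specific impulse of roughly 2550 s, and at 25 kW discharge power to an anode efficiency of 0.5.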
Tow, D.M.; Reuter, W.G.
1998-03-01
A probabilistic model has been developed for predicting the reliability of structures based on fracture mechanics and the results of nondestructive examination (NDE). The distinctive feature of this model is the way in which inspection results and the probability of detection (POD) curve are used to calculate a probability density function (PDF) for the number of flaws and the distribution of those flaws among the various size ranges. In combination with a probabilistic fracture mechanics model, this density function is used to estimate the probability of failure (POF) of a structure in which flaws have been detected by NDE. The model is useful for parametric studies of inspection techniques and material characteristics.
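The way inspection results enter the failure estimate can be sketched as follows: flaws missed by NDE survive screening with probability 1 − POD(a), and their failure contributions are summed over the flaw-size distribution. The logistic POD shape and all parameter values below are hypothetical, not the model's calibrated values.

```python
import math

def pod(a, a50=2.0, slope=1.5):
    """Hypothetical logistic probability-of-detection curve (flaw size a in mm)."""
    return 1.0 / (1.0 + math.exp(-slope * (a - a50)))

def post_inspection_pof(sizes, flaw_probs, pof_given_flaw):
    """Probability of failure given that inspection found nothing:
    each candidate flaw survives screening with probability 1 - POD(a),
    then contributes its conditional failure probability."""
    pof = 0.0
    for a, p, pf in zip(sizes, flaw_probs, pof_given_flaw):
        pof += p * (1.0 - pod(a)) * pf
    return pof
```

This structure makes the parametric studies mentioned in the abstract straightforward: sharpening the POD curve suppresses the contribution of large flaws, which dominate the failure term.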
NASA Astrophysics Data System (ADS)
Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.
2012-04-01
Lyme borreliosis (LB) is the most common tick-borne disease in Europe, and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi, and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread are greatly dependent on environmental conditions impacting the habitat, demography and trophic interactions of ticks and the wide range of organisms that ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and, very importantly, of the fashion and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of the location and size of urban and vegetated areas and the seasonal and annual change in vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction force between two bodies is directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields like trade modeling, health care service planning and disease mapping, among others. In our implementation, the size of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies'; and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed the last decade have been ascribed
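The gravity analogy can be made concrete with a small sketch; the functional form and parameter names below are illustrative assumptions rather than the fitted model:

```python
def gravity_exposure(settlement_size, patch_size, ndvi, distance_km, beta=2.0):
    """Gravity-style exposure index: the product of the two 'masses'
    (settlement population and NDVI-weighted vegetated area) divided by
    their separation raised to a distance-decay exponent beta."""
    return settlement_size * (patch_size * ndvi) / distance_km ** beta
```

With beta = 2 the index follows the inverse-square analogy exactly; in practice the decay exponent would be estimated from the LB incidence data, and the seasonal NDVI term is what lets the same settlement-patch pair carry different risk across the year.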
Fernández, Estibalitz; Rodríguez, Gelen; Hostachy, Sarah; Clède, Sylvain; Cócera, Mercedes; Sandt, Christophe; Lambert, François; de la Maza, Alfonso; Policar, Clotilde; López, Olga
2015-07-01
A rhenium tris-carbonyl derivative (fac-[Re(CO)3Cl(2-(1-dodecyl-1H-1,2,3,triazol-4-yl)-pyridine)]) was incorporated into phospholipid assemblies, called bicosomes, and the penetration of this molecule into skin was monitored using Fourier-transform infrared (FTIR) microspectroscopy. To evaluate the capacity of bicosomes to promote the penetration of this derivative, the skin penetration of the Re(CO)3 derivative dissolved in dimethyl sulfoxide (DMSO), a typical enhancer, was also studied. Dynamic light scattering (DLS) results showed an increase in the size of the bicosomes with the incorporation of the Re(CO)3 derivative, and the FTIR microspectroscopy showed that the Re(CO)3 derivative incorporated in bicosomes penetrated deeper into the skin than when dissolved in DMSO. When this molecule was applied on the skin using the bicosomes, 60% of the Re(CO)3 derivative was retained in the stratum corneum (SC) and 40% reached the epidermis (Epi). In contrast, the application of this molecule via DMSO resulted in 95% of the Re(CO)3 derivative remaining in the SC and only 5% reaching the Epi. Using a Re(CO)3 derivative with a dodecyl chain as a model molecule, it was possible to determine the distribution of molecules with similar physicochemical characteristics in the skin using bicosomes. This fact makes these nanostructures promising vehicles for the application of lipophilic molecules inside the skin. PMID:25969419
Chakraborty, Kunal; Das, Kunal; Kar, Tapan Kumar
2013-01-01
In this paper, we propose a prey-predator system with stage structure for the predator. The proposed system incorporates cannibalism for predator populations in a competitive environment. The combined fishing effort is considered as a control used to harvest the populations. The steady states of the system are determined and the dynamical behavior of the system is discussed. Local stability of the system is analyzed and sufficient conditions are derived for the global stability of the system at the positive equilibrium point. The existence of the Hopf bifurcation phenomenon is examined at the positive equilibrium point of the proposed system. We consider harvesting effort as a control parameter and subsequently characterize the optimal control parameter in order to formulate the optimal control problem under the dynamic framework towards optimal utilization of the resource. Moreover, the optimal system is solved numerically to investigate the sustainability of the ecosystem using an iterative method with a Runge-Kutta fourth-order scheme. Simulation results show that the optimal control scheme can achieve a sustainable ecosystem. Results are analyzed with the help of graphical illustrations. PMID:23537768
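The Runge-Kutta fourth-order iteration mentioned above can be sketched for a generic harvested prey-predator system (the right-hand side below uses illustrative coefficients and omits the paper's stage structure and cannibalism terms):

```python
def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def prey_predator(effort):
    """Hypothetical harvested prey-predator right-hand side: logistic prey
    growth, mass-action predation, and the same harvesting effort applied
    to both populations (illustrative coefficients)."""
    def f(t, y):
        prey, pred = y
        return [prey * (1.0 - prey) - 0.5 * prey * pred - effort * prey,
                0.4 * prey * pred - 0.2 * pred - effort * pred]
    return f

# Integrate 200 steps of size h = 0.1 for a fixed harvesting effort.
y = [0.8, 0.3]
f = prey_predator(effort=0.1)
for i in range(200):
    y = rk4_step(f, y, i * 0.1, 0.1)
```

Applying the same effort to both populations mirrors the combined-effort control in the abstract; sweeping `effort` and checking whether both populations persist is the numerical analogue of the sustainability analysis.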
Model for the incorporation of plant detritus within clastic accumulating interdistributary bays
Gastaldo, R.A.; McCarroll, S.M.; Douglass, D.P.
1985-01-01
Plant-bearing clastic lithologies interpreted as interdistributary bay deposits are reported from rocks Devonian to Holocene in age. Often, these strata preserve accumulations of discrete, laterally continuous leaf beds or coaly horizons. Investigations within two modern interdistributary bays in the lower delta plain of the Mobile Delta, Alabama, have provided insight into the phytotaphonomic processes responsible for the generation of carbonaceous lithologies, coaly horizons and laterally continuous leaf beds. Delvan and Chacaloochee Bays lie adjacent to the Tensaw River distributary channel and differ in the mode of clastic and plant detrital accumulation. Delvan Bay, lying west of the distributary channel, is accumulating detritus solely by overbank deposition. Chacaloochee Bay, lying east of the channel, presently is accumulating detritus by active crevasse-splay activity. Plant detritus is accumulating as transported assemblages in both bays, but the mode of preservation differs. In Delvan Bay, the organic component is highly degraded and incorporated within the clastic component, resulting in a carbonaceous silt. Little identifiable plant detritus can be recovered. On the other hand, the organic component in Chacaloochee Bay is accumulating in locally restricted allochthonous peat deposits up to 2 m in thickness, and discrete leaf beds generated by flooding events. In addition, autochthonous plant accumulations occur on subaerially and aerially exposed portions of the crevasse. The resultant distribution of plant remains is a complicated array of transported and non-transported organics.
Technology Transfer Automated Retrieval System (TEKTRAN)
Watershed simulation models can be calibrated using “hard data” such as temporal streamflow observations; however, users may find upon examination of detailed outputs that some of the calibrated models may not reflect summative actual watershed behavior. Thus, it is necessary to use “soft data” (i....
A Preventative Model of School Consultation: Incorporating Perspectives from Positive Psychology
ERIC Educational Resources Information Center
Akin-Little, K. Angeleque; Little, Steven G.; Delligatti, Nina
2004-01-01
Using the principles of mental health and behavioral consultation, combined with concepts from positive psychology, this paper generates a new preventative model of school consultation. This model has two steps: (1) the school psychologist aids the teacher in the development and use of his/her personal positive psychology (e.g., optimism,…