Sample records for relevant model parameters

  1. Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.

    PubMed

    Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F

    2013-10-01

    A new protocol was evaluated for identifying the stiffness, mass, and damping parameters of a linear model of human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase the accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing a risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics the biomechanical exposure of torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met the model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for the frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. The protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles actual tool use, and the calculation procedures demonstrate better parameter extraction performance with time-domain system identification methods than with frequency-domain methods. Potential future applications include parameter identification for in situ torque tool operation and equipment development for simulating human hand-arm dynamics under impulsive forces, which could be used to assess torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
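
    As a worked illustration of the time-domain identification this abstract describes, the sketch below simulates a second-order hand-arm model m·x'' + c·x' + k·x = F(t) driven by an impulsive handle force and recovers the three parameters by linear least squares. All numerical values and the pulse shape are illustrative assumptions, not the study's data.

    ```python
    # Minimal sketch of time-domain identification of a second-order
    # hand-arm model, m*x'' + c*x' + k*x = F(t). Numbers are assumed.
    import numpy as np
    from scipy.integrate import odeint

    m_true, c_true, k_true = 1.2, 35.0, 9000.0     # kg, N*s/m, N/m (assumed)
    t = np.linspace(0, 0.2, 2001)
    F = 50.0 * np.exp(-((t - 0.02) / 0.005) ** 2)  # impulsive handle force

    def rhs(y, ti):
        x, v = y
        Fi = np.interp(ti, t, F)
        return [v, (Fi - c_true * v - k_true * x) / m_true]

    x, v = odeint(rhs, [0.0, 0.0], t).T
    a = np.gradient(v, t)                          # acceleration from velocity

    # Linear least squares on F = m*a + c*v + k*x recovers the parameters
    A = np.column_stack([a, v, x])
    m_est, c_est, k_est = np.linalg.lstsq(A, F, rcond=None)[0]
    print(f"m={m_est:.2f} kg, c={c_est:.1f} N s/m, k={k_est:.0f} N/m")
    ```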

  2. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
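
    The "sloppiness" referred to here is usually diagnosed from the eigenvalue spectrum of the Fisher information matrix (FIM), whose eigenvalues span many decades. A minimal sketch, with an assumed sum-of-exponentials model standing in for the biological systems:

    ```python
    # Illustrative sketch of sloppiness: the eigenvalues of the Fisher
    # information matrix J^T J for a sum-of-exponentials model span many
    # decades, so some parameter combinations are practically
    # unidentifiable. Model and numbers are assumptions, not the paper's.
    import numpy as np

    t = np.linspace(0.1, 5.0, 50)
    theta0 = np.array([1.0, 1.2, 1.4])       # nearly degenerate decay rates

    def model(theta):
        return sum(np.exp(-th * t) for th in theta)

    # Finite-difference Jacobian of model outputs w.r.t. parameters
    eps = 1e-6
    J = np.column_stack([
        (model(theta0 + eps * np.eye(3)[i]) - model(theta0)) / eps
        for i in range(3)
    ])
    eigvals = np.linalg.eigvalsh(J.T @ J)
    print("FIM eigenvalues:", eigvals)       # typically spread over decades
    ```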

  3. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters that are too weakly constrained by observations and can potentially lead to a simulation crash. Recently a study by Lucas et al. (2013) showed that machine learning methods can be used to predict which combinations of parameters can lead to a simulation crash, and hence which processes described by these parameters need refined analysis. In the current study we reanalyse the dataset used in that research with a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for predicting a crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust estimate of prediction performance and variable relevance.
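
    All-relevant feature selection of this kind is typically a Boruta-style procedure: permuted "shadow" copies of the features set the importance threshold that real features must beat. The sketch below shows the core idea on synthetic data; the dataset, threshold rule, and forest settings are assumptions for illustration only.

    ```python
    # Minimal Boruta-style sketch of all-relevant feature selection.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only two relevant features

    shadows = rng.permuted(X, axis=0)              # destroys feature-label link
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(np.hstack([X, shadows]), y)

    imp = rf.feature_importances_
    threshold = imp[8:].max()                      # best shadow importance
    print("relevant features:", np.where(imp[:8] > threshold)[0])
    ```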

  4. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060

  5. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear, processes, each with distinct time scales. While these models are consistent with some motor phenomena, they can explain neither the relatively fast re-adaptation after a long washout period nor savings on a subsequent day. Here we examined whether these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model, adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies, on the other hand, should always be relevant. Our model demonstrates savings, interference, spontaneous rebound, and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  6. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678

  7. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
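
    A hedged sketch of a Lotka-Volterra-type competition model of the kind these two records derive from evolutionary biology: two speaker populations grow logistically while suppressing each other through cross-impact terms. Parameter names and values below are illustrative assumptions, not the paper's estimates.

    ```python
    # Lotka-Volterra-style competition between two languages; c12/c21
    # play the role of the "impact" parameters discussed above.
    import numpy as np
    from scipy.integrate import odeint

    r1, r2 = 0.03, 0.02      # growth rates of the speaker populations
    K1, K2 = 1.0, 1.0        # carrying capacities (normalized)
    c12, c21 = 0.9, 1.1      # cross-impact of each language on the other

    def rhs(n, t):
        n1, n2 = n
        return [r1 * n1 * (1 - (n1 + c12 * n2) / K1),
                r2 * n2 * (1 - (n2 + c21 * n1) / K2)]

    t = np.linspace(0, 500, 1000)
    n1, n2 = odeint(rhs, [0.5, 0.5], t).T
    # c21 > 1 means language 1 impacts language 2 strongly, so 2 declines
    print(f"final shares: {n1[-1]:.2f}, {n2[-1]:.2f}")
    ```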

  8. Application of all-relevant feature selection for the failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.

    2016-03-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.

  9. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

    Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be stronger for relevant motion (predators, prey, conspecifics) than for irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented videos of natural motion patterns, including prey, predators and windblown vegetation, to a computer model consisting of a grid of correlation-type EMDs. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under the full range of natural conditions.
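
    A minimal sketch of a correlation-type (Hassenstein-Reichardt) EMD using the spacing (0.3 degrees) and time constant (0.1 s) reported above: two neighbouring input signals are low-pass filtered, cross-multiplied and subtracted to give an opponent, direction-selective output. The drifting-grating stimulus and discretization are assumptions.

    ```python
    # Correlation-type EMD: lowpass(s1)*s2 - lowpass(s2)*s1.
    import numpy as np

    dt, tau, spacing_deg = 0.001, 0.1, 0.3
    t = np.arange(0, 2, dt)
    speed = 5.0                                  # deg/s drifting grating
    s1 = np.sin(2 * np.pi * (speed * t) / 2.0)   # 2-deg spatial period
    s2 = np.sin(2 * np.pi * (speed * t - spacing_deg) / 2.0)

    def lowpass(sig):
        out, a = np.zeros_like(sig), dt / tau    # first-order RC filter
        for i in range(1, len(sig)):
            out[i] = out[i - 1] + a * (sig[i] - out[i - 1])
        return out

    response = lowpass(s1) * s2 - lowpass(s2) * s1   # opponent EMD output
    print("mean directional response:", response.mean())  # sign = direction
    ```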

  10. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  11. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    PubMed

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of blood culture components are needed. Second, multi-wavelength measurements, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e., the number of optically relevant parameters) of the blood culture system, are required. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples spanning an extensive range of the relevant parameters.

  12. Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.

    2009-01-01

    Within this paper, control-relevant vehicle design concepts are examined using a widely used 3-DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g., level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g., instability and the right-half-plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. A gap-metric-optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.

  13. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of the nickel-base superalloy CMSX-8 is proposed. At the microstructural scale of relevance, superalloys are composite materials composed of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys over a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation of each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts the creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters is done to explore the effect on the deformation of changes in the material parameters relevant to the microstructure.

  14. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE PAGES

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    2017-09-11

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of the nickel-base superalloy CMSX-8 is proposed. At the microstructural scale of relevance, superalloys are composite materials composed of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys over a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation of each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts the creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters is done to explore the effect on the deformation of changes in the material parameters relevant to the microstructure.

  15. Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.

    PubMed

    Şener, Duygu Dede; Oğul, Hasan

    2016-06-01

    Understanding time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods that model the gene behaviour, or its networked interactions with others, via a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find relevant experiments that are potentially useful in analysing the current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of the query experiment with previously reported findings. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.

  16. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to simultaneously estimate animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  17. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments that map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced-order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
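
    A minimal sketch of the active-learning loop described here: fit a Gaussian Process to (parameter, runtime) samples, then benchmark next wherever the predictive uncertainty is largest. The runtime function, kernel, and candidate grid below are stand-ins, not the authors' setup.

    ```python
    # GP-based active learning over a 1-D performance landscape.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def runtime(p):                       # hypothetical performance landscape
        return 1.0 + np.sin(3 * p) ** 2 + 0.5 * p

    X = np.array([[0.1], [0.5], [0.9]])   # configurations already benchmarked
    y = runtime(X.ravel())

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
    gp.fit(X, y)

    candidates = np.linspace(0, 1, 101)[:, None]
    mean, std = gp.predict(candidates, return_std=True)
    next_run = candidates[np.argmax(std)]     # most uncertain point next
    print("next configuration to benchmark:", next_run)
    ```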

  18. Parameter identification of process simulation models as a means for knowledge acquisition and technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Ifanti, Konstantina

    2012-12-01

    Process simulation models are usually empirical, so there is an inherent difficulty in their serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning that would facilitate verification of their dependence on production conditions; in such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanismic models may prove valid, in which case parameter identification is required to find out the independent/explanatory variables and parameters that each parameter depends on. This is a difficult task, since the phenomenological level at which each parameter is defined differs. In this paper, we have developed a methodological framework in the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated with two representative case examples on wastewater treatment.

  19. On the Influence of Material Parameters in a Complex Material Model for Powder Compaction

    NASA Astrophysics Data System (ADS)

    Staf, Hjalmar; Lindskog, Per; Andersson, Daniel C.; Larsson, Per-Lennart

    2016-10-01

    Parameters in a complex material model for powder compaction, based on a continuum mechanics approach, are evaluated using real insert geometries. The parameter sensitivity with respect to density and stress after compaction, pertinent to a wide range of geometries, is studied in order to investigate the completeness and limitations of the material model. Finite element simulations with varied material parameters are used to build surrogate models for the sensitivity study. The conclusion from this analysis is that a simplification of the material model is relevant, especially for simple insert geometries. Parameters linked to anisotropy and the plastic strain evolution angle have a small impact on the final result.

  20. Women's Later Life Career Development: Looking through the Lens of the Kaleidoscope Career Model

    ERIC Educational Resources Information Center

    August, Rachel A.

    2011-01-01

    This study explores the relevance of the Kaleidoscope Career Model (KCM) to women's later life career development. Qualitative interview data were gathered from 14 women in both the "truly" late career and bridge employment periods using a longitudinal design. The relevance of authenticity, balance, and challenge--central parameters in the KCM--is…

  1. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    PubMed

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum but produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions. However, cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim, and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether the utility parameters were consistent and methodologically sound, and whether the costs were compatible and consistent across the actual parameters used for screening, diagnostic work-up and treatment. Of the 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look at the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  2. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (i.e., remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g., MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis assessing the models' ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations on the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
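
    A schematic sketch of the weighting step described above, with synthetic arrays standing in for a remote-sensing product (e.g., a MOD10A snow series) and an ensemble of simulated states; the rule shown (normalized coefficients of determination as a posteriori weights) is a simplified reading of the procedure, not the authors' exact implementation.

    ```python
    # Weight each candidate parameter set by the R^2 between its
    # simulated state series and the remote-sensing product.
    import numpy as np

    rng = np.random.default_rng(1)
    product = rng.normal(size=365)                 # stand-in product series
    sims = product + rng.normal(scale=np.linspace(0.1, 2.0, 50)[:, None],
                                size=(50, 365))    # 50 parameter sets

    def r2(obs, sim):
        ss_res = np.sum((obs - sim) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return max(0.0, 1 - ss_res / ss_tot)

    weights = np.array([r2(product, s) for s in sims])
    weights /= weights.sum()                       # a posteriori weights
    print("most plausible parameter set:", weights.argmax())
    ```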

  3. Combining the ‘bottom up’ and ‘top down’ approaches in pharmacokinetic modelling: fitting PBPK models to observed clinical data

    PubMed Central

    Tsamandouras, Nikolaos; Rostami-Hodjegan, Amin; Aarons, Leon

    2015-01-01

    Pharmacokinetic models range from entirely exploratory and empirical to semi-mechanistic and ultimately complex physiologically based pharmacokinetic (PBPK) models. This choice is conditional on the modelling purpose as well as the amount and quality of the available data. The main advantage of PBPK models is that they can be used to extrapolate outside the studied population and experimental conditions. The trade-off for this advantage is a complex system of differential equations with a considerable number of model parameters. When these parameters cannot be informed from in vitro or in silico experiments, they are usually optimized with respect to observed clinical data. Parameter estimation in complex models is a challenging task associated with many methodological issues, which are discussed here with specific recommendations. Concepts such as structural and practical identifiability are described with regard to PBPK modelling, and the value of experimental design and sensitivity analyses is sketched out. Parameter estimation approaches are discussed, and we also highlight the importance of not neglecting the covariance structure between model parameters and the uncertainty and population variability associated with them. Finally, the possibility of using model order reduction techniques and minimal semi-mechanistic models that retain the physiological-mechanistic nature only in the parts of the model relevant to the desired modelling purpose is emphasized. Careful attention to all the above issues allows us to successfully integrate information from in vitro or in silico experiments with information deriving from observed clinical data and to develop mechanistically sound models with clinical relevance. PMID:24033787

  4. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.

  5. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  6. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    PubMed

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  7. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123
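
    For concreteness, one common empirical form parameterised directly by the thermal optimum is a Gaussian response, P(T) = Pmax·exp(-((T - Topt)/w)²); the sketch below fits it to synthetic data. This particular functional form and all numbers are assumptions for illustration; the paper evaluates twelve published models that are not reproduced here.

    ```python
    # Fit a Gaussian photosynthesis-temperature response and read off
    # the biologically meaningful parameters Pmax and Topt directly.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_response(T, Pmax, Topt, width):
        return Pmax * np.exp(-((T - Topt) / width) ** 2)

    T = np.array([15, 20, 25, 30, 35, 40, 43], float)   # deg C (synthetic)
    P = np.array([2.1, 3.4, 4.6, 5.1, 4.8, 3.0, 1.2])   # rate (synthetic)

    popt, _ = curve_fit(gaussian_response, T, P, p0=[5, 30, 8])
    print("Pmax=%.2f, Topt=%.1f degC, width=%.1f" % tuple(popt))
    ```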

  8. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this may lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore the number of model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations, yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km². This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
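
    A bare-bones sketch of the Elementary Effects (Morris) screening idea used in the first step: perturb one parameter at a time along random trajectories and average the absolute output changes. The five-parameter test function is a stand-in for NOAH-MP, and the trajectory count and step size are arbitrary choices.

    ```python
    # Morris elementary effects screening: mu* near zero flags
    # uninformative parameters that can be excluded from calibration.
    import numpy as np

    rng = np.random.default_rng(2)
    n_params, n_traj, delta = 5, 30, 0.1

    def model(x):                  # hypothetical land-surface response
        return x[0] ** 2 + 2 * x[1] + 0.1 * x[2] * x[1]   # x[3], x[4] inert

    mu_star = np.zeros(n_params)
    for _ in range(n_traj):
        x = rng.uniform(0, 1, n_params)
        base = model(x)
        for i in rng.permutation(n_params):   # one-at-a-time trajectory
            x_new = x.copy()
            x_new[i] += delta
            mu_star[i] += abs(model(x_new) - base) / delta
            x, base = x_new, model(x_new)
    mu_star /= n_traj
    print("screening measure mu*:", mu_star.round(2))   # x[3], x[4] drop out
    ```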

  9. Effects of the seasonal cycle on superrotation in planetary atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.

    2014-05-20

    The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop. In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter—the atmospheric Newtonian cooling time—to achieve a superrotating climatology.

  10. Kink dynamics in a parametric Φ6 system: a model with controllably many internal modes

    DOE PAGES

    Demirkaya, A.; Decker, R.; Kevrekidis, P. G.; ...

    2017-12-14

    We explore a variant of the Φ6 model originally proposed in Phys. Rev. D 12 (1975) 1606 as a prototypical, so-called, “bag” model in which domain walls play the role of quarks within hadrons. We examine the steady state of the model, namely an apparent bound state of two kink structures. We explore its linearization, and we find that, as a function of a parameter controlling the curvature of the potential, an effectively arbitrary number of internal modes may arise in the point spectrum of the linearization about the domain wall profile. We explore some of the key characteristics of kink-antikink collisions, such as the critical velocity and the multi-bounce windows, and how they depend on the principal parameter of the model. We find that the critical velocity exhibits a non-monotonic dependence on the parameter controlling the curvature of the potential. For the multi-bounce windows, we find that their range and complexity decrease as the relevant parameter decreases (and as the number of internal modes in the model increases). We use a modified collective coordinates method [in the spirit of recent works such as Phys. Rev. D 94 (2016) 085008] in order to capture the relevant phenomenology in a semi-analytical manner.

  11. Kink dynamics in a parametric Φ6 system: a model with controllably many internal modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirkaya, A.; Decker, R.; Kevrekidis, P. G.

    We explore a variant of the Φ6 model originally proposed in Phys. Rev. D 12 (1975) 1606 as a prototypical, so-called, “bag” model in which domain walls play the role of quarks within hadrons. We examine the steady state of the model, namely an apparent bound state of two kink structures. We explore its linearization, and we find that, as a function of a parameter controlling the curvature of the potential, an effectively arbitrary number of internal modes may arise in the point spectrum of the linearization about the domain wall profile. We explore some of the key characteristics of kink-antikink collisions, such as the critical velocity and the multi-bounce windows, and how they depend on the principal parameter of the model. We find that the critical velocity exhibits a non-monotonic dependence on the parameter controlling the curvature of the potential. For the multi-bounce windows, we find that their range and complexity decrease as the relevant parameter decreases (and as the number of internal modes in the model increases). We use a modified collective coordinates method [in the spirit of recent works such as Phys. Rev. D 94 (2016) 085008] in order to capture the relevant phenomenology in a semi-analytical manner.

  12. Kink dynamics in a parametric ϕ6 system: a model with controllably many internal modes

    NASA Astrophysics Data System (ADS)

    Demirkaya, A.; Decker, R.; Kevrekidis, P. G.; Christov, I. C.; Saxena, A.

    2017-12-01

    We explore a variant of the ϕ6 model originally proposed in Phys. Rev. D 12 (1975) 1606 as a prototypical, so-called, "bag" model in which domain walls play the role of quarks within hadrons. We examine the steady state of the model, namely an apparent bound state of two kink structures. We explore its linearization, and we find that, as a function of a parameter controlling the curvature of the potential, an effectively arbitrary number of internal modes may arise in the point spectrum of the linearization about the domain wall profile. We explore some of the key characteristics of kink-antikink collisions, such as the critical velocity and the multi-bounce windows, and how they depend on the principal parameter of the model. We find that the critical velocity exhibits a non-monotonic dependence on the parameter controlling the curvature of the potential. For the multi-bounce windows, we find that their range and complexity decrease as the relevant parameter decreases (and as the number of internal modes in the model increases). We use a modified collective coordinates method [in the spirit of recent works such as Phys. Rev. D 94 (2016) 085008] in order to capture the relevant phenomenology in a semi-analytical manner.
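
    For reference, the model of Phys. Rev. D 12 (1975) 1606 is usually written with a potential of Christ-Lee form, sketched below; normalization conventions vary between papers, so treat this as an assumed rather than exact definition. The parameter ε sets the shape of the inner well at ϕ = 0: as ε decreases, the kink resolves into two separated sub-kinks and, per the abstract above, the number of internal modes grows.

    ```latex
    % Christ--Lee-type parametric potential (conventions may differ):
    V(\phi) \;=\; \frac{1}{2\,(1+\epsilon^{2})}
                  \left(\phi^{2}+\epsilon^{2}\right)
                  \left(\phi^{2}-1\right)^{2}
    ```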

  13. Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination

    PubMed Central

    Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos

    2010-01-01

    The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example that poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093

  14. Magnetic anisotropy in the Kitaev model systems Na2IrO3 and RuCl3

    NASA Astrophysics Data System (ADS)

    Chaloupka, Jiří; Khaliullin, Giniyat

    2016-08-01

    We study the ordered moment direction in the extended Kitaev-Heisenberg model relevant to honeycomb lattice magnets with strong spin-orbit coupling. We utilize numerical diagonalization and analyze the exact cluster ground states using a particular set of spin-coherent states, obtaining thereby quantum corrections to the magnetic anisotropy beyond conventional perturbative methods. It is found that the quantum fluctuations strongly modify the moment direction obtained at a classical level and are thus crucial for a precise quantification of the interactions. The results show that the moment direction is a sensitive probe of the model parameters in real materials. Focusing on the experimentally relevant zigzag phases of the model, we analyze the currently available neutron-diffraction and resonant x-ray-diffraction data on Na2IrO3 and RuCl3 and discuss the parameter regimes plausible in these Kitaev-Heisenberg model systems.
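
    For orientation, the extended Kitaev-Heisenberg model referred to here is commonly written per honeycomb bond type as below, with (α, β, γ) a permutation of (x, y, z) determined by the bond; sign and normalization conventions differ between papers, so this is a generic sketch rather than the authors' exact parameterization.

    ```latex
    % One common form of the extended Kitaev--Heisenberg Hamiltonian:
    \mathcal{H} \;=\; \sum_{\langle ij\rangle \in \gamma}
      \Big[\, J\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}
           \;+\; K\,S_{i}^{\gamma} S_{j}^{\gamma}
           \;+\; \Gamma\,\big(S_{i}^{\alpha} S_{j}^{\beta}
                            + S_{i}^{\beta} S_{j}^{\alpha}\big) \Big]
    ```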

  15. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  16. Astrobiological complexity with probabilistic cellular automata.

    PubMed

    Vukotić, Branislav; Ćirković, Milan M

    2012-08-01

    The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has so far been modeled quantitatively only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
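
    A toy probabilistic cellular automaton in the spirit described above: each site is empty (0), habitable (1) or inhabited (2), and the transition probabilities (the role played by the input probability matrix) are illustrative guesses, not calibrated astrobiological values.

    ```python
    # Toy PCA: habitable sites become inhabited spontaneously or by
    # colonization from neighbours; resetting events revert them.
    import numpy as np

    rng = np.random.default_rng(3)
    grid = rng.choice([0, 1], size=(64, 64), p=[0.7, 0.3])

    P_COLONIZE, P_EMERGE, P_RESET = 0.05, 0.001, 0.002   # assumed inputs

    def step(g):
        new = g.copy()
        # count inhabited neighbours with periodic boundaries
        nbrs = sum(np.roll(np.roll(g == 2, dx, 0), dy, 1)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)) - (g == 2)
        r = rng.random(g.shape)
        new[(g == 1) & (r < P_EMERGE + P_COLONIZE * nbrs)] = 2  # emerge/spread
        new[(g == 2) & (r < P_RESET)] = 1                       # resetting event
        return new

    for _ in range(200):
        grid = step(grid)
    print("inhabited fraction:", (grid == 2).mean())
    ```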

  17. Fractional Poisson--a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
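
    The infection-probability formula stated in this abstract is simple enough to write down directly; a minimal sketch follows, where treating the aggregation parameter as a mean aggregate size `mu` is an assumption made here for illustration.

```python
# Sketch of the fractional Poisson dose-response model as stated above:
# P(infection) = (fraction of fully susceptible hosts) x P(nonzero exposure).
import numpy as np

def fractional_poisson(dose, f, mu=1.0):
    """Infection probability at a given mean ingested dose.

    dose: mean number of viruses ingested
    f:    fraction of perfectly susceptible hosts (single model parameter)
    mu:   mean viruses per aggregate (mu = 1 corresponds to no aggregation)
    """
    return f * (1.0 - np.exp(-dose / mu))

print(fractional_poisson(dose=10.0, f=0.72))   # illustrative values
```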

  18. Evolution of a mini-scale biphasic dissolution model: Impact of model parameters on partitioning of dissolved API and modelling of in vivo-relevant kinetics.

    PubMed

    Locher, Kathrin; Borghardt, Jens M; Frank, Kerstin J; Kloft, Charlotte; Wagner, Karl G

    2016-08-01

    Biphasic dissolution models are proposed to have good predictive power for in vivo absorption. The aim of this study was to improve our previously introduced mini-scale dissolution model to mimic in vivo situations more realistically and to increase the robustness of the experimental model. Six dissolved APIs (BCS class II) were tested with the improved mini-scale biphasic dissolution model (miBIdi-pH-II). The influence of experimental model parameters, including various excipients, API concentrations, and the dual paddle and its rotation speed, was investigated. The kinetics in the biphasic model were described using one- and four-compartment pharmacokinetic (PK) models. The improved biphasic dissolution model was robust with respect to differing APIs and excipient concentrations. The dual paddle guaranteed homogeneous mixing in both phases; the optimal rotation speeds were 25 and 75 rpm for the aqueous and the octanol phase, respectively. A one-compartment PK model adequately characterised the data of fully dissolved APIs. A four-compartment PK model best quantified dissolution, precipitation, and partitioning, including of undissolved amounts, owing to realistic pH profiles. The improved dissolution model is a powerful tool for investigating the interplay between dissolution, precipitation, and partitioning of various poorly soluble APIs (BCS class II). In vivo-relevant PK parameters could be estimated by applying the respective PK models. Copyright © 2016 Elsevier B.V. All rights reserved.
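
    As a minimal sketch of the one-compartment kinetic description mentioned above, the following integrates first-order transfer of dissolved API from the aqueous into the octanol phase; the rate constant and time span are illustrative, not fitted values from the study.

```python
# Sketch of a one-compartment kinetic description of biphasic dissolution:
# first-order transfer of dissolved API from the aqueous phase into the
# octanol phase. All numerical values are illustrative only.
import numpy as np
from scipy.integrate import odeint

def one_compartment(y, t, k):
    """dA/dt = -k*A (aqueous), dO/dt = +k*A (octanol)."""
    aqueous, octanol = y
    return [-k * aqueous, k * aqueous]

t = np.linspace(0.0, 240.0, 100)                     # minutes
profile = odeint(one_compartment, [1.0, 0.0], t, args=(0.02,))
aqueous, octanol = profile.T                         # concentration curves
```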

  19. Probabilistic evaluation of damage potential in earthquake-induced liquefaction in a 3-D soil deposit

    NASA Astrophysics Data System (ADS)

    Halder, A.; Miller, F. J.

    1982-03-01

    A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake-induced liquefaction is proposed. The model is extended to consider three-dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk-based design concepts pertinent to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided, and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that, in assessing liquefaction potential, the uncertainties in the load parameters could be higher than those in the resistance parameters.

  20. Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training

    NASA Astrophysics Data System (ADS)

    Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek

    2016-07-01

    This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.

  1. Geostatistical characterisation of geothermal parameters for a thermal aquifer storage site in Germany

    NASA Astrophysics Data System (ADS)

    Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.

    2009-04-01

    The design of geothermal systems such as aquifer thermal energy storage (ATES) systems must be based on a comprehensive characterisation of all relevant parameters considered in the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters, and their distributions determine not only the technical design but also the economic viability of such systems. Hence, knowledge of the spatial distribution of these parameters is essential for the successful design and operation of such systems. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling site in Germany, where a long-term (> 1 year) thermal tracer test was performed. In this open system, the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible for a detailed characterisation of both the relevant hydraulic and thermal parameters. This poster shows the preliminary results obtained for the Esseling site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the tracer test and that parameter heterogeneity must be taken into account.

  2. Using HEC-HMS: Application to Karkheh river basin

    USDA-ARS's Scientific Manuscript database

    This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...

  3. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic approaches used to represent the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
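
    A minimal sketch of the Monte Carlo propagation step described in this abstract follows; `run_model` is a hypothetical stand-in for a Delft3D simulation, and the input priors are assumed for illustration.

```python
# Sketch of Monte Carlo uncertainty propagation: sample uncertain inputs
# from assumed priors, run the deterministic model, summarise the spread.
import numpy as np

rng = np.random.default_rng(42)

def run_model(offshore_height, boundary_flux):
    """Placeholder for one deterministic nearshore simulation."""
    return 0.6 * offshore_height + 0.1 * boundary_flux

outputs = []
for _ in range(1000):
    h = rng.normal(1.5, 0.2)     # offshore wave height [m], assumed prior
    q = rng.normal(0.0, 0.05)    # lateral boundary flux, assumed prior
    outputs.append(run_model(h, q))

outputs = np.asarray(outputs)
print(f"nearshore wave height: {outputs.mean():.3f} +/- {outputs.std():.3f} m")
```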

  4. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures, and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.

  5. Effects of selective attention on continuous opinions and discrete decisions

    NASA Astrophysics Data System (ADS)

    Si, Xia-Meng; Liu, Yun; Xiong, Fei; Zhang, Yan-Chao; Ding, Fei; Cheng, Hui

    2010-09-01

    Selective attention describes the fact that individuals have a preference for information according to their motivational involvement. Based on findings from social psychology, we propose an opinion interaction model to improve the modeling of individuals' interacting behaviors. Two parameters govern the probability of agents interacting with opponents: individual relevance and time-openness. It is found that large individual relevance and large time-openness advance the appearance of large clusters, whereas large individual relevance and small time-openness favor the lessening of extremism. We also apply the new model to identify factors leading to a successful product. Numerical simulations show that selective attention, especially individual relevance, cannot be ignored by launcher firms and information spreaders seeking the most successful promotion.

  6. DINA Model and Parameter Estimation: A Didactic

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    Cognitive and skills diagnosis models are psychometric models that have immense potential to provide rich information relevant for instruction and learning. However, wider applications of these models have been hampered by their novelty and the lack of commercially available software that can be used to analyze data from this psychometric…

  7. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...

  8. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood and improved exposure classification is required. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and to the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry, and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  9. Knowledge modeling in image-guided neurosurgery: application in understanding intraoperative brain shift

    NASA Astrophysics Data System (ADS)

    Cohen-Adad, Julien; Paul, Perrine; Morandi, Xavier; Jannin, Pierre

    2006-03-01

    During an image-guided neurosurgery procedure, the neuronavigation system is subject to inaccuracy because of anatomical deformations, which induce a gap between the preoperative images and their anatomical reality. Thus, the objective of many research teams is to quantify these deformations in order to update the preoperative images. Intraoperative anatomical deformations correspond to a complex spatio-temporal phenomenon. Our objective is to identify the parameters implicated in these deformations and to use these parameters as constraints for systems dedicated to updating preoperative images. In order to identify these deformation parameters, we followed the iterative methodology used for cognitive system conception: identification, conceptualization, formalization, implementation, and validation. A review of the state of the art on cortical deformations was carried out in order to identify relevant parameters probably involved in the deformations. As a first step, 30 parameters were identified and described following an ontological approach. They were formalized into a Unified Modeling Language (UML) class diagram. We implemented that model in a web-based application in order to fill a database. Two surgical cases have been studied so far. After entering enough surgical cases for data-mining purposes, we expect to identify the most relevant and influential parameters and to gain a better ability to understand the deformation phenomenon. This original approach is part of a global system aiming at quantifying and correcting anatomical deformations.

  10. Creating Simulated Microgravity Patient Models

    NASA Technical Reports Server (NTRS)

    Hurst, Victor; Doerr, Harold K.; Bacal, Kira

    2004-01-01

    The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).

  11. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is in the process of developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are the development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification, to identify and document the major features and assumptions of each conceptual model; the process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation, to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, carried out only for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) model complexity optimization, to identify the important or relevant parameters for the uncertainty analysis; b) characterization of parameter uncertainty, to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) propagation of uncertainty, to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework is the identification, enumeration, and documentation of all the assumptions: those made during conceptual model development, those required by the mathematical model, those required by the numerical model, those made during the spatial and temporal discretization process, those needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.

  12. A review of the physics and response models for burnout of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Orvis, W. J.; Khanaka, G. H.; Yee, J. H.

    1984-12-01

    Physical mechanisms that cause semiconductor devices to fail from electrical overstress--particularly, EMP-induced electrical stress--are described in light of the current literature and the authors' own research. A major concern is the cause and effects of second breakdown phenomena in p-n junction devices. Models of failure thresholds are evaluated for their inherent errors and for their ability to represent the relevant physics. Finally, the response models that relate electromagnetic stress parameters to appropriate failure-threshold parameters are discussed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham

    Year 1 Objectives (August 2016 – December 2016) – The original Independence model is a sequentially regressed set of parameters from numerous data sets in the Aspen Plus modeling framework. The immediate goal with the basic data model is to collect and evaluate those data sets relevant to the thermodynamic submodels (pure substance heat capacity, solvent mixture heat capacity, loaded solvent heat capacities, and volatility data). These data are informative for the thermodynamic parameters involved in both vapor-liquid equilibrium, and in the chemical equilibrium of the liquid phase.

  14. Using aerial images for establishing a workflow for the quantification of water management measures

    NASA Astrophysics Data System (ADS)

    Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg

    2017-04-01

    Quantified landscape characteristics, such as morphology, land use, or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. In order to minimize uncertainties and inconsistencies, input parameters for modeling should, if possible, be extracted mainly from one remote-sensing dataset. Here, aerial images were chosen because of their high spatial and spectral resolution, which permits the extraction of various model-relevant parameters, such as morphology, land use, or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models and multispectral classification and segmentation of land-use distribution maps to the mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user interface for the Soil and Water Assessment Tool (SWAT), was chosen. The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures, and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.

  16. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction.

    PubMed

    Lu, Jingtao; Goldsmith, Michael-Rock; Grulke, Christopher M; Chang, Daniel T; Brooks, Raina D; Leonard, Jeremy A; Phillips, Martin B; Hypes, Ethan D; Fair, Matthew J; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C; Tan, Yu-Mei

    2016-02-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals.
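
    A minimal sketch of the analogue-ranking step described above follows: chemicals are ranked by the correlation of their molecular-descriptor vectors with a target chemical. The descriptor matrix here is synthetic, and the paper's actual descriptors and correlation metric may differ.

```python
# Sketch of ranking knowledgebase chemicals by similarity to a target using
# correlation of pharmacokinetic-relevant molecular descriptors.
import numpy as np

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(307, 10))   # 307 chemicals x 10 descriptors
target = descriptors[0]                    # e.g. the ethylbenzene row

corr = np.array([np.corrcoef(row, target)[0, 1] for row in descriptors])
ranked = np.argsort(-corr)                 # most similar analogues first
print("closest analogues (row indices):", ranked[1:6])  # skip the target
```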

  17. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    PubMed Central

    Grulke, Christopher M.; Chang, Daniel T.; Brooks, Raina D.; Leonard, Jeremy A.; Phillips, Martin B.; Hypes, Ethan D.; Fair, Matthew J.; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C.; Tan, Yu-Mei

    2016-01-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706

  18. A hybrid model for river water temperature as a function of air temperature and discharge

    NASA Astrophysics Data System (ADS)

    Toffolon, Marco; Piccolroaz, Sebastiano

    2015-11-01

    Water temperature controls many biochemical and ecological processes in rivers, and theoretically depends on multiple factors. Here we formulate a model to predict daily averaged river water temperature as a function of air temperature and discharge, with the latter variable being more relevant in some specific cases (e.g., snowmelt-fed rivers, rivers impacted by hydropower production). The model uses a hybrid formulation characterized by a physically based structure associated with a stochastic calibration of the parameters. The interpretation of the parameter values allows for better understanding of river thermal dynamics and the identification of the most relevant factors affecting it. The satisfactory agreement of different versions of the model with measurements in three different rivers (root mean square error smaller than 1 °C at a daily timescale) suggests that the proposed model can represent a useful tool to synthetically describe medium- and long-term behavior, and capture the changes induced by varying external conditions.

  19. Coefficient of restitution in fractional viscoelastic compliant impacts using fractional Chebyshev collocation

    NASA Astrophysics Data System (ADS)

    Dabiri, Arman; Butcher, Eric A.; Nazari, Morad

    2017-02-01

    Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus, however, can be advantageous since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, penetration and separation depths are also studied.

  20. Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data

    ERIC Educational Resources Information Center

    Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2015-01-01

    A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…

  1. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation.

    PubMed

    Villaverde, Alejandro F; Banga, Julio R

    2017-11-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to a lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability.

  2. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional approaches to determining femoral fracture risk rely on measuring bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify regions and parameters relevant for the prediction of biomechanical properties of the proximal femur is presented. By using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
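
    A minimal sketch of the regression step described in this abstract follows, using Support Vector Regression to map appearance-model features to a biomechanical parameter; the features and targets below are synthetic placeholders, not the study's data.

```python
# Sketch of Support Vector Regression predicting a biomechanical parameter
# (e.g. failure load) from appearance-model features of the proximal femur.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))        # appearance-model features per subject
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=60)   # synthetic target

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```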

  3. Towards an atrio-ventricular delay optimization assessed by a computer model for cardiac resynchronization therapy

    NASA Astrophysics Data System (ADS)

    Ojeda, David; Le Rolle, Virginie; Tse Ve Koon, Kevin; Thebault, Christophe; Donal, Erwan; Hernández, Alfredo I.

    2013-11-01

    In this paper, lumped-parameter models of the cardiovascular system, the cardiac electrical conduction system, and a pacemaker are coupled to generate mitral flow profiles for different atrio-ventricular delay (AVD) configurations, in the context of cardiac resynchronization therapy (CRT). First, we perform a local sensitivity analysis of left ventricular and left atrial parameters on mitral flow characteristics, namely E and A wave amplitude, mitral flow duration, and mitral flow time integral. Additionally, a global sensitivity analysis over all model parameters is presented to screen for the most relevant parameters that affect the same mitral flow characteristics. Results provide insight into the influence of the left ventricle and left atrium on mitral flow profiles. This information will be useful for future parameter estimation of a model that could reproduce the mitral flow profiles and cardiovascular hemodynamics of patients undergoing AVD optimization during CRT.

  4. Phase I Contaminant Transport Parameters for the Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada Test Site, Nye County, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John McCord

    2007-09-01

    This report documents transport data and data analyses for Yucca Flat/Climax Mine CAU 97. The purpose of the data compilation and related analyses is to provide the primary reference to support parameterization of the Yucca Flat/Climax Mine CAU transport model. Specific task objectives were as follows: • Identify and compile currently available transport parameter data and supporting information that may be relevant to the Yucca Flat/Climax Mine CAU. • Assess the level of quality of the data and associated documentation. • Analyze the data to derive expected values and estimates of the associated uncertainty and variability. The scope of this document includes the compilation and assessment of data and information relevant to transport parameters for the Yucca Flat/Climax Mine CAU subsurface within the context of unclassified source-term contamination. Data types of interest include mineralogy, aqueous chemistry, matrix and effective porosity, dispersivity, matrix diffusion, matrix and fracture sorption, and colloid-facilitated transport parameters.

  5. Computational Design of Short Pulse Laser Driven Iron Opacity Measurements at Stellar-Relevant Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Madison E.

    Opacity is a critical parameter in the simulation of radiation transport in systems such as inertial confinement fusion capsules and stars. The resolution of current disagreements between solar models and helioseismological observations would benefit from experimental validation of theoretical opacity models. Overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.

  6. An adaptive control scheme for a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Yang, T. C.; Yang, J. C. S.; Kudva, P.

    1987-01-01

    The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
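
    A minimal sketch of the on-line least squares identification stage follows, written here as recursive least squares for an equivalent linear (ARX-type) model; the parameter count and forgetting factor are illustrative assumptions, not the paper's settings.

```python
# Sketch of on-line parameter identification via recursive least squares
# (RLS) for an equivalent linear model of the flexible manipulator.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step given regressor phi and new output sample y."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # prediction-error correction
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta, P

n_params = 4                                   # assumed ARX parameter count
theta = np.zeros(n_params)                     # unknown initial parameters
P = 1e3 * np.eye(n_params)                     # large initial covariance
# at each sampling instant: theta, P = rls_update(theta, P, phi, y)
# once theta has converged, hand control from the PID to the adaptive loop
```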

  7. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-01

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ ∝ |Vzz| and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.

  8. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass.

    PubMed

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-26

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant C(Q) ∝ |V(zz)| and the asymmetry parameter η(Q) that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.

  9. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates including the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate between its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
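
    A minimal sketch of the screening idea named in this abstract (Principal Component Analysis of local sensitivity values) follows; the sensitivity matrix is synthetic, and the importance heuristic is one common choice rather than the authors' exact procedure.

```python
# Sketch of PCA over a local sensitivity matrix pooled across canonical
# combustion cases, used to screen species for a skeletal mechanism.
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(50, 111))       # (cases/targets x species) sensitivities

S0 = S - S.mean(axis=0)              # centre, then PCA via SVD
_, sigma, Vt = np.linalg.svd(S0, full_matrices=False)

# retain enough principal components to cover 99% of the variance
n_pc = int(np.searchsorted(np.cumsum(sigma**2) / np.sum(sigma**2), 0.99)) + 1
# species importance: variance-weighted loading on the leading components
importance = (sigma[:n_pc, None] ** 2 * Vt[:n_pc] ** 2).sum(axis=0)
keep = np.argsort(-importance)[:38]  # a ~38-species skeletal selection
```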

  10. Method for extracting relevant electrical parameters from graphene field-effect transistors using a physical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscá, A., E-mail: alberto.bosca@upm.es; Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040; Pedrós, J.

    2015-01-28

    Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method allows parameters such as the mobility, the contact resistance, and the fixed charge to be calculated. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method's output values can be correlated with several issues arising during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metalization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
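
    A minimal sketch of the model-fitting extraction follows; the compact resistance expression used here is a commonly used GFET form and is an assumption, as are all numerical values, since the paper's exact physical model is not reproduced in the abstract.

```python
# Sketch of extracting GFET parameters (mobility, contact resistance,
# Dirac voltage, residual carrier density) by least-squares fitting of a
# compact resistance model to output characteristics.
import numpy as np
from scipy.optimize import curve_fit

Q_E = 1.602e-19          # elementary charge [C]
C_OX = 1.15e-7           # gate capacitance [F/cm^2] (thin oxide, assumed)
W_L = 2.0                # width-to-length ratio (assumed)

def gfet_resistance(vgs, mu, r_contact, v_dirac, n0):
    """Total source-drain resistance vs gate voltage."""
    n = np.sqrt(n0**2 + (C_OX * (vgs - v_dirac) / Q_E) ** 2)
    return r_contact + 1.0 / (W_L * mu * Q_E * n)

vgs = np.linspace(-2.0, 2.0, 81)
r_meas = gfet_resistance(vgs, 2000.0, 300.0, 0.2, 5e11)  # synthetic "data"

popt, _ = curve_fit(gfet_resistance, vgs, r_meas,
                    p0=[1000.0, 100.0, 0.0, 1e12])
mu, r_contact, v_dirac, n0 = popt    # extracted electrical parameters
```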

  11. Electromagnetic sunscreen model: design of experiments on particle specifications.

    PubMed

    Lécureux, Marie; Deumié, Carole; Enoch, Stefan; Sergent, Michelle

    2015-10-01

    We report a numerical study on sunscreen design and optimization. Thanks to the combined use of electromagnetic modeling and design of experiments, we are able to screen the most relevant parameters of mineral filters and to optimize sunscreens. Several electromagnetic modeling methods are used depending on the type of particles, density of particles, etc. Both the sun protection factor (SPF) and the UVB/UVA ratio are considered. We show that the design of experiments' model should include interactions between materials and other parameters. We conclude that the material of the particles is a key parameter for the SPF and the UVB/UVA ratio. Among the materials considered, none is optimal for both. The SPF is also highly dependent on the size of the particles.

  12. Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment

    DTIC Science & Technology

    2013-10-01

    presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in...multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of subset of...Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model

  13. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
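
    A minimal sketch of the entropy criterion described here follows: for each candidate experiment, the Shannon entropy of the outcomes predicted by a sample of probable models is computed, and the maximizer is chosen. The linear model family and the histogram binning are illustrative assumptions; the nested-entropy-sampling search itself is not reproduced.

```python
# Sketch of entropy-based experiment selection: score each candidate
# experiment by the Shannon entropy of its predicted outcome distribution.
import numpy as np

rng = np.random.default_rng(0)

def shannon_entropy(samples, bins=20):
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# posterior sample of model parameters (slope, intercept of a line)
models = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(500, 2))

candidate_x = np.linspace(0.0, 10.0, 50)   # parameterized experiment space
entropies = [shannon_entropy(models[:, 0] * x + models[:, 1])
             for x in candidate_x]
best = candidate_x[int(np.argmax(entropies))]  # most informative experiment
```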

  14. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  15. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation

    PubMed Central

    2017-01-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to a lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability. PMID:29186132

  16. Rational Design of Glucose-Responsive Insulin Using Pharmacokinetic Modeling.

    PubMed

    Bakh, Naveed A; Bisker, Gili; Lee, Michael A; Gong, Xun; Strano, Michael S

    2017-11-01

    A glucose responsive insulin (GRI) is a therapeutic that modulates its potency, concentration, or dosing in relation to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. Current GRI design lacks a theoretical basis on which to base fundamental design parameters such as glucose reactivity, dissociation constant or potency, and in vivo efficacy. In this work, an approach to mathematically model the relevant parameter space for effective GRIs is introduced, and design rules for linking GRI performance to therapeutic benefit are developed. Well-developed pharmacokinetic models of human glucose and insulin metabolism, coupled to a kinetic model representation of a freely circulating GRI, are used to determine the desired kinetic parameters and dosing for optimal glycemic control. The model examines a subcutaneous dose of GRI with kinetic parameters in an optimal range that results in successful glycemic control within prescribed constraints over a 24 h period. Additionally, it is demonstrated that the modeling approach can find GRI parameters that enable stable glucose levels that persist through a skipped meal. The results provide a framework for exploring the parameter space of GRIs, potentially without extensive, iterative in vivo animal testing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Integrating retention soil filters into urban hydrologic models - Relevant processes and important parameters

    NASA Astrophysics Data System (ADS)

    Bachmann-Machnik, Anna; Meyer, Daniel; Waldhoff, Axel; Fuchs, Stephan; Dittmer, Ulrich

    2018-04-01

    Retention Soil Filters (RSFs), a form of vertical-flow constructed wetland specifically designed for combined sewer overflow (CSO) treatment, have proven to be an effective tool to mitigate negative impacts of CSOs on receiving water bodies. Long-term hydrologic simulations are used to predict the emissions from urban drainage systems during the planning of stormwater management measures. So far, no universally accepted model for RSF simulation exists. When simulating hydraulics and water quality in RSFs, an appropriate level of detail must be chosen for reasonable balancing between model complexity and model handling, considering the level of uncertainty in the model input. The most crucial parameters determining the resultant uncertainties of the integrated sewer system and filter bed model were identified by evaluating a virtual drainage system with a Retention Soil Filter for CSO treatment. To determine reasonable parameter ranges for RSF simulations, data from 207 events at six full-scale RSF plants in Germany were analyzed. Data evaluation shows that even though different plants with varying loading and operation modes were examined, a simple model is sufficient to assess relevant suspended solids (SS), chemical oxygen demand (COD) and NH4 emissions from RSFs. Two conceptual RSF models with different degrees of complexity were assessed. These models were developed based on evaluation of data from full-scale RSF plants and column experiments. Incorporated model processes are ammonium adsorption in the filter layer and degradation during the subsequent dry weather period, filtration of SS and particulate COD (XCOD) to a constant background concentration, removal of solute COD (SCOD) at a constant rate during filter passage, and sedimentation of SS and XCOD in the filter overflow. XCOD, SS and ammonium loads as well as ammonium concentration peaks are discharged primarily via the RSF overflow, not passing through the filter bed. Uncertainties of the integrated simulation of the sewer system and RSF model mainly originate from the model parameters of the hydrologic sewer system model.
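
    A coarse, single-event rendering of that process chain might look like the following Python sketch; all rates, capacities, and the bookkeeping are illustrative assumptions rather than the calibrated model.

      # One CSO event through a conceptual RSF: filtration of SS to a background
      # concentration, constant-rate removal of solute COD, capacity-limited
      # ammonium adsorption. Values are placeholders, not calibrated parameters.
      def rsf_event(q_in, c_ss, c_scod, c_nh4,
                    ss_background=10.0,     # mg/L reached by filtration
                    scod_removal=0.5,       # fraction of SCOD removed in passage
                    nh4_capacity=2000.0):   # g NH4 adsorbed before breakthrough
          """Return effluent SS, SCOD, NH4 concentrations for one filtered event."""
          ss_out = min(c_ss, ss_background)             # filtration to background
          scod_out = c_scod * (1.0 - scod_removal)      # constant removal rate
          nh4_load = q_in * c_nh4                       # influent NH4 load in g (m3 x mg/L)
          nh4_out = max(0.0, nh4_load - nh4_capacity) / q_in   # effluent mg/L
          return ss_out, scod_out, nh4_out

      print(rsf_event(q_in=500.0, c_ss=120.0, c_scod=60.0, c_nh4=8.0))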

  18. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) are investigated; ADM1 serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a heuristically designed 'non-informative' and 'informative' experiment. The model structure of ADM1 has been modified by replacing parameters with parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimation difficult in practice. Copyright © 2017. Published by Elsevier Inc.

  19. Dynamical complexity in a mean-field model of human EEG

    NASA Astrophysics Data System (ADS)

    Frascoli, Federico; Dafilis, Mathew P.; van Veen, Lennaert; Bojak, Ingo; Liley, David T. J.

    2008-12-01

    A recently proposed mean-field theory of mammalian cortex rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn, with the use of inhibitory and excitatory neuronal populations (Liley et al 2002). This model is capable of producing a range of important human EEG (electroencephalogram) features such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007) and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and the character of this complex behaviour, and its relevance for EEG activity will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.

  20. EMPIRE: A code for nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A.

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparison to experimental data shows consistent agreement for all relevant channels.

  1. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  2. Estimates of atmospheric O2 in the Paleoproterozoic from paleosols

    NASA Astrophysics Data System (ADS)

    Kanzaki, Yoshiki; Murakami, Takashi

    2016-02-01

    A weathering model was developed to constrain the partial pressure of atmospheric O2 (PO2) in the Paleoproterozoic from the Fe records in paleosols. The model describes the Fe behavior in a weathering profile by dissolution/precipitation of Fe-bearing minerals, oxidation of dissolved Fe(II) to Fe(III) by oxygen, and transport of dissolved Fe by water flow, in steady state. The model calculates the ratio of the Fe(III)-(oxyhydr)oxides precipitated from the dissolved Fe(II) to the dissolved Fe(II) during weathering (ϕ), as a function of PO2. An advanced kinetic expression for Fe(II) oxidation by O2 was introduced into the model from the literature to calculate accurate ϕ-PO2 relationships. The model's validity is supported by the consistency of the calculated ϕ-PO2 relationships with those in the literature. The model can calculate PO2 for a given paleosol once a ϕ value and values of the other parameters relevant to weathering, namely, pH of porewater, partial pressure of carbon dioxide (PCO2), water flow, temperature, and O2 diffusion into soil, are obtained for the paleosol. These weathering-relevant parameters were scrutinized for individual Paleoproterozoic paleosols. The values of ϕ, temperature, pH, and PCO2 were obtained from the literature on the Paleoproterozoic paleosols. The value of the water flow parameter was constrained for each paleosol from the mass balance of Si between water and rock phases and the relationships between water saturation ratio and hydraulic conductivity. The value of the O2 diffusion parameter was calculated for each paleosol based on the equation for soil O2 concentration with the O2 transport parameters in the literature. Then, we conducted comprehensive PO2 calculations for individual Paleoproterozoic paleosols, reflecting all uncertainties in the weathering-relevant parameters. Consequently, robust estimates of PO2 in the Paleoproterozoic were obtained: 10^-7.1 to 10^-5.4 atm at ∼2.46 Ga, 10^-5.0 to 10^-2.5 atm at ∼2.15 Ga, 10^-5.2 to 10^-1.7 atm at ∼2.08 Ga, and more than 10^-4.6 to 10^-2.0 atm at ∼1.85 Ga. Comparison of the present PO2 estimates to those in the literature suggests that a drastic rise of oxygen would not have occurred at ∼2.4 Ga, instead supporting a moderately rapid rise of oxygen at ∼2.4 Ga followed by a gradual rise through the rest of the Paleoproterozoic.
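
    Operationally, the PO2 estimate amounts to inverting a monotone ϕ-PO2 relationship for each paleosol. The root-finding step can be sketched in Python with SciPy; the stand-in functional form for ϕ below (first-order oxidation competing with transport) is our assumption, not the paper's kinetic model.

      # Invert an assumed monotone phi(PO2) curve for PO2 by bracketed root finding.
      import numpy as np
      from scipy.optimize import brentq

      def phi(pO2, k_ox=1.0e4, k_flow=1.0):
          # stand-in: fraction of dissolved Fe(II) oxidized before being flushed away
          return k_ox * pO2 / (k_ox * pO2 + k_flow)

      phi_obs = 0.2                                  # illustrative value from an Fe record
      pO2 = brentq(lambda p: phi(p) - phi_obs, 1e-10, 1.0)
      print(f"inferred PO2 ~ 10^{np.log10(pO2):.1f} atm")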

  3. The role of structural parameters in DNA cyclization

    DOE PAGES

    Alexandrov, Ludmil B.; Bishop, Alan R.; Rasmussen, Kim O.; ...

    2016-02-04

    The intrinsic bendability of DNA plays an important role in a myriad of essential cellular mechanisms. The flexibility of a DNA fragment can be examined experimentally and computationally through its propensity for cyclization, quantified by the Jacobson-Stockmayer J factor. In this paper, we use a well-established coarse-grained three-dimensional model of DNA and seven distinct sets of experimentally and computationally derived conformational parameters of the double helix to evaluate the role of structural parameters in calculating DNA cyclization.

  4. Modeling the Morphogenesis of Epidermal Tissues on the Surface of a 3D Last

    NASA Astrophysics Data System (ADS)

    McCleery, W. Tyler; Crews, Sarah M.; Mashburn, David N.; Veldhuis, Jim; Brodland, G. Wayne; Hutson, M. Shane

    2014-03-01

    Embryogenesis in the fruit fly Drosophila melanogaster is coordinated by the interaction of cells in adjacent tissues. For some events of embryogenesis, e.g., dorsal closure, two-dimensional models have been sufficient to elucidate the relevant cell and tissue mechanics. Here, we describe a new three-dimensional cell-level finite element model for investigating germ band retraction - a morphogenetic event where one epidermal tissue, the germ band, initially wraps around the posterior end of the ellipsoidal embryo. This tissue then retracts with a mechanical assist from contraction of cells in a second epidermal tissue, the amnioserosa. To speed simulation run times and focus on the relevant tissues, we only model epidermal tissue interactions. Epidermal cells are defined as polygons constrained to lie on the surface of the ellipsoidal last, but have adjustable parameters such as edge tensions and cell pressures. Tissue movements are simulated by balancing these dynamic cell-level forces with viscous resistance and allowing cells to exchange neighbors. Our choice of modeling parameters is informed by in vivo measurements of cell-level forces using laser microsurgery. We use this model to investigate the multicellular stress fields in normal and aberrant development.

  5. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.

  6. Advanced Interactive Display Formats for Terminal Area Traffic Control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Shaviv, G. E.

    1999-01-01

    This research project deals with an on-line dynamic method for automated viewing-parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors with which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept referred to as an active display system was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), designed to deal with the multi-modal characteristics of the objective function and to exploit its time-evolving nature.
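
    The optimization loop itself can be quite compact. The Python sketch below shows a truncation-selection genetic algorithm of the general kind described; the perception-error objective and parameter bounds are stand-ins for the paper's spatial-perception model.

      # Minimal GA over three viewing parameters (azimuth, elevation, distance).
      # The objective is a placeholder for the visual spatial-perception error.
      import numpy as np

      rng = np.random.default_rng(1)

      def perception_error(p):
          az, el, dist = p
          return (np.sin(az) - 0.5) ** 2 + (el - 0.8) ** 2 + 0.1 * (dist - 3.0) ** 2

      lo, hi = np.array([0.0, 0.0, 1.0]), np.array([2 * np.pi, 1.5, 10.0])
      pop = rng.uniform(lo, hi, size=(40, 3))                    # initial population
      for gen in range(60):
          fit = np.array([perception_error(p) for p in pop])
          parents = pop[np.argsort(fit)[:20]]                    # truncation selection
          kids = parents[rng.integers(0, 20, 20)].copy()         # clone random parents
          kids += rng.normal(0.0, 0.05, kids.shape) * (hi - lo)  # Gaussian mutation
          pop = np.clip(np.vstack([parents, kids]), lo, hi)
      best = pop[np.argmin([perception_error(p) for p in pop])]
      print("optimized viewing parameters:", np.round(best, 3))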

  7. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  8. Evolving Spiking Neural Networks for Recognition of Aged Voices.

    PubMed

    Silva, Marco; Vellasco, Marley M B R; Cataldo, Edson

    2017-01-01

    The aging of the voice, known as presbyphonia, is a natural process that can cause great change in vocal quality of the individual. This is a relevant problem to those people who use their voices professionally, and its early identification can help determine a suitable treatment to avoid its progress or even to eliminate the problem. This work focuses on the development of a new model for the identification of aging voices (independently of their chronological age), using as input attributes parameters extracted from the voice and glottal signals. The proposed model, named Quantum binary-real evolving Spiking Neural Network (QbrSNN), is based on spiking neural networks (SNNs), with an unsupervised training algorithm, and a Quantum-Inspired Evolutionary Algorithm that automatically determines the most relevant attributes and the optimal parameters that configure the SNN. The QbrSNN model was evaluated in a database composed of 120 records, containing samples from three groups of speakers. The results obtained indicate that the proposed model provides better accuracy than other approaches, with fewer input attributes. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  9. A review of bias flow liners for acoustic damping in gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Lahiri, C.; Bake, F.

    2017-07-01

    The optimized design of bias flow liners is a key element for the development of low emission combustion systems in modern gas turbines and aero-engines. Research on bias flow liners has a fairly long history, concerning both the parameter dependencies and the methods used to model the acoustic behaviour of bias flow liners under a variety of bias and grazing flow conditions. In order to establish an overview of the state of the art, this paper provides a comprehensive review of the published research on bias flow liners and modelling approaches, with an extensive study of the most relevant parameters determining the acoustic behaviour of these liners. The paper starts with a historical description of available investigations aiming at the characterization of the bias flow absorption principle. This chronological compendium is extended by the recent and ongoing developments in this field. In a next step, the fundamental acoustic property of a bias flow liner, the wall impedance, is introduced, and the different derivations and formulations of this impedance, which yield the different published model descriptions, are explained and compared. Finally, a parametric study reveals the most relevant parameters for the acoustic damping behaviour of bias flow liners and how these are reflected by the various model representations. Although the general trend of the investigated acoustic behaviour is captured by the different models fairly well over a certain range of parameters, in the transition region between the resonance-dominated and the purely bias-flow-related regime all models lack the correct damping prediction. This seems to be connected to the proper implementation of the reactance as a function of bias flow Mach number.

  10. Mathematical Model Relating Uniaxial Compressive Behavior of Manufactured Sand Mortar to MIP-Derived Pore Structure Parameters

    PubMed Central

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned using different water-cement ratios and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amount of macropores, as well as the shape and size of micropores, are quantified using the mercury intrusion porosimetry (MIP) technique. Test results indicate that strains at peak stress and compressive strength decreased with increasing sand-cement ratio, due to insufficient binder to coat all of the sand. A compressive stress-strain model for normal concrete, extended to predict the stress-strain relationships of manufactured sand mortar, is verified and agrees well with experimental data. Furthermore, the stress-strain model constant is found to be influenced by threshold diameter, mean diameter, and the shape and size of micropores. A mathematical model relating the stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed. PMID:25133257

  11. Mathematical model relating uniaxial compressive behavior of manufactured sand mortar to MIP-derived pore structure parameters.

    PubMed

    Tian, Zhenghong; Bu, Jingwu

    2014-01-01

    The uniaxial compression response of manufactured sand mortars proportioned using different water-cement ratios and sand-cement ratios is examined. Pore structure parameters such as porosity, threshold diameter, mean diameter, and total amount of macropores, as well as the shape and size of micropores, are quantified using the mercury intrusion porosimetry (MIP) technique. Test results indicate that strains at peak stress and compressive strength decreased with increasing sand-cement ratio, due to insufficient binder to coat all of the sand. A compressive stress-strain model for normal concrete, extended to predict the stress-strain relationships of manufactured sand mortar, is verified and agrees well with experimental data. Furthermore, the stress-strain model constant is found to be influenced by threshold diameter, mean diameter, and the shape and size of micropores. A mathematical model relating the stress-strain model constants to the relevant pore structure parameters of manufactured sand mortar is developed.

  12. Aspects of metallic low-temperature transport in Mott-insulator/band-insulator superlattices: Optical conductivity and thermoelectricity

    NASA Astrophysics Data System (ADS)

    Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred

    2008-06-01

    We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.

  13. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to identify the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
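
    A minimal version of such a workflow is sketched below in Python. It assumes the SALib package and replaces the morphogenesis model with a cheap analytic stand-in so that the Sobol machinery (sampling, first-order and total-order indices) is visible.

      # Sobol global sensitivity analysis on a stand-in scalar output.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["adhesion", "chemotaxis", "motility"],  # illustrative CPM-like knobs
          "bounds": [[0.0, 1.0]] * 3,
      }
      X = saltelli.sample(problem, 1024)                    # N*(2D+2) parameter sets
      Y = X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2]  # model-output proxy
      S = sobol.analyze(problem, Y)
      for name, s1, st in zip(problem["names"], S["S1"], S["ST"]):
          print(f"{name:10s} S1={s1:+.2f}  ST={st:+.2f}")   # main vs total effects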

  14. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners

    PubMed Central

    Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.

    2015-01-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
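
    The sampling step at the core of such a recalibration can be sketched with a random-walk Metropolis chain. The Python toy below uses a single illustrative clearance-rate parameter and synthetic burden data in place of the three-compartment miner model.

      # Random-walk Metropolis sampling of one clearance-rate parameter.
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(1.0, 20.0, 15)                 # years of follow-up (synthetic)
      true_k = 0.15
      y_obs = 100.0 * np.exp(-true_k * t) + rng.normal(0.0, 2.0, t.size)

      def log_post(k):
          if not 0.0 < k < 1.0:
              return -np.inf                         # uniform prior on (0, 1)
          resid = y_obs - 100.0 * np.exp(-k * t)
          return -0.5 * np.sum((resid / 2.0) ** 2)   # Gaussian likelihood, sigma = 2

      k, lp, chain = 0.5, log_post(0.5), []
      for _ in range(20000):
          k_new = k + rng.normal(0.0, 0.02)          # random-walk proposal
          lp_new = log_post(k_new)
          if np.log(rng.uniform()) < lp_new - lp:    # Metropolis accept/reject
              k, lp = k_new, lp_new
          chain.append(k)
      chain = np.array(chain[5000:])                 # discard burn-in
      print(f"posterior k: {chain.mean():.3f} +/- {chain.std():.3f} (true {true_k})")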

  15. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
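
    The model-selection step reduces to comparing AIC values across refitted parameter subsets. A short Python sketch, in which the residual sums of squares for each subset are invented placeholders rather than values from the oilseed rape model:

      # Gaussian-error AIC comparison across candidate parameter subsets.
      import numpy as np

      def aic(rss, n_obs, k_params):
          return n_obs * np.log(rss / n_obs) + 2 * k_params

      n = 40                                             # number of observations (assumed)
      rss_by_subset = {1: 9.1, 2: 5.2, 3: 4.9, 5: 4.8}   # RSS after refitting top-k params
      for k, rss in rss_by_subset.items():
          print(f"top-{k} params refit: AIC = {aic(rss, n, k):.1f}")
      best = min(rss_by_subset, key=lambda k: aic(rss_by_subset[k], n, k))
      print(f"selected subset size: {best}")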

  16. Linking 1D coastal ocean modelling to environmental management: an ensemble approach

    NASA Astrophysics Data System (ADS)

    Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia

    2017-12-01

    The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint variations of forcing and parameterisation, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressures. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The variability of the model results is intended to convey, efficiently and comprehensibly, information on the uncertainty and reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.

  17. Phenotypic models of evolution and development: geometry as destiny.

    PubMed

    François, Paul; Siggia, Eric D

    2012-12-01

    Quantitative models of development that consider all relevant genes are typically difficult to fit to embryonic data alone and have many redundant parameters. Computational evolution supplies models of phenotype with relatively few variables and parameters, allowing the patterning dynamics to be reduced to a geometrical picture of how the state of a cell moves. The clock and wavefront model, which defines the phenotype of somitogenesis, can be represented as a sequence of two discrete dynamical transitions (bifurcations). The expression-time to space map for Hox genes and the posterior dominance rule are phenotypes that follow naturally from computational evolution without considering the genetics of Hox regulation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    PubMed

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and can be calibrated to different experimental data sets. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  19. General squark flavour mixing: constraints, phenomenology and benchmarks

    DOE PAGES

    De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...

    2015-11-19

    Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.

  20. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here in a normal-mode framework. In addition to imaging, the extraction of features from the time-reversed images is explored, with these features applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code in conjunction with a channel propagation model, and the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented, along with the feature extraction and target classification via the RVM.

  1. Determination of Littlest Higgs Model Parameters at the ILC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conley, John A.; Hewett, JoAnne; Le, My Phuong

    2005-07-27

    We examine the effects of the extended gauge sector of the Littlest Higgs model in high energy e+e- collisions. We find that the search reach in e+e- → f f̄ at a √s = 500 GeV International Linear Collider covers essentially the entire parameter region where the Littlest Higgs model is relevant to the gauge hierarchy problem. In addition, we show that this channel provides an accurate determination of the fundamental model parameters, to a precision of a few percent, provided that the LHC measures the mass of the heavy neutral gauge field. Additionally, we show that the couplings of the extra gauge bosons to the light Higgs can be observed from the process e+e- → Zh for a significant region of the parameter space. This allows for confirmation of the structure of the cancellation of the Higgs mass quadratic divergence and would verify the little Higgs mechanism.

  2. Uncertainty in temperature-based determination of time of death

    NASA Astrophysics Data System (ADS)

    Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan

    2018-03-01

    Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation, as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From this, and from prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.

  3. Strong competition between Θ_II loop-current order and d-wave charge order along the diagonal direction in a two-dimensional hot spot model

    NASA Astrophysics Data System (ADS)

    de Carvalho, Vanuildo S.; Kloss, Thomas; Montiel, Xavier; Freire, Hermann; Pépin, Catherine

    2015-08-01

    We study the fate of the so-called Θ_II loop-current order that breaks both time-reversal and parity symmetries in a two-dimensional hot spot model with antiferromagnetically mediated interactions, using Fermi surfaces relevant to the phenomenology of the cuprate superconductors. We start from a three-band Emery model describing the hopping of holes in the CuO2 plane that includes two hopping parameters t_pp and t_pd, local onsite Coulomb interactions U_d and U_p, and nearest-neighbor V_pd couplings between the fermions in the copper [Cu(3d_{x^2-y^2})] and oxygen [O(2p_x) and O(2p_y)] orbitals. By focusing on the lowest-energy band, we proceed to decouple the local interaction U_d of the Cu orbital in the spin channel using a Hubbard-Stratonovich transformation to arrive at the interacting part of the so-called spin-fermion model. We also decouple the nearest-neighbor interaction V_pd to introduce the order parameter of the Θ_II loop-current order. In this way, we are able to construct a consistent mean-field theory that describes the strong competition between the composite order parameter made of a quadrupole-density wave and d-wave pairing fluctuations proposed in Efetov et al. [Nat. Phys. 9, 442 (2013), 10.1038/nphys2641] and the Θ_II loop-current order parameter, which is argued to be relevant for explaining important aspects of the physics of the pseudogap phase displayed in the underdoped cuprates.

  4. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  5. Numerical Modelling of Effects of Biphasic Layers of Corrosion Products to the Degradation of Magnesium Metal In Vitro

    PubMed Central

    Ahmed, Safia K.; Ward, John P.; Liu, Yang

    2017-01-01

    Magnesium (Mg) is becoming increasingly popular for orthopaedic implant materials. Its mechanical properties are closer to bone than other implant materials, allowing for more natural healing under stresses experienced during recovery. Being biodegradable, it also eliminates the requirement of further surgery to remove the hardware. However, Mg rapidly corrodes in clinically relevant aqueous environments, compromising its use. This problem can be addressed by alloying the Mg, but challenges remain at optimising the properties of the material for clinical use. In this paper, we present a mathematical model to provide a systematic means of quantitatively predicting Mg corrosion in aqueous environments, providing a means of informing standardisation of in vitro investigation of Mg alloy corrosion to determine implant design parameters. The model describes corrosion through reactions with water, to produce magnesium hydroxide Mg(OH)2, and subsequently with carbon dioxide to form magnesium carbonate MgCO3. The corrosion products produce distinct protective layers around the magnesium block that are modelled as porous media. The resulting model of advection–diffusion equations with multiple moving boundaries was solved numerically using asymptotic expansions to deal with singular cases. The model has few free parameters, and it is shown that these can be tuned to predict a full range of corrosion rates, reflecting differences between pure magnesium or magnesium alloys. Data from practicable in vitro experiments can be used to calibrate the model’s free parameters, from which model simulations using in vivo relevant geometries provide a cheap first step in optimising Mg-based implant materials. PMID:29267244

  6. Questioning the Relevance of Model-Based Probability Statements on Extreme Weather and Future Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2007-12-01

    We question the relevance of climate-model based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher-resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically, we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support-relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic Similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate; this may help to explain the reluctance of experts to provide information on "parameter uncertainty." Probability statements about the real world are always conditioned on some information set; they may well be conditioned on "False", making them of little value to a rational decision maker. In other instances, they may be conditioned on physical assumptions not held by any of the modellers whose model output is being cast as a probability distribution. Our models will improve a great deal in the next decades, and our insight into the likely climate fifty years hence will improve: maintaining the credibility of the science and the coherence of science-based decision support, as our models improve, requires a clear statement of our current limitations. What evidence do we have that today's state-of-the-art models provide decision-relevant probability forecasts? What space and time scales do we currently have quantitative, decision-relevant information on for 2050? 2080?

  7. A new method for calculation of water saturation in shale gas reservoirs using Vp-to-Vs ratio and porosity

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Sun, Jianmeng; Zhang, Hongpan; Liu, Haitao; Chen, Xiangyang

    2018-02-01

    Total water saturation is an important parameter for calculating the free gas content of shale gas reservoirs. Owing to the limitations of the Archie formula and its extended solutions in zones rich in organic or conductive minerals, a new method was proposed to estimate total water saturation from the relationship between total water saturation, Vp-to-Vs ratio, and total porosity. Firstly, the ranges of the relevant parameters in the viscoelastic BISQ model in shale gas reservoirs were estimated. Then, the effects of the relevant parameters on the Vp-to-Vs ratio were simulated based on the partially saturated viscoelastic BISQ model. These parameters were total water saturation, total porosity, permeability, characteristic squirt-flow length, fluid viscosity, and sonic frequency. The simulation results showed that the main factors influencing the Vp-to-Vs ratio were total porosity and total water saturation. When the permeability and the characteristic squirt-flow length changed slightly for a particular shale gas reservoir, their influences could be neglected. An empirical equation for total water saturation with respect to total porosity and Vp-to-Vs ratio was then obtained from the experimental data. Finally, the new method was successfully applied to estimate total water saturation in a sequence formation of shale gas reservoirs. Practical applications have shown good agreement with the results calculated by the Archie model.
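
    The final empirical step is essentially a least-squares fit of total water saturation against total porosity and the Vp/Vs ratio. The Python sketch below uses synthetic data; the linear functional form and all coefficients are assumptions, not the paper's calibrated equation.

      # Fit Sw = a + b*(Vp/Vs) + c*phi to synthetic data by linear least squares.
      import numpy as np

      rng = np.random.default_rng(3)
      vpvs = rng.uniform(1.55, 1.75, 30)            # Vp/Vs ratio
      phi = rng.uniform(0.02, 0.10, 30)             # total porosity (fraction)
      sw = 0.2 + 0.8 * (vpvs - 1.5) - 2.0 * phi + rng.normal(0.0, 0.02, 30)

      A = np.column_stack([np.ones_like(vpvs), vpvs, phi])
      coef, *_ = np.linalg.lstsq(A, sw, rcond=None)
      sw_hat = A @ coef                             # predicted total water saturation
      print("fitted [a, b, c]:", np.round(coef, 3))
      print("RMS misfit:", round(float(np.sqrt(np.mean((sw - sw_hat) ** 2))), 4))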

  8. Cellular signaling identifiability analysis: a case study.

    PubMed

    Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo

    2010-05-21

    Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling, the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as of yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  9. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-01-01

    Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
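
    The classification step itself is standard supervised learning. A minimal rendering in Python (scikit-learn assumed), with a synthetic stand-in for the 18 POP2 parameters and a made-up failure region:

      # Train an SVM to predict run failure from parameter values; score by ROC AUC.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(4)
      X = rng.uniform(0.0, 1.0, size=(2000, 18))    # normalized parameter samples
      # failures concentrated where two mixing/viscosity knobs are jointly extreme
      y = ((X[:, 0] > 0.85) & (X[:, 3] < 0.15)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
      p_fail = clf.predict_proba(X_te)[:, 1]        # predicted failure probability
      print(f"validation AUC = {roc_auc_score(y_te, p_fail):.3f}")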

  10. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.

  11. LEOrbit: A program to calculate parameters relevant to modeling Low Earth Orbit spacecraft-plasma interaction

    NASA Astrophysics Data System (ADS)

    Marchand, R.; Purschke, D.; Samson, J.

    2013-03-01

    Understanding the physics of interaction between satellites and the space environment is essential in planning and exploiting space missions. Several computer models have been developed over the years to study this interaction. In all cases, simulations are carried out in the reference frame of the spacecraft, and effects such as charging and the formation of electrostatic sheaths and wakes are calculated for given conditions of the space environment. In this paper we present a program used to compute magnetic fields and a number of space plasma and space environment parameters relevant to Low Earth Orbit (LEO) spacecraft-plasma interaction modeling. Magnetic fields are obtained from the International Geomagnetic Reference Field (IGRF) and plasma parameters are obtained from the International Reference Ionosphere (IRI) model. All parameters are computed in the spacecraft frame of reference as a function of its six Keplerian elements. They are presented in a format that can be used directly in most spacecraft-plasma interaction models.

    Catalogue identifier: AENY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 270308
    No. of bytes in distributed program, including test data, etc.: 2323222
    Distribution format: tar.gz
    Programming language: FORTRAN 90
    Computer: Non-specific
    Operating system: Non-specific
    RAM: 7.1 MB
    Classification: 19, 4.14
    External routines: IRI, IGRF (included in the package)
    Nature of problem: Compute magnetic field components, direction of the sun, sun visibility factor, and approximate plasma parameters in the reference frame of a Low Earth Orbit satellite.
    Solution method: Orbit integration, calls to the IGRF and IRI libraries, and transformation of coordinates from the geocentric to the spacecraft reference frame.
    Restrictions: Low Earth orbits, altitudes between 150 and 2000 km.
    Running time: Approximately two seconds to parameterize a full orbit with 1000 points.

  12. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of modes considered in our context. To address this, a mode-selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.

  13. The muon g - 2 for low-mass pseudoscalar Higgs in the general 2HDM

    NASA Astrophysics Data System (ADS)

    Cherchiglia, Adriano; Stöckinger, Dominik; Stöckinger-Kim, Hyejung

    2018-05-01

    The two-Higgs-doublet model is a simple and attractive extension of the Standard Model. It offers a possible explanation of the large deviation between theory and experiment in the muon g - 2 in an interesting parameter region: a light pseudoscalar Higgs A, a large Yukawa coupling to τ-leptons, and general, non-type-II Yukawa couplings are preferred. This parameter region is explored, experimental limits on the relevant Yukawa couplings are obtained, and the maximum possible contributions to the muon g - 2 are discussed. Presented at the Workshop on Flavour Changing and Conserving Processes (FCCP2017), September 2017.

  14. Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.

    PubMed

    Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders

    2013-03-12

    In this paper a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two-step process where, initially, polymer parameters are obtained from a previously published pure-polymer dissolution model. The results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of an HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the limited amount of usable pure-polymer dissolution data. In addition to the case study, a sensitivity analysis of the model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process.
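
    For orientation, the transport core of such models can be reduced to a few lines: explicit finite-difference Fickian diffusion out of a slab with a perfect-sink surface. The geometry, diffusivity, and boundary treatment below are illustrative assumptions; the published model additionally couples solvent uptake, polymer swelling, and dissolution.

```python
import numpy as np

# Half-thickness (m), grid points, diffusivity (m^2/s) -- illustrative values.
L, nx, D = 1.0e-3, 101, 1.0e-10
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                 # satisfies the explicit stability limit
c = np.ones(nx)                      # uniform initial drug loading (normalized)
m0 = c.sum() * dx
for _ in range(20000):
    c[-1] = 0.0                      # perfect-sink condition at the surface
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0*c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2.0*(c[1] - c[0]) / dx**2   # no-flux symmetry at the slab centre
    c += D * dt * lap
released = 1.0 - c.sum()*dx / m0
print(f"fraction released after {20000*dt/3600:.1f} h: {released:.2f}")
```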

  15. Reconstructing the hidden states in time course data of stochastic models.

    PubMed

    Zimmer, Christoph

    2015-11-01

    Parameter estimation is central to analyzing models in systems biology. The relevance of stochastic modeling in the field is increasing, and so is the need for tailored parameter estimation techniques. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the 'multiple shooting for stochastic systems' method, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. The article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and on a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, the article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least-squares parameter estimation methods with respect to computational time.
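
    The estimator itself involves multiple shooting and is beyond a short snippet, but the kind of data it targets is easy to generate: an exact Gillespie simulation of a stochastic Lotka-Volterra model, in which extinction or explosion can occur, as in the scenarios above. Rate constants and initial populations below are arbitrary illustrative choices.

```python
import numpy as np

def gillespie_lv(c1, c2, c3, prey0, pred0, t_max, rng):
    """Exact stochastic simulation (Gillespie) of Lotka-Volterra dynamics.

    Reactions and propensities: prey birth c1*x, predation c2*x*y
    (prey -1, predator +1), predator death c3*y."""
    t, x, y = 0.0, prey0, pred0
    times, prey, pred = [t], [x], [y]
    while t < t_max:
        rates = np.array([c1*x, c2*x*y, c3*y])
        total = rates.sum()
        if total == 0.0:               # both populations extinct
            break
        t += rng.exponential(1.0/total)
        r = rng.choice(3, p=rates/total)
        if r == 0:   x += 1
        elif r == 1: x -= 1; y += 1
        else:        y -= 1
        times.append(t); prey.append(x); pred.append(y)
    return np.array(times), np.array(prey), np.array(pred)

rng = np.random.default_rng(1)
t, x, y = gillespie_lv(0.5, 0.0025, 0.3, prey0=100, pred0=100, t_max=50.0, rng=rng)
print(f"{t.size} events; final prey={x[-1]}, predators={y[-1]}")
```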

  16. Prediction of flunixin tissue residue concentrations in livers from diseased cattle.

    PubMed

    Wu, H; Baynes, R E; Tell, L A; Riviere, J E

    2013-12-01

    Flunixin, a widely used non-steroidal anti-inflammatory drug, has been a leading cause of violative drug residues in cattle. The objective of this analysis was to explore how changes in pharmacokinetic (PK) parameters that may be associated with diseased animals affect the predicted liver residue of flunixin in cattle. Monte Carlo simulations of liver residues of flunixin were performed using the PK model structure and relevant PK parameter estimates from a previously published population PK model for flunixin in cattle. The magnitude of change in each PK parameter that resulted in violative residues in more than one percent of a simulated cattle population was compared. In this regard, elimination clearance and volume of distribution affected withdrawal times. Pathophysiological factors that can change these parameters may contribute to the occurrence of violative residues of flunixin.
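
    A minimal sketch of this kind of Monte Carlo exercise: a one-compartment model with lognormal population variability, asking what fraction of animals would exceed a liver tolerance at slaughter when clearance is hypothetically reduced by disease. All doses, tolerances, and PK values below are placeholders, not the published population PK estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
dose = 2.2        # mg/kg dose (placeholder, not the label value)
tol = 0.125       # mg/kg liver tolerance (placeholder)

# Lognormal population variability in clearance and volume (hypothetical
# medians and spreads, not the published population PK estimates).
cl = rng.lognormal(np.log(0.10), 0.3, n)   # L/h/kg
vd = rng.lognormal(np.log(1.0), 0.2, n)    # L/kg

def violative_fraction(withdrawal_h, cl_scale=1.0):
    """Fraction of animals whose residue proxy exceeds tolerance at slaughter."""
    conc = dose / vd * np.exp(-(cl*cl_scale/vd) * withdrawal_h)
    return np.mean(conc > tol)

for scale in (1.0, 0.5):   # healthy vs. hypothetically halved clearance
    frac = violative_fraction(96.0, cl_scale=scale)
    print(f"clearance x{scale}: {100*frac:.2f}% violative at 96 h")
```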

  17. Chemometric analysis of correlations between electronic absorption characteristics and structural and/or physicochemical parameters for ampholytic substances of biological and pharmaceutical relevance.

    PubMed

    Judycka-Proma, U; Bober, L; Gajewicz, A; Puzyn, T; Błażejowski, J

    2015-03-05

    Forty ampholytic compounds of biological and pharmaceutical relevance were subjected to chemometric analysis based on unsupervised and supervised learning algorithms. This enabled relations to be found between empirical spectral characteristics derived from electronic absorption data and structural and physicochemical parameters predicted by quantum chemistry methods or phenomenological relationships based on additivity rules. It was found that the energies of long wavelength absorption bands are correlated through multiparametric linear relationships with parameters reflecting the bulkiness features of the absorbing molecules as well as their nucleophilicity and electrophilicity. These dependences enable the quantitative analysis of spectral features of the compounds, as well as a comparison of their similarities and certain pharmaceutical and biological features. Three QSPR models to predict the energies of long-wavelength absorption in buffers with pH=2.5 and pH=7.0, as well as in methanol, were developed and validated in this study. These models can be further used to predict the long-wavelength absorption energies of untested substances (if they are structurally similar to the training compounds).

  18. Tunable Collagen I Hydrogels for Engineered Physiological Tissue Micro-Environments

    PubMed Central

    Antoine, Elizabeth E.; Vlachos, Pavlos P.; Rylander, Marissa N.

    2015-01-01

    Collagen I hydrogels are commonly used to mimic the extracellular matrix (ECM) for tissue engineering applications. However, the ability to design collagen I hydrogels that match the properties of physiological tissues has remained elusive, primarily due to the lack of quantitative correlations between multiple fabrication parameters and the resulting material properties. This study aims to enable the informed design and fabrication of collagen hydrogels in order to reliably and reproducibly mimic a variety of soft tissues. We developed empirical predictive models relating fabrication parameters to material and transport properties. These models were obtained through extensive experimental characterization of these properties, which include compression modulus, pore and fiber diameter, and diffusivity. Fabrication parameters were varied within biologically relevant ranges and included collagen concentration, polymerization pH, and polymerization temperature. The data obtained from this study elucidate previously unknown fabrication-property relationships, while the resulting equations facilitate informed a priori design of collagen hydrogels with prescribed properties. By enabling hydrogel fabrication by design, this study has the potential to greatly enhance the utility and relevance of collagen hydrogels for developing physiological tissue microenvironments for a wide range of tissue engineering applications. PMID:25822731

  19. Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.

    2008-01-01

    A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-L" analysis (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in Gibbs sampling and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework: 1) integration with hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (i.e., beam uncertainties, noise estimation, calibration errors, and others).
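
    The CMB application alternates draws of the sky signal, foreground parameters, and power spectrum from their full conditional distributions. The toy sketch below shows the same alternating-conditional mechanics on a two-parameter normal model with conjugate conditionals; the priors and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 2.0, size=200)       # invented observations
n, ybar = data.size, data.mean()

# Gibbs sampling for (mu, tau): flat prior on mu, Gamma(a0, b0) prior on
# the precision tau; both full conditionals are available in closed form.
a0, b0 = 1.0, 1.0
mu, tau, samples = 0.0, 1.0, []
for it in range(5000):
    mu = rng.normal(ybar, 1.0/np.sqrt(n*tau))               # mu | tau, data
    tau = rng.gamma(a0 + n/2.0,
                    1.0/(b0 + 0.5*np.sum((data - mu)**2)))  # tau | mu, data
    if it >= 1000:                                          # drop burn-in
        samples.append((mu, tau))
mu_s, tau_s = np.array(samples).T
print(f"posterior means: mu={mu_s.mean():.2f}, "
      f"sigma={(1.0/np.sqrt(tau_s)).mean():.2f}")
```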

  20. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    NASA Astrophysics Data System (ADS)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. The availability of high-resolution remote sensing data and high-performance computing now makes it possible to implement them in operational applications, e.g. risk zoning and protection structure design. One key parameter for rock propagation is surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts in the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees, ... as seen from the incoming rock) relative to the average slope are estimated. The use of high-resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock/ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects, whereas digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, using a DTM with a resolution smaller than the block size might add little relevance while increasing the computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D, in order to derive guidelines for roughness estimation by field experts. First, a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) for which the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more complex hillside is simulated by combining three components: a) a global trend (planar surface), b) local systematic components (sine waves), and c) random roughness (Gaussian, zero-mean noise). The parameters for simulating these components are estimated for three typical rockfall terrain scenarios: soft soil, fine scree and coarse scree, based on expert knowledge and available airborne and terrestrial laser scanning data. For each scenario, the reference terrain is created and used to compute input data for RockyFor3D simulations at different scales, i.e. DTMs with resolutions from 0.5 m to 20 m and associated roughness parameters. The subsequent analysis mainly focuses on the sensitivity of the simulations, both in terms of run-out envelope and kinetic energy distribution. Guidelines drawn from the results are expected to help experts handle the scale issue when integrating remote sensing data and field measurements of roughness in rockfall simulations.
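
    The three-component reference terrain described above (planar trend, sine waves, Gaussian noise) is straightforward to synthesize; a sketch follows, with amplitudes, wavelengths, and noise levels as arbitrary placeholders rather than the calibrated scenario values.

```python
import numpy as np

def synthetic_hillside(res, size=100.0, slope_deg=35.0,
                       wave_amp=0.3, wave_len=8.0, noise_sd=0.05, seed=0):
    """Reference terrain on a res-metre grid: trend + sine waves + noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, size, res)
    X, Y = np.meshgrid(x, x)
    trend = -np.tan(np.radians(slope_deg)) * X            # a) planar trend
    waves = wave_amp * (np.sin(2*np.pi*X/wave_len)        # b) systematic part
                        + np.sin(2*np.pi*Y/wave_len))
    noise = rng.normal(0.0, noise_sd, X.shape)            # c) random roughness
    return trend + waves + noise

for res in (0.5, 2.0, 10.0):    # DTM resolutions fed to the rockfall model
    dtm = synthetic_hillside(res)
    print(f"res={res:>4} m: grid {dtm.shape}, relief {dtm.max()-dtm.min():.1f} m")
```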

  1. The fate of the littlest Higgs model with T-parity under 13 TeV LHC data

    NASA Astrophysics Data System (ADS)

    Dercks, Daniel; Moortgat-Pick, Gudrid; Reuter, Jürgen; Shim, So Young

    2018-05-01

    We exploit all available LHC data at center-of-mass energies of 8 and 13 TeV in searches for physics beyond the Standard Model. We scrutinize the allowed parameter space of Little Higgs models with the concrete symmetry of T-parity by providing comprehensive analyses of all relevant production channels of heavy vectors, top partners, heavy quarks and heavy leptons, and all phenomenologically relevant decay channels. Constraints on the model, particularly on the symmetry-breaking scale f, are derived from the signatures of jets plus missing energy or leptons plus missing energy. Besides the symmetric case, we also study the case of T-parity violation. Furthermore, we give an extrapolation to the LHC high-luminosity phase at 14 TeV.

  2. Accessing and Utilizing Remote Sensing Data for Vectorborne Infectious Diseases Surveillance and Modeling

    NASA Technical Reports Server (NTRS)

    Kiang, Richard; Adimi, Farida; Kempler, Steven

    2008-01-01

    Background: The transmission of vectorborne infectious diseases is often influenced by environmental, meteorological and climatic parameters, because the vector life cycle depends on these factors. For example, the geophysical parameters relevant to malaria transmission include precipitation, surface temperature, humidity, elevation, and vegetation type. Because these parameters are routinely measured by satellites, remote sensing is an important technological tool for predicting, preventing, and containing a number of vectorborne infectious diseases, such as malaria, dengue, West Nile virus, etc. Methods: A variety of NASA remote sensing data can be used for modeling vectorborne infectious disease transmission. We will discuss both the well known and less known remote sensing data, including Landsat, AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer), TRMM (Tropical Rainfall Measuring Mission), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), EO-1 (Earth Observing One) ALI (Advanced Land Imager), and the SIESIP (Seasonal to Interannual Earth Science Information Partner) dataset. Giovanni is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center. It provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data. After remote sensing data are obtained, a variety of techniques, including generalized linear models and artificial-intelligence-oriented methods, can be used to model the dependency of disease transmission on these parameters. Results: The processes of accessing, visualizing and utilizing precipitation data using Giovanni, and acquiring other data at additional websites, are illustrated. Malaria incidence time series for some parts of Thailand and Indonesia are used to demonstrate that malaria incidences are reasonably well modeled with generalized linear models and artificial-intelligence-based techniques. Conclusions: Remote sensing data relevant to the transmission of vectorborne infectious diseases can be conveniently accessed at NASA and some other websites. These data are useful for vectorborne infectious disease surveillance and modeling.
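
    As a sketch of the generalized-linear-model step, the snippet below fits a Poisson regression with a log link to synthetic incidence counts by direct likelihood maximization; the covariates and coefficients are invented stand-ins for the satellite-derived variables discussed above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Synthetic monthly covariates standing in for the satellite-derived
# variables (values and coefficients are invented).
n = 120
precip = rng.gamma(2.0, 1.0, n)
temp = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), precip, temp])
beta_true = np.array([0.5, 0.35, -0.2])
cases = rng.poisson(np.exp(X @ beta_true))   # simulated incidence counts

def negll(beta):
    """Poisson GLM negative log-likelihood with a log link (constant dropped)."""
    eta = X @ beta
    return np.exp(eta).sum() - cases @ eta

fit = minimize(negll, x0=np.zeros(3), method='BFGS')
print("estimated coefficients:", np.round(fit.x, 3))  # near beta_true
```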

  3. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but such studies remain very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter values were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ_NEE) for the different ensemble members, from ≈ 2-3 g C m^-2 yr^-1 (with uncertain parameters) to ≈ 45 g C m^-2 yr^-1 (C3 grass) and ≈ 75 g C m^-2 yr^-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and to enhanced/reduced drought stress related to the perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ_NEE ≈ 4.0-13.5 g C m^-2 yr^-1 with perturbed parameters, meteorological forcings and initial states). We conclude that LAI and NEE uncertainty with CLM is clearly underestimated if uncertain meteorological forcings and initial states are not taken into account.

  4. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

    Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge for those models remains the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically-based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils, by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (w_i) corresponding to the representative macropores and micropores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
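
    The leverage of pairing water with a non-Newtonian fluid can be seen from the single-pore discharge laws: Hagen-Poiseuille flow of a Newtonian fluid scales as R^4, while a power-law fluid scales as R^(3+1/n), so the two fluids weight macropores and micropores differently. The sketch below evaluates both laws for assumed pore radii and fluid parameters (all values illustrative); the actual model inverts such relations numerically for the effective radii and flow fractions w_i.

```python
import numpy as np

def q_newtonian(R, dpdl, mu):
    """Hagen-Poiseuille discharge through one cylindrical pore of radius R."""
    return np.pi * dpdl * R**4 / (8.0 * mu)

def q_powerlaw(R, dpdl, m, n):
    """Discharge of a power-law fluid (tau = m*gamma_dot**n) in the same pore."""
    return (np.pi * n / (3.0*n + 1.0)) * R**(3.0 + 1.0/n) \
           * (dpdl / (2.0*m))**(1.0/n)

dpdl = 1.0e4                      # pressure gradient, Pa/m (illustrative)
mu_w = 1.0e-3                     # water viscosity, Pa s
m, n = 0.5, 0.6                   # hypothetical shear-thinning parameters

for name, R in [("macropore", 5.0e-4), ("micropore", 5.0e-5)]:
    qw, qp = q_newtonian(R, dpdl, mu_w), q_powerlaw(R, dpdl, m, n)
    # The water/power-law discharge ratio differs between pore sizes, which
    # is the contrast the two-fluid inversion exploits.
    print(f"{name} R={R:.0e} m: Q_water={qw:.2e}, Q_power-law={qp:.2e}, "
          f"ratio={qp/qw:.3g}")
```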

  5. Global Precipitation Measurement: Methods, Datasets and Applications

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Turk, Francis J.; Petersen, Walt; Hou, Arthur Y.; Garcia-Ortega, Eduardo; Machado, Luiz A. T.; Angelis, Carlos F.; Salio, Paola; Kidd, Chris; Huffman, George J.

    2011-01-01

    This paper reviews the many aspects of precipitation measurement that are relevant to providing an accurate global assessment of this important environmental parameter. Methods discussed include ground data, satellite estimates and numerical models. First, the methods for measuring, estimating, and modeling precipitation are discussed. Then, the most relevant datasets gathering precipitation information from those three sources are presented. The third part of the paper illustrates a number of the many applications of those measurements and databases. The aim of the paper is to organize the many links and feedbacks between precipitation measurement, estimation and modeling, indicating the uncertainties and limitations of each technique in order to identify areas requiring further attention, and to show the limits within which datasets can be used.

  6. Causal Inference in Retrospective Studies.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Rubin, Donald B.

    1988-01-01

    The problem of drawing causal inferences from retrospective case-controlled studies is considered. A model for causal inference in prospective studies is applied to retrospective studies. Limitations of case-controlled studies are formulated concerning relevant parameters that can be estimated in such studies. A coffee-drinking/myocardial…

  7. Another look at confidence intervals: Proposal for a more relevant and transparent approach

    NASA Astrophysics Data System (ADS)

    Biller, Steven D.; Oser, Scott M.

    2015-02-01

    The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low event numbers where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy which recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on model parameter values and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.

  8. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    PubMed

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed from data collected from stakeholders and ministries through a survey that determined the resulting effects and the measures adopted. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, considering process parameters of wastewater treatment plants (WWTPs); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of the model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared with measured parameters and reported expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level.

  9. Hydrological Relevant Parameters from Remote Sensing - Spatial Modelling Input and Validation Basis

    NASA Astrophysics Data System (ADS)

    Hochschild, V.

    2012-12-01

    This keynote paper demonstrates how multisensor remote sensing data are used as spatial input for mesoscale hydrological modeling as well as for sophisticated validation purposes. The tasks of water resources management are addressed, as well as the role of remote sensing in regional catchment modeling. Parameters derived from remote sensing discussed in this presentation include land cover, topographic information from digital elevation models, biophysical vegetation parameters, surface soil moisture, evapotranspiration estimates, lake level measurements, determination of snow-covered area, lake ice cycles, soil erosion type, mass wasting monitoring, sealed area, and flash flood estimation. The current capabilities of satellite and airborne systems are discussed, along with data integration into GIS and hydrological models, scaling issues, and quality assessment. The presentation provides an overview of the author's research examples from Germany, Tibet and Africa (Ethiopia, South Africa) as well as other international research activities. Finally, the paper gives an outlook on upcoming sensors and summarizes the possibilities of remote sensing in hydrology.

  10. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  11. Parameter estimation in a human operator describing function model for a two-dimensional tracking task

    NASA Technical Reports Server (NTRS)

    Vanlunteren, A.

    1977-01-01

    A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two-dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross-couplings. From the Fourier coefficients of the relevant signals, the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that the crossover model applies for the elements in the main loops. A weak linear cross-coupling existed with the same dynamics as the elements in the main loops but with a negative sign.

  12. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid-scale ocean vertical mixing processes. These parameters are typically estimated using Earth System Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to arbitrary parameter settings, and a Markov chain Monte Carlo method to estimate the posterior probability density function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters, and we quantify the uncertainty in parameter estimates stemming from climate variability and from model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections with the UVic ESCM model in future studies.
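
    A compact illustration of the Bayesian machinery described (emulator aside): a Metropolis-Hastings sampler for a two-parameter toy response model under uniform priors and a Gaussian likelihood. The forward model, priors, and observation error are invented for the sketch; the study calibrates an emulated 3D Earth system model instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, t):
    """Toy stand-in for the emulated climate model: equilibrium response
    'sens' approached with timescale 'tau' (both hypothetical)."""
    sens, tau = theta
    return sens * (1.0 - np.exp(-t / tau))

t = np.linspace(1.0, 100.0, 40)
obs = forward((3.0, 30.0), t) + rng.normal(0.0, 0.1, t.size)  # synthetic data
sigma = 0.1

def log_post(theta):
    if not (0.0 < theta[0] < 10.0 and 1.0 < theta[1] < 200.0):  # uniform priors
        return -np.inf
    return -0.5 * np.sum(((obs - forward(theta, t)) / sigma)**2)

theta, lp, chain = np.array([1.0, 10.0]), log_post([1.0, 10.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.1, 1.0])       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                       # discard burn-in
print("posterior means:", chain.mean(axis=0).round(2))
```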

  13. Self consistent solution of the tJ model in the overdoped regime

    NASA Astrophysics Data System (ADS)

    Shastry, B. Sriram; Hansen, Daniel

    2013-03-01

    Detailed results from a recent microscopic theory of extremely correlated Fermi liquids, applied to the t-J model in two dimensions, are presented. The theory is to second order in a parameter λ and is valid in the overdoped regime of the t-J model. The solution reported here is from Ref, where the relevant equations given in Ref are self-consistently solved for the square lattice. Thermodynamic variables and the resistivity are displayed at various densities and temperatures T for two sets of band parameters. The momentum distribution function and the renormalized electronic dispersion, its width and asymmetry, are reported along the principal directions of the zone. The optical conductivity is calculated. The electronic spectral function A(k, ω), probed in ARPES, is detailed with different elastic scattering parameters to account for the distinction between laser and synchrotron ARPES. A high-(binding-)energy waterfall feature, sensitively dependent on the band hopping parameter t', is noted. This work was supported by DOE under Grant No. FG02-06ER46319.

  14. Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior

    NASA Astrophysics Data System (ADS)

    Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco

    2015-02-01

    We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.

  15. Contact tracing of tuberculosis: a systematic review of transmission modelling studies.

    PubMed

    Begun, Matt; Newall, Anthony T; Marks, Guy B; Wood, James G

    2013-01-01

    The WHO recommended intervention of Directly Observed Treatment, Short-course (DOTS) appears to have been less successful than expected in reducing the burden of TB in some high prevalence settings. One strategy for enhancing DOTS is incorporating active case-finding through screening contacts of TB patients, as is widely used in low-prevalence settings. Predictive models that incorporate population-level effects on transmission provide one means of predicting the impacts of such interventions. We aim to identify all TB transmission modelling studies addressing contact tracing and to describe and critically assess their modelling assumptions, parameter choices and relevance to policy. We searched the MEDLINE, SCOPUS, COMPENDEX, Google Scholar and Web of Science databases for relevant English language publications up to February 2012. Of the 1285 studies identified, only 5 studies met our inclusion criteria of models of TB transmission dynamics in human populations designed to incorporate contact tracing as an intervention. Detailed implementation of contact processes was only present in two studies, while only one study presented a model for a high prevalence, developing world setting. Some use of relevant data for parameter estimation was made in each study; however, validation of the predicted impact of interventions was not attempted in any of the studies. Despite a large body of literature on TB transmission modelling, few published studies incorporate contact tracing. There is considerable scope for future analyses to make better use of data and to apply individual-based models to facilitate more realistic patterns of infectious contact. Combined with a focus on high burden settings, this would greatly increase the potential for models to inform the use of contact tracing as a TB control policy. Our findings highlight the potential for collaborative work between clinicians, epidemiologists and modellers to gather the data required to enhance model development and validation and hence better inform future public health policy.

  16. Contact Tracing of Tuberculosis: A Systematic Review of Transmission Modelling Studies

    PubMed Central

    Begun, Matt; Newall, Anthony T.; Marks, Guy B.; Wood, James G.

    2013-01-01

    The WHO recommended intervention of Directly Observed Treatment, Short-course (DOTS) appears to have been less successful than expected in reducing the burden of TB in some high prevalence settings. One strategy for enhancing DOTS is incorporating active case-finding through screening contacts of TB patients, as is widely used in low-prevalence settings. Predictive models that incorporate population-level effects on transmission provide one means of predicting the impacts of such interventions. We aim to identify all TB transmission modelling studies addressing contact tracing and to describe and critically assess their modelling assumptions, parameter choices and relevance to policy. We searched the MEDLINE, SCOPUS, COMPENDEX, Google Scholar and Web of Science databases for relevant English language publications up to February 2012. Of the 1285 studies identified, only 5 studies met our inclusion criteria of models of TB transmission dynamics in human populations designed to incorporate contact tracing as an intervention. Detailed implementation of contact processes was only present in two studies, while only one study presented a model for a high prevalence, developing world setting. Some use of relevant data for parameter estimation was made in each study; however, validation of the predicted impact of interventions was not attempted in any of the studies. Despite a large body of literature on TB transmission modelling, few published studies incorporate contact tracing. There is considerable scope for future analyses to make better use of data and to apply individual-based models to facilitate more realistic patterns of infectious contact. Combined with a focus on high burden settings, this would greatly increase the potential for models to inform the use of contact tracing as a TB control policy. Our findings highlight the potential for collaborative work between clinicians, epidemiologists and modellers to gather the data required to enhance model development and validation and hence better inform future public health policy. PMID:24023742

  17. Modeling of Mitochondria Bioenergetics Using a Composable Chemiosmotic Energy Transduction Rate Law: Theory and Experimental Validation

    PubMed Central

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html. PMID:21931590

  18. Modeling of mitochondria bioenergetics using a composable chemiosmotic energy transduction rate law: theory and experimental validation.

    PubMed

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html.

  19. A reduced-order, single-bubble cavitation model with applications to therapeutic ultrasound

    PubMed Central

    Kreider, Wayne; Crum, Lawrence A.; Bailey, Michael R.; Sapozhnikov, Oleg A.

    2011-01-01

    Cavitation often occurs in therapeutic applications of medical ultrasound such as shock-wave lithotripsy (SWL) and high-intensity focused ultrasound (HIFU). Because cavitation bubbles can affect an intended treatment, it is important to understand the dynamics of bubbles in this context. The relevant context includes very high acoustic pressures and frequencies as well as elevated temperatures. Relative to much of the prior research on cavitation and bubble dynamics, such conditions are unique. To address the relevant physics, a reduced-order model of a single, spherical bubble is proposed that incorporates phase change at the liquid-gas interface as well as heat and mass transport in both phases. Based on the energy lost during the inertial collapse and rebound of a millimeter-sized bubble, experimental observations were used to tune and test model predictions. In addition, benchmarks from the published literature were used to assess various aspects of model performance. Benchmark comparisons demonstrate that the model captures the basic physics of phase change and diffusive transport, while it is quantitatively sensitive to specific model assumptions and implementation details. Given its performance and numerical stability, the model can be used to explore bubble behaviors across a broad parameter space relevant to therapeutic ultrasound. PMID:22088026

  20. A reduced-order, single-bubble cavitation model with applications to therapeutic ultrasound.

    PubMed

    Kreider, Wayne; Crum, Lawrence A; Bailey, Michael R; Sapozhnikov, Oleg A

    2011-11-01

    Cavitation often occurs in therapeutic applications of medical ultrasound such as shock-wave lithotripsy (SWL) and high-intensity focused ultrasound (HIFU). Because cavitation bubbles can affect an intended treatment, it is important to understand the dynamics of bubbles in this context. The relevant context includes very high acoustic pressures and frequencies as well as elevated temperatures. Relative to much of the prior research on cavitation and bubble dynamics, such conditions are unique. To address the relevant physics, a reduced-order model of a single, spherical bubble is proposed that incorporates phase change at the liquid-gas interface as well as heat and mass transport in both phases. Based on the energy lost during the inertial collapse and rebound of a millimeter-sized bubble, experimental observations were used to tune and test model predictions. In addition, benchmarks from the published literature were used to assess various aspects of model performance. Benchmark comparisons demonstrate that the model captures the basic physics of phase change and diffusive transport, while it is quantitatively sensitive to specific model assumptions and implementation details. Given its performance and numerical stability, the model can be used to explore bubble behaviors across a broad parameter space relevant to therapeutic ultrasound.
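
    For context, the classical Rayleigh-Plesset equation, which such reduced-order models extend with phase change and heat/mass transport, can be integrated in a few lines; the driving pressure and bubble size below are illustrative choices, not the paper's benchmark cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, S = 998.0, 1.0e-3, 0.0725   # water: density, viscosity, surface tension
p0, kappa = 101325.0, 1.4            # ambient pressure (Pa), polytropic exponent
R0 = 1.0e-6                          # equilibrium bubble radius (m)
pg0 = p0 + 2*S/R0                    # equilibrium gas pressure inside the bubble

def p_inf(t):
    """Driving pressure: 1 MHz, 120 kPa amplitude (illustrative values)."""
    return p0 - 120e3*np.sin(2*np.pi*1e6*t)

def rhs(t, y):
    # Rayleigh-Plesset: R*Rddot + 1.5*Rdot^2 = (p_g - p_inf - 2S/R - 4mu*Rdot/R)/rho
    R, Rdot = y
    pg = pg0 * (R0 / R)**(3*kappa)                   # polytropic gas law
    Rddot = ((pg - p_inf(t) - 2*S/R - 4*mu*Rdot/R) / rho
             - 1.5*Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 5e-6), [R0, 0.0], rtol=1e-8, atol=[1e-12, 1e-6],
                max_step=1e-9)
R = sol.y[0]
print(f"radius range: {R.min()/R0:.2f} R0 to {R.max()/R0:.2f} R0")
```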

  1. In Silico Prediction of Toxicokinetic Parameters for Environmentally Relevant Chemicals for Risk-Based Prioritization

    EPA Science Inventory

    Toxicokinetic (TK) models can address an important component of chemical risk assessments by helping bridge the gap between chemical exposure and measured toxicity endpoints. The metabolic clearance rate (CLint) and fraction of a chemical unbound by plasma proteins (Fub) are crit...

  2. Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?

    PubMed

    Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P

    2018-04-01

    What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability: a mathematical property defined solely on the basis of the model structure within a hypothetical ideal experiment, determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
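
    The notion is easy to demonstrate numerically: in the toy observation model below, two parameters enter only through their product, so distinct parameter pairs generate identical outputs, and no data from this ideal experiment can separate them. The model is a hypothetical example, not one of the lactation models analyzed in the paper.

```python
import numpy as np

def output(a, b, k, t):
    """Toy observation model y(t) = a*b*exp(-k*t): the parameters a and b
    enter only through their product, so only a*b and k are structurally
    identifiable, no matter how rich or noise-free the data."""
    return a * b * np.exp(-k * t)

t = np.linspace(0.0, 5.0, 50)
y1 = output(2.0, 3.0, 0.7, t)
y2 = output(1.0, 6.0, 0.7, t)        # different (a, b), identical product
print("max |y1 - y2| =", np.max(np.abs(y1 - y2)))   # exactly 0
```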

  3. Documentation of a ground hydrology parameterization for use in the GISS atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, J. D.; Aleano, J.; Bock, P.

    1978-01-01

    The moisture transport processes at the earth's surface relevant to the general circulation model (GCM) are presented. The ground hydrology model (GHM) parameterizations considered are: (1) ground wetness and soil parameters; (2) precipitation; (3) evapotranspiration; (4) surface storage of snow and ice; and (5) runoff. The computational aspects of the GHM are described using computer programs and flow charts.

  4. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
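
    For readers unfamiliar with ERGM simulation, the sketch below runs a plain Metropolis edge-toggle sampler on the simplest ERGM, with only an edge-count statistic, whose stationary edge density is known in closed form. The auxiliary-parameter scheme of the paper modifies this kind of basic chain, which is shown here only as the baseline.

```python
import numpy as np

def sample_ergm_edges(n_nodes, theta, n_steps, rng):
    """Metropolis sampler for the edges-only ERGM, P(g) ~ exp(theta*edges(g)).

    Each step toggles one dyad; the move is accepted with probability
    min(1, exp(theta * change_in_edge_count))."""
    adj = np.zeros((n_nodes, n_nodes), dtype=int)
    for _ in range(n_steps):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        delta = -1 if adj[i, j] else 1
        if np.log(rng.random()) < theta * delta:
            adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj

rng = np.random.default_rng(0)
adj = sample_ergm_edges(30, theta=-1.0, n_steps=50_000, rng=rng)
density = adj.sum() / (30 * 29)          # each edge counted twice, / n(n-1)
print(f"simulated density {density:.3f} vs exact logistic(-1) = {1/(1+np.e):.3f}")
```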

  5. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k_0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k_0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can easily be adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
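
    As a reference point for the Sobol indices mentioned, the sketch below computes first-order indices with the standard pick-and-freeze (Saltelli-style) estimator, applied to the Ishigami benchmark function rather than to the groundwater transport model.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Benchmark function with known first-order Sobol indices."""
    return (np.sin(x[:, 0]) + a*np.sin(x[:, 1])**2
            + b*x[:, 2]**4*np.sin(x[:, 0]))

rng = np.random.default_rng(0)
N, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # pick-and-freeze column i
    Si = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S{i+1} = {Si:.3f}")           # analytic: 0.314, 0.442, 0.000
```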

  6. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
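
    A minimal sketch of the workflow on synthetic data: fit a PLS regression on correlated covariates, then convert predictions into advisory decisions with a tunable threshold below the regulatory standard. The data, threshold, and standard are invented placeholders, not the Great Lakes site values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic beach data: 8 correlated covariates driven by 2 latent factors,
# with a log-scale FIB response (all values invented).
n, p = 200, 8
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.3*rng.normal(size=(n, p))
y = 2.0 + 1.5*latent[:, 0] - 0.8*latent[:, 1] + 0.5*rng.normal(size=n)

pls = PLSRegression(n_components=2).fit(X[:150], y[:150])
pred = pls.predict(X[150:]).ravel()

# Tuning parameter: a decision threshold below the regulatory standard,
# trading false negatives against false positives.
standard, threshold = 4.0, 3.5
print(f"posted advisories: {(pred > threshold).sum()}, "
      f"true exceedances: {(y[150:] > standard).sum()}")
```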

  7. Spectral Induced Polarization approaches to characterize reactive transport parameters and processes

    NASA Astrophysics Data System (ADS)

    Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.

    2017-12-01

    For almost a decade, geophysical methods have been explored for the characterization of reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes such as adsorption. In this presentation, we discuss results from several recent studies performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For samples with well-sorted granulometry, we find that grain size (as well as permeability, in some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved ions in the pore fluid (Na+, Cu2+, Zn2+, Pb2+) owing to their different adsorption behavior. We also show the relevance of our approach for characterizing fluid/matrix interactions for various organic contents (wetting and non-wetting oils). Finally, we discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to using SIP information to constrain mechanistic flow and transport models, and the potential to apply some of these approaches at the field scale.

  8. Optimal experimental design for parameter estimation of a cell signaling model.

    PubMed

    Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-11-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
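
    The core criterion behind such designs can be sketched compactly: choose measurements that maximize the determinant of the Fisher information (D-optimality) at the current parameter estimates, which minimizes the predicted volume of the parameter confidence region. The exponential-decay model below is a hypothetical stand-in for the PIP(3) signaling model, and the paper optimizes full concentration-time input profiles rather than sample times.

```python
import numpy as np
from itertools import combinations

def sens(t, A, k):
    """Jacobian of y = A*exp(-k*t) with respect to (A, k)."""
    return np.column_stack([np.exp(-k*t), -A*t*np.exp(-k*t)])

def d_criterion(ts, A=2.0, k=0.5, sigma=0.1):
    """Determinant of the Fisher information for candidate sample times ts."""
    S = sens(np.asarray(ts), A, k)
    return np.linalg.det(S.T @ S / sigma**2)

# Pick the 4 sample times (out of 25 candidates) maximizing D-optimality.
candidates = np.linspace(0.2, 10.0, 25)
best = max(combinations(candidates, 4), key=d_criterion)
print("D-optimal sample times:", np.round(best, 2))
```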

  9. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  10. Surgeon Reported Outcome Measure for Spine Trauma: An International Expert Survey Identifying Parameters Relevant for the Outcome of Subaxial Cervical Spine Injuries.

    PubMed

    Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur

    2016-12-15

    International web-based survey. To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon reported outcome measure to reflect the clinicians' perspective adequately. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to indicate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and the Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant both for the short term and long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter for the long term, whereas five parameters were identified for the short term. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses according to each world region, or spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored on the most relevant parameters to evaluate and predict outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon reported outcome measure, which will be a helpful tool in research and clinical practice. Level of Evidence: 4.

  11. The System Dynamics Model for Development of Organic Agriculture

    NASA Astrophysics Data System (ADS)

    Rozman, Črtomir; Škraba, Andrej; Kljajić, Miroljub; Pažek, Karmen; Bavec, Martina; Bavec, Franci

    2008-10-01

    Organic agriculture is the agricultural system with the highest environmental value, and it has strategic importance at the national level that goes beyond the interests of the agricultural sector. In this paper we address the development of an organic farming simulation model based on system dynamics (SD) methodology. The system incorporates the relevant variables that affect the development of organic farming. A group decision support system (GDSS) was used to identify the most relevant variables for construction of the causal loop diagram and further model development. The model seeks answers to strategic questions related to the level of organically utilized area, levels of production and crop selection in a long-term dynamic context, and will be used to simulate different policy scenarios for organic farming and their impact on economic and environmental parameters of organic production at an aggregate level.

  12. Cherenkov-like emission of Z bosons

    NASA Astrophysics Data System (ADS)

    Colladay, D.; Noordmans, J. P.; Potting, R.

    2017-07-01

    We study CPT and Lorentz violation in the electroweak gauge sector of the Standard Model in the context of the Standard-Model Extension (SME). In particular, we show that any non-zero value of a certain relevant Lorentz violation parameter that is thus far unbounded by experiment would imply that for sufficiently large energies one of the helicity modes of the Z boson should propagate with spacelike four-momentum and become stable against decay in vacuum. In this scenario, Cherenkov-like radiation of Z bosons by ultra-high-energy cosmic-ray protons becomes possible. We deduce a bound on the Lorentz violation parameter from the observational data on ultra-high energy cosmic rays.

  13. An open simulation approach to identify chances and limitations for vulnerable road user (VRU) active safety.

    PubMed

    Seiniger, Patrick; Bartels, Oliver; Pastor, Claus; Wisch, Marcus

    2013-01-01

    It is commonly agreed that active safety will have a significant impact on reducing accident figures for pedestrians and probably also bicyclists. However, chances and limitations for active safety systems have only been derived based on accident data and the current state of the art, based on proprietary simulation models. The objective of this article is to investigate these chances and limitations by developing an open simulation model. This article introduces a simulation model incorporating accident kinematics, driving dynamics, driver reaction times, pedestrian dynamics, performance parameters of different autonomous emergency braking (AEB) generations, as well as legal and logical limitations. The level of detail of available pedestrian accident data is limited. Relevant variables, especially the timing of the pedestrian's appearance and the pedestrian's moving speed, are estimated using assumptions. The model in this article uses the fact that a pedestrian and a vehicle in an accident must have been in the same spot at the same time and defines the impact position as a relevant accident parameter, which is usually available from accident data. The calculations done within the model identify the possible timing available for braking by an AEB system as well as the possible speed reduction for different accident scenarios and system configurations. The simulation model identifies the lateral impact position of the pedestrian as a significant parameter for system performance, and the system layout is designed to brake when the accident becomes unavoidable by the vehicle driver. Scenarios with a pedestrian running from behind an obstruction are the most demanding scenarios and will very likely never be avoidable for all vehicle speeds due to physical limits. Scenarios with an unobstructed walking person will very likely be treatable for a wide speed range by next generation AEB systems.
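
    The kinematic bookkeeping behind such speed-reduction estimates can be illustrated with a deliberately simplified sketch; the deceleration and braking windows below are assumed values, not figures from the article.

```python
# Minimal kinematic sketch (not the authors' model): given the time
# available for braking before the projected impact, estimate the
# vehicle's speed at the impact point under constant deceleration.

def impact_speed(v0, t_available, decel=9.0):
    """Speed (m/s) at the projected impact point after braking for
    t_available seconds at a constant deceleration decel (m/s^2)."""
    return max(v0 - decel * t_available, 0.0)

v0 = 50 / 3.6               # 50 km/h expressed in m/s
for t in (0.2, 0.5, 1.0):   # plausible braking windows, illustrative only
    v = impact_speed(v0, t)
    print(f"braking window {t:.1f} s -> impact speed {v * 3.6:.1f} km/h")
```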

  14. KABAM Version 1.0 User's Guide and Technical Documentation - Appendix C - Explanation of Default Values Representing Biotic Characteristics of Aquatic Ecosystem, Including Food Web Structure

    EPA Pesticide Factsheets

    Information relevant to KABAM and explanations of default parameters used to define the 7 trophic levels. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.

  15. In Silico Prediction of Toxicokinetic Parameters for Environmentally Relevant Chemicals with Application to Risk-Based Prioritization

    EPA Science Inventory

    Toxicokinetic (TK) models can help bridge the gap between chemical exposure and measured toxicity endpoints, thereby addressing an important component of chemical risk assessments. The fraction of a chemical unbound by plasma proteins (Fub) and metabolic clearance rate (CLint) ar...

  16. The Dynamics Of Plucking

    NASA Astrophysics Data System (ADS)

    Griffel, D. H.

    1994-08-01

    A mathematical model of the excitation of a vibrating system by a plucking action is studied. The mechanism is of the type used in musical instruments. The effectiveness of the mechanism is computed over a considerable range of the relevant parameters. As the speed of the pluck is increased, with other parameters held fixed, the amplitude of the vibration produced rises to a maximum and then decreases to zero. The optimum speed increases with the stiffness of the plectrum. Other aspects of the behaviour of the system are discussed.

  17. Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment

    DTIC Science & Technology

    2014-04-01

    Quantitative output provides an extensive set of data, but we have chosen to present the most relevant parameters that are reflected in the following... have been harvested. All harvested samples have been scanned by µCT and evaluated for multiple parameters. All samples have been mechanically... Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model. J. Biomed. Mater. Res. B Appl. Biomater. 2005;74(2):712-7. 4. De Ranieri, A., Virdi, A. S

  18. Qualitative and temporal reasoning in engine behavior analysis

    NASA Technical Reports Server (NTRS)

    Dietz, W. E.; Stamps, M. E.; Ali, M.

    1987-01-01

    Numerical simulation models, engine experts, and experimental data are used to generate qualitative and temporal representations of abnormal engine behavior. Engine parameters monitored during operation are used to generate qualitative and temporal representations of actual engine behavior. Similarities between the representations of failure scenarios and the actual engine behavior are used to diagnose fault conditions which have already occurred or are about to occur; to increase the monitoring system's surveillance of relevant engine parameters; and to predict likely future engine behavior.

  19. Surgical stent planning: simulation parameter study for models based on DICOM standards.

    PubMed

    Scherer, S; Treichel, T; Ritter, N; Triebel, G; Drossel, W G; Burgert, O

    2011-05-01

    Endovascular Aneurysm Repair (EVAR) can be facilitated by a realistic simulation model of stent-vessel interaction. Therefore, numerical feasibility and integrability in the clinical environment were evaluated. The finite element method was used to determine the necessary simulation parameters for stent-vessel interaction in EVAR. Input variables and result data of the simulation model were examined for their standardization using DICOM supplements. The study identified four essential parameters for the stent-vessel simulation: blood pressure, intima constitution, plaque occurrence and the material properties of vessel and plaque. Output quantities such as the radial force of the stent and the contact pressure between stent and vessel can help the surgeon to evaluate implant fixation and sealing. The model geometry can be saved with DICOM "Surface Segmentation" objects and the upcoming "Implant Templates" supplement. Simulation results can be stored using the "Structured Report". A standards-based general simulation model for optimizing stent-graft selection may be feasible. At present, there are limitations due to the specification of individual vessel material parameters and for simulating the proximal fixation of stent-grafts with hooks. Simulation data with clinical relevance for documentation and presentation can be stored using existing or new DICOM extensions.

  20. Using Poisson-gamma model to evaluate the duration of recruitment process when historical trials are available.

    PubMed

    Minois, Nathan; Lauwers-Cances, Valérie; Savy, Stéphanie; Attal, Michel; Andrieu, Sandrine; Anisimov, Vladimir; Savy, Nicolas

    2017-10-15

    At the design stage of a clinical trial, a question of paramount interest is how long it will take to recruit a given number of patients. Modelling the recruitment dynamics is the necessary step to answer this question. The Poisson-gamma model provides a very convenient, flexible and realistic approach. This model allows the trial duration to be predicted with very good accuracy using data collected at an interim time. A natural question arises: how can the parameters of the recruitment model be evaluated before the trial begins? The question is harder to handle as there are no recruitment data available for this trial. However, if similar completed trials exist, it is appealing to use data from these trials to investigate the feasibility of the recruitment process. In this paper, the authors explore the recruitment data of two similar clinical trials (Intergroupe Francais du Myélome 2005 and 2009). It is shown that the natural idea of plugging the historical rates estimated from the completed trial into the same centres of the new trial for predicting recruitment is not a relevant strategy. In contrast, using the parameters of a gamma distribution of the rates estimated from the completed trial in the recruitment dynamics model of the new trial provides reasonable predictive properties with relevant confidence intervals. Copyright © 2017 John Wiley & Sons, Ltd.
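
    A minimal simulation of the Poisson-gamma idea, assuming hypothetical gamma parameters carried over from a completed trial, might look as follows (all values are illustrative only, not figures from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_recruitment(alpha, beta, n_centres, target, n_sim=10000):
    """Simulate recruitment duration under a Poisson-gamma model:
    each centre's daily rate is drawn from Gamma(alpha, scale=beta);
    the time to the `target`-th arrival of the pooled Poisson process
    is then Gamma(target, scale=1/total_rate)."""
    durations = np.empty(n_sim)
    for i in range(n_sim):
        rates = rng.gamma(alpha, beta, size=n_centres)
        total = rates.sum()
        durations[i] = rng.gamma(target, 1.0 / total)
    return durations

d = simulate_recruitment(alpha=2.0, beta=0.05, n_centres=80, target=500)
print(f"median {np.median(d):.0f} days, 95% interval "
      f"[{np.percentile(d, 2.5):.0f}, {np.percentile(d, 97.5):.0f}] days")
```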

  1. Edge instability in incompressible planar active fluids

    NASA Astrophysics Data System (ADS)

    Nesbitt, David; Pruessner, Gunnar; Lee, Chiu Fan

    2017-12-01

    Interfacial instability is highly relevant to many important biological processes. A key example arises in wound healing experiments, which observe that an epithelial layer with an initially straight edge does not heal uniformly. We consider the phenomenon in the context of active fluids. Improving upon the approximation used by Zimmermann, Basan, and Levine [Eur. Phys. J.: Spec. Top. 223, 1259 (2014), 10.1140/epjst/e2014-02189-7], we perform a linear stability analysis on a two-dimensional incompressible hydrodynamic model of an active fluid with an open interface. We categorize the stability of the model and find that for experimentally relevant parameters, fingering instability is always absent in this minimal model. Our results point to the crucial role of density variation in the fingering instability in tissue regeneration.

  2. Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data

    NASA Astrophysics Data System (ADS)

    Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan

    2010-05-01

    There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration, and are often bound to the epoch for which they were calibrated. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models largely independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations often rely on site-specific experiments, and in some other cases parameters might not be measured directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some areas of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for parameters, and that results in distributions of parameters which can be used to infer uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data-gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated number of fires and vegetation moisture dynamics.
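
    The probabilistic calibration described here can be illustrated, under strong simplifications, with a generic random-walk Metropolis sampler on a toy stand-in model; nothing below reproduces the actual SPITFIRE components or EO data.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(theta):
    # Placeholder "process model": burned fraction vs. two parameters.
    a, b = theta
    x = np.linspace(0, 1, 20)
    return a * x / (b + x)

def log_likelihood(theta, obs, obs_sigma):
    """Gaussian misfit between the toy model and 'observations'."""
    return -0.5 * np.sum(((model(theta) - obs) / obs_sigma) ** 2)

# Synthetic "observations" standing in for EO calibration data.
true_theta = np.array([0.8, 0.3])
obs = model(true_theta) + rng.normal(0, 0.02, 20)

# Random-walk Metropolis over the two parameters (flat positive prior).
theta = np.array([0.5, 0.5])
ll = log_likelihood(theta, obs, 0.02)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02, 2)
    if (prop > 0).all():
        ll_prop = log_likelihood(prop, obs, 0.02)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta.copy())

post = np.array(samples[1000:])          # discard burn-in
print("posterior means:", post.mean(axis=0))
print("posterior stds: ", post.std(axis=0))
```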

  3. Study of market model describing the contrary behaviors of informed and uninformed agents: Being minority and being majority

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Xia; Liao, Hao; Medo, Matus; Shang, Ming-Sheng; Yeung, Chi Ho

    2016-05-01

    In this paper we analyze the contrary behaviors of informed and uninformed investors, and construct a competition model with two groups of agents, namely agents who intend to stay in the minority and those who intend to stay in the majority. We find two kinds of competition, inter- and intra-group. The model exhibits periodic fluctuations. The average distribution of strategies shows a prominent central peak, which is relevant to the peaked, fat-tailed character of the price change distribution in stock markets. Furthermore, in the modified model the tolerance time parameter makes the agents diversified. Finally, we compare the strategy distribution with the price change distribution in a real stock market, and we conclude that contrary behavior rules and the tolerance time parameter are indeed valid in the description of the market model.
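
    A stripped-down round structure for such a mixed minority/majority population might look like the following sketch; the update rules and constants are illustrative and do not reproduce the authors' model (in particular, the strategy tables and tolerance time are omitted).

```python
import numpy as np

rng = np.random.default_rng(2)

n_minority, n_majority, rounds = 50, 50, 200
# Each agent holds a +1/-1 position; contrarians (minority players)
# profit on the minority side, trend-followers on the majority side.
pos = rng.choice([-1, 1], size=n_minority + n_majority)
prices = [0.0]
for _ in range(rounds):
    excess = pos.sum()                       # excess demand moves the price
    prices.append(prices[-1] + 0.01 * excess)
    minority_side = -np.sign(excess) if excess != 0 else rng.choice([-1, 1])
    # Agents on the "wrong" side for their type switch with probability 0.5.
    for i in range(len(pos)):
        wants_minority = i < n_minority
        on_minority = pos[i] == minority_side
        if wants_minority != on_minority and rng.uniform() < 0.5:
            pos[i] *= -1
print("price range:", min(prices), max(prices))
```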

  4. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables that results. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
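
    One plausible reading of the spherical-coordinates reformulation is the squared-coordinates map below, which turns unconstrained angles into positive weights that sum to one; this is a sketch assuming that particular parameterization, not the paper's exact formulation.

```python
import numpy as np

def angles_to_simplex(angles):
    """Map angles to positive weights summing to one via squared
    spherical coordinates on the unit sphere:
    q1 = cos^2(t1), q2 = sin^2(t1)cos^2(t2), ...,
    qn = sin^2(t1)...sin^2(t_{n-1})."""
    q, prod = [], 1.0
    for t in angles:
        q.append(prod * np.cos(t) ** 2)
        prod *= np.sin(t) ** 2
    q.append(prod)
    return np.array(q)

q = angles_to_simplex([0.7, 1.1])   # 2 angles -> 3 grazing preferences
print(q, q.sum())                   # components are positive, sum is 1.0
```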

  5. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, often fail to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is constructed by analogy with the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
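
    The basic structure of the black hole algorithm can be sketched as follows, run here on a toy two-parameter misfit rather than an actual SP forward model; the event-horizon rule follows the commonly cited formulation, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def bha(objective, bounds, n_stars=30, iters=200):
    """Minimal black hole algorithm: stars drift toward the best
    solution (the 'black hole'); stars crossing the event horizon are
    replaced by fresh random candidates."""
    lo, hi = bounds
    stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
    for _ in range(iters):
        f = np.array([objective(s) for s in stars])
        bh, f_bh = stars[np.argmin(f)].copy(), f.min()
        # event-horizon radius from relative fitness
        radius = abs(f_bh) / (np.abs(f).sum() + 1e-12)
        for i in range(n_stars):
            stars[i] += rng.uniform() * (bh - stars[i])
            if np.linalg.norm(stars[i] - bh) < radius:
                stars[i] = rng.uniform(lo, hi)
    return bh

# Toy 2-parameter "inversion": recover (1.5, -0.5) from a quadratic misfit.
target = np.array([1.5, -0.5])
best = bha(lambda x: ((x - target) ** 2).sum(),
           (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print("recovered parameters:", best)
```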

  6. Effective lepton flavor violating H ℓiℓj vertex from right-handed neutrinos within the mass insertion approximation

    NASA Astrophysics Data System (ADS)

    Arganda, E.; Herrero, M. J.; Marcano, X.; Morales, R.; Szynkman, A.

    2017-05-01

    In this work we present a new computation of the lepton flavor violating Higgs boson decays that are generated radiatively at one loop from heavy right-handed neutrinos. We work within the context of the inverse seesaw model with three νR and three extra singlets X, but the results could be generalized to other low scale seesaw models. The novelty of our computation is that it uses a completely different method by means of the mass insertion approximation, which works with the electroweak interaction states instead of the usual 9 physical neutrino mass eigenstates of the inverse seesaw model. This method also allows us to write the analytical results explicitly in terms of the most relevant model parameters, namely the neutrino Yukawa coupling matrix Yν and the right-handed mass matrix MR, which is very convenient for a phenomenological analysis. This Yν matrix, being generically nondiagonal in flavor space, is the only one responsible for the induced charged lepton flavor violating processes of our interest. We perform the calculation of the decay amplitude up to order O(Yν^2 + Yν^4). We also study numerically the goodness of the mass insertion approximation results. In the last part we present the computation of the relevant one-loop effective vertex H ℓiℓj for the lepton flavor violating Higgs decay, which is derived from a large MR mass expansion of the form factors. We believe that our simple formula for this effective vertex can be of interest to other researchers who wish to estimate the H → ℓiℓ¯j rates in a fast way in terms of their own preferred input values for the relevant model parameters Yν and MR.

  7. Developing force fields when experimental data is sparse: AMBER/GAFF-compatible parameters for inorganic and alkyl oxoanions.

    PubMed

    Kashefolgheta, Sadra; Vila Verde, Ana

    2017-08-09

    We present a set of Lennard-Jones parameters for classical, all-atom models of acetate and various alkylated and non-alkylated forms of sulfate, sulfonate and phosphate ions, optimized to reproduce their interactions with water and with the physiologically relevant sodium, ammonium and methylammonium cations. The parameters are internally consistent and are fully compatible with the Generalized Amber Force Field (GAFF), the AMBER force field for proteins, the accompanying TIP3P water model and the sodium model of Joung and Cheatham. The parameters were developed primarily relying on experimental information - hydration free energies and solution activity derivatives at 0.5 m concentration - with ab initio, gas phase calculations being used for the cases where experimental information is missing. The ab initio parameterization scheme presented here is distinct from other approaches because it explicitly connects gas phase binding energies to intermolecular interactions in solution. We demonstrate that the original GAFF/AMBER parameters often overestimate anion-cation interactions, leading to an excessive number of contact ion pairs in solutions of carboxylate ions, and to aggregation in solutions of divalent ions. GAFF/AMBER parameters lead to excessive numbers of salt bridges in proteins and of contact ion pairs between sodium and acidic protein groups, issues that are resolved by using the optimized parameters presented here.

  8. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would perform well in terms of both run time and fit. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
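
    A two-step fitting cascade with the gradient-free Powell method can be sketched on a toy signal model; the bi-exponential stand-in below is illustrative and is not one of the cited microstructure models.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Toy two-step cascade (not the authors' pipeline): fit a simple
# mono-exponential first, then use it to initialize a richer
# bi-exponential model, using Powell at both steps.
b = np.linspace(0, 3000, 30)                      # b-values, s/mm^2
signal = 0.7 * np.exp(-b * 1e-3) + 0.3 * np.exp(-b * 3e-4)
signal += rng.normal(0, 0.005, b.size)            # synthetic noise

def sse_mono(p):
    return np.sum((np.exp(-b * p[0]) - signal) ** 2)

def sse_bi(p):
    f, d1, d2 = p
    return np.sum((f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2)
                   - signal) ** 2)

step1 = minimize(sse_mono, x0=[1e-3], method="Powell")
d0 = step1.x[0]
# cascade: the simple fit seeds the complex fit
step2 = minimize(sse_bi, x0=[0.5, d0 * 2, d0 / 2], method="Powell")
print("bi-exponential fit (f, d1, d2):", step2.x)
```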

  9. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking.

    PubMed

    Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy

    2015-07-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through altering the hindered settling velocity and yield stress parameter. Strikingly, this is not the case for Chloroflexi, occurring in more than double the abundance of M. parvicella, and forming filaments primarily protruding from the flocs. The transient and compression settling parameters show a comparably high variability, and no significant association with filamentous abundance. A two-dimensional, axi-symmetrical computational fluid dynamics (CFD) model was used to assess calibration scenarios to model filamentous bulking. Our results suggest that model predictions can significantly benefit from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocity in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, the case-specific calibration of transient and compression settling parameters as well as yield stress is not necessary, and an average parameter set - obtained under bulking and good settling conditions - can be used. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Stress-based animal models of depression: Do we actually know what we are doing?

    PubMed

    Yin, Xin; Guven, Nuri; Dietis, Nikolas

    2016-12-01

    Depression is one of the leading causes of disability and a significant health concern worldwide. Much of our current understanding of the pathogenesis of depression and the pharmacology of antidepressant drugs is based on pre-clinical models. Three of the most popular stress-based rodent models are the forced swimming test, the chronic mild stress paradigm and the learned helplessness model. Despite their recognizable advantages and limitations, they are associated with immense variability due to the high number of design parameters that define them. Only a few studies have reported how minor modifications of these parameters affect the model phenotype. Thus, the existing variability in how these models are used has been a strong barrier to drug development as well as to benchmarking and evaluating these pre-clinical models of depression. It has also been the source of confusing variability in experimental outcomes between research groups using the same models. In this review, we summarize the known variability in the experimental protocols, identify the main and relevant parameters for each model and describe the variable values using characteristic examples. Our view of depression and our efforts to discover novel and effective antidepressants are largely based on our detailed knowledge of these testing paradigms, and require a sound understanding of the importance of individual parameters to optimize and improve these pre-clinical models. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Static aeroelastic behavior of a subsonic plate wing

    NASA Astrophysics Data System (ADS)

    Berci, M.

    2017-07-01

    The static aeroelastic behavior of a subsonic plate wing is here described by semi-analytical means. Within a generalised modal formulation, any distribution of the plate's properties is allowed. Modified strip theory is employed for the aerodynamic modelling and a linear aeroelastic model is eventually derived. Numerical results are then shown for the plate's aeroelastic stability in terms of divergence speed, with respect to the most relevant aero-structural parameters.

  12. A theoretical framework to model DSC-MRI data acquired in the presence of contrast agent extravasation

    NASA Astrophysics Data System (ADS)

    Quarles, C. C.; Gochberg, D. F.; Gore, J. C.; Yankeelov, T. E.

    2009-10-01

    Dynamic susceptibility contrast (DSC) MRI methods rely on compartmentalization of the contrast agent such that a susceptibility gradient can be induced between the contrast-containing compartment and adjacent spaces, such as between intravascular and extravascular spaces. When there is a disruption of the blood-brain barrier, as is frequently the case with brain tumors, a contrast agent leaks out of the vasculature, resulting in additional T1, T2 and T2* relaxation effects in the extravascular space, thereby affecting the signal intensity time course and reducing the reliability of the computed hemodynamic parameters. In this study, a theoretical model describing these dynamic intra- and extravascular T1, T2 and T2* relaxation interactions is proposed. The applicability of using the proposed model to investigate the influence of relevant MRI pulse sequences (e.g. echo time, flip angle), and physical (e.g. susceptibility calibration factors, pre-contrast relaxation rates) and physiological parameters (e.g. permeability, blood flow, compartmental volume fractions) on DSC-MRI signal time curves is demonstrated. Such a model could yield important insights into the biophysical basis of contrast-agent-extravasation-induced effects on measured DSC-MRI signals and provide a means to investigate pulse sequence optimization and appropriate data analysis methods for the extraction of physiologically relevant imaging metrics.

  13. Nonlinear flow model of multiple fractured horizontal wells with stimulated reservoir volume including the quadratic gradient term

    NASA Astrophysics Data System (ADS)

    Ren, Junjie; Guo, Ping

    2017-11-01

    Real fluid flow in porous media is consistent with mass conservation, which can be described by the nonlinear governing equation including the quadratic gradient term (QGT). However, most flow models have been established by ignoring the QGT, and little work has been conducted to incorporate the QGT into the flow model of the multiple fractured horizontal (MFH) well with stimulated reservoir volume (SRV). This paper first establishes a semi-analytical model of an MFH well with SRV including the QGT. Introducing the transformed pressure and flow-rate function, the nonlinear model of a point source in a composite system including the QGT is linearized. Then the Laplace transform, principle of superposition, numerical discrete method, Gaussian elimination method and Stehfest numerical inversion are employed to establish and solve the seepage model of the MFH well with SRV. Type curves are plotted and the effects of relevant parameters are analyzed. It is found that the nonlinear effect caused by the QGT can increase the flow capacity of fluid flow and influence the transient pressure positively. The relevant parameters not only have an effect on the type curve but also affect the error in the pressure calculated by the conventional linear model. The proposed model, which is consistent with mass conservation, reflects the nonlinear process of the real fluid flow, and thus it can be used to obtain more accurate transient pressure of an MFH well with SRV.
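
    The Stehfest numerical inversion mentioned here is a standard algorithm; a self-contained sketch, checked against a known transform pair, is:

```python
import math
import numpy as np

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k (N must be even)."""
    half = N // 2
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j) /
                  (math.factorial(half - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + half) * s
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    V = stehfest_weights(N)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# sanity check on a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```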

  14. Understanding Lymphatic Valve Function via Computational Modeling

    NASA Astrophysics Data System (ADS)

    Wolf, Ki; Nepiyushchikh, Zhanna; Razavi, Mohammad; Dixon, Brandon; Alexeev, Alexander

    2017-11-01

    The lymphatic system is a crucial part of the circulatory system with many important functions, such as transport of interstitial fluid, fatty acids, and immune cells. Lymphatic vessels' contractile walls and valves allow lymph flow against adverse pressure gradients and prevent back flow. Yet, the effect of lymphatic valves' geometric and mechanical properties on pumping performance and lymphatic dysfunctions like lymphedema is not well understood. Our coupled fluid-solid computational model, based on a lattice Boltzmann model and a lattice spring model, investigates the dynamics and effectiveness of lymphatic valves in resistance minimization, backflow prevention, and viscoelastic response under different geometric and mechanical properties, suggesting the range of lymphatic valve parameters with effective pumping performance. Our model also provides more physiologically relevant relations of the valve response under varied conditions to a lumped parameter model of the lymphatic system, giving an integrative insight into lymphatic system performance, including its failure due to disease. NSF CMMI-1635133.

  15. Modelling and identification for control of gas bearings

    NASA Astrophysics Data System (ADS)

    Theisen, Lukas R. S.; Niemann, Hans H.; Santos, Ilmar F.; Galeazzi, Roberto; Blanke, Mogens

    2016-03-01

    Gas bearings are popular for their high speed capabilities, low friction and clean operation, but suffer from poor damping, which poses challenges for safe operation in presence of disturbances. Feedback control can achieve enhanced damping but requires low complexity models of the dominant dynamics over its entire operating range. Models from first principles are complex and sensitive to parameter uncertainty. This paper presents an experimental technique for "in situ" identification of a low complexity model of a rotor-bearing-actuator system and demonstrates identification over relevant ranges of rotational speed and gas injection pressure. This is obtained using parameter-varying linear models that are found to capture the dominant dynamics. The approach is shown to be easily applied and to suit subsequent control design. Based on the identified models, decentralised proportional control is designed and shown to obtain the required damping in theory and in a laboratory test rig.

  16. In silico modeling of the yeast protein and protein family interaction network

    NASA Astrophysics Data System (ADS)

    Goh, K.-I.; Kahng, B.; Kim, D.

    2004-03-01

    Understanding how the protein interaction networks of living organisms have evolved or are organized can be the first stepping stone in unveiling how life works at a fundamental level. Here we introduce an in silico "coevolutionary" model for the protein interaction network and the protein family network. The essential ingredients of the model include the protein family identity and its robustness under evolution, as well as the three previously proposed ingredients: gene duplication, divergence, and mutation. This model produces a prototypical feature of complex networks in a wide range of parameter space, following the generalized Pareto distribution in connectivity. Moreover, we investigate other structural properties of our model in detail with some specific values of parameters relevant to the yeast Saccharomyces cerevisiae, showing excellent agreement with the empirical data. Our model indicates that the physical constraints encoded via the domain structure of proteins play a crucial role in protein interactions.
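
    The duplication-divergence mechanism at the heart of such models can be sketched as below; this toy version omits the mutation step and the protein-family bookkeeping, and the retention probability is an assumed value.

```python
import random

random.seed(5)

def duplication_divergence(n_final, p_keep=0.4):
    """Grow a graph by repeatedly duplicating a random node and
    keeping each of its links independently with probability p_keep
    (the 'divergence' step)."""
    adj = {0: {1}, 1: {0}}              # seed graph: a single edge
    while len(adj) < n_final:
        new = len(adj)
        src = random.randrange(new)      # node to duplicate
        kept = {v for v in adj[src] if random.random() < p_keep}
        adj[new] = kept
        for v in kept:
            adj[v].add(new)
    return adj

g = duplication_divergence(500)
degrees = sorted((len(nbrs) for nbrs in g.values()), reverse=True)
print("max degree:", degrees[0], "mean degree:",
      sum(degrees) / len(degrees))     # broad, heavy-tailed degrees
```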

  17. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must be matched to one another, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
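
    A meta-model of this kind can be sketched with off-the-shelf Gaussian process regression; the geometry parameters and "draping" responses below are synthetic placeholders, not pre-sampled simulation data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

# Stand-in for pre-sampled draping results: geometry parameters
# (e.g. curvature radius in mm, draw depth in mm) versus a scalar
# formability measure such as maximum shear angle (all synthetic).
X = rng.uniform([50.0, 10.0], [200.0, 60.0], size=(40, 2))
y = 30.0 + 400.0 / X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, 40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[50.0, 15.0]),
                              normalize_y=True).fit(X, y)

# cheap evaluation for a new geometry, with predictive uncertainty
mean, std = gp.predict(np.array([[120.0, 35.0]]), return_std=True)
print(f"predicted shear angle: {mean[0]:.1f} deg +/- {std[0]:.1f}")
```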

  18. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
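
    The binomial detection-probability calculation at a given confidence level can be sketched with a one-sided Clopper-Pearson bound; this is a standard construction, not necessarily the program's exact procedure.

```python
from scipy.stats import beta

def pod_lower_bound(detected, trials, confidence=0.95):
    """One-sided lower Clopper-Pearson bound on the probability of
    detection, from `detected` hits in `trials` inspections."""
    if detected == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detected, trials - detected + 1)

# e.g. the familiar '29 of 29' rule: POD >= 0.90 at 95% confidence
# holds only with zero misses in 29 opportunities.
for hits, n in ((29, 29), (28, 29), (45, 50)):
    print(f"{hits}/{n}: POD >= {pod_lower_bound(hits, n):.3f} at 95% conf.")
```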

  19. Leaf photosynthesis and respiration of three bioenergy crops in relation to temperature and leaf nitrogen: how conserved are biochemical model parameters among crop species?

    PubMed Central

    Archontoulis, S. V.; Yin, X.; Vos, J.; Danalatos, N. G.; Struik, P. C.

    2012-01-01

    Given the need for parallel increases in food and energy production from crops in the context of global change, crop simulation models and data sets to feed these models with photosynthesis and respiration parameters are increasingly important. This study provides information on photosynthesis and respiration for three energy crops (sunflower, kenaf, and cynara), reviews relevant information for five other crops (wheat, barley, cotton, tobacco, and grape), and assesses how conserved photosynthesis parameters are among crops. Using large data sets and optimization techniques, the C3 leaf photosynthesis model of Farquhar, von Caemmerer, and Berry (FvCB) and an empirical night respiration model for the tested energy crops accounting for effects of temperature and leaf nitrogen were parameterized. Instead of the common approach of using information on net photosynthesis response to CO2 at the stomatal cavity (An–Ci), the model was parameterized by analysing the photosynthesis response to incident light intensity (An–Iinc). Convincing evidence is provided that the maximum Rubisco carboxylation rate or the maximum electron transport rate was very similar whether derived from An–Ci or from An–Iinc data sets. Parameters characterizing Rubisco limitation, electron transport limitation, the degree to which light inhibits leaf respiration, night respiration, and the minimum leaf nitrogen required for photosynthesis were then determined. Model predictions were validated against independent data sets. Only a few FvCB parameters were conserved among crop species, thus species-specific FvCB model parameters are needed for crop modelling. Therefore, information from readily available but underexplored An–Iinc data should be re-analysed, thereby expanding the potential of combining classical photosynthetic data and the biochemical model. PMID:22021569

  20. Integral projection models for finite populations in a stochastic environment.

    PubMed

    Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik

    2011-05-01

    Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
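
    The one-dimensional diffusion approximation built from these three parameters is conventionally written as follows; the notation is ours and follows the standard form of such approximations, in which environmental variance scales with N² and demographic variance with N.

```latex
% N(t): population size; \bar{r}: expected growth rate;
% \sigma_e^2: environmental variance; \sigma_d^2: demographic variance.
dN = \bar{r}\, N \, dt + \sqrt{\sigma_e^2 N^2 + \sigma_d^2 N}\; dW_t
```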

  1. Scaling of hydrologic and erosion parameters derived from rainfall simulation

    NASA Astrophysics Data System (ADS)

    Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher

    2010-05-01

    Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.

  2. Bridging the gap between measurements and modelling: a cardiovascular functional avatar.

    PubMed

    Casas, Belén; Lantz, Jonas; Viola, Federica; Cedersund, Gunnar; Bolger, Ann F; Carlhäll, Carl-Johan; Karlsson, Matts; Ebbers, Tino

    2017-07-24

    Lumped parameter models of the cardiovascular system have the potential to assist researchers and clinicians to better understand cardiovascular function. The value of such models increases when they are subject specific. However, most approaches to personalize lumped parameter models have thus far required invasive measurements or fall short of being subject specific due to a lack of the necessary clinical data. Here, we propose an approach to personalize parameters in a model of the heart and the systemic circulation using exclusively non-invasive measurements. The personalized model is created using flow data from four-dimensional magnetic resonance imaging and cuff pressure measurements in the brachial artery. We term this personalized model the cardiovascular avatar. In our proof-of-concept study, we evaluated the capability of the avatar to reproduce pressures and flows in a group of eight healthy subjects. Both quantitatively and qualitatively, the model-based results agreed well with the pressure and flow measurements obtained in vivo for each subject. This non-invasive and personalized approach can synthesize medical data into clinically relevant indicators of cardiovascular function, and estimate hemodynamic variables that cannot be assessed directly from clinical measurements.

  3. Microeconomics of 300-mm process module control

    NASA Astrophysics Data System (ADS)

    Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.

    2001-08-01

    Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.

  4. Relating ground truth collection to model sensitivity

    NASA Technical Reports Server (NTRS)

    Amar, Faouzi; Fung, Adrian K.; Karam, Mostafa A.; Mougin, Eric

    1993-01-01

    The importance of collecting high quality ground truth before a SAR mission over a forested area is twofold. First, the ground truth is used in the analysis and interpretation of the measured backscattering properties; second, it helps to justify the use of a scattering model to fit the measurements. Unfortunately, ground truth is often collected based on visual assessment of what is perceived to be important without regard to the mission itself. Sites are selected based on brief surveys of large areas, and the ground truth is collected by a process of selecting and grouping different scatterers. After the fact, it may turn out that some of the relevant parameters are missing. A three-layer canopy model based on the radiative transfer equations is used to determine, beforehand, the relevant parameters to be collected. Detailed analysis of the contribution to scattering and attenuation of various forest components is carried out. The goal is to identify the forest parameters which most influence the backscattering as a function of frequency (P-, L-, and C-bands) and incident angle. The influence on backscattering and attenuation of branch diameters, lengths, angular distribution, and permittivity; trunk diameters, lengths, and permittivity; and needle sizes, their angular distribution, and permittivity are studied in order to maximize the efficiency of the ground truth collection efforts. Preliminary results indicate that while a scatterer may not contribute to the total backscattering, its contribution to attenuation may be significant depending on the frequency.

  5. Five easy equations for patient flow through an emergency department.

    PubMed

    Madsen, Thomas Lill; Kofoed-Enevoldsen, Allan

    2011-10-01

    Queue models are effective tools for framing management decisions, and Danish hospitals could benefit from awareness of such models. Currently, as emergency departments (ED) are under reorganization, we deem it timely to empirically investigate the applicability of the standard "M/M/1" queue model in order to document its relevance. We compared actual versus theoretical distributions of hourly patient flow from 27,000 patient cases seen at Frederiksberg Hospital's ED. Formulating equations for arrivals and capacity, we wrote and tested a five-equation simulation model. The Poisson distribution fitted arrivals with an hour-of-the-day-specific parameter. Treatment times exceeding 15 minutes were well described by an exponential distribution. The ED can be modelled as a black box with an hourly capacity that can be estimated either from admissions per hour when the ED operates at full tilt (a Poisson distribution) or from the linear dependency of waiting times on queue number. The results show that our ED capacity is surprisingly constant despite variations in staffing. These findings led to the formulation of a model giving a compact framework for assessing the behaviour of the ED under different assumptions about opening hours, capacity and workload. The M/M/1 model fits our data almost perfectly. Thus modelling and simulations have contributed to the management process.
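
    The steady-state M/M/1 quantities used in such an analysis follow from textbook formulas; a sketch with purely illustrative rates:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 steady-state quantities (requires rho < 1)."""
    rho = arrival_rate / service_rate        # utilization
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)                      # mean number in system
    W = 1 / (service_rate - arrival_rate)    # mean time in system
    Wq = rho / (service_rate - arrival_rate) # mean wait before service
    return rho, L, W, Wq

# illustrative numbers only: 4 patients/h arriving, capacity 5/h
rho, L, W, Wq = mm1_metrics(4.0, 5.0)
print(f"utilization {rho:.0%}, {L:.1f} patients in department, "
      f"{60 * W:.0f} min in system, {60 * Wq:.0f} min waiting")
```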

  6. Spatio-Temporal Regression Based Clustering of Precipitation Extremes in a Presence of Systematically Missing Covariates

    NASA Astrophysics Data System (ADS)

    Kaiser, Olga; Martius, Olivia; Horenko, Illia

    2017-04-01

    Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. But in real applications the complete set of relevant covariates might not be available. In this context, it was shown that under weak assumptions the influence of systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of locally stationary GPD models, where the parameters are expressed as regression models, and a non-parametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) to resolve the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models like mixture models and hidden Markov models. Additionally, the presented FEM-BV-GPD provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series from 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.
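
    A stationary building block of such a framework, the GPD fit to threshold excesses, can be sketched on synthetic data; the regression structure and the FEM-BV switching process are beyond this illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Synthetic stand-in for daily precipitation (mm); real covariate-
# dependent parameters would replace this stationary toy series.
precip = rng.gamma(0.6, 8.0, size=12000)
threshold = np.quantile(precip, 0.95)
excesses = precip[precip > threshold] - threshold

# stationary GPD fit to the threshold excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excesses, floc=0)
q99 = threshold + genpareto.ppf(0.99, shape, 0, scale)
print(f"threshold {threshold:.1f} mm, shape {shape:.2f}, "
      f"scale {scale:.2f}, 99% excess quantile -> {q99:.1f} mm")
```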

  7. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource to comprehend the complex characteristics of bioprocesses and enhance production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate a real-time decision making process and thereby improve the robustness of large scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
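
    The regression-plus-ranking workflow can be sketched with generic tools; the data below are synthetic stand-ins, and the permutation ranking is a simple proxy for the kernel-based relevance analysis described, not the authors' method.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)

# Synthetic stand-in for archived runs: rows are production runs,
# columns are summarized process parameters; y is a performance
# measure such as final titre (all values illustrative).
X = rng.normal(size=(108, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.3, 108)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:80], y[:80])
base = model.score(X[80:], y[80:])
print("held-out R^2:", round(base, 2))

# crude relevance ranking: permute one parameter at a time and
# record the drop in held-out score
drops = []
for j in range(X.shape[1]):
    Xp = X[80:].copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - model.score(Xp, y[80:]))
print("most relevant parameters:", np.argsort(drops)[::-1][:3])
```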

  8. Opposing effects of cationic antimicrobial peptides and divalent cations on bacterial lipopolysaccharides

    NASA Astrophysics Data System (ADS)

    Smart, Matthew; Rajagopal, Aruna; Liu, Wing-Ki; Ha, Bae-Yeun

    2017-10-01

    The permeability of the bacterial outer membrane, enclosing Gram-negative bacteria, depends on the interactions of the outer, lipopolysaccharide (LPS) layer, with surrounding ions and molecules. We present a coarse-grained model for describing how cationic amphiphilic molecules (e.g., antimicrobial peptides) interact with and perturb the LPS layer in a biologically relevant medium, containing monovalent and divalent salt ions (e.g., Mg2+). In our approach, peptide binding is driven by electrostatic and hydrophobic interactions and is assumed to expand the LPS layer, eventually priming it for disruption. Our results suggest that in parameter ranges of biological relevance (e.g., at micromolar concentrations) the antimicrobial peptide magainin 2 effectively disrupts the LPS layer, even though it has to compete with Mg2+ for the layer. They also show how the integrity of LPS is restored with an increasing concentration of Mg2+. Using the approach, we make a number of predictions relevant for optimizing peptide parameters against Gram-negative bacteria and for understanding bacterial strategies to develop resistance against cationic peptides.

  9. An Overview of Kinematic and Calibration Models Using Internal/External Sensors or Constraints to Improve the Behavior of Spatial Parallel Mechanisms

    PubMed Central

    Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.

    2010-01-01

    This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors on mechanism accuracy, and parallel mechanisms used as sensors. The most relevant classifications for obtaining and solving kinematic models and for identifying geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. This overview attempts to answer some of the most frequent questions that arise in the modelling of a parallel mechanism (how to measure, the number of sensors and configurations necessary, the type and influence of errors, the number of necessary parameters), and to show the solutions developed by the most up-to-date research. PMID:22163469

  10. Cosmological constraints on extended Galileon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp

    2012-03-01

    The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = -1 - s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which ghosts and Laplacian instabilities are absent. Using the observational data of type Ia supernovae, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034 (+0.327, -0.034) at 95% CL in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the Integrated Sachs-Wolfe (ISW) effect in the CMB. We show that, depending on the model parameters, the LSS and the ISW effect are either positively or negatively correlated. It is then possible to further constrain viable parameter spaces from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.

  11. Exploring extended scalar sectors with di-Higgs signals: a Higgs EFT perspective

    NASA Astrophysics Data System (ADS)

    Corbett, Tyler; Joglekar, Aniket; Li, Hao-Lin; Yu, Jiang-Hao

    2018-05-01

    We consider extended scalar sectors of the Standard Model as ultraviolet complete motivations for studying the effective Higgs self-interaction operators of the Standard Model effective field theory. We investigate all motivated heavy scalar models which generate the dimension-six effective operator, |H|^6, at tree level and proceed to identify the full set of tree-level dimension-six operators by integrating out the heavy scalars. Of seven models which generate |H|^6 at tree level only two, quadruplets of hypercharge Y = 3Y_H and Y = Y_H, generate only this operator. Next we perform global fits to constrain relevant Wilson coefficients from the LHC single Higgs measurements as well as the electroweak oblique parameters S and T. We find that the T parameter puts very strong constraints on the Wilson coefficient of the |H|^6 operator in the triplet and quadruplet models, while the singlet and doublet models could still have Higgs self-couplings which deviate significantly from the Standard Model prediction. To determine the extent to which the |H|^6 operator could be constrained, we study the di-Higgs signatures at the future 100 TeV collider and explore future sensitivity of this operator. Projected onto the Higgs potential parameters of the extended scalar sectors, with 30 ab^-1 of luminosity data we will be able to explore the Higgs potential parameters in all seven models.

  12. Analysis of electric vehicle extended range misalignment based on rigid-flexible dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Xiaowei; Lv, Mingliang; Chen, Zibo; Ji, Wei; Gao, Ruiceng

    2017-04-01

    The safety of the extended-range electric vehicle is seriously affected by misalignment faults. This paper therefore analyzes extended-range electric vehicle misalignment based on rigid-flexible dynamics. By comprehensively applying rigid-flexible hybrid modeling together with methods of machinery and equipment fault diagnosis, a hybrid rigid-flexible mechanical model of the range extender was established by means of the software packages ADAMS and ANSYS. By setting the relevant parameters to simulate shafting misalignment, the failure phenomenon, the spectrum analysis and the evolution rules were analyzed. It is concluded that the 0.5x and 1x harmonics can be considered the characteristic parameters for misalignment diagnostics in the extended-range electric vehicle.

  13. Local operators in kinetic wealth distribution

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2016-05-01

    The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail, a heterogeneous risk-aversion model is required. By changing the parameters of these operators, one can also fine-tune the resulting distributions in order to provide support for the emergence of a more egalitarian wealth distribution.
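
    A minimal simulation of the conservative kinetic exchange process underlying this class of models is sketched below, using a uniform saving propensity rather than the heterogeneous risk-aversion version that the authors show is required for power-law tails; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        N, STEPS = 1000, 200000
        LAMBDA = 0.5                     # uniform saving propensity (illustrative value)
        wealth = np.ones(N)              # everyone starts with one unit of "money"

        for _ in range(STEPS):
            i, j = rng.integers(0, N, size=2)
            if i == j:
                continue
            pool = (1.0 - LAMBDA) * (wealth[i] + wealth[j])   # amount put at stake
            eps = rng.random()                                # random split, like elastic scattering
            wealth[i] = LAMBDA * wealth[i] + eps * pool
            wealth[j] = LAMBDA * wealth[j] + (1.0 - eps) * pool

        # Total wealth is conserved by construction.
        print("mean:", wealth.mean(), "relative spread:", wealth.std() / wealth.mean())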

  14. Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Leung, K.

    2013-12-01

    Anticipating and preparing for elevated water quality parameter levels in critical water sources using weather forecasts is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.

  15. Electro-optical parameters of bond polarizability model for aluminosilicates.

    PubMed

    Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam

    2006-04-06

    Electro-optical parameters (EOPs) of the bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to the large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared to those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference in the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability-related characteristics of relevant systems within the framework of the BPM.

  16. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  17. Hierarchical relaxation dynamics in a tilted two-band Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Cosme, Jayson G.

    2018-04-01

    We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum, and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, the dynamics of relevant observables reveal hierarchical relaxation, and the appearance of prethermalized states is studied from the perspective of the statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to a dynamical phase transition between ergodic and nonergodic trajectories.

  18. Predicting Drug Concentration‐Time Profiles in Multiple CNS Compartments Using a Comprehensive Physiologically‐Based Pharmacokinetic Model

    PubMed Central

    Yamamoto, Yumi; Välitalo, Pyry A.; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; van den Berg, Dirk‐Jan; Hartman, Robin; Wong, Yin Cheong; Danhof, Meindert; van Hasselt, John G. C.

    2017-01-01

    Drug development targeting the central nervous system (CNS) is challenging due to poor predictability of drug concentrations in various CNS compartments. We developed a generic physiologically based pharmacokinetic (PBPK) model for prediction of drug concentrations in physiologically relevant CNS compartments. System‐specific and drug‐specific model parameters were derived from literature and in silico predictions. The model was validated using detailed concentration‐time profiles from 10 drugs in rat plasma, brain extracellular fluid, 2 cerebrospinal fluid sites, and total brain tissue. These drugs, all small molecules, were selected to cover a wide range of physicochemical properties. The concentration‐time profiles for these drugs were adequately predicted across the CNS compartments (symmetric mean absolute percentage error for the model prediction was <91%). In conclusion, the developed PBPK model can be used to predict temporal concentration profiles of drugs in multiple relevant CNS compartments, which we consider valuable information for efficient CNS drug development. PMID:28891201

  19. Hydrological response in catchments with debris-covered glaciers in the semi-arid Andes, Chile

    NASA Astrophysics Data System (ADS)

    Caro, A.; McPhee, J.; MacDonell, S.; Pellicciotti, F.; Ayala, A.

    2016-12-01

    Glaciers in the semi-arid Andes Cordillera in Chile shrank rapidly during the 20th century. Negative mass balance contributes to increasing the surface area of debris-covered glaciers. Recent research in Chile suggests that the contribution of glaciers to summer-season river flow in dry years is very important; however, the hydrological processes determining this contribution are still poorly understood in the region. This work seeks to determine appropriate parameters for the simulation of melt volume in two watersheds dominated by debris-covered glaciers, in order to understand its variability in time and space, in the most populated area of Chile. The hydrological simulation is performed for the Tapado (30°S) and Pirámide (33°S) glaciers, which can be characterized as cold and temperate, respectively. To simulate the hydrological behaviour we adopt the physically based TOPographic Kinematic wave APproximation model (TOPKAPI-ETH). The hydrometeorological records necessary for the model runs were collected through fieldwork from 2013 to 2015. Regarding the calibration of the ETI melt model parameters, it is observed that the value of the temperature factor (TF) for Pirámide is a third of that for the Tapado glacier, while the shortwave radiation factor (SRF) for Tapado is half that for Pirámide. Glacier runoff and the constant snow and ice storages are higher for Tapado than for Pirámide. Results show a contribution of glacial outflow to runoff during 2015 of 55% at Tapado and 77% at Pirámide, with maximum contributions between January and March at Tapado and between November and March at Pirámide, highlighting the relevance of the persistence of spring snow cover and of the shelter provided by the debris cover in reducing glacier melt. The results provide insight into the relevance of the glacier contribution to mountain streams and identify the calibration parameters most relevant to the hydrological balance of glacierized basins in the Andes.
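
    For context, the ETI (enhanced temperature-index) scheme referred to here is commonly written as M = TF*T + SRF*(1 - albedo)*G for temperatures above a threshold, with M the melt rate, T air temperature and G incoming shortwave radiation. The following sketch uses illustrative parameter values, not those calibrated in this study.

        import numpy as np

        def eti_melt(temp_c, swin, albedo, tf=0.04, srf=0.0008, t_thresh=1.0):
            """Enhanced temperature-index melt (mm w.e. per hour).

            temp_c : air temperature (deg C)
            swin   : incoming shortwave radiation (W m^-2)
            albedo : surface albedo (-)
            tf, srf: temperature and shortwave radiation factors (illustrative values)
            """
            melt = tf * temp_c + srf * (1.0 - albedo) * swin
            return np.where(temp_c > t_thresh, np.maximum(melt, 0.0), 0.0)

        # One clear-sky summer day, hourly resolution.
        hours = np.arange(24)
        temp = 5.0 + 8.0 * np.sin((hours - 6) / 24 * 2 * np.pi)           # deg C
        swin = np.maximum(0.0, 900.0 * np.sin((hours - 6) / 12 * np.pi))  # W m^-2

        print("daily melt, clean ice (mm w.e.):", eti_melt(temp, swin, 0.35).sum())
        print("daily melt, lower SRF (as under debris):",
              eti_melt(temp, swin, 0.35, srf=0.0004).sum())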

  20. Supernatural inflation: inflation from supersymmetry with no (very) small parameters

    NASA Astrophysics Data System (ADS)

    Randall, Lisa; Soljačić, Marin; Guth, Alan H.

    1996-02-01

    Most models of inflation have small parameters, either to guarantee sufficient inflation or the correct magnitude of the density perturbations. In this paper we show that, in supersymmetric theories with weak-scale supersymmetry breaking, one can construct viable inflationary models in which the requisite parameters appear naturally in the form of the ratio of mass scales that are already present in the theory. Successful inflationary models can be constructed from the flat-direction fields of a renormalizable supersymmetric potential, and such models can be realized even in the context of a simple GUT extension of the MSSM. We evade naive "naturalness" arguments by allowing for more than one field to be relevant to inflation, as in "hybrid inflation" models, and we argue that this is the most natural possibility if the inflaton fields are to be associated with flat-direction fields of a supersymmetric theory. Such models predict a very low Hubble constant during inflation, of order 10^3-10^4 GeV, a scalar density perturbation index n which is very close to or greater than unity, and negligible tensor perturbations. In addition, these models lead to a large spike in the density perturbation spectrum at short wavelengths.

  1. Band gaps in grid structure with periodic local resonator subsystems

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoqin; Wang, Jun; Wang, Rongqi; Lin, Jieqiong

    2017-09-01

    The grid structure is widely used in the architectural and mechanical fields for its high strength and material savings. This paper presents a study of an acoustic metamaterial beam (AMB) based on the normal square grid structure with local resonators, offering both flexible band gaps and high static stiffness, which has high application potential in vibration control. Firstly, the AMB with a variable cross-section frame is analytically modeled by a beam-spring-mass model derived using the extended Hamilton's principle and Bloch's theorem. This model is used to compute the dispersion relation of the designed AMB in terms of the design parameters, and the influences of the relevant parameters on the band gaps are discussed. Then a two-dimensional finite element model of the AMB is built and analyzed in COMSOL Multiphysics; both the dispersion properties of the unit cell and the wave attenuation in a finite AMB are in good agreement with the derived model. The effects of the design parameters of the two-dimensional model on the band gaps are further examined, and the obtained results verify the analytical model well. Finally, the wave attenuation performances in three-dimensional AMBs with equal and unequal thickness are presented and discussed.

  2. THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchall, A.; Vostrotin, V.; Puncher, M.

    The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consist mainly of the activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models and, unlike its predecessors, takes explicit account of uncertainties in both the measurement data and model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties), the methodology used (including all the relevant equations) and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.

  3. Identifying and exploiting trait-relevant tissues with multiple functional annotations in genome-wide association studies

    PubMed Central

    Zhang, Shujun

    2018-01-01

    Genome-wide association studies (GWASs) have identified many disease-associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method to an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationships, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome Trust Case Control Consortium study. PMID:29377896

  4. Electroencephalographic neurofeedback: Level of evidence in mental and brain disorders and suggestions for good clinical practice.

    PubMed

    Micoulaud-Franchi, J-A; McGonigal, A; Lopez, R; Daudet, C; Kotwas, I; Bartolomei, F

    2015-12-01

    The technique of electroencephalographic neurofeedback (EEG NF) emerged in the 1970s. It measures a subject's EEG signal, processes it in real time, extracts a parameter of interest and presents this information in visual or auditory form. The goal is to effectuate a behavioural modification by modulating brain activity. EEG NF opens new therapeutic possibilities in the fields of psychiatry and neurology. However, the development of EEG NF in clinical practice requires (i) a good level of evidence of the therapeutic efficacy of this technique and (ii) a good practice guide for this technique. Firstly, this article reviews selected trials meeting the following criteria: study design with controlled, randomized, and open or blind protocol; primary endpoint related to the mental and brain disorders treated and assessed with standardized measurement tools; and identifiable EEG neurophysiological targets, underpinned by pathophysiological relevance. Trials were found for: epilepsies, migraine, stroke, chronic insomnia, attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder, major depressive disorder, anxiety disorders, addictive disorders, and psychotic disorders. Secondly, this article examines the principles of neurofeedback therapy in line with learning theory. Different underlying therapeutic models are presented didactically between two continua: a continuum between implicit and explicit learning and a continuum between the biomedical model (centred on "the disease") and the integrative biopsychosocial model of health (centred on "the illness"). The main relevant learning model links neurofeedback therapy with the field of cognitive remediation techniques. The methodological specificity of neurofeedback is that it is guided by biologically relevant neurophysiological parameters. Guidelines for good clinical practice of EEG NF concerning technical issues of electrophysiology and of learning are suggested. These require validation by institutional structures for the clinical practice of EEG NF.

  5. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2009-03-01

    Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
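
    Independent of the NASIM model and of the specific Signature Indices used, the mapping step can be illustrated with a tiny self-organizing map over synthetic signature vectors; the sketch below implements the classic online SOM update rule in plain numpy.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "signature" vectors for 2000 Monte-Carlo model runs, 6 indices each.
        signatures = rng.normal(size=(2000, 6))

        # A small 8x8 SOM trained with the classic online rule.
        GRID, DIM, ITERS = 8, 6, 20000
        weights = rng.normal(size=(GRID, GRID, DIM))
        gy, gx = np.mgrid[0:GRID, 0:GRID]

        for t in range(ITERS):
            x = signatures[rng.integers(len(signatures))]
            # Best-matching unit: node whose weight vector is closest to the sample.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Learning rate and neighborhood radius decay over time.
            lr = 0.5 * np.exp(-t / ITERS)
            sigma = max(0.7, GRID / 2 * np.exp(-t / ITERS))
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, :, None] * (x - weights)

        # Map a run to its grid cell; nearby cells hold runs with similar behaviour.
        d = np.linalg.norm(weights - signatures[0], axis=2)
        print("run 0 maps to cell", np.unravel_index(np.argmin(d), d.shape))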

  6. Mapping model behaviour using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Gupta, H. V.; Casper, M. C.

    2008-12-01

    Hydrological model evaluation and identification essentially depends on the extraction of information from model time series and its processing. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by a distributed conceptual watershed model. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.

  7. Mapping Surface Cover Parameters Using Aggregation Rules and Remotely Sensed Cover Classes. Version 1.9

    NASA Technical Reports Server (NTRS)

    Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes

    1997-01-01

    A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1 km land cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with the 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, although there were some significant differences, particularly in the western USA.

  8. Can Condensing Organic Aerosols Lead to Less Cloud Particles?

    NASA Astrophysics Data System (ADS)

    Gao, C. Y.; Tsigaridis, K.; Bauer, S.

    2017-12-01

    We examined the impact of condensing organic aerosols on activated cloud number concentration in a new aerosol microphysics box model, MATRIX-VBS. The model includes the volatility-basis set (VBS) framework in an aerosol microphysical scheme, MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), that resolves aerosol mass and number concentrations and aerosol mixing state. Preliminary results show that by including the condensation of organic aerosols, the new model (MATRIX-VBS) produces fewer activated particles than the original model (MATRIX), which treats organic aerosols as non-volatile. Parameters such as aerosol chemical composition, mass and number concentrations, and particle sizes which affect activated cloud number concentration are thoroughly evaluated via a suite of Monte-Carlo simulations. The Monte-Carlo simulations also provide information on which climate-relevant parameters play a critical role in the aerosol evolution in the atmosphere. This study also helps simplify the newly developed box model, which will soon be implemented in the global model GISS ModelE as a module.

  9. Laboratory Modelling of Volcano Plumbing Systems: a review

    NASA Astrophysics Data System (ADS)

    Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi

    2015-04-01

    Earth scientists have, since the 19th century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used to study the development of volcanic plumbing systems, which set the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable to address them. This contribution reviews laboratory models dedicated to studying the dynamics of volcano plumbing systems (Galland et al., Accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step for proper experiment design. The second step is model scaling, which successively calls upon: (1) the principle of dimensional analysis, and (2) the principle of similarity. The dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to its geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate physical laws that govern these processes. From this perspective, we review the numerous applications of laboratory models to understand the distinct key features of volcanic plumbing systems: dykes, cone sheets, sills, laccoliths, caldera-related structures, ground deformation, magma/fault interactions, and explosive vents. Barenblatt, G.I., 2003. Scaling. Cambridge University Press, Cambridge. Galland, O., Holohan, E.P., van Wyk de Vries, B., Burchardt, S., Accepted. Laboratory modelling of volcanic plumbing systems: A review, in: Breitkreuz, C., Rocchi, S. (Eds.), Laccoliths, sills and dykes: Physical geology of shallow level magmatic systems. Springer.

  10. The low-power low-pressure flow resonance in a natural circulation cooled boiling water reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagen, T.H.J.J. van der; Stekelenburg, A.J.C.

    1995-09-01

    In the last few years the possibility of flow resonances during the start-up phase of natural circulation cooled BWRs has been put forward by several authors. The present paper reports on actual oscillations observed at the Dodewaard reactor, the world's only operating BWR cooled by natural circulation. In addition, results of a parameter study performed by means of a simple theoretical model are presented. The influence of relevant parameters on the resonance characteristics, namely the decay ratio and the resonance frequency, is investigated and explained.
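
    The two resonance characteristics named here are straightforward to extract from a measured or simulated flow signal. The sketch below uses a synthetic damped oscillation with illustrative numbers (not Dodewaard data): the resonance frequency is read off the spectrum, and the decay ratio is the ratio of successive oscillation maxima, which for a linear second-order system equals exp(-2*pi*zeta/sqrt(1 - zeta^2)).

        import numpy as np

        # Synthetic flow signal: damped oscillation, resonance ~0.5 Hz, damping ratio 0.05.
        fs, f0, zeta = 50.0, 0.5, 0.05
        t = np.arange(0, 60, 1 / fs)
        signal = np.exp(-zeta * 2 * np.pi * f0 * t) \
                 * np.cos(2 * np.pi * f0 * np.sqrt(1 - zeta**2) * t)

        # Resonance frequency from the spectrum peak.
        spec = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        print("resonance frequency [Hz]:", freqs[spec.argmax()])

        # Decay ratio: ratio of successive oscillation maxima.
        peaks = [i for i in range(1, signal.size - 1)
                 if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
                 and signal[i] > 0]
        dr = signal[peaks[1]] / signal[peaks[0]]
        print("decay ratio:", dr,
              "(theory:", np.exp(-2 * np.pi * zeta / np.sqrt(1 - zeta**2)), ")")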

  11. A parameter optimization tool for evaluating the physical consistency of the plot-scale water budget of the integrated eco-hydrological model GEOtop in complex terrain

    NASA Astrophysics Data System (ADS)

    Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg

    2017-04-01

    In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically based eco-hydrological models are applied in mountain areas, a large number of parameters and of topographic and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture content (SMC) and evapotranspiration (ET); (II) to identify the most sensitive parameters and the relevant factors controlling temporal and spatial differences among sites; and (III) to identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, a physically based, fully distributed, integrated eco-hydrological model that has been specifically designed for mountain regions, since it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on Particle Swarm Optimization has been developed, available as an R package at https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal dynamics of SMC and ET with RMSE values of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show that the most sensitive parameters are soil and canopy structural properties; however, their ranking is affected by the choice of the target function and by local topographic conditions. In particular, local slope/aspect influences the results at stations located on hillslopes, with marked seasonal differences. Results for locations on the valley floor are strongly controlled by the choice of the bottom water-flux boundary condition. The poorer model performance in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control on vapor pressure deficit for this particular type of vegetation. The results of this sensitivity analysis could be extended to other physically based distributed models, and also provide valuable insights for optimizing new experimental designs.
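
    A particle swarm optimizer of the kind wrapped by geotopOptim2 can be sketched in a few lines; here it minimizes a stand-in objective (a toy RMSE against two "true" parameter values) rather than driving GEOtop itself, and all settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        def objective(params):
            """Stand-in for a GEOtop run scored against observations (toy quadratic)."""
            target = np.array([0.3, 1.5])            # "true" soil/vegetation parameters
            return np.sqrt(np.mean((params - target) ** 2))

        N, DIM, ITERS = 30, 2, 200
        lo, hi = np.array([0.0, 0.0]), np.array([1.0, 3.0])   # parameter bounds

        pos = rng.uniform(lo, hi, size=(N, DIM))
        vel = np.zeros((N, DIM))
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(ITERS):
            r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
            # Inertia plus attraction to personal and global bests.
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best parameters:", gbest, "objective:", pbest_val.min())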

  12. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  13. Thermodynamic curvature for a two-parameter spin model with frustration.

    PubMed

    Ruppeiner, George; Bellucci, Stefano

    2015-01-01

    Microscopic models of realistic thermodynamic systems usually involve a number of parameters, not all of equal macroscopic relevance. We examine a decorated (1+3) Ising spin chain containing two microscopic parameters: a stiff parameter K mediating the long-range interactions, and a sloppy J operating within local spin groups. We show that K dominates the macroscopic behavior, with varying J having only a weak effect, except in regions where J brings about transitions between phases through its conditioning of the local spin groups with which K interacts. We calculate the heat capacity C(H), the magnetic susceptibility χ(T), and the thermodynamic curvature R. For large |J/K|, we identify four magnetic phases: ferromagnetic, antiferromagnetic, and two ferrimagnetic, according to the signs of K and J. We argue that for characterizing these phases, the strongest picture is offered by the thermodynamic geometric invariant R, proportional to the correlation length ξ. This picture has correspondences to other cases, such as fluids.

  14. Gravitational wave as probe of superfluid dark matter

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Liu, Tong-Bo; Wang, Shao-Jiang

    2018-02-01

    In recent years, superfluid dark matter (SfDM) has become a competitive model for the emergent modified Newtonian dynamics (MOND) scenario: MOND phenomenology naturally emerges as a derived concept, due to an extra force mediated between baryons by phonons, as a result of axionlike particles condensing as a superfluid at galactic scales. Beyond galactic scales, these axionlike particles behave as a normal fluid without the phonon-mediated MOND-like force between baryons; therefore SfDM also maintains the usual success of ΛCDM at cosmological scales. In this paper, we use gravitational waves (GWs) to probe the relevant parameter space of SfDM. GWs propagating through the Bose-Einstein condensate (BEC) could travel at a speed deviating slightly from the speed of light, due to a change in the effective refractive index that depends on the SfDM parameters and GW-source properties. We find that the Five-hundred-meter Aperture Spherical Telescope (FAST), the Square Kilometre Array (SKA) and the International Pulsar Timing Array (IPTA) are the most promising GW probes of the relevant parameter space of SfDM. Future space-based GW detectors are also capable of probing SfDM if a multimessenger approach is adopted.
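
    The observable is essentially an arrival-time difference accumulated along the path. As a hedged back-of-the-envelope sketch (not the paper's calculation), with v = c/n the extra delay of a GW relative to light over a path of length D inside the medium is approximately (n - 1)*D/c:

        C = 299_792_458.0                    # speed of light [m/s]
        KPC = 3.0857e19                      # kiloparsec [m]

        def gw_delay(n_eff, path_kpc):
            """Extra arrival delay of a GW crossing a medium of refractive index n_eff."""
            return (n_eff - 1.0) * path_kpc * KPC / C

        # Illustrative numbers only: tiny index deviations over a galactic-scale path.
        for dn in (1e-18, 1e-15, 1e-12):
            print(f"n - 1 = {dn:.0e}: delay = {gw_delay(1.0 + dn, 30.0):.3e} s")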

  15. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters (and the predictions that depend on them) arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
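
    The linear ("FOSM-style") predictive uncertainty analysis of the kind implemented by PEST's utilities can be summarized in a few lines of matrix algebra: with prior parameter covariance C_p, observation noise covariance C_e, Jacobian J, and predictive sensitivity vector y, the post-calibration predictive variance is y^T C_p y - y^T C_p J^T (J C_p J^T + C_e)^-1 J C_p y. The numpy sketch below uses synthetic matrices and is not an interface to PEST.

        import numpy as np

        rng = np.random.default_rng(11)

        n_par, n_obs = 50, 20
        Cp = np.diag(rng.uniform(0.5, 2.0, n_par))     # prior parameter covariance
        Ce = 0.1 * np.eye(n_obs)                       # observation noise covariance
        J = rng.normal(size=(n_obs, n_par))            # sensitivities of obs to params
        y = rng.normal(size=n_par)                     # sensitivity of the prediction

        prior_var = y @ Cp @ y
        S = J @ Cp @ J.T + Ce
        reduction = y @ Cp @ J.T @ np.linalg.solve(S, J @ Cp @ y)
        post_var = prior_var - reduction

        print(f"prior predictive std    : {np.sqrt(prior_var):.3f}")
        print(f"posterior predictive std: {np.sqrt(post_var):.3f}")
        # Calibration can only shrink predictive uncertainty in this linear analysis;
        # how much depends on whether the data inform the parameter combinations
        # that the prediction actually depends on.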

  16. SBRML: a markup language for associating systems biology data with models.

    PubMed

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.
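
    To make the idea concrete, the following Python snippet builds an SBRML-like document that indexes result values to a model variable and the parameter values that produced them; the element names are hypothetical illustrations of the concept, not the actual SBRML schema.

        import xml.etree.ElementTree as ET

        # Illustrative structure only: element names are hypothetical stand-ins, not
        # the real SBRML schema. The idea is to index result values to model entities
        # and the parameter values under which they were produced.
        root = ET.Element("resultSet", model="glycolysis.xml")
        op = ET.SubElement(root, "operation", name="timeCourseSimulation")
        params = ET.SubElement(op, "parameterValues")
        ET.SubElement(params, "parameter", ref="Vmax_HK", value="4.5")
        data = ET.SubElement(op, "dataSet", variable="ATP", unit="mM")
        for t, v in [(0.0, 3.10), (10.0, 2.85), (20.0, 2.64)]:
            ET.SubElement(data, "value", time=str(t)).text = str(v)

        print(ET.tostring(root, encoding="unicode"))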

  17. Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis

    PubMed Central

    Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas

    2016-01-01

    The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes such as growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate model of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
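
    The covariate-selection ingredient, an ARD prior that shrinks the coefficients of irrelevant covariates toward zero, can be illustrated in isolation with scikit-learn's ARDRegression, a linear stand-in for the mixed-effects shape model described above; the data are synthetic.

        import numpy as np
        from sklearn.linear_model import ARDRegression

        rng = np.random.default_rng(5)

        # 80 subjects, 10 candidate covariates; only the first two truly matter.
        X = rng.normal(size=(80, 10))
        y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=80)

        ard = ARDRegression()
        ard.fit(X, y)

        # ARD shrinks irrelevant coefficients toward zero via per-coefficient priors.
        for i, c in enumerate(ard.coef_):
            marker = "<- relevant" if abs(c) > 0.1 else ""
            print(f"covariate {i}: coef = {c:+.3f} {marker}")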

  18. Optimization of Modeled Land-Atmosphere Exchanges of Water and Energy in an Isotopically-Enabled Land Surface Model by Bayesian Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Wong, T. E.; Noone, D. C.; Kleiber, W.

    2014-12-01

    The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods that maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable-water-isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as to inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and the specific challenges which arise in calibrating models with a large number of parameters.
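
    In its simplest form, the Bayesian calibration machinery described here reduces to a random-walk Metropolis sampler. The following self-contained sketch calibrates one parameter of a toy flux model against synthetic observations; it is a minimal illustration, not the iCLM4 workflow.

        import numpy as np

        rng = np.random.default_rng(13)

        # Toy forward model: latent heat flux as a function of one parameter k.
        def forward(k, forcing):
            return k * forcing

        forcing = rng.uniform(50, 300, size=40)             # synthetic forcing data
        obs = forward(0.6, forcing) + rng.normal(0, 5, 40)  # "observations", truth k = 0.6

        def log_post(k):
            if not 0.0 < k < 2.0:                           # flat prior on (0, 2)
                return -np.inf
            resid = obs - forward(k, forcing)
            return -0.5 * np.sum(resid**2) / 5.0**2         # Gaussian likelihood, sigma = 5

        # Random-walk Metropolis.
        chain, k, lp = [], 1.0, -np.inf
        for _ in range(20000):
            k_new = k + rng.normal(0, 0.05)
            lp_new = log_post(k_new)
            if np.log(rng.random()) < lp_new - lp:
                k, lp = k_new, lp_new
            chain.append(k)

        samples = np.array(chain[5000:])                    # discard burn-in
        print(f"posterior mean k = {samples.mean():.3f} +/- {samples.std():.3f}")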

  19. Isolated heart models: cardiovascular system studies and technological advances.

    PubMed

    Olejnickova, Veronika; Novakova, Marie; Provaznik, Ivo

    2015-07-01

    The isolated heart model is a relevant tool for cardiovascular system studies. It represents a highly reproducible model for studying a broad spectrum of biochemical, physiological, morphological, and pharmaceutical parameters, including analysis of intrinsic heart mechanics, metabolism, and coronary vascular response. Results obtained in this model are free of the influence of other organ systems, plasma concentrations of hormones or ions, and the autonomic nervous system. The review describes various isolated heart models, the modes of heart perfusion, and the advantages and limitations of various experimental setups. It also reports the authors' improvements to the Langendorff perfusion setup.

  20. Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects

    NASA Astrophysics Data System (ADS)

    Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.

    2013-08-01

    In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described with log-normal modes within predetermined size ranges. Such a method involves numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction (ACI) coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines and a high-resolution sectional model that was considered the reference model. We analysed the relative differences of the parameters in different experiments designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the subsequent numerical remapping of the distribution by reallocation leads on average to underestimation of cloud droplet number concentration (up to 100%) and overestimation of light extinction (up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new-particle-formation events followed by rapid particle growth, the reallocation can cause on average around 10% overestimation of the ACI parameter. It is thus shown that reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.

  1. Natural variability of biochemical biomarkers in the macro-zoobenthos: Dependence on life stage and environmental factors.

    PubMed

    Scarduelli, Lucia; Giacchini, Roberto; Parenti, Paolo; Migliorati, Sonia; Di Brisco, Agnese Maria; Vighi, Marco

    2017-11-01

    Biomarkers are widely used in ecotoxicology as indicators of exposure to toxicants. However, their ability to provide ecologically relevant information remains controversial. One of the major problems is understanding whether the measured responses are determined by stress factors or lie within the natural variability range. In a previous work, the natural variability of enzymatic levels in invertebrates sampled in pristine rivers was shown to be relevant across both space and time. In the present study, the experimental design was improved by considering different life stages of the selected taxa and by measuring more environmental parameters. The experimental design comprised sampling sites in 2 different rivers; 8 sampling dates covering the whole seasonal cycle; 4 species from 3 different taxonomic groups (Plecoptera, Perla grandis; Ephemeroptera, Baetis alpinus and Epeorus alpicula; Trichoptera, Hydropsyche pellucidula); different life stages for each species; and 4 enzymes (acetylcholinesterase, glutathione S-transferase, alkaline phosphatase, and catalase). Biomarker levels were related to environmental (physicochemical) parameters to verify any kind of dependence. Data were analysed statistically using hierarchical multilevel Bayesian models. Natural variability was found to be relevant across both space and time. The results of the present study show that care should be taken when interpreting biomarker results. Further research is needed to better understand the dependence of the natural variability on environmental parameters. Environ Toxicol Chem 2017;36:3158-3167. © 2017 SETAC.

  2. Combined Heat Transfer in High-Porosity High-Temperature Fibrous Insulations: Theory and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Cunnington, George R.; Miller, Steve D.; Knutson, Jeffry R.

    2010-01-01

    Combined radiation and conduction heat transfer through various high-temperature, high-porosity, unbonded (loose) fibrous insulations was modeled based on first principles. The diffusion approximation was used for modeling the radiation component of heat transfer in the optically thick insulations. The relevant parameters needed for the heat transfer model were derived from experimental data. Semi-empirical formulations were used to model the solid conduction contribution of heat transfer in fibrous insulations with the relevant parameters inferred from thermal conductivity measurements at cryogenic temperatures in a vacuum. The specific extinction coefficient for radiation heat transfer was obtained from high-temperature steady-state thermal measurements with large temperature gradients maintained across the sample thickness in a vacuum. Standard gas conduction modeling was used in the heat transfer formulation. This heat transfer modeling methodology was applied to silica, two types of alumina, and a zirconia-based fibrous insulation, and to a variation of opacified fibrous insulation (OFI). OFI is a class of insulations manufactured by embedding efficient ceramic opacifiers in various unbonded fibrous insulations to significantly attenuate the radiation component of heat transfer. The heat transfer modeling methodology was validated by comparison with more rigorous analytical solutions and with standard thermal conductivity measurements. The validated heat transfer model is applicable to various densities of these high-porosity insulations as long as the fiber properties are the same (index of refraction, size distribution, orientation, and length). Furthermore, the heat transfer data for these insulations can be obtained at any static pressure in any working gas environment without the need to perform tests in various gases at various pressures.
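
    In the optically thick limit invoked here, the diffusion approximation collapses radiation into an effective conductivity, commonly written k_rad = 16 n^2 sigma T^3 / (3 beta), with beta the extinction coefficient and n the index of refraction. The sketch below combines this with solid and gas conduction terms using illustrative property values, not the paper's measured parameters.

        SIGMA = 5.670374419e-8       # Stefan-Boltzmann constant [W m^-2 K^-4]

        def k_radiative(T, extinction, n_index=1.0):
            """Rosseland diffusion approximation for optically thick media."""
            return 16.0 * n_index**2 * SIGMA * T**3 / (3.0 * extinction)

        def k_combined(T, extinction, k_solid, k_gas):
            """Simple additive combination of conduction and radiation components."""
            return k_solid + k_gas + k_radiative(T, extinction)

        # Illustrative values: extinction ~ 5000 1/m, small solid/gas conduction terms.
        for T in (300.0, 800.0, 1300.0):
            k = k_combined(T, extinction=5000.0, k_solid=0.01, k_gas=0.02)
            print(f"T = {T:6.0f} K: effective k = {k:.4f} W/m-K")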

  3. Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook

    NASA Astrophysics Data System (ADS)

    Sobolev, Andrej M.; Gray, Malcolm D.

    2012-07-01

    Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description that allows computation, that is, a definition of the parameter space and the basic physical relations. Usually, this requires additional thorough studies consisting of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. The output of maser computer modeling can take the following forms: exploration of parameter space (where inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; and predictions (new maser transitions, correlations in variability of different maser transitions, and the like). The described schemes (constituents and hierarchy) of the model input and output are based mainly on the authors' experience and make no claim to be definitive.

  4. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10⁻⁸. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  5. On the Efficiency Costs of De-Tracking Secondary Schools in Europe

    ERIC Educational Resources Information Center

    Brunello, Giorgio; Rocco, Lorenzo; Ariga, Kenn; Iwahashi, Roki

    2012-01-01

    Many European countries have delayed the time when school tracking starts in order to pursue equality of opportunity. What are the efficiency costs of de-tracking secondary schools? This paper builds a stylized model of the optimal time of tracking, estimates the relevant parameters using micro data for 11 European countries and computes the…

  6. Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling

    NASA Astrophysics Data System (ADS)

    Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean

    2018-01-01

    Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM systematically overestimated the mean water temperature, particularly in the top 140 m of the water column, with over 2 °C bias at some of the mooring stations. The HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of the water column at all mooring station locations. While HYCOM predictions of ocean current speed were generally more accurate than BRAN, BRAN predictions of both ocean current speed and direction were more accurate than HYCOM along the southeast coast of Australia and Tasmania. This study identified important inaccuracies in the hydrodynamic models' estimates of ocean parameters on time scales relevant to larval dispersal studies. These findings highlight the importance of the choice and validation of hydrodynamic models, and call for estimates of such bias to be incorporated in dispersal studies.
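
    The kind of model-versus-mooring comparison described above can be illustrated with a minimal sketch: compute the bias, RMSE, and a skill score between an observed and a modeled temperature series. The hourly series, the 2.1 °C warm bias, and the noise levels below are invented for illustration, not values from the study; the skill score is Willmott's index of agreement, one common choice for hydrodynamic model validation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical hourly series at one mooring: "observed" temperature and a
    # model estimate with a systematic warm bias (illustrative only).
    obs = 15 + 2 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.3, 2000)
    mod = obs + 2.1 + rng.normal(0, 0.5, 2000)

    bias = np.mean(mod - obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    # Willmott (1981) index of agreement, a skill score in [0, 1].
    denom = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    skill = 1 - np.sum((mod - obs) ** 2) / denom
    print(f"bias {bias:+.2f} C, RMSE {rmse:.2f} C, skill {skill:.2f}")
    ```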

  7. Biophysical stimulation for in vitro engineering of functional cardiac tissues.

    PubMed

    Korolj, Anastasia; Wang, Erika Yan; Civitarese, Robert A; Radisic, Milica

    2017-07-01

    Engineering functional cardiac tissues remains a significant ongoing challenge due to the complexity of the native environment. However, our growing understanding of key parameters of the in vivo cardiac microenvironment and our ability to replicate those parameters in vitro are resulting in the development of increasingly sophisticated models of engineered cardiac tissues (ECT). This review examines some of the most relevant parameters that may be applied in culture leading to higher fidelity cardiac tissue models. These include the biochemical composition of culture media and cardiac lineage specification, co-culture conditions, electrical and mechanical stimulation, and the application of hydrogels, various biomaterials, and scaffolds. The review also summarizes some of the recent functional human tissue models that have been developed for in vivo and in vitro applications. Ultimately, the creation of sophisticated ECT that replicate native structure and function will be instrumental in advancing cell-based therapeutics and in providing advanced models for drug discovery and testing. © 2017 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  8. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    NASA Astrophysics Data System (ADS)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs from a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE (Generalized Likelihood Uncertainty Estimation) methodology were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
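
    As a rough illustration of the GLUE workflow discussed above (sample parameter sets, score each against observations with a likelihood measure, retain the behavioural sets, and derive likelihood-weighted uncertainty bounds), here is a minimal sketch using a toy linear-reservoir model in place of the watershed model. The parameter ranges, the Nash-Sutcliffe likelihood, and the 0.5 acceptance threshold are illustrative assumptions, not choices from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stand-in for the lumped model: a linear reservoir driven by monthly
    # precipitation (f = runoff fraction, k = recession constant). Hypothetical.
    def run_model(precip, k, f):
        storage, flows = 0.0, []
        for p in precip:
            storage += f * p
            q = k * storage
            storage -= q
            flows.append(q)
        return np.array(flows)

    # Synthetic "observations" generated from known parameters plus noise.
    precip = rng.gamma(2.0, 50.0, size=96)              # 8 years of monthly rain
    q_obs = run_model(precip, k=0.3, f=0.6) + rng.normal(0, 3, 96)

    # GLUE step 1: Monte Carlo sampling of parameter sets, each scored with a
    # likelihood measure (Nash-Sutcliffe efficiency here).
    n_sets = 10_000
    ks = rng.uniform(0.05, 0.9, n_sets)
    fs = rng.uniform(0.1, 1.0, n_sets)
    sims = np.array([run_model(precip, ks[i], fs[i]) for i in range(n_sets)])
    nse = 1 - ((sims - q_obs) ** 2).sum(1) / ((q_obs - q_obs.mean()) ** 2).sum()

    # GLUE step 2: keep "behavioural" sets above a threshold and turn their
    # rescaled likelihoods into weights.
    behavioural = nse > 0.5
    weights = (nse[behavioural] - 0.5) / (nse[behavioural] - 0.5).sum()

    # GLUE step 3: likelihood-weighted 5-95% uncertainty bounds per time step.
    def weighted_quantile(values, w, q):
        idx = np.argsort(values)
        return values[idx][np.searchsorted(np.cumsum(w[idx]), q)]

    bounds = np.array([[weighted_quantile(sims[behavioural][:, t], weights, q)
                        for t in range(precip.size)] for q in (0.05, 0.95)])
    print(f"{behavioural.sum()} behavioural sets; "
          f"mean bound width {np.mean(bounds[1] - bounds[0]):.1f}")
    ```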

  9. Comparison analysis of data mining models applied to clinical research in traditional Chinese medicine.

    PubMed

    Zhao, Yufeng; Xie, Qi; He, Liyun; Liu, Baoyan; Li, Kun; Zhang, Xiang; Bai, Wenjing; Luo, Lin; Jing, Xianghong; Huo, Ruili

    2014-10-01

    To help researchers select appropriate data mining models that provide better evidence for the clinical practice of Traditional Chinese Medicine (TCM) diagnosis and therapy, clinical issues addressed with data mining models were comprehensively summarized in terms of four significant elements of the clinical studies: symptoms, symptom patterns, herbs, and efficacy. Existing problems were further generalized to determine the factors relevant to the performance of data mining models, e.g. data type, sample size, parameters, and variable labels. Combining these factors, the features of TCM clinical data were compared with regard to their statistical characteristics and informatics properties. The data mining models were compared simultaneously in terms of their conditions of application and suitable scopes. The main application problems were inconsistent data types and small sample sizes for the data mining models used, which caused inappropriate or even erroneous results. These features, i.e. advantages, disadvantages, suitable data types, data mining tasks, and the TCM issues addressed, were summarized and compared. By attending to the specific features of the different data mining models, clinicians can select suitable models to resolve their TCM problems.

  10. Theoretical and experimental analysis of injection seeding a Q-switched alexandrite laser

    NASA Technical Reports Server (NTRS)

    Prasad, C. R.; Lee, H. S.; Glesne, T. R.; Monosmith, B.; Schwemmer, G. K.

    1991-01-01

    Injection seeding is a method for achieving linewidths of less than 500 MHz in the output of broadband, tunable, solid state lasers. Dye lasers, CW and pulsed diode lasers, and other solid state lasers have been used as injection seeders. By optimizing the fundamental laser parameters of pump energy, Q-switched pulse build-up time, injection seed power and mode matching, one can achieve significant improvements in the spectral purity of the Q-switched output. These parameters are incorporated into a simple model for analyzing spectral purity and pulse build-up processes in a Q-switched, injection-seeded laser. Experiments to optimize the relevant parameters of an alexandrite laser show good agreement.

  11. Inferring transit time distributions from atmospheric tracer data: Assessment of the predictive capacities of Lumped Parameter Models on a 3D crystalline aquifer model

    NASA Astrophysics Data System (ADS)

    Marçais, J.; de Dreuzy, J.-R.; Ginn, T. R.; Rousseau-Gueutin, P.; Leray, S.

    2015-06-01

    While central to groundwater resources and contaminant fate, Transit Time Distributions (TTDs) are never directly accessible from field measurements but are always deduced from a combination of tracer data and more or less involved models. We evaluate the predictive capabilities of approximate distributions (Lumped Parameter Models, abbreviated LPMs) instead of fully developed aquifer models. We develop a generic assessment methodology based on synthetic aquifer models to establish references for observable quantities such as tracer concentrations and prediction targets such as groundwater renewal times. Candidate LPMs are calibrated on the observable tracer concentrations and used to infer renewal time predictions, which are compared with the reference ones. This methodology is applied to the produced crystalline aquifer of Plœmeur (Brittany, France), where flows leak through a micaschist aquitard to reach a sloping aquifer in which they radially converge to the producing well, issuing broad rather than multi-modal TTDs. One-, two- and three-parameter LPMs were calibrated to a corresponding number of simulated reference anthropogenic tracer concentrations (CFC-11, 85Kr and SF6). Extensive statistical analysis over the aquifer shows that a good fit of the anthropogenic tracer concentrations is neither a necessary nor a sufficient condition for acceptable predictive capability. Prediction accuracy is, however, strongly conditioned by the use of a priori relevant LPMs. Only adequate LPM shapes yield unbiased estimations. In the case of Plœmeur, relevant LPMs should have two parameters to capture the mean and the standard deviation of the residence times and cover the first few decades [0; 50 years]. Inverse Gaussian and shifted exponential LPMs performed equally well over the wide variety of reference TTDs, from strongly peaked in recharge zones where flows are diverging to broadly distributed in more convergent zones. When using two sufficiently different atmospheric tracers like CFC-11 and 85Kr, groundwater renewal time predictions are accurate to 1-5 years when estimating mean transit times of some decades (10-50 years). 1-parameter LPMs calibrated on a single atmospheric tracer lead to substantially larger errors of the order of 10 years, while 3-parameter LPMs calibrated with a third atmospheric tracer (SF6) do not improve the prediction capabilities. Based on a specific site, this study highlights the high predictive capacity of two atmospheric tracers covering the same time range with sufficiently different atmospheric concentration chronicles.
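
    The computational core of such LPMs is a convolution of the atmospheric input chronicle with a candidate transit time distribution, sketched below for the 1-parameter exponential case with a brute-force calibration on a single tracer value. The input chronicle and the observed concentration are invented for illustration, not data from Plœmeur; a radioactive tracer such as 85Kr would add an exp(-λτ) decay factor inside the sum, as the optional argument shows.

    ```python
    import numpy as np

    # Lumped parameter model: the modeled concentration is the atmospheric input
    # chronicle convolved with a transit time distribution g(tau). The chronicle
    # below is a made-up ramp, not real CFC-11 data.
    years = np.arange(1940, 2016)
    c_atm = np.clip((years - 1950) * 4.0, 0.0, 260.0)       # hypothetical, pptv

    def modeled_concentration(sample_year, mean_tt, decay=0.0):
        taus = sample_year - years                  # transit time per input year
        ok = taus >= 0
        g = np.exp(-taus[ok] / mean_tt) / mean_tt   # exponential TTD, dtau = 1 yr
        return float(np.sum(c_atm[ok] * g * np.exp(-decay * taus[ok])))

    # 1-parameter calibration on a single (hypothetical) measured concentration,
    # by brute force over candidate mean transit times.
    obs = 180.0
    candidates = np.linspace(1.0, 60.0, 600)
    errors = [abs(modeled_concentration(2010, m) - obs) for m in candidates]
    print(f"best-fit mean transit time: {candidates[int(np.argmin(errors))]:.1f} yr")
    ```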

  12. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality-adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic parameter ranges are used to explore different trial scenarios. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
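
    A minimal sketch of the Markov-model-plus-Monte-Carlo idea follows: compute quality-adjusted life expectancy (QALE) from a small transition matrix, then scan it over uncertain transition probabilities. The states, utilities, and probability ranges are hypothetical placeholders, not quantities from the HPV-positive trial model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 3-state Markov model (illustrative numbers only):
    # 0 = disease-free, 1 = recurrence, 2 = dead.
    utilities = np.array([0.9, 0.5, 0.0])       # quality weights per state-year

    def qale(p_recur, p_die_free, p_die_recur, horizon=30):
        """Cohort simulation: expected quality-adjusted life-years."""
        P = np.array([
            [1 - p_recur - p_die_free, p_recur,         p_die_free],
            [0.0,                      1 - p_die_recur, p_die_recur],
            [0.0,                      0.0,             1.0],
        ])
        occupancy = np.array([1.0, 0.0, 0.0])    # everyone starts disease-free
        total = 0.0
        for _ in range(horizon):
            total += occupancy @ utilities       # QALYs accrued this cycle
            occupancy = occupancy @ P            # advance the cohort one year
        return total

    # Monte Carlo over plausible parameter ranges, as in the trial-design scans.
    samples = [qale(rng.uniform(0.02, 0.10),
                    rng.uniform(0.005, 0.02),
                    rng.uniform(0.05, 0.20)) for _ in range(5000)]
    print(f"QALE: median {np.median(samples):.1f}, 5-95% range "
          f"{np.percentile(samples, 5):.1f}-{np.percentile(samples, 95):.1f}")
    ```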

  13. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantification of all the required parameters. The physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  14. Optimization of an angle-beam ultrasonic approach for characterization of impact damage in composites

    NASA Astrophysics Data System (ADS)

    Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David

    2018-04-01

    Advances in NDE method development are greatly aided by model-guided experimentation. In the case of ultrasonic inspections, models that provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and well-understood conditions. Ideally, researchers would utilize model simulations and the experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduces extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted-for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, beam incidence angle and focal depth.

  15. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; van der Zalm, Esther; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with minimal effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.

  16. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme that identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  17. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme that identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  18. A hybrid PSO-SVM-based method for predicting the friction coefficient between aircraft tire and coating

    NASA Astrophysics Data System (ADS)

    Zhan, Liwei; Li, Chengwei

    2017-02-01

    A hybrid PSO-SVM-based model is proposed to predict the friction coefficient between aircraft tire and coating. The presented hybrid model combines a support vector machine (SVM) with the particle swarm optimization (PSO) technique. SVM has been adopted to solve regression problems successfully, and its regression accuracy depends strongly on the optimization of parameters such as the regularization constant C, the RBF kernel parameter γ, and the insensitivity parameter ε used in SVM training. However, SVM-based prediction of the friction coefficient between aircraft tire and coating has yet to be explored. Experiments reveal that drop height and tire rotational speed are the factors affecting the friction coefficient. With this in mind, the friction coefficient can be predicted by the hybrid PSO-SVM-based model from the measured friction coefficients between aircraft tire and coating. To compare regression accuracy, a grid search (GS) method and a genetic algorithm (GA) were also used to optimize the relevant parameters (C, γ and ε). Regression accuracy is reflected by the coefficient of determination (R²). The results show that the hybrid PSO-RBF-SVM-based model has better accuracy than the GS-RBF-SVM- and GA-RBF-SVM-based models. The agreement of this model (PSO-RBF-SVM) with the experimental data confirms its good performance.
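
    A minimal sketch of the PSO-tuned SVR idea is given below, assuming scikit-learn is available. The synthetic drop-height/rotational-speed data, the swarm settings, and the search bounds are illustrative assumptions rather than the paper's experimental values; fitness here is the 5-fold cross-validated R².

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the tire/coating data: friction coefficient as a
    # noisy function of drop height and rotational speed (illustrative only).
    X = rng.uniform([0.5, 10], [3.0, 200], size=(80, 2))
    y = 0.8 - 0.1 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 0.02, 80)

    def fitness(params):
        C, gamma, eps = params
        model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps)
        return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    # Basic particle swarm over (C, gamma, epsilon) within box bounds.
    lo = np.array([0.1, 1e-3, 1e-3])
    hi = np.array([100.0, 1.0, 0.1])
    n, iters = 20, 30
    pos = rng.uniform(lo, hi, size=(n, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random((n, 3)), rng.random((n, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)]

    print("best (C, gamma, epsilon):", gbest, "cross-validated R2:", pbest_f.max())
    ```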

  19. From individual to population level effects of toxicants in the tubicifid Branchiura sowerbyi using threshold effect models in a Bayesian framework.

    PubMed

    Ducrot, Virginie; Billoir, Elise; Péry, Alexandre R R; Garric, Jeanne; Charles, Sandrine

    2010-05-01

    Effects of zinc were studied in the freshwater worm Branchiura sowerbyi using partial and full life-cycle tests. Only newborns and juveniles were sensitive to zinc, displaying effects on survival, growth, and age at first brood at environmentally relevant concentrations. Threshold effect models were proposed to assess toxic effects on individuals. They were fitted to life-cycle test data using Bayesian inference and adequately described life-history trait data in exposed organisms. The daily asymptotic growth rate of theoretical populations was then simulated with a matrix population model based upon the individual-level outputs. Population-level outputs were in accordance with the existing literature for controls. Working in a Bayesian framework allowed parameter uncertainty to be incorporated in the simulation of the population-level response to zinc exposure, thus increasing the relevance of test results in the context of ecological risk assessment.
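
    The individual-to-population step can be sketched with a small matrix population model: the asymptotic growth rate is the dominant eigenvalue of the projection matrix, and parameter uncertainty (e.g., posterior draws from Bayesian fits) is propagated by repeated evaluation. The stages, vital rates, and ranges below are hypothetical, not values fitted to B. sowerbyi.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical 3-stage projection matrix (newborn, juvenile, adult).
    def growth_rate(s_n, s_j, s_a, fec):
        L = np.array([
            [0.0, 0.0, fec],    # adult fecundity
            [s_n, 0.0, 0.0],    # newborn survival to juvenile
            [0.0, s_j, s_a],    # juvenile survival / adult persistence
        ])
        return np.max(np.real(np.linalg.eigvals(L)))   # dominant eigenvalue

    # Propagate parameter uncertainty to the population-level output, here for
    # an "exposed" case with reduced newborn and juvenile survival.
    draws = np.array([growth_rate(rng.uniform(0.3, 0.5), rng.uniform(0.5, 0.7),
                                  rng.uniform(0.7, 0.9), rng.uniform(2.0, 4.0))
                      for _ in range(2000)])
    print(f"asymptotic growth rate: median {np.median(draws):.2f}, "
          f"P(decline) = {np.mean(draws < 1):.2f}")
    ```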

  20. A Fly-Inspired Mushroom Bodies Model for Sensory-Motor Control Through Sequence and Subsequence Learning.

    PubMed

    Arena, Paolo; Calí, Marco; Patané, Luca; Portera, Agnese; Strauss, Roland

    2016-09-01

    Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models, inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is here presented under the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot are reported to show the capabilities of the architecture used as a neural controller.

  1. Generalized Polynomial Chaos Based Uncertainty Quantification for Planning MRgLITT Procedures

    PubMed Central

    Fahrenholtz, S.; Stafford, R. J.; Maier, F.; Hazle, J. D.; Fuentes, D.

    2014-01-01

    Purpose A generalized polynomial chaos (gPC) method is used to incorporate constitutive parameter uncertainties within the Pennes representation of bioheat transfer phenomena. The stochastic temperature predictions of the mathematical model are critically evaluated against MR thermometry data for planning MR-guided Laser Induced Thermal Therapies (MRgLITT). Methods Pennes bioheat transfer model coupled with a diffusion theory approximation of laser tissue interaction was implemented as the underlying deterministic kernel. A probabilistic sensitivity study was used to identify parameters that provide the most variance in temperature output. Confidence intervals of the temperature predictions are compared to MR temperature imaging (MRTI) obtained during phantom and in vivo canine (n=4) MRgLITT experiments. The gPC predictions were quantitatively compared to MRTI data using probabilistic linear and temporal profiles as well as 2-D 60 °C isotherms. Results Within the range of physically meaningful constitutive values relevant to the ablative temperature regime of MRgLITT, the sensitivity study indicated that the optical parameters, particularly the anisotropy factor, created the most variance in the stochastic model's output temperature prediction. Further, within the statistical sense considered, a nonlinear model of the temperature and damage dependent perfusion, absorption, and scattering is captured within the confidence intervals of the linear gPC method. Multivariate stochastic model predictions using parameters with the dominant sensitivities show good agreement with experimental MRTI data. Conclusions Given parameter uncertainties and mathematical modeling approximations of the Pennes bioheat model, the statistical framework demonstrates conservative estimates of the therapeutic heating and has potential for use as a computational prediction tool for thermal therapy planning. PMID:23692295
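
    To make the gPC machinery concrete, here is a minimal one-parameter sketch: expand a scalar model output in probabilists' Hermite polynomials of a standard-normal input via Gauss quadrature and read off the mean and variance. The toy temperature response and the parameter's mean and spread are illustrative stand-ins, not the Pennes/laser model of the paper.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    def peak_temp(g):
        # Toy response surface for an uncertain constitutive parameter g
        # (a stand-in for, e.g., the anisotropy factor; illustrative only).
        return 60.0 + 8.0 * np.tanh(2.0 * (g - 0.9))

    mu, sigma = 0.9, 0.05          # hypothetical parameter uncertainty
    order = 6

    # Gauss quadrature for the probabilists' weight exp(-x^2/2):
    # E[h(Z)] = (1/sqrt(2*pi)) * sum_i w_i h(x_i) for Z ~ N(0, 1).
    x, w = hermegauss(40)
    f_vals = peak_temp(mu + sigma * x)

    # gPC coefficients: c_n = E[f He_n] / n!  (since E[He_n^2] = n!).
    coeffs = []
    for n in range(order + 1):
        basis = np.zeros(n + 1); basis[n] = 1.0
        c_n = (w @ (f_vals * hermeval(x, basis))) / sqrt(2 * pi) / factorial(n)
        coeffs.append(c_n)

    mean = coeffs[0]
    var = sum(c * c * factorial(n) for n, c in enumerate(coeffs) if n > 0)
    print(f"gPC mean {mean:.2f} C, std {var ** 0.5:.2f} C")
    ```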

  2. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH™ explicit finite element software. In the GA based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short term shear modulus and long term shear modulus) are thus identified at strain rates of 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscles. Extracted optimal parameters from this study are comparable with reported parameters in the literature. Bulk modulus and short term shear modulus are found to be more influential in predicting the stress-strain response than long term shear modulus for the considered strain rates. Variations within the set of parameters identified at different strain rates indicate the need for a new or improved material model that is capable of capturing the strain rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.

  3. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I.

    PubMed

    Wilczura-Wachnik, Hanna; Jónsdóttir, Svava Osk

    2003-04-01

    A method for calculating interaction parameters traditionally used in phase-equilibrium computations in low-molecular systems has been extended to the prediction of solvent activities of aromatic polymer solutions (polystyrene + methylcyclohexane). Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum chemical method AM1, and a previously developed method for sampling relevant internal orientations of a pair of molecules, were used. Interaction energies were determined for three molecular pairs (the solvent and the model molecule, two solvent molecules, and two model molecules) and used to calculate the UNIQUAC interaction parameters a_ij and a_ji. Using these parameters, the solvent activities of the polystyrene (90,000 amu) + methylcyclohexane system and the total vapor pressures of the methylcyclohexane + ethylbenzene system were calculated. The latter system was compared to experimental data, giving qualitative agreement. [Figure caption: Solvent activities for the methylcyclohexane(1) + polystyrene(2) system at 316 K; parameters a_ij obtained with the AM1 method (blue line) and from VLE data for the ethylbenzene + methylcyclohexane system (pink line). The abscissa is the polymer weight fraction, y2(x1) = (1 - x1)M2 / [x1 M1 + (1 - x1)M2], where x1 is the solvent mole fraction and Mi are the molecular weights of the components.]

  4. Electroweak baryogenesis in two Higgs doublet models and B meson anomalies

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Kainulainen, Kimmo; Trott, Michael

    2011-11-01

    Motivated by 3.9σ evidence of a CP-violating phase beyond the standard model in the like-sign dimuon asymmetry reported by DØ, we examine the potential for two Higgs doublet models (2HDMs) to achieve successful electroweak baryogenesis (EWBG) while explaining the dimuon anomaly. Our emphasis is on the minimal flavour violating 2HDM, but our numerical scans of model parameter space include type I and type II models as special cases. We incorporate relevant particle physics constraints, including electroweak precision data, b → sγ, the neutron electric dipole moment, R_b, and perturbative coupling bounds to constrain the model. Surprisingly, we find that a large enough baryon asymmetry is only consistently achieved in a small subset of parameter space in 2HDMs, regardless of trying to simultaneously account for any B physics anomaly. There is some tension between simultaneous explanation of the dimuon anomaly and baryogenesis, but using a Markov chain Monte Carlo we find several models within 1σ of the central values. We point out shortcomings with previous studies that reached different conclusions. The restricted parameter space that allows for EWBG makes this scenario highly predictive for collider searches. We discuss the most promising signatures to pursue at the LHC for EWBG-compatible models.

  5. The use and QA of biologically related models for treatment planning: short report of the TG-166 of the therapy physics committee of the AAPM.

    PubMed

    Allen Li, X; Alber, Markus; Deasy, Joseph O; Jackson, Andrew; Ken Jee, Kyung-Wook; Marks, Lawrence B; Martel, Mary K; Mayo, Charles; Moiseenko, Vitali; Nahum, Alan E; Niemierko, Andrzej; Semenenko, Vladimir A; Yorke, Ellen D

    2012-03-01

    Treatment planning tools that use biologically related models for plan optimization and/or evaluation are being introduced for clinical use. A variety of dose-response models and quantities, along with a series of organ-specific model parameters, are included in these tools. However, due to various limitations, such as those of the models and the available model parameters, the incomplete understanding of dose responses, and inadequate clinical data, the use of a biologically based treatment planning system (BBTPS) represents a paradigm shift and can be potentially dangerous. There will be a steep learning curve for most planners. The purpose of this task group is to address some of these relevant issues before the use of BBTPS becomes widespread. In this report, the authors (1) discuss strategies, limitations, conditions, and cautions for using biologically based models and parameters in clinical treatment planning; (2) demonstrate the practical use of the three most commonly used commercially available BBTPS and potential dosimetric differences between biologically model based and dose-volume based treatment plan optimization and evaluation; (3) identify the desirable features and future directions in developing BBTPS; and (4) provide general guidelines and methodology for the acceptance testing, commissioning, and routine quality assurance (QA) of BBTPS.

  6. Shock Formation and Energy Dissipation of Slow Magnetosonic Waves in Coronal Plumes

    NASA Technical Reports Server (NTRS)

    Cuntz, M.; Suess, S. T.

    2003-01-01

    We study the shock formation and energy dissipation of slow magnetosonic waves in coronal plumes. The wave parameters and the spreading function of the plumes, as well as the base magnetic field strength, are given by empirical constraints mostly from SOHO/UVCS. Our models show that shock formation occurs at low coronal heights, i.e., within 1.3 Rsun, depending on the model parameters. In addition, following analytical estimates, we show that the scale height of energy dissipation by the shocks ranges between 0.15 and 0.45 Rsun. This implies that shock heating by slow magnetosonic waves is relevant at most heights, even though this type of wave is apparently not the sole operating energy supply mechanism.

  7. i3Drive, a 3D interactive driving simulator.

    PubMed

    Ambroz, Miha; Prebil, Ivan

    2010-01-01

    i3Drive, a wheeled-vehicle simulator, can accurately simulate vehicles of various configurations with up to eight wheels in real time on a desktop PC. It presents the vehicle dynamics as an interactive animation in a virtual 3D environment. The application is fully GUI-controlled, giving users an easy overview of the simulation parameters and letting them adjust those parameters interactively. It models all relevant vehicle systems, including the mechanical models of the suspension, power train, and braking and steering systems. The simulation results generally correspond well with actual measurements, making the system useful for studying vehicle performance in various driving scenarios. i3Drive is thus a worthy complement to other, more complex tools for vehicle-dynamics simulation and analysis.

  8. Optimized model tuning in medical systems.

    PubMed

    Kléma, Jirí; Kubalík, Jirí; Lhotská, Lenka

    2005-12-01

    In medical systems it is often advantageous to utilize specific problem situations (cases) in addition to, or instead of, a general model. Decisions are then based on relevant past cases retrieved from a case memory. The reliability of such decisions depends directly on the ability to identify cases of practical relevance to the current situation. This paper discusses issues of automated tuning in order to obtain a proper definition of mutual case similarity in a specific medical domain. The main focus is on a reasonably time-consuming optimization of the parameters that determine case retrieval and further utilization in decision making/prediction. The two case studies - mortality prediction after cardiological intervention, and resource allocation at a spa - document that the optimization process is influenced by various characteristics of the problem domain.

  9. Kinetic modeling in PET imaging of hypoxia

    PubMed Central

    Li, Fan; Joergensen, Jesper T; Hansen, Anders E; Kjaer, Andreas

    2014-01-01

    Tumor hypoxia is associated with increased therapeutic resistance leading to poor treatment outcome. Therefore the ability to detect and quantify intratumoral oxygenation could play an important role in future individualized treatment strategies. Positron emission tomography (PET) can be used for non-invasive mapping of tissue oxygenation in vivo, and several hypoxia-specific PET tracers have been developed. Evaluation of PET data in the clinic is commonly based on visual assessment together with semiquantitative measurements, e.g. the standardized uptake value (SUV). However, dynamic PET contains additional valuable information on the temporal changes in tracer distribution. Kinetic modeling can be used to extract relevant pharmacokinetic parameters of tracer behavior in vivo that reflect relevant physiological processes. In this paper, we review the potential contribution of kinetic analysis to PET imaging of hypoxia. PMID:25250200
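
    As a much-simplified concrete example of such kinetic modeling, the sketch below simulates a one-tissue compartment model and recovers its rate constants by nonlinear least squares. The plasma input function, noise level, and rate constants are invented for illustration and are not tracer-specific values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # One-tissue compartment model: dCt/dt = K1*Cp(t) - k2*Ct(t). The plasma
    # input function Cp below is a simple invented curve; real studies use
    # measured or image-derived input functions.
    t = np.linspace(0.0, 60.0, 121)                 # minutes
    cp = 50.0 * t * np.exp(-t / 3.0)

    def tissue_curve(t, K1, k2):
        # Analytic solution Ct = K1 * (Cp convolved with exp(-k2*t)),
        # evaluated as a discrete convolution on the uniform time grid.
        dt = t[1] - t[0]
        return K1 * np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt

    # Synthetic noisy "measured" tissue activity with known ground truth.
    rng = np.random.default_rng(3)
    ct_obs = tissue_curve(t, 0.30, 0.15) + rng.normal(0.0, 1.0, t.size)

    # Extract the kinetic parameters by nonlinear least squares.
    (K1, k2), _ = curve_fit(tissue_curve, t, ct_obs, p0=[0.1, 0.1])
    print(f"K1 = {K1:.3f} /min, k2 = {k2:.3f} /min")
    ```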

  10. Cosmological attractor inflation from the RG-improved Higgs sector of finite gauge theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizalde, Emilio; Odintsov, Sergei D.; Pozdeeva, Ekaterina O.

    2016-02-01

    The possibility to construct an inflationary scenario for renormalization-group improved potentials corresponding to the Higgs sector of finite gauge models is investigated. Taking into account quantum corrections to the renormalization-group potential which sums all leading logs of perturbation theory is essential for a successful realization of the inflationary scenario, with very reasonable parameter values. The inflationary models thus obtained are seen to be in good agreement with the most recent and accurate observational data. More specifically, the values of the relevant inflationary parameters, n_s and r, are close to the corresponding ones in the R² and Higgs-driven inflation scenarios. It is shown that the model here constructed and Higgs-driven inflation belong to the same class of cosmological attractors.

  11. Transient Mobility on Submonolayer Island Growth: An Exploration of Asymptotic Effects in Modeling

    NASA Astrophysics Data System (ADS)

    Morales-Cifuentes, Josue; Einstein, Theodore L.; Pimpinelli, Alberto

    In studies of epitaxial growth, modeling of the smallest stable cluster (i+1 monomers, with i the critical nucleus size) is paramount in understanding growth dynamics. Our previous work tackled submonolayer growth by modeling the effect of ballistic monomers (hot precursors) on diffusive dynamics. Different scaling regimes and energies were predicted, with initial confirmation by application to para-hexaphenyl submonolayer studies. Here, lingering questions about the applicability and behavior of the model are addressed. First, we show how an asymptotic approximation based on the growth exponent α (N ∝ F^α, with N the island density and F the deposition flux) allows for robust fitting of the model to experimental data; second, we answer questions about non-monotonicity by exploring the behavior of the growth exponent across realizable parameter spaces; third, we revisit our previous para-hexaphenyl work and examine the relevant physical parameters, namely the speed of the hot monomers. We conclude with an exploration of how the new asymptotic approximation can be used to strengthen the application of our model to other physical systems.

  12. Review of simulation techniques for Aquifer Thermal Energy Storage (ATES)

    NASA Astrophysics Data System (ADS)

    Mercer, J. W.; Faust, C. R.; Miller, W. J.; Pearson, F. J., Jr.

    1981-03-01

    The analysis of aquifer thermal energy storage (ATES) systems relies on results from mathematical and geochemical models. Therefore, the state-of-the-art models relevant to ATES were reviewed and evaluated. These models describe important processes active in ATES, including ground-water flow, heat transport (heat flow), solute transport (movement of contaminants), and geochemical reactions. In general, available models of the saturated ground-water environment are adequate to address most concerns associated with ATES, that is, design, operation, and environmental assessment. In those cases where models are not adequate, development should be preceded by efforts to identify the significant physical phenomena and to relate model parameters to measurable quantities.

  13. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments, which are limited by practical considerations to a few stressors and a few levels of these stressors, to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
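
    A minimal sketch of the two-step process: randomly sample the stressor hyperspace, then extract an interaction model with an artificial neural network. The four hypothetical stressors, the surrogate response with its built-in interaction term, and the network size are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Step 1: randomly sample an n-dimensional "stressor hyperspace"
    # (4 hypothetical stressors, each scaled to [0, 1]).
    X = rng.random((300, 4))

    # Surrogate biological response with a nonlinear interaction between
    # stressors 0 and 1 (stands in for the measured endpoint).
    y = np.sin(3 * X[:, 0] * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.05, 300)

    # Step 2: extract a model of the complex interactions with an ANN.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print(f"held-out R^2: {net.score(X_te, y_te):.2f}")
    ```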

  14. Quantitative pharmacokinetic-pharmacodynamic modelling of baclofen-mediated cardiovascular effects using BP and heart rate in rats.

    PubMed

    Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie-Eve; Snow, Debra; Mettetal, Jerome T; Bialecki, Russell A

    2016-10-01

    While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen-mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular-relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Han-Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg⁻¹, p.o.) at 4 h intervals and baclofen-mediated changes in parameters recorded. A pharmacokinetic-pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. The systems pharmacology model developed fits baclofen-mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. © 2016 The British Pharmacological Society.

  15. Quantitative pharmacokinetic–pharmacodynamic modelling of baclofen‐mediated cardiovascular effects using BP and heart rate in rats

    PubMed Central

    Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie‐Eve; Snow, Debra

    2016-01-01

    Background and Purpose While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen‐mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular‐relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Experimental Approach Han‐Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg−1, p.o.) at 4 h intervals and baclofen‐mediated changes in parameters recorded. A pharmacokinetic–pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Key Results Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. Conclusions and Implications The systems pharmacology model developed fits baclofen‐mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. PMID:27448216

  16. Uncertainty for calculating transport on Titan: A probabilistic description of bimolecular diffusion parameters

    NASA Astrophysics Data System (ADS)

    Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.

    2015-11-01

    Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of uncertainty in this parameter on models. Because temperature and pressure conditions are much lower than the laboratory conditions under which bimolecular diffusion parameters were measured, we apply a Bayesian, problem-agnostic framework to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library, which also performs a propagation of uncertainties in the calibrated parameters to the temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated with temperature, and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and associated uncertainty to obtain an estimate, with uncertainty due to bimolecular diffusion, of the methane molar fraction as a function of altitude. Results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km, where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
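
    The flavor of such a calibration can be sketched with a generic random-walk Metropolis-Hastings sampler (below) rather than QUESO's actual API. The power-law diffusion model, priors, synthetic data, and the 150 K propagation target are illustrative assumptions, not the Massman formulation or the study's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Simple power-law stand-in for a bimolecular diffusion model,
    # D(T) = A * (T/300)^s (per unit pressure); illustrative only.
    def diffusion(T, A, s):
        return A * (T / 300.0) ** s

    # Synthetic "laboratory" measurements near room temperature with 5% noise.
    T_lab = np.linspace(280, 360, 15)
    D_obs = diffusion(T_lab, 0.2, 1.75) * (1 + rng.normal(0, 0.05, T_lab.size))
    sigma = 0.05 * D_obs

    def log_post(theta):
        A, s = theta
        if A <= 0 or not 1.0 <= s <= 2.5:     # flat priors with hard bounds
            return -np.inf
        resid = (diffusion(T_lab, A, s) - D_obs) / sigma
        return -0.5 * np.sum(resid ** 2)

    # Random-walk Metropolis-Hastings over (A, s).
    theta, lp, chain = np.array([0.1, 1.5]), -np.inf, []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.005, 0.05])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])            # discard burn-in

    # Propagate the calibrated uncertainty to a much colder temperature,
    # far outside the "measured" range, where the uncertainty matters most.
    D_cold = diffusion(150.0, chain[:, 0], chain[:, 1])
    print(f"D(150 K): mean {D_cold.mean():.3f}, sd {D_cold.std():.3f}")
    ```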

  17. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics' quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation than MSD with eCNR of 0.10, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), versus mean DSC of 0.84 and first and third quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.

  18. Robust H∞ control of active vehicle suspension under non-stationary running

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Zhang, Li-Ping

    2012-12-01

    Due to the complexity of the controlled objects, the selection of control strategies and algorithms in vehicle control system design is an important task. Moreover, the control of automobile active suspensions has become an important research topic due to the constraints and parameter uncertainty of the underlying mathematical models. In this study, after establishing the non-stationary road surface excitation model, the active suspension control for the non-stationary running condition was studied using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter car model with parameter uncertainty was derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and then used to design the active suspension control system of the quarter car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy maintains high system stability under non-stationary running and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and required control forces within the given constraints, as well as under the non-stationary running condition.
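
    For readers who want to reproduce the starting point of such a design, the sketch below assembles the standard state-space matrices of a two-degree-of-freedom quarter-car model with a road-velocity disturbance and an active force input. The mass, stiffness, and damping values are illustrative placeholders, not the paper's.

    ```python
    import numpy as np

    # Minimal sketch of the 2-DOF quarter-car model underlying the H-infinity
    # synthesis. States x = [zs - zu, zs', zu - zr, zu'] (suspension deflection,
    # sprung-mass velocity, tire deflection, unsprung-mass velocity); inputs are
    # the road vertical velocity w = zr' and the active control force u.
    # Parameter values are illustrative placeholders, not the paper's.
    ms, mu = 320.0, 40.0             # sprung / unsprung mass [kg]
    ks, kt, cs = 18e3, 200e3, 1.0e3  # suspension stiffness, tire stiffness, damping

    A = np.array([
        [0.0,     1.0,     0.0,     -1.0],
        [-ks/ms, -cs/ms,   0.0,      cs/ms],
        [0.0,     0.0,     0.0,      1.0],
        [ks/mu,   cs/mu,  -kt/mu,   -cs/mu],
    ])
    B1 = np.array([0.0, 0.0, -1.0, 0.0])        # road disturbance channel (w)
    B2 = np.array([0.0, 1.0/ms, 0.0, -1.0/mu])  # active force channel (u)

    # Open-loop modes: body bounce (~1 Hz) and wheel hop (~11 Hz)
    eigs = np.linalg.eigvals(A)
    print("eigenvalues of A:", np.round(eigs, 2))
    print("natural frequencies [Hz]:",
          sorted(set(np.round(np.abs(eigs.imag) / (2*np.pi), 1))))
    ```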

  19. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ₈. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
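
    The emulate-then-sample idea generalizes well beyond 21 cm cosmology. The sketch below trains a Gaussian-process emulator (scikit-learn) on a handful of runs of a toy one-parameter "simulator" and then uses the cheap emulator inside a Metropolis sampler; the toy model, noise level, and prior are all invented stand-ins for the expensive 21 cm simulation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hedged sketch of the emulate-then-sample idea: fit a Gaussian-process
    # emulator to a few runs of an expensive "simulator", then use the cheap
    # emulator inside a Metropolis sampler. The 1D toy model below is a
    # stand-in for a 21 cm power-spectrum code, not the authors' emulator.
    rng = np.random.default_rng(2)
    simulator = lambda theta: theta ** 3 + theta        # toy "simulation"

    theta_train = np.linspace(-2.0, 2.0, 15)[:, None]   # 15 expensive design runs
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(theta_train, simulator(theta_train[:, 0]))

    y_obs, sigma = simulator(0.7), 0.05                 # mock observation

    def log_post(theta):
        if abs(theta) > 2.0:                            # flat prior on [-2, 2]
            return -np.inf
        mu = gp.predict(np.array([[theta]]))[0]         # emulator, not simulator
        return -0.5 * ((y_obs - mu) / sigma) ** 2

    theta, lp, chain = 0.0, log_post(0.0), []
    for _ in range(5000):
        prop = theta + 0.2 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print(f"posterior mean of theta ≈ {np.mean(chain[1000:]):.3f} (truth 0.7)")
    ```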

  20. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Preliminary results on the dynamics of large and flexible space structures in Halo orbits

    NASA Astrophysics Data System (ADS)

    Colagrossi, Andrea; Lavagna, Michèle

    2017-05-01

    The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location - in the Earth-Moon system - is suggested for a manned cis-lunar infrastructure; proposal which opens an interesting field of study from the astrodynamics perspective. Literature offers a wide set of scientific research done on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only the coupled orbit-attitude dynamics should be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.
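
    The orbital backbone of such studies is compact enough to state in code. The sketch below integrates the planar Circular Restricted Three-Body Problem in the rotating Earth-Moon frame with scipy; the initial state is an illustrative guess near L1, not a converged Halo orbit, and attitude and flexibility dynamics are omitted.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal sketch of the planar CR3BP in the rotating frame, nondimensional
    # units; mu is the Earth-Moon mass parameter. Attitude and flexibility
    # dynamics, SRP, and fourth-body gravity are beyond this sketch.
    mu = 0.01215                      # Earth-Moon mass ratio (approximate)

    def cr3bp(t, s):
        x, y, vx, vy = s
        r1 = np.hypot(x + mu, y)      # distance to Earth
        r2 = np.hypot(x - 1 + mu, y)  # distance to Moon
        ax = x + 2*vy - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
        ay = y - 2*vx - (1 - mu)*y/r1**3 - mu*y/r2**3
        return [vx, vy, ax, ay]

    # Illustrative initial state near the L1 region (not a converged Halo orbit)
    s0 = [0.83, 0.0, 0.0, 0.13]
    sol = solve_ivp(cr3bp, (0.0, 6.0), s0, rtol=1e-10, atol=1e-12)
    print("final state:", sol.y[:, -1])
    ```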

  2. Stimulus Sensitivity of a Spiking Neural Network Model

    NASA Astrophysics Data System (ADS)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet close to criticality, for a range of biologically relevant parameters.
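
    To make the self-exciting mechanism concrete, the sketch below simulates a classical linear Hawkes process with an exponential kernel using Ogata's thinning algorithm. The age-dependent Hawkes networks of the paper are richer than this, and the parameter values are arbitrary; in the subcritical regime the empirical mean rate should approach λ0/(1 − α/β).

    ```python
    import numpy as np

    # Hedged sketch: simulate a classical linear Hawkes process (exponential
    # kernel) by Ogata's thinning algorithm. The paper's age-dependent Hawkes
    # networks are richer; this only illustrates the self-exciting mechanism.
    rng = np.random.default_rng(3)
    lam0, alpha, beta, T = 1.0, 0.5, 1.5, 50.0   # baseline, jump, decay, horizon

    def intensity(t, events):
        past = np.asarray(events)
        past = past[past < t]
        return lam0 + alpha * np.sum(np.exp(-beta * (t - past)))

    events, t = [], 0.0
    while t < T:
        lam_bar = intensity(t, events) + alpha   # valid bound: intensity decays
        t += rng.exponential(1.0 / lam_bar)      # candidate next point
        if t < T and rng.random() * lam_bar <= intensity(t, events):
            events.append(t)                     # accept (thinning step)

    print(f"{len(events)} spikes on [0, {T}]; mean rate {len(events)/T:.2f}")
    print(f"subcritical theory: lam0/(1 - alpha/beta) = {lam0/(1 - alpha/beta):.2f}")
    ```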

  3. Mathematics as a Conduit for Translational Research in Post-Traumatic Osteoarthritis

    PubMed Central

    Ayati, Bruce P.; Kapitanov, Georgi I.; Coleman, Mitchell C.; Anderson, Donald D.; Martin, James A.

    2016-01-01

    Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a “conduit of translation”. The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model predicts qualitative responses presently, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. PMID:27653021

  4. Simulating future supply of and requirements for human resources for health in high-income OECD countries.

    PubMed

    Tomblin Murphy, Gail; Birch, Stephen; MacKenzie, Adrian; Rigby, Janet

    2016-12-12

    As part of efforts to inform the development of a global human resources for health (HRH) strategy, a comprehensive methodology for estimating HRH supply and requirements was described in a companion paper. The purpose of this paper is to demonstrate the application of that methodology, using data publicly available online, to simulate the supply of and requirements for midwives, nurses, and physicians in the 32 high-income member countries of the Organisation for Economic Co-operation and Development (OECD) up to 2030. A model combining a stock-and-flow approach to simulate the future supply of each profession in each country-adjusted according to levels of HRH participation and activity-and a needs-based approach to simulate future HRH requirements was used. Most of the data to populate the model were obtained from the OECD's online indicator database. Other data were obtained from targeted internet searches and documents gathered as part of the companion paper. Relevant recent measures for each model parameter were found for at least one of the included countries. In total, 35% of the desired current data elements were found; assumed values were used for the other current data elements. Multiple scenarios were used to demonstrate the sensitivity of the simulations to different assumed future values of model parameters. Depending on the assumed future values of each model parameter, the simulated HRH gaps across the included countries could range from shortfalls of 74 000 midwives, 3.2 million nurses, and 1.2 million physicians to surpluses of 67 000 midwives, 2.9 million nurses, and 1.0 million physicians by 2030. Despite important gaps in the data publicly available online and the short time available to implement it, this paper demonstrates the basic feasibility of a more comprehensive, population needs-based approach to estimating HRH supply and requirements than most of those currently being used. HRH planners in individual countries, working with their respective stakeholder groups, would have more direct access to data on the relevant planning parameters and would thus be in an even better position to implement such an approach.
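
    A minimal stock-and-flow supply projection paired with a needs-based requirement can be written in a few lines, as sketched below. Every number (initial stock, entries, exit rate, participation and activity rates, population growth, need per capita) is an invented placeholder; the real model draws these from the OECD indicator database and the companion paper's documents.

    ```python
    # Hedged sketch of a stock-and-flow supply projection combined with a
    # needs-based requirement. All numbers are invented placeholders.
    def project_hrh(stock, entries, exit_rate, participation, activity,
                    population, need_per_capita, years=14):
        """Yield (year, supply in FTE, requirement) for each projection year."""
        for year in range(years):
            stock = stock * (1.0 - exit_rate) + entries          # stock-and-flow
            supply_fte = stock * participation * activity        # adjusted supply
            requirement = population[year] * need_per_capita     # needs-based
            yield 2016 + year + 1, supply_fte, requirement

    population = [10_000_000 * 1.005**k for k in range(14)]      # assumed growth
    for year, supply, need in project_hrh(stock=30_000, entries=1_200,
                                          exit_rate=0.03, participation=0.9,
                                          activity=0.95, population=population,
                                          need_per_capita=0.0032):
        print(f"{year}: supply {supply:9.0f} FTE vs requirement {need:9.0f}")
    ```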

  5. Demographic inference under the coalescent in a spatial continuum.

    PubMed

    Guindon, Stéphane; Guo, Hongbin; Welch, David

    2016-10-01

    Understanding population dynamics from the analysis of molecular and spatial data requires sound statistical modeling. Current approaches assume that populations are naturally partitioned into discrete demes, thereby failing to be relevant in cases where individuals are scattered on a spatial continuum. Other models predict the formation of increasingly tight clusters of individuals in space, which, again, conflicts with biological evidence. Building on recent theoretical work, we introduce a new genealogy-based inference framework that alleviates these issues. This approach effectively implements a stochastic model in which the distribution of individuals is homogeneous and stationary, thereby providing a relevant null model for the fluctuation of genetic diversity in time and space. Importantly, the spatial density of individuals in a population and their range of dispersal during the course of evolution are two parameters that can be inferred separately with this method. The validity of the new inference framework is confirmed with extensive simulations and the analysis of influenza sequences collected over five seasons in the USA. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.

    1996-12-31

    A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
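
    The mass-on-a-spring picture translates almost directly into code. The sketch below grows a pendant drop at a constant melt rate, stretches the spring under gravity plus an assumed constant electromagnetic pinch force, detaches when either a displacement or a force criterion is met, and makes the detached mass fraction depend on droplet velocity. All parameter values are invented for illustration, not calibrated to the paper's data.

    ```python
    import numpy as np

    # Hedged sketch of the dynamic droplet model: a mass on an imaginary spring,
    # with melting adding mass and two detachment criteria. All numbers invented.
    dt = 1.0e-6                     # time step [s]
    melt_rate = 5.0e-4              # mass added by electrode melting [kg/s]
    k, g, f_em = 5.0, 9.81, 2.0e-3  # spring [N/m], gravity, assumed EM pinch [N]
    x_crit, f_crit = 5.0e-4, 4.0e-3 # detachment criteria: displacement, force

    m, x, v, t, drops = 1.0e-6, 0.0, 0.0, 0.0, []
    while t < 0.05:
        m += melt_rate * dt                           # electrode melting
        v += ((f_em + m * g - k * x) / m) * dt        # semi-implicit Euler
        x += v * dt
        if x > x_crit or f_em + m * g > f_crit:       # either criterion triggers
            frac = np.clip(0.6 + 0.5 * v, 0.3, 1.0)   # detached fraction ~ velocity
            drops.append(frac * m)
            m, x, v = max((1.0 - frac) * m, 1.0e-7), 0.0, 0.0
        t += dt

    print(f"{len(drops)} detachments in 50 ms -> {len(drops)/0.05:.0f} Hz, "
          f"mean droplet mass {np.mean(drops):.2e} kg")
    ```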

  7. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
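
    A single methodological pipeline from the space the paper explores might look like the sketch below: define a recession as the strictly receding steps of a daily series and fit −dQ/dt = aQ^b by ordinary least squares in log-log space. Other event definitions and fitting techniques, which the paper shows can matter, would replace the marked lines.

    ```python
    import numpy as np

    # Minimal sketch of event-scale power-law recession fitting,
    # -dQ/dt = a * Q^b, estimated in log-log space for one recession event.
    # Real analyses differ in event definition and fitting technique; that
    # methodological sensitivity is exactly what the paper quantifies.
    def fit_recession(q):
        """Fit log(-dQ/dt) = log(a) + b*log(Q) to one recession time series."""
        q = np.asarray(q, dtype=float)
        dqdt = np.diff(q)                   # daily data -> dt = 1 day
        q_mid = 0.5 * (q[1:] + q[:-1])
        mask = dqdt < 0                     # event definition: strictly receding
        b, log_a = np.polyfit(np.log(q_mid[mask]), np.log(-dqdt[mask]), 1)
        return np.exp(log_a), b

    # Synthetic recession generated with known parameters, then refit
    a_true, b_true, q = 0.05, 1.5, [10.0]
    for _ in range(30):
        q.append(q[-1] - a_true * q[-1] ** b_true)
    a_hat, b_hat = fit_recession(q)
    print(f"true (a,b)=({a_true},{b_true})  fitted (a,b)=({a_hat:.3f},{b_hat:.2f})")
    ```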

  8. A strategy to establish Food Safety Model Repositories.

    PubMed

    Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M

    2015-07-02

    Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.

  9. Determining relevant parameters for a statistical tropical cyclone genesis tool based upon global model output

    NASA Astrophysics Data System (ADS)

    Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.

    2013-12-01

    Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions which are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, probability of detection and false alarm rate varies spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. In order to explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether or not there are systematic differences between successful forecasts (hits), false alarms, and miss events. This study will examine composites of a number of physically-relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) among each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t test, Mann-Whitney U test) to calculate whether or not any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies. The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.

  10. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WCF1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
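
    The two process-model statistics are straightforward to compute from a Jacobian of simulated observations with respect to parameters, as sketched below for a toy Jacobian with one deliberately near-collinear parameter pair. The scaling convention follows common regression-based forms (cf. Hill and Tiedeman), with unit observation weights assumed for brevity.

    ```python
    import numpy as np

    # Hedged sketch of composite scaled sensitivities (CSS) and parameter
    # correlation coefficients (PCC) from a toy Jacobian. Unit observation
    # weights assumed; one parameter pair is made near-collinear on purpose.
    rng = np.random.default_rng(4)
    n_obs, n_par = 50, 4
    J = rng.standard_normal((n_obs, n_par))
    J[:, 3] = 0.95 * J[:, 2] + 0.05 * rng.standard_normal(n_obs)  # near-duplicate
    params = np.array([1.0, 2.0, 0.5, 0.4])      # current parameter values

    ss = J * params                              # dimensionless scaled sensitivities
    css = np.sqrt(np.mean(ss**2, axis=0))        # composite scaled sensitivity

    cov = np.linalg.inv(J.T @ J)                 # approximate parameter covariance
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)                   # parameter correlation coefficients

    print("CSS:", np.round(css, 2))
    print("PCC(p3, p4) =", np.round(pcc[2, 3], 3))  # |PCC| near 1: interdependent
    ```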

  11. On a sparse pressure-flow rate condensation of rigid circulation models

    PubMed Central

    Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.

    2015-01-01

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol’ decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219

  12. An action potential-driven model of soleus muscle activation dynamics for locomotor-like movements

    NASA Astrophysics Data System (ADS)

    Kim, Hojeong; Sandercock, Thomas G.; Heckman, C. J.

    2015-08-01

    Objective. The goal of this study was to develop a physiologically plausible, computationally robust model for muscle activation dynamics (A(t)) under physiologically relevant excitation and movement. Approach. The interaction of excitation and movement on A(t) was investigated comparing the force production between a cat soleus muscle and its Hill-type model. For capturing A(t) under excitation and movement variation, a modular modeling framework was proposed comprising of three compartments: (1) spikes-to-[Ca2+]; (2) [Ca2+]-to-A; and (3) A-to-force transformation. The individual signal transformations were modeled based on physiological factors so that the parameter values could be separately determined for individual modules directly based on experimental data. Main results. The strong dependency of A(t) on excitation frequency and muscle length was found during both isometric and dynamically-moving contractions. The identified dependencies of A(t) under the static and dynamic conditions could be incorporated in the modular modeling framework by modulating the model parameters as a function of movement input. The new modeling approach was also applicable to cat soleus muscles producing waveforms independent of those used to set the model parameters. Significance. This study provides a modeling framework for spike-driven muscle responses during movement, that is suitable not only for insights into molecular mechanisms underlying muscle behaviors but also for large scale simulations.

  13. Chameleon dark energy models with characteristic signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gannouji, Radouane; Department of Physics, Faculty of Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601; Moraes, Bruno

    2010-12-15

    In chameleon dark energy models, local gravity constraints tend to rule out parameters in which observable cosmological signatures can be found. We study viable chameleon potentials consistent with a number of recent observational and experimental bounds. A novel chameleon field potential, motivated by f(R) gravity, is constructed where observable cosmological signatures are present both at the background evolution and in the growth rate of the perturbations. We study the evolution of matter density perturbations at low redshifts for this potential and show that the growth index today γ₀ can have significant dispersion on scales relevant for large-scale structures. The values of γ₀ can be even smaller than 0.2, with large variations of γ at very low redshifts for the model parameters constrained by local gravity tests. This gives a possibility to clearly distinguish these chameleon models from the Λ-cold-dark-matter (ΛCDM) model in future high-precision observations.

  14. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss the problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.

  15. Population Pharmacokinetic Modeling of Diltiazem in Chinese Renal Transplant Recipients.

    PubMed

    Guan, Xiao-Feng; Li, Dai-Yang; Yin, Wen-Jun; Ding, Jun-Jie; Zhou, Ling-Yun; Wang, Jiang-Lin; Ma, Rong-Rong; Zuo, Xiao-Cong

    2018-02-01

    Diltiazem is a benzothiazepine calcium blocker widely used in renal transplant patients since it raises the concentration of tacrolimus or cyclosporine A. Several population pharmacokinetic (PopPK) models have been established for cyclosporine A and tacrolimus, but no specific PopPK model has been established for diltiazem. The aim of the study is to develop a PopPK model for diltiazem in renal transplant recipients and provide relevant pharmacokinetic parameters of diltiazem for further pharmacokinetic interaction studies. Patients received tacrolimus as the primary immunosuppressant agent after renal transplant and started administration of diltiazem 90 mg twice daily on day 5. The concentration of diltiazem at 0, 0.5, 1, 2, 8, and 12 h was measured by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). Genotyping for CYP3A4*1G, CYP3A5*3, and MDR1 3435 was conducted by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). 25 covariates were considered in the stepwise covariate model (SCM) building procedure. A one-compartment structural pharmacokinetic model with first-order absorption and elimination was used to describe the pharmacokinetic characteristics of diltiazem. Total bilirubin (TBIL) influenced the apparent volume of distribution (V/F) of diltiazem in the forward selection. The absorption rate constant (Ka), V/F, and apparent oral clearance (CL/F) of the final population pharmacokinetic (PopPK) model of diltiazem were 1.96/h, 3550 L, and 92.4 L/h, respectively. A PopPK model of diltiazem is established in Chinese renal transplant recipients and it will provide relevant pharmacokinetic parameters of diltiazem for further pharmacokinetic interaction studies.
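
    With the reported population estimates, the structural model can be written down directly. The sketch below evaluates the standard one-compartment, first-order absorption and elimination concentration profile for a single 90 mg oral dose using Ka = 1.96/h, V/F = 3550 L, and CL/F = 92.4 L/h from the abstract; covariate effects (such as TBIL on V/F) and between-subject variability are omitted.

    ```python
    import numpy as np

    # Sketch of the one-compartment model with first-order absorption and
    # elimination, using the population estimates reported above. Single
    # 90 mg oral dose; covariate effects and random effects are omitted.
    ka, v_f, cl_f = 1.96, 3550.0, 92.4   # absorption rate [/h], V/F [L], CL/F [L/h]
    ke = cl_f / v_f                      # elimination rate constant [/h]
    dose = 90.0                          # mg

    def conc(t_h):
        """Plasma concentration [mg/L] at time t_h [h] after a single dose."""
        return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

    for t in (0.5, 1.0, 2.0, 8.0, 12.0):     # the study's sampling times
        print(f"t = {t:4.1f} h   C = {conc(t) * 1000:5.1f} ng/mL")
    ```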

  16. Analytical investigation of third grade nanofluidic flow over a Riga plate using the Cattaneo-Christov model

    NASA Astrophysics Data System (ADS)

    Naseem, Anum; Shafiq, Anum; Zhao, Lifeng; Farooq, M. U.

    2018-06-01

    This article addresses third grade nanofluid flow instigated by a Riga plate; Cattaneo-Christov theory is adopted to investigate thermal and mass diffusion, with the incorporation of the recently proposed zero nanoparticle mass flux condition. The governing system of equations is nondimensionalized through relevant similarity transformations and significant findings are obtained using the optimal homotopy analysis method. The behaviors of the governing parameters for the velocity, temperature and concentration profiles are depicted graphically and also verified through three-dimensional patterns for some parameters. Values of the skin friction coefficient and Nusselt number are reported with the appropriate discussion. The current results reveal that the temperature and concentration profiles decline when the thermal and concentration relaxation parameters are augmented, respectively.

  17. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+ \bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraints strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.

  18. Multiparameter bifurcations and mixed-mode oscillations in Q-switched CO2 lasers.

    PubMed

    Doedel, Eusebius J; Pando L, Carlos L

    2014-05-01

    We study the origin of mixed-mode oscillations and related bifurcations in a fully molecular laser model that describes CO2 monomode lasers with a slow saturable absorber. Our study indicates that the presence of isolas of periodic mixed-mode oscillations, as the pump parameter and the cavity-frequency detuning change, is inherent to Q-switched CO2 monomode lasers. We compare this model, known as the dual four-level model, to the more conventional 3:2 model and to a CO2 laser model for fast saturable absorbers. In these models, we find similarities as well as qualitative differences, such as the different nature of the homoclinic tangency to a relevant unstable periodic orbit, where the Gavrilov-Shilnikov theory and its extensions may hold. We also show that there are isolas of periodic mixed-mode oscillations in a model for CO2 lasers with modulated losses, as the pump parameter varies. The coarse-grained bifurcation diagrams of the periodic mixed-mode oscillations in these models suggest that these oscillations belong to similar classes.

  19. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  20. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components concerning the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even if the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.

  1. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components concerning the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even if the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.

  2. A study on the role of powertrain system dynamics on vehicle driveability

    NASA Astrophysics Data System (ADS)

    Castellazzi, Luca; Tonoli, Andrea; Amati, Nicola; Galliera, Enrico

    2017-07-01

    Vehicle driveability describes the complex interactions between the driver and the vehicle, mainly related to longitudinal vibrations. Today, a relevant part of the driveability optimisation process is realised by means of track tests, which require a considerable effort due to the number of parameters (such as stiffness and damping components) affecting this behaviour. The drawback of this approach is that it is carried out at a stage when a design iteration becomes very expensive in terms of time and cost. The objective of this work is to propose a light and accurate tool to represent the relevant quantities involved in the driveability analysis, and to understand which are the main vehicle parameters that influence the torsional vibrations transmitted to the driver. Particular attention is devoted to the role of the tyre, the engine mount, the dual mass flywheel and their possible interactions. The presented nonlinear dynamic model has been validated in the time and frequency domains and, through linearisation of its nonlinear components, allows modal and energy analyses to be exploited. Objective indexes regarding driving comfort are additionally considered in order to evaluate possible driveability improvements related to the sensitivity of powertrain parameters.

  3. Quantifying the brush structure and assembly of mixed brush nanoparticles in solution

    NASA Astrophysics Data System (ADS)

    Koski, Jason; Frischknecht, Amalie

    The arrangement of nanoparticles in a polymer melt or solution is critical to the resulting material properties. A common strategy to control the distribution of nanoparticles is to graft polymer chains onto the surface of the nanoparticles. An emerging strategy to further control the arrangement of nanoparticles is to graft polymer chains of different types and/or different lengths onto the surface of the nanoparticle, though this considerably increases the parameter space needed to describe the system. Theoretical models that are capable of predicting the assembly of nanoparticles in a melt or solution are thus desirable to guide experiments. In this talk, I will describe a recently developed non-equilibrium method that is appealing in its ability to tractably account for fluctuations and that can directly relate to experiments. To showcase the utility of this method, I apply it to mixed brush grafted nanoparticles in solution where fluctuations are prominent. Specifically, I investigate the role of experimentally relevant parameters on the structure of the brush and the corresponding effects on the assembly of the nanoparticles in solution. These results can be directly linked to experiments to help narrow the relevant parameter space for optimizing these materials.

  4. Interference effect on a heavy Higgs resonance signal in the γ γ and Z Z channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jeonghyeon; Yoon, Yeo Woong; Jung, Sunghoon

    2016-03-24

    The resonance-continuum interference is usually neglected when the width of a resonance is small compared to the resonance mass. We reexamine this standard by studying the interference effects in high-resolution decay channels, γγ and ZZ, of the heavy Higgs boson H in nearly aligned two-Higgs-doublet models. For the H with a sub-percent width-to-mass ratio, we find that, in the parameter space where the LHC 14 TeV ZZ resonance search can be sensitive, the interference effects can modify the ZZ signal rate by O(10)% and the exclusion reach by O(10) GeV. In other parameter space where the ZZ or γγ signal rate is smaller, the LHC 14 TeV reach is absent, but a resonance shape can be much more dramatically changed. In particular, the γγ signal rate can change by O(100)%. Relevant to such parameter space, we suggest variables that can characterize a general resonance shape. Furthermore, we also illustrate the relevance of the width on the interference by adding nonstandard decay modes of the heavy Higgs boson.

  5. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and including output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  6. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
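
    The offline tune-then-deploy idea can be compressed into a few dozen lines, as sketched below: a scalar Kalman filter tracks the SoC of a toy Coulomb-counting battery model, and a small (mu+lambda)-style genetic algorithm searches over log10(Q) and log10(R) for the pair minimising RMS estimation error. The battery model and all numbers are invented stand-ins, not the paper's state-space model or MATLAB code.

    ```python
    import numpy as np

    # Hedged sketch: scalar KF for SoC on a toy Coulomb-counting battery model,
    # with a tiny GA tuning (Q, R) offline. All numbers are invented.
    rng = np.random.default_rng(5)
    n, dt, cap = 500, 1.0, 3600.0                 # steps, step [s], capacity [A s]
    current = 1.0 + 0.2 * np.sin(np.arange(n) / 30.0)
    soc_true = np.empty(n); soc_true[0] = 1.0
    for k in range(1, n):
        soc_true[k] = soc_true[k-1] - current[k]*dt/cap + 1e-4*rng.standard_normal()
    z = soc_true + 0.02 * rng.standard_normal(n)  # noisy SoC-equivalent measurement

    def kf_rms(Q, R):
        x, P, err = 1.0, 1.0, 0.0
        for k in range(1, n):
            x -= current[k] * dt / cap            # predict (Coulomb counting)
            P += Q
            K = P / (P + R)                       # update with measurement z[k]
            x += K * (z[k] - x)
            P *= (1.0 - K)
            err += (x - soc_true[k]) ** 2
        return np.sqrt(err / (n - 1))

    # Tiny (mu+lambda)-style GA over log10(Q), log10(R)
    pop = rng.uniform([-10, -4], [-2, 0], size=(20, 2))
    for _ in range(30):
        fitness = np.array([kf_rms(10**q, 10**r) for q, r in pop])
        parents = pop[np.argsort(fitness)[:5]]                 # keep the best 5
        children = parents[rng.integers(0, 5, 15)] + 0.2*rng.standard_normal((15, 2))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([kf_rms(10**q, 10**r) for q, r in pop])]
    print(f"best Q={10**best[0]:.2e}, R={10**best[1]:.2e}, "
          f"RMS={kf_rms(10**best[0], 10**best[1]):.4f}")
    ```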

  7. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  8. An algorithm to estimate aircraft cruise black carbon emissions for use in developing a cruise emissions inventory.

    PubMed

    Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C

    2013-03-01

    To provide accurate input parameters to the large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates BC emission index for a given engine/airframe combination using the engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve the global climate simulations that currently use a single fleet average value for all airplanes. An algorithm to estimate the cruise condition black carbon emission index for commercial aircraft engines was developed. Using the ICAO certification data, the algorithm can evaluate the black carbon emission at given cruise altitude and speed.

  9. A delay differential model of ENSO variability: Extreme values and stability analysis

    NASA Astrophysics Data System (ADS)

    Zaliapin, I.; Ghil, M.

    2009-04-01

    We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability [Ghil et al. (2008), Nonlin. Proc. Geophys., 15, 417-433]. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. Toy models of this type were shown to capture major features of the ENSO phenomenon [Jin et al., Science (1994); Tziperman et al., Science (1994)]; they provide a convenient paradigm for explaining interannual ENSO variability and shed new light on its dynamical properties. So far, though, DDE model studies of ENSO have been limited to linear stability analysis of steady-state solutions, which are not typical in forced systems, case studies of particular trajectories, or one-dimensional scenarios of transition to chaos, varying a single parameter while the others are kept fixed. In this work we take several steps toward a comprehensive analysis of DDE models relevant for ENSO phenomenology and illustrate the complexity of phase-parameter space structure for even such a simple model of climate dynamics. We formulate an initial value problem for our model and prove the existence, uniqueness, and continuous dependence theorem. We then use this theoretical result to perform detailed numerical stability analyses of the model in the three-dimensional space of its physically relevant parameters: strength of seasonal forcing b, atmosphere-ocean coupling κ, and propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are reported; they are separated by a sharp neutral curve in the (b,τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. In the unstable regime, spontaneous transitions occur in the mean temperature (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil's bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some of the features of this behavior have been documented in general circulation models, as well as in observations. We analyze the values of annual extremes and their location within an annual cycle and report the phase-locking phenomenon, which is connected to the occurrence of El Niño events during the boreal (Northern Hemisphere) winter. We report the existence of multiple solutions and study their basins of attraction in a space of initial conditions. We also present a model-based justification for the observed quasi-biennial oscillation in Tropical Pacific SSTs. We expect similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely. The basic mechanisms used in our model (delayed feedback and forcing) may be relevant to other natural systems in which internal instabilities interact with external forcing and give rise to extreme events.
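
    A minimal version of this model family is easy to integrate once the delayed state is buffered. The sketch below advances dh/dt = −tanh(κ h(t − τ)) + b cos(2πt) with fixed-step Euler and a ring buffer holding the last τ/dt states, then reports extremes over the final 50 model years for two coupling strengths; the functional form follows Ghil et al. (2008) up to rescalings, and the parameter values are illustrative.

    ```python
    import numpy as np

    # Hedged sketch of a Ghil et al. (2008)-type delay model,
    #   dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t),
    # advanced with fixed-step Euler and a ring buffer for the delayed state
    # (h: thermocline-depth anomaly, t in years). Parameters illustrative.
    def integrate(b, kappa, tau, t_end=200.0, dt=1e-3, h0=1.0):
        n_delay = max(1, int(round(tau / dt)))
        buf = np.full(n_delay, h0)            # constant history: h(t <= 0) = h0
        h = h0
        traj = np.empty(int(t_end / dt))
        for k in range(traj.size):
            h_delayed = buf[k % n_delay]      # value stored n_delay steps ago
            buf[k % n_delay] = h
            h += dt * (-np.tanh(kappa * h_delayed) + b * np.cos(2*np.pi * k * dt))
            traj[k] = h
        return traj

    # Extremes over the last 50 model years for weak vs. strong coupling
    for kappa in (2.0, 11.0):
        tail = integrate(b=1.0, kappa=kappa, tau=0.5)[-50_000:]
        print(f"kappa = {kappa:4.1f}: max h = {tail.max():+.2f}, min h = {tail.min():+.2f}")
    ```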

  10. Online Chapman Layer Calculator for Simulating the Ionosphere with Undergraduate and Graduate Students

    NASA Astrophysics Data System (ADS)

    Gross, N. A.; Withers, P.; Sojka, J. J.

    2014-12-01

    The Chapman Layer Model is a "textbook" model of the ionosphere (for example, "Theory of Planetary Atmospheres" by Chamberlain and Hunten, Academic Press (1978)). The model uses fundamental assumptions about the neutral atmosphere, the flux of ionizing radiation, and the recombination rate to calculate the ionization rate and ion/electron density for a single-species atmosphere. We have developed a "Chapman Layer Calculator" application that is deployed on the web using Java. It allows the user to see how various parameters control the ion density, peak height, and profile of the ionospheric layer. Users can adjust parameters relevant to the thermosphere scale height (temperature, gravitational acceleration, molecular weight, neutral atmosphere density) and to the Extreme Ultraviolet solar flux (reference EUV, distance from the Sun, and solar zenith angle) and then see how the layer changes. This allows the user to simulate the ionosphere of other planets by adjusting the appropriate parameters. This simulation has been used as an exploratory activity for the NASA/LWS - Heliophysics Summer School 2014 and has an accompanying activity guide.
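
    The production profile behind the calculator is the classic Chapman function, sketched below: q(z) = q_max exp(1 − y − sec χ e^(−y)) with y = (z − z_max)/H, plus the square-root scaling of electron density under photochemical (alpha-recombination) equilibrium. The reference height, scale height, and normalization are illustrative.

    ```python
    import numpy as np

    # Sketch of the textbook Chapman production profile:
    #   q(z) = q_max * exp(1 - y - sec(chi) * exp(-y)),  y = (z - z_max) / H,
    # with H the neutral scale height and chi the solar zenith angle. Under
    # alpha-recombination equilibrium, electron density scales as sqrt(q).
    def chapman_production(z_km, z_max_km=110.0, H_km=10.0, chi_deg=0.0, q_max=1.0):
        y = (z_km - z_max_km) / H_km
        sec_chi = 1.0 / np.cos(np.radians(chi_deg))
        return q_max * np.exp(1.0 - y - sec_chi * np.exp(-y))

    z = np.arange(80.0, 200.0, 5.0)
    for chi in (0.0, 60.0):
        q = chapman_production(z, chi_deg=chi)
        ne = np.sqrt(q)                     # relative electron density
        print(f"chi = {chi:4.1f} deg: peak near {z[np.argmax(q)]:.0f} km, "
              f"relative Ne at peak = {ne.max():.2f}")
    ```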

  11. Dynamical quenching and annealing in self-organization multiagent models.

    PubMed

    Burgos, E; Ceva, H; Perazzo, R P

    2001-07-01

    We study the dynamics of a generalized minority game (GMG) and of the bar attendance model (BAM), in which a number of agents self-organize to match an attendance that is fixed externally as a control parameter. We compare the usual dynamics used for the minority game with one for the BAM that makes better use of the available information. We study the asymptotic states reached in both frameworks. We show that states that can be assimilated to either thermodynamic equilibrium or quenched configurations can appear in both models, but with different settings. We discuss the relevance of the parameter G that measures the value of the prize for winning in units of the fine for losing. We also provide an annealing protocol by which the quenched configurations of the GMG can progressively be modified to reach an asymptotic equilibrium state that coincides with the one obtained with the BAM.
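
    The sketch below is a deliberately crude caricature of attendance-matching dynamics of this kind: each agent attends with its own probability and all agents nudge that probability toward whichever action was on the winning side of the externally fixed target. The update rule and learning rate are our own illustrative assumptions, not the GMG or BAM dynamics of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bar_attendance(n_agents=1001, target=0.4, rounds=2000, lr=0.01):
        """Toy attendance-matching dynamics: every agent attends with its
        own probability p_i; after each round all agents shift p_i toward
        attending if the bar was undercrowded, and away from attending if
        it was overcrowded."""
        p = rng.uniform(0.0, 1.0, n_agents)
        history = []
        for _ in range(rounds):
            frac = (rng.uniform(0.0, 1.0, n_agents) < p).mean()
            history.append(frac)
            if frac < target:
                p += lr * (1.0 - p)   # undercrowded: attending paid off
            else:
                p -= lr * p           # overcrowded: staying home paid off
        return np.array(history)

    att = bar_attendance()
    print(att[-10:].mean())           # hovers near the target of 0.4
    ```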

  12. Dynamical quenching and annealing in self-organization multiagent models

    NASA Astrophysics Data System (ADS)

    Burgos, E.; Ceva, Horacio; Perazzo, R. P.

    2001-07-01

    We study the dynamics of a generalized minority game (GMG) and of the bar attendance model (BAM), in which a number of agents self-organize to match an attendance that is fixed externally as a control parameter. We compare the usual dynamics used for the minority game with one for the BAM that makes better use of the available information. We study the asymptotic states reached in both frameworks. We show that states that can be assimilated to either thermodynamic equilibrium or quenched configurations can appear in both models, but with different settings. We discuss the relevance of the parameter G that measures the value of the prize for winning in units of the fine for losing. We also provide an annealing protocol by which the quenched configurations of the GMG can progressively be modified to reach an asymptotic equilibrium state that coincides with the one obtained with the BAM.

  13. A compendium of chameleon constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain viable. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  14. The topological Anderson insulator phase in the Kane-Mele model

    NASA Astrophysics Data System (ADS)

    Orth, Christoph P.; Sekera, Tibor; Bruder, Christoph; Schmidt, Thomas L.

    2016-04-01

    It has been proposed that adding disorder to a topologically trivial mercury telluride/cadmium telluride (HgTe/CdTe) quantum well can induce a transition to a topologically nontrivial state. The resulting state was termed topological Anderson insulator and was found in computer simulations of the Bernevig-Hughes-Zhang model. Here, we show that the topological Anderson insulator is a more universal phenomenon and also appears in the Kane-Mele model of topological insulators on a honeycomb lattice. We numerically investigate the interplay of the relevant parameters, and establish the parameter range in which the topological Anderson insulator exists. A staggered sublattice potential turns out to be a necessary condition for the transition to the topological Anderson insulator. For weak enough disorder, a calculation based on the lowest-order Born approximation reproduces quantitatively the numerical data. Our results thus considerably increase the number of candidate materials for the topological Anderson insulator phase.

  15. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with the related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible. This makes it possible to identify associated migration risks and potential safety concerns at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
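
    As a minimal illustration of the Monte Carlo idea, the sketch below propagates uncertainty in the two parameters highlighted above (barrier diffusion coefficient and layer thickness) through the classical lag-time formula t_lag = L²/(6D) for breakthrough across a membrane. The distributions and numerical values are invented for illustration and are not taken from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def barrier_lag_time_mc(n=100_000,
                            d_median=1e-13,   # cm^2/s, barrier diffusion coeff. (assumed)
                            d_gsd=3.0,        # geometric std. dev. of D (assumed)
                            l_mean=20e-4,     # cm, barrier thickness, 20 um (assumed)
                            l_sd=2e-4):
        """Monte Carlo on the classical permeation lag time
        t_lag = L^2 / (6 D): sample D (lognormal) and L (normal) and
        return percentiles of the time before migrants break through
        the functional barrier, in days."""
        d = rng.lognormal(np.log(d_median), np.log(d_gsd), n)
        l = np.clip(rng.normal(l_mean, l_sd, n), 1e-5, None)
        t_lag_days = l**2 / (6.0 * d) / 86_400.0
        return np.percentile(t_lag_days, [5, 50, 95])

    print(barrier_lag_time_mc())   # [5th percentile, median, 95th percentile]
    ```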

  16. Effects of total pressure on non-grey gas radiation transfer in oxy-fuel combustion using the LBL, SNB, SNBCK, WSGG, and FSCK methods

    NASA Astrophysics Data System (ADS)

    Chu, Huaqiang; Gu, Mingyan; Consalvi, Jean-Louis; Liu, Fengshan; Zhou, Huaichun

    2016-03-01

    The effects of total pressure on gas radiation heat transfer are investigated in a 1D parallel-plate geometry containing isothermal and homogeneous media and an inhomogeneous and non-isothermal CO2-H2O mixture under conditions relevant to oxy-fuel combustion, using the line-by-line (LBL), statistical narrow-band (SNB), statistical narrow-band correlated-k (SNBCK), weighted-sum-of-grey-gases (WSGG), and full-spectrum correlated-k (FSCK) models. The LBL calculations were conducted using the HITEMP2010 and CDSD-1000 databases, and the LBL results serve as the benchmark solution to evaluate the accuracy of the other models. Calculations with the SNB, SNBCK, and FSCK were conducted using both the 1997 EM2C SNB parameters and their recently updated 2012 parameters, to investigate how the SNB model parameters affect the results under oxy-fuel combustion conditions at high pressures. The WSGG model considered is the one recently developed by Bordbar et al. [19] for oxy-fuel combustion based on LBL calculations using HITEMP2010. The total pressure considered ranges from 1 up to 30 atm. The total pressure significantly affects gas radiation transfer, primarily through the increase in molecular number density and only slightly through spectral line broadening. Using the 1997 EM2C SNB model parameters, the accuracy of the SNB and SNBCK is very good and remains essentially independent of the total pressure. When using the 2012 EM2C SNB model parameters, the SNB and SNBCK results are less accurate and their error increases with increasing total pressure. The WSGG model has the lowest accuracy and the best computational efficiency among the models investigated. The errors of both the WSGG and the FSCK using the 2012 EM2C SNB model parameters increase when the total pressure is increased from 1 to 10 atm, but remain nearly independent of the total pressure beyond 10 atm. When using the 1997 EM2C SNB model parameters, the accuracy of the FSCK decreases only slightly with increasing total pressure.
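
    For orientation, a WSGG-type closure computes total emissivity as a short sum over grey gases, ε = Σ_i a_i(T)(1 - exp(-k_i p L)). The sketch below shows only this bookkeeping; the coefficients are placeholders, and a real calculation must use a published set such as the Bordbar et al. parameters cited above.

    ```python
    import numpy as np

    # Placeholder coefficients for illustration only; a real calculation
    # must use a published WSGG set (e.g. Bordbar et al. for oxy-fuel).
    K_I = np.array([0.1, 1.0, 10.0, 100.0])   # grey-gas absorption coeffs [1/(atm*m)]
    B_IJ = np.array([[0.30, -0.10],           # a_i(T) = b_i0 + b_i1*(T/1000)
                     [0.25, -0.05],
                     [0.20, -0.05],
                     [0.10, -0.02]])

    def wsgg_emissivity(t_kelvin, p_atm, path_m):
        """Total emissivity: eps = sum_i a_i(T) * (1 - exp(-k_i * p * L))."""
        a = B_IJ[:, 0] + B_IJ[:, 1] * (t_kelvin / 1000.0)
        return float(np.sum(a * (1.0 - np.exp(-K_I * p_atm * path_m))))

    print(wsgg_emissivity(1500.0, 0.3, 1.0))   # e.g. 0.3 atm partial pressure, 1 m path
    ```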

  17. Suitability of aero-geophysical methods for generating conceptual soil maps and their use in the modeling of process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In past years, large-scale disasters occurred repeatedly in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. Therefore, for the purpose of risk prevention, national and regional authorities require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g., neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g., for shallow gravitational mass movements in soil). However, numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g., Tilch & Schwarz 2011, Schwarz & Tilch 2011). It is therefore important that maps with detailed, process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally. Similarly, in Austria soil maps are often available only for treeless areas. In almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often specially adapted to the issues and objectives at hand. This is one reason why conceptual soil maps must very often be derived from geological maps containing only hard-rock information, which often have rather low quality. Based on these maps, for example, adjacent areas of different geological composition and process-relevant physical properties are delineated with razor-sharp boundaries, which rarely occur in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aero-geophysical measurements (electromagnetic, radiometric), carried out by helicopter in different regions of Austria, were interpreted. Previous studies show that, especially with radiometric measurements, the two-dimensional spatial variability of the process-relevant soil close to the surface can be determined. In addition, the electromagnetic measurements are important for obtaining three-dimensional information on the deeper geological conditions and for improving area-specific geological knowledge and understanding. The validation of these measurements is done with terrestrial geoelectrical measurements. Both radiometric and electromagnetic measurements are thus important, and the interpretation of the geophysical results can subsequently be used as parameter maps in the modeling of more realistic susceptibility maps with respect to various processes. Within this presentation, results of geophysical measurements, the outcome and the derived parameter maps, as well as first process-oriented susceptibility maps for gravitational soil mass movements will be presented. As an example, results obtained with a heuristic method in an area in Vorarlberg (Western Austria) will be shown. References: Schwarz, L. & Tilch, N. (2011): Why are good process data so important for the modelling of landslide susceptibility maps? - EGU Poster Session "Landslide hazard and risk assessment, and landslide management" (NH 3.6), Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_schwarz_tilch_1.pdf] Tilch, N. & Schwarz, L. (2011): Spatial and scale-dependent variability in data quality and their influence on susceptibility maps for gravitational mass movements in soil, modelled by heuristic method. - EGU Poster Session "Landslide hazard and risk assessment, and landslide management" (NH 3.6), Vienna. [http://www.geologie.ac.at/fileadmin/user_upload/dokumente/pdf/poster/poster_2011_egu_tilch_schwarz.pdf]

  18. MODELING GALACTIC EXTINCTION WITH DUST AND 'REAL' POLYCYCLIC AROMATIC HYDROCARBONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulas, Giacomo; Casu, Silvia; Cecchi-Pestellini, Cesare

    We investigate the remarkable apparent variety of galactic extinction curves by modeling extinction profiles with core-mantle grains and a collection of single polycyclic aromatic hydrocarbons. Our aim is to translate a synthetic description of dust into physically well-grounded building blocks through the analysis of a statistically relevant sample of different extinction curves. All different flavors of observed extinction curves, ranging from the average galactic extinction curve to virtually 'bumpless' profiles, can be described by the present model. We prove that a mixture of a relatively small number (54 species in 4 charge states each) of polycyclic aromatic hydrocarbons can reproduce the features of the extinction curve in the ultraviolet, dismissing an old objection to the contribution of polycyclic aromatic hydrocarbons to the interstellar extinction curve. Despite the large number of free parameters (at most the 54 × 4 column densities of each species in each ionization state included in the molecular ensemble, plus the 9 parameters defining the physical properties of classical particles), we can strongly constrain some physically relevant properties such as the total number of C atoms in all species and the mean charge of the mixture. Such properties are found to be largely independent of the adopted dust model, whose variation provides effects that are orthogonal to those brought about by the molecular component. Finally, the fitting procedure, together with some physical sense, suggests (but does not require) the presence of an additional component of chemically different very small carbonaceous grains.

  19. Uncertainties in modelling CH4 emissions from northern wetlands in glacial climates: the role of vegetation parameters

    NASA Astrophysics Data System (ADS)

    Berrittella, C.; van Huissteden, J.

    2011-10-01

    Marine Isotope Stage 3 (MIS 3) interstadials are marked by a sharp increase in the atmospheric methane (CH4) concentration, as recorded in ice cores. Wetlands are assumed to be the major source of this CH4, although several other hypotheses have been advanced. Modelling of CH4 emissions is crucial to quantify CH4 sources for past climates. Vegetation effects are generally highly generalized in modelling past and present-day CH4 fluxes, but should not be neglected. Plants strongly affect the soil-atmosphere exchange of CH4, and the net primary production of the vegetation supplies organic matter as substrate for methanogens. For modelling past CH4 fluxes from northern wetlands, assumptions on vegetation are highly relevant since paleobotanical data indicate large differences in Last Glacial (LG) wetland vegetation composition as compared to modern wetland vegetation. Besides more cold-adapted vegetation, Sphagnum mosses appear to be much less dominant during large parts of the LG than at present, which particularly affects CH4 oxidation and transport. To evaluate the effect of vegetation parameters, we used the PEATLAND-VU wetland CO2/CH4 model to simulate emissions from wetlands in continental Europe during LG and modern climates. We tested the effect of the parameters influencing oxidation during plant transport (f_ox), vegetation net primary production (NPP, parameter symbol P_max), plant transport rate (V_transp), maximum rooting depth (Z_root) and root exudation rate (f_ex). Our model results show that modelled CH4 fluxes are sensitive to f_ox and Z_root in particular. The effects of P_max, V_transp and f_ex are of lesser relevance. Interactions with water table modelling are significant for V_transp. We conducted experiments with different wetland vegetation types for Marine Isotope Stage 3 (MIS 3) stadial and interstadial climates and the present-day climate, by coupling PEATLAND-VU to high-resolution climate model simulations for Europe. Experiments assuming dominance of one vegetation type (Sphagnum vs. Carex vs. shrubs) show that Carex-dominated vegetation can increase CH4 emissions by 50% to 78% over Sphagnum-dominated vegetation depending on the modelled climate, while for shrubs this increase ranges from 42% to 72%. Consequently, during the LG northern wetlands may have had CH4 emissions similar to their present-day counterparts, despite a colder climate. Changes in dominant wetland vegetation, therefore, may drive changes in wetland CH4 fluxes, in the past as well as in the future.

  20. How robust are the natural history parameters used in chlamydia transmission dynamic models? A systematic review.

    PubMed

    Davies, Bethan; Anderson, Sarah-Jane; Turner, Katy M E; Ward, Helen

    2014-01-30

    Transmission dynamic models linked to economic analyses often form part of the decision making process when introducing new chlamydia screening interventions. Outputs from these transmission dynamic models can vary depending on the values of the parameters used to describe the infection. Therefore these values can have an important influence on policy and resource allocation. The risk of progression from infection to pelvic inflammatory disease has been extensively studied but the parameters which govern the transmission dynamics are frequently neglected. We conducted a systematic review of transmission dynamic models linked to economic analyses of chlamydia screening interventions to critically assess the source and variability of the proportion of infections that are asymptomatic, the duration of infection and the transmission probability. We identified nine relevant studies in Pubmed, Embase and the Cochrane database. We found that there is a wide variation in their natural history parameters, including an absolute difference in the proportion of asymptomatic infections of 25% in women and 75% in men, a six-fold difference in the duration of asymptomatic infection and a four-fold difference in the per act transmission probability. We consider that much of this variation can be explained by a lack of consensus in the literature. We found that a significant proportion of parameter values were referenced back to the early chlamydia literature, before the introduction of nucleic acid modes of diagnosis and the widespread testing of asymptomatic individuals. In conclusion, authors should use high quality contemporary evidence to inform their parameter values, clearly document their assumptions and make appropriate use of sensitivity analysis. This will help to make models more transparent and increase their utility to policy makers.

  1. All half-lives are wrong, but some half-lives are useful.

    PubMed

    Wright, J G; Boddy, A V

    2001-01-01

    The half-life of a drug, which expresses a change in concentration in units of time, is perhaps the most easily understood pharmacokinetic parameter and provides a succinct description of many concentration-time profiles. The calculation of a half-life implies a linear, first-order, time-invariant process. No drug perfectly obeys such assumptions, although in practice this is often a valid approximation and provides invaluable quantitative information. Nevertheless, the physiological processes underlying half-life should not be forgotten. The concept of clearance facilitates the interpretation of factors affecting drug elimination, such as enzyme inhibition or renal impairment. Relating clearance to the observed concentration-time profile is not as naturally intuitive as is the case with half-life. As such, these 2 approaches to parameterising a linear pharmacokinetic model should be viewed as complementary rather than as alternatives. The interpretation of pharmacokinetic parameters when there are multiple disposition phases is more challenging. Indeed, in any pharmacokinetic model, the half-lives are only one component of the parameters required to specify the concentration-time profile. Furthermore, pharmacokinetic parameters are of little use without a dose history. Other factors influencing the relevance of each disposition phase to clinical end-points must also be considered. In summarising the pharmacokinetics of a drug, statistical aspects of the estimation of a half-life are often overlooked. Half-lives are rarely reported with confidence intervals or measures of variability in the population, and some approaches to this problem are suggested. Half-life is an important summary statistic in pharmacokinetics, but care must be taken to employ it appropriately in the context of dose history and clinically relevant pharmacodynamic end-points.
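
    As a concrete anchor for the complementary relationship between half-life and clearance discussed above, the sketch below evaluates a one-compartment IV-bolus model, in which t½ = ln(2)·V/CL. The dose and parameter values are arbitrary illustrative numbers.

    ```python
    import numpy as np

    def one_compartment_profile(dose_mg, v_l, cl_l_per_h, t_h):
        """Concentration-time profile for a single IV bolus in a linear,
        first-order, time-invariant one-compartment model:
        C(t) = (Dose/V) * exp(-(CL/V) * t)."""
        ke = cl_l_per_h / v_l                  # elimination rate constant [1/h]
        t_half = np.log(2.0) / ke              # half-life [h], = ln(2)*V/CL
        conc = (dose_mg / v_l) * np.exp(-ke * np.asarray(t_h))
        return t_half, conc

    t_half, c = one_compartment_profile(dose_mg=100, v_l=40, cl_l_per_h=5,
                                        t_h=np.linspace(0, 24, 5))
    print(f"t1/2 = {t_half:.1f} h")            # ln(2)*40/5 = 5.5 h
    ```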

  2. Roughness characterization of the galling of metals

    NASA Astrophysics Data System (ADS)

    Hubert, C.; Marteau, J.; Deltombe, R.; Chen, Y. M.; Bigerelle, M.

    2014-09-01

    Several kinds of tests exist to characterize the galling of metals, such as that specified in ASTM Standard G98. While the testing procedure is accurate and robust, the analysis of the specimens' surfaces (area = 1.2 cm²) for the determination of the critical pressure of galling remains subject to operator judgment. Based on analyses of the surface topography, we propose a methodology to express the probability of galling according to the macroscopic pressure load. After performing galling tests on 304L stainless steel, a two-step segmentation of the S_q parameter (root mean square of surface amplitude) computed from local roughness maps (100 μm × 100 μm) enables us to distinguish two tribological processes. The first step represents abrasive wear (erosion) and the second adhesive wear (galling). The total areas of both regions are highly relevant for quantifying the galling and erosion processes. A one-parameter phenomenological model is then proposed to objectively determine the evolution of the non-galled relative area A_e versus the pressure load P, with high accuracy: A_e = 100/(1 + aP²), with a = (0.54 ± 0.07) × 10⁻³ MPa⁻² and R² = 0.98. From this model, the critical pressure of galling is found to be equal to 43 MPa. The S_5V roughness parameter (the five deepest valleys in the galled region's surface) is the most relevant roughness parameter for the quantification of damage in the 'galling region'. The depths of the significant valleys increase from 10 μm to 250 μm as the pressure increases from 11 to 350 MPa, according to a power law: S_5V = 4.2 P^0.75, with R² = 0.93.
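
    The fitted model above is simple enough to invert directly. The sketch below evaluates A_e(P) and recovers the quoted 43 MPa if the critical pressure is defined as the load at which the non-galled area drops to 50%; that 50% criterion is our reading of the fit, not a statement taken from the paper.

    ```python
    import numpy as np

    def non_galled_area(p_mpa, a=0.54e-3):
        """Phenomenological non-galled relative area A_e(P) = 100/(1 + a P^2),
        with a in MPa^-2 and P in MPa (values quoted in the abstract)."""
        return 100.0 / (1.0 + a * np.asarray(p_mpa)**2)

    def critical_pressure(a=0.54e-3, threshold=50.0):
        """Pressure at which A_e falls to `threshold` percent:
        P_c = sqrt((100/threshold - 1) / a)."""
        return np.sqrt((100.0 / threshold - 1.0) / a)

    print(critical_pressure())     # ~43 MPa, assuming the A_e = 50% criterion
    print(non_galled_area([11, 43, 350]))
    ```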

  3. Relativistic MHD simulations of collision-induced magnetic dissipation in poynting-flux-dominated jets/outflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Wei; Li, Hui; Zhang, Bing

    We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvénic rate with an efficiency of around 35%. Detailed analyses show that this dissipation is mostly facilitated by collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or of most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets needed by several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs, and the reconnection-triggered mini-jets model for AGNs.

  4. Relativistic MHD simulations of collision-induced magnetic dissipation in poynting-flux-dominated jets/outflows

    DOE PAGES

    Deng, Wei; Li, Hui; Zhang, Bing; ...

    2015-05-29

    We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvénic rate with an efficiency of around 35%. Detailed analyses show that this dissipation is mostly facilitated by collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or of most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets needed by several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs, and the reconnection-triggered mini-jets model for AGNs.

  5. Granular-flow rheology: Role of shear-rate number in transition regime

    USGS Publications Warehouse

    Chen, C.-L.; Ling, C.-H.

    1996-01-01

    This paper examines the rationale behind the semiempirical formulation of a generalized viscoplastic fluid (GVF) model in the light of the Reiner-Rivlin constitutive theory and the viscoplastic theory, thereby identifying the parameters that control the rheology of granular flow. The shear-rate number (N) proves to be among the most significant parameters identified from the GVF model. As N → 0 and N → ∞, the GVF model reduces asymptotically to the theoretical stress versus shear-rate relations in the macroviscous and grain-inertia regimes, respectively, where the grain concentration (C) also plays a major role in the rheology of granular flow. Using available data obtained from rotating-cylinder experiments with neutrally buoyant solid spheres dispersed in an interstitial fluid, the shear stress for granular flow in transition between the two regimes proves dependent on N and C in addition to some material constants, such as the coefficient of restitution. The insufficiency of data on rotating-cylinder experiments does not presently allow the GVF model to predict how a granular flow may behave over the entire range of N; however, the analyzed data provide insight into the interrelation among the relevant dimensionless parameters.

  6. Turbulent mixing of a critical fluid: The non-perturbative renormalization

    NASA Astrophysics Data System (ADS)

    Hnatič, M.; Kalagov, G.; Nalimov, M.

    2018-01-01

    A non-perturbative renormalization group (NPRG) technique is applied to a stochastic model of a non-conserved scalar order parameter near its critical point, subject to turbulent advection. The compressible advecting flow is modeled by a random Gaussian velocity field with zero mean and correlation function ⟨υ_j υ_i⟩ ∼ (P⊥_ji + α P∥_ji)/k^(d+ζ). Depending on the relations between the parameters ζ, α and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics, and a linear passive scalar field advected by a random turbulent flow), but there is a new nonequilibrium regime (universality class) associated with new nontrivial fixed points of the renormalization group equations. We have obtained the phase diagram (d, ζ) of possible scaling regimes in the system. The physical point d = 3, ζ = 4/3, corresponding to three-dimensional fully developed Kolmogorov turbulence, where critical fluctuations are irrelevant, is stable for α ≲ 2.26. Otherwise, in the case of "strong compressibility" α ≳ 2.26, the critical fluctuations of the order parameter become relevant for three-dimensional turbulence. Estimates of the critical exponents for each scaling regime are presented.

  7. Experimental modal analysis on fresh-frozen human hemipelvic bones employing a 3D laser vibrometer for the purpose of modal parameter identification.

    PubMed

    Neugebauer, R; Werner, M; Voigt, C; Steinke, H; Scholz, R; Scherer, S; Quickert, M

    2011-05-17

    To provide a close-to-reality simulation model, such as for improved surgery planning, this model has to be experimentally verified. The present article describes the use of a 3D laser vibrometer for determining modal parameters of human pelvic bones that can be used for verifying a finite element model. Compared to previously used sensors, such as acceleration sensors or strain gauges, the laser vibrometric procedure used here is a non-contact and non-interacting measuring method that allows a high density of measuring points and measurement in a global coordinate system. Relevant modal parameters were extracted from the measured data and provided for verifying the model. The use of the 3D laser vibrometer allowed the establishment of a process chain for the experimental examination of pelvic bones that was optimized with respect to the time and effort involved. The transfer functions determined feature good signal quality. Furthermore, a comparison of the results obtained from pairs of pelvic bones showed that repeatable measurements can be obtained with the method used. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Modelling the host-pathogen interactions of macrophages and Candida albicans using Game Theory and dynamic optimization.

    PubMed

    Dühring, Sybille; Ewald, Jan; Germerodt, Sebastian; Kaleta, Christoph; Dandekar, Thomas; Schuster, Stefan

    2017-07-01

    The release of fungal cells following macrophage phagocytosis, called non-lytic expulsion, is reported for several fungal pathogens. On the one hand, non-lytic expulsion may benefit the fungus in escaping the microbicidal environment of the phagosome. On the other hand, the macrophage could profit in terms of avoiding its own lysis and being able to undergo proliferation. To analyse the causes of non-lytic expulsion and the relevance of macrophage proliferation in the macrophage-Candida albicans interaction, we employ Evolutionary Game Theory and dynamic optimization in a sequential manner. We establish a game-theoretical model describing the different strategies of the two players after phagocytosis. Depending on the parameter values, we find four different Nash equilibria and determine the influence of the host's system state upon the game. As our Nash equilibria are a direct consequence of the model parameterization, we can depict several biological scenarios. A parameter region where the host response is robust against the fungal infection is determined. We further apply dynamic optimization to analyse whether macrophage mitosis is relevant in the host-pathogen interaction of macrophages and C. albicans. For this, we study the population dynamics of the macrophage-C. albicans interaction and the corresponding optimal controls for the macrophages, indicating the best macrophage strategy for switching from proliferation to attacking fungal cells. © 2017 The Author(s).
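
    As a generic illustration of the game-theoretic step, the sketch below enumerates pure-strategy Nash equilibria of a 2x2 bimatrix game. The strategy labels and payoff numbers are invented for illustration and do not reproduce the paper's parameterization.

    ```python
    import numpy as np

    def pure_nash(payoff_m, payoff_f):
        """Enumerate pure-strategy Nash equilibria of a bimatrix game.
        payoff_m[i, j]: row player's payoff; payoff_f[i, j]: column
        player's payoff, for row strategy i against column strategy j."""
        eq = []
        for i in range(payoff_m.shape[0]):
            for j in range(payoff_m.shape[1]):
                if (payoff_m[i, j] >= payoff_m[:, j].max() and
                        payoff_f[i, j] >= payoff_f[i, :].max()):
                    eq.append((i, j))
        return eq

    # Hypothetical payoffs: rows are macrophage strategies (0 = expel,
    # 1 = lyse), columns are fungal strategies (0 = persist, 1 = escape).
    M = np.array([[3, 1], [2, 0]])
    F = np.array([[1, 3], [0, 2]])
    print(pure_nash(M, F))   # -> [(0, 1)] for this toy parameterization
    ```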

  9. A Study of the Interaction of Millimeter Wave Fields with Biological Systems.

    DTIC Science & Technology

    1984-07-01

    structurally complex proteins. The third issue is the relevance of the parameters used in previous modeling efforts. The strength of the exciton-phonon... modes of proteins in the millimeter and submillimeter regions of the electromagnetic spectrum. Specifically: Four separate groups of frequencies... Rhodopseudomonas sphaeroides (4). In industrial or military environments a significant number of personnel are exposed to electromagnetic fields

  10. Patterns of Carbon Nanotubes by Flow-Directed Deposition on Substrates with Architectured Topographies.

    PubMed

    K Jawed, M; Hadjiconstantinou, N G; Parks, D M; Reis, P M

    2018-03-14

    We develop and perform continuum mechanics simulations of carbon nanotube (CNT) deployment directed by a combination of surface topography and rarefied gas flow. We employ the discrete elastic rods method to model the deposition of CNT as a slender elastic rod that evolves in time under two external forces, namely, van der Waals (vdW) and aerodynamic drag. Our results confirm that this self-assembly process is analogous to a previously studied macroscopic system, the "elastic sewing machine", where an elastic rod deployed onto a moving substrate forms nonlinear patterns. In the case of CNTs, the complex patterns observed on the substrate, such as coils and serpentines, result from an intricate interplay between van der Waals attraction, rarefied aerodynamics, and elastic bending. We systematically sweep through the multidimensional parameter space to quantify the pattern morphology as a function of the relevant material, flow, and geometric parameters. Our findings are in good agreement with available experimental data. Scaling analysis involving the relevant forces helps rationalize our observations.

  11. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
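
    The following sketch captures the posterior-analysis flavour of such an approach, assuming scikit-learn is available: frames are clustered into states, and a supervised classifier then ranks candidate order parameters by how well they separate those states. The actual CB-FS pipeline uses Markov state models for the state decomposition, which this toy version replaces with k-means.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def rank_order_parameters(features, n_states=4, seed=0):
        """Cluster frames into states, then rank the input features by
        classifier importance for separating those states (best first).
        features: (n_frames, n_features) array of candidate order params."""
        states = KMeans(n_clusters=n_states, n_init=10,
                        random_state=seed).fit_predict(features)
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        clf.fit(features, states)
        return np.argsort(clf.feature_importances_)[::-1]

    # Toy data: 3 informative features out of 10, two metastable "states".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    X[:1000, :3] += 3.0
    print(rank_order_parameters(X, n_states=2)[:3])   # the informative indices
    ```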

  12. Optimal control problems of epidemic systems with parameter uncertainties: application to a malaria two-age-classes transmission model with asymptomatic carriers.

    PubMed

    Mwanga, Gasper G; Haario, Heikki; Capasso, Vincenzo

    2015-03-01

    The main scope of this paper is to study optimal control practices for malaria, by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which is typical of infectious disease data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, and the screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate the robustness of the optimal control: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
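
    The cost-effectiveness step mentioned above reduces to a one-line formula, ICER = (C_new - C_base)/(E_new - E_base); the numbers in the example below are hypothetical.

    ```python
    def icer(cost_new, effect_new, cost_base, effect_base):
        """Incremental cost-effectiveness ratio of a new strategy
        relative to a baseline: (C_new - C_base) / (E_new - E_base)."""
        return (cost_new - cost_base) / (effect_new - effect_base)

    # Hypothetical strategies: (cost in $, infections averted).
    print(icer(12_000, 180, 5_000, 100))   # -> 87.5 $ per extra infection averted
    ```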

  13. General Pharmacokinetic Model for Topically Administered Ocular Drug Dosage Forms.

    PubMed

    Deng, Feng; Ranta, Veli-Pekka; Kidron, Heidi; Urtti, Arto

    2016-11-01

    In ocular drug development, an early estimate of drug behavior before any in vivo experiments is important. The pharmacokinetics (PK) and bioavailability depend not only on active compound and excipients but also on physicochemical properties of the ocular drug formulation. We propose to utilize PK modelling to predict how drug and formulational properties affect drug bioavailability and pharmacokinetics. A physiologically relevant PK model based on the rabbit eye was built to simulate the effect of formulation and physicochemical properties on PK of pilocarpine solutions and fluorometholone suspensions. The model consists of four compartments: solid and dissolved drug in tear fluid, drug in corneal epithelium and aqueous humor. Parameter values and in vivo PK data in rabbits were taken from published literature. The model predicted the pilocarpine and fluorometholone concentrations in the corneal epithelium and aqueous humor with a reasonable accuracy for many different formulations. The model includes a graphical user interface that enables the user to modify parameters easily and thus simulate various formulations. The model is suitable for the development of ophthalmic formulations and the planning of bioequivalence studies.
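
    A minimal sketch of a four-compartment topical-dose model of this general shape is given below, using SciPy's ODE integrator. The rate constants, dose, and transfer topology are placeholder assumptions, not the fitted values or exact structure of the published model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def ocular_pk(t, y, k_diss=0.05, k_drain=0.5, k_tc=0.02, k_ca=0.01,
                  k_el=0.02):
        """Illustrative four-compartment topical-dose model; rate
        constants (per minute) are placeholders. State vector:
        y = [solid drug in tears, dissolved drug in tears,
             corneal epithelium, aqueous humor]."""
        solid, tear, cornea, aqueous = y
        diss = k_diss * solid                      # dissolution of suspension
        return [-diss,
                diss - (k_drain + k_tc) * tear,    # drainage + corneal uptake
                k_tc * tear - k_ca * cornea,       # transfer to aqueous humor
                k_ca * cornea - k_el * aqueous]    # elimination from the eye

    sol = solve_ivp(ocular_pk, (0, 240), [50.0, 0.0, 0.0, 0.0],
                    t_eval=np.linspace(0, 240, 9))
    print(sol.y[3])   # aqueous-humor amount over 4 h
    ```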

  14. Effective model approach to the dense state of QCD matter

    NASA Astrophysics Data System (ADS)

    Fukushima, Kenji

    2011-12-01

    The first-principles approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However, the model parameters that are fixed by hadronic properties in the vacuum may have an unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu-Jona-Lasinio (PNJL) model as an effective description on the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at slightly higher temperature than deconfinement, which stays close to the chemical freeze-out points.

  15. Multiscale Models for the Two-Stream Instability

    NASA Astrophysics Data System (ADS)

    Joseph, Ilon; Dimits, Andris; Banks, Jeffrey; Berger, Richard; Brunner, Stephan; Chapman, Thomas

    2017-10-01

    Interpenetrating streams of plasma found in many important scenarios in nature and in the laboratory can develop kinetic two-stream instabilities that exchange momentum and energy between the streams. A quasilinear model for the electrostatic two-stream instability is under development as a component of a multiscale model that couples fluid simulations to kinetic theory. Parameters of the model will be validated by comparison to full kinetic simulations using LOKI, and efficient strategies for the numerical solution of the quasilinear model and for coupling to the fluid model will be discussed. Extending the kinetic models into the collisional regime requires an efficient treatment of the collision operator. Useful reductions of the collision operator relative to the full multi-species Landau-Fokker-Planck operator are being explored. These are further motivated both by careful consideration of the parameter orderings relevant to two-stream scenarios and by the particular 2D+2V phase space used in the LOKI code. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 17-ERD-081.

  16. Estimating skin blood saturation by selecting a subset of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus

    2015-03-01

    Skin blood haemoglobin saturation (s_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for s_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for the epidermal thickness (t_epi) and the reduced scattering coefficient (μ_s'), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. Skin model parameters, including absorbers, are: μ_s'(λ), t_epi, haemoglobin saturation (s_b), tissue fraction of blood (f_blood) and tissue fraction of melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(s_b, f_blood), B(s_b, f_blood, f_mel) and C (all parameters free). Fourteen wavelength candidates are selected by analysing the maximal spectral sensitivity to s_b while minimizing the sensitivity to f_blood. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating s_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimate s_b accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths is increased, the model complexity needs to be increased to avoid poor estimations.
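
    A heavily simplified version of such an inverse estimate, stripped of the Monte Carlo look-up table, is an ordinary least-squares Beer-Lambert fit at a few wavelengths. The sketch below assumes the user supplies tabulated extinction spectra for oxy- and deoxyhaemoglobin, and uses a synthetic consistency check in place of real data.

    ```python
    import numpy as np

    def estimate_saturation(absorbance, eps_hbo2, eps_hb):
        """Least-squares Beer-Lambert sketch: fit
        A(lambda) ~ c1*eps_HbO2 + c2*eps_Hb + baseline at a few
        wavelengths and return s_b = c1 / (c1 + c2). Extinction spectra
        must come from tabulated data supplied by the caller."""
        design = np.column_stack([eps_hbo2, eps_hb, np.ones_like(eps_hb)])
        (c_oxy, c_deoxy, _), *_ = np.linalg.lstsq(design, absorbance,
                                                  rcond=None)
        return c_oxy / (c_oxy + c_deoxy)

    # Synthetic check with made-up extinction values at four wavelengths.
    eps_o = np.array([0.8, 0.5, 0.3, 0.9])
    eps_d = np.array([0.4, 0.7, 0.6, 0.2])
    a = 0.7 * eps_o + 0.3 * eps_d + 0.05          # true s_b = 0.7
    print(estimate_saturation(a, eps_o, eps_d))   # -> ~0.70
    ```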

  17. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
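
    Variance-based global sensitivity analysis of the kind applied here is usually done with dedicated tooling; as a self-contained stand-in, the sketch below ranks uniformly sampled inputs of a toy stand-in model by squared correlation with the output, a crude proxy for first-order sensitivity indices. The input names, bounds, and toy model are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def crude_sensitivity(model, bounds, n=10_000):
        """Crude global-sensitivity screen: sample inputs uniformly within
        `bounds` and rank them by squared correlation with the output.
        Variance-based (e.g. Sobol) methods are preferable in practice."""
        names = list(bounds)
        lo, hi = np.array([bounds[k] for k in names]).T
        x = rng.uniform(lo, hi, size=(n, len(names)))
        y = np.apply_along_axis(model, 1, x)
        r2 = np.array([np.corrcoef(x[:, j], y)[0, 1]**2
                       for j in range(len(names))])
        return dict(zip(names, r2))

    # Toy stand-in for a flood model: depth ~ f(channel n, floodplain n, inflow).
    toy = lambda p: p[2]**0.6 / (p[0] + 0.5 * p[1])
    print(crude_sensitivity(toy, {"n_channel": (0.02, 0.08),
                                  "n_floodplain": (0.03, 0.15),
                                  "inflow": (50, 500)}))
    ```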

  18. A simple model for electrical charge in globular macromolecules and linear polyelectrolytes in solution

    NASA Astrophysics Data System (ADS)

    Krishnan, M.

    2017-05-01

    We present a model for calculating the net and effective electrical charge of globular macromolecules and linear polyelectrolytes such as proteins and DNA, given the concentration of monovalent salt and pH in solution. The calculation is based on a numerical solution of the non-linear Poisson-Boltzmann equation using a finite element discretized continuum approach. The model simultaneously addresses the phenomena of charge regulation and renormalization, both of which underpin the electrostatics of biomolecules in solution. We show that while charge regulation addresses the true electrical charge of a molecule arising from the acid-base equilibria of its ionizable groups, charge renormalization finds relevance in the context of a molecule's interaction with another charged entity. Writing this electrostatic interaction free energy in terms of a local electrical potential, we obtain an "interaction charge" for the molecule which we demonstrate agrees closely with the "effective charge" discussed in charge renormalization and counterion-condensation theories. The predictions of this model agree well with direct high-precision measurements of effective electrical charge of polyelectrolytes such as nucleic acids and disordered proteins in solution, without tunable parameters. Including the effective interior dielectric constant for compactly folded molecules as a tunable parameter, the model captures measurements of effective charge as well as published trends of pKa shifts in globular proteins. Our results suggest a straightforward general framework to model electrostatics in biomolecules in solution. In offering a platform that directly links theory and experiment, these calculations could foster a systematic understanding of the interrelationship between molecular 3D structure and conformation, electrical charge and electrostatic interactions in solution. The model could find particular relevance in situations where molecular crystal structures are not available or rapid, reliable predictions are desired.
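
    While the model described above solves the full non-linear Poisson-Boltzmann equation with finite elements, its weak-potential, far-field limit is Debye-Hueckel theory, which is compact enough to sketch: the screening length and the linearized surface potential of a sphere carrying an effective charge follow from closed-form expressions. The parameter values below are illustrative.

    ```python
    import numpy as np

    KB = 1.380649e-23        # Boltzmann constant [J/K]
    E0 = 8.8541878128e-12    # vacuum permittivity [F/m]
    QE = 1.602176634e-19     # elementary charge [C]
    NA = 6.02214076e23       # Avogadro's number [1/mol]
    EPS_W = 78.5             # relative permittivity of water near 25 C
    T = 298.0                # temperature [K]

    def debye_length_m(c_salt_molar):
        """Debye screening length for a 1:1 electrolyte."""
        n_total = 2.0 * c_salt_molar * 1e3 * NA     # total ions per m^3
        return np.sqrt(E0 * EPS_W * KB * T / (n_total * QE**2))

    def dh_potential_mv(z_eff, radius_nm, c_salt_molar):
        """Linearized (Debye-Hueckel) surface potential of a sphere of
        effective charge z_eff*e:
        psi = z*e / (4*pi*eps0*eps_w*a*(1 + a/lambda_D)), in mV."""
        a = radius_nm * 1e-9
        lam = debye_length_m(c_salt_molar)
        return 1e3 * z_eff * QE / (4.0 * np.pi * E0 * EPS_W * a
                                   * (1.0 + a / lam))

    print(debye_length_m(0.1) * 1e9)         # ~0.96 nm at 100 mM 1:1 salt
    print(dh_potential_mv(-10.0, 3.0, 0.1))  # ~ -15 mV for this toy case
    ```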

  19. Dynamical recovery of SU(2) symmetry in the mass-quenched Hubbard model

    NASA Astrophysics Data System (ADS)

    Du, Liang; Fiete, Gregory A.

    2018-02-01

    We use nonequilibrium dynamical mean-field theory with iterative perturbation theory as an impurity solver to study the recovery of SU(2) symmetry in real time following a hopping integral parameter quench from a mass-imbalanced to a mass-balanced single-band Hubbard model at half filling. A dynamical order parameter γ (t ) is defined to characterize the evolution of the system towards SU(2) symmetry. By comparing the momentum-dependent occupation from an equilibrium calculation [with the SU(2) symmetric Hamiltonian after the quench at an effective temperature] with the data from our nonequilibrium calculation, we conclude that the SU(2) symmetry recovered state is a thermalized state. Further evidence from the evolution of the density of states supports this conclusion. We find the order parameter in the weak Coulomb interaction regime undergoes an approximate exponential decay. We numerically investigate the interplay of the relevant parameters (initial temperature, Coulomb interaction strength, initial mass-imbalance ratio) and their combined effect on the thermalization behavior. Finally, we study evolution of the order parameter as the hopping parameter is changed with either a linear ramp or a pulse. Our results can be useful in strategies to engineer the relaxation behavior of interacting quantum many-particle systems.

  20. Is it the time to rethink clinical decision-making strategies? From a single clinical outcome evaluation to a Clinical Multi-criteria Decision Assessment (CMDA).

    PubMed

    Migliore, Alberto; Integlia, Davide; Bizzi, Emanuele; Piaggio, Tomaso

    2015-10-01

    There are plenty of different clinical, organizational and economic parameters to consider in order to have a complete assessment of the total impact of a pharmaceutical treatment. In attempting to follow a holistic approach that embraces all clinical parameters in order to choose the best treatment, it is necessary to compare and weight multiple criteria. Therefore, a change is required: we need to move from a decision-making context based on the assessment of a single criterion towards a transparent and systematic framework enabling decision makers to assess all relevant parameters simultaneously in order to choose the best treatment to use. To apply the MCDA methodology to the clinical decision of which pharmaceutical treatment (or medical device) to use to treat a specific pathology, we suggest a specific application of Multiple Criteria Decision Analysis for the purpose: a Clinical Multi-criteria Decision Assessment (CMDA). In CMDA, results from both meta-analyses and observational studies are used by a clinical consensus after attributing weights to specific domains and related parameters. The decision results from a comparison of all the consequences (i.e., efficacy, safety, adherence, administration route) behind the choice to use a specific pharmacological treatment. The match yields a score (in absolute value) that links each parameter with a specific intervention, and then a final score for each treatment. The higher the final score, the more appropriate the intervention to treat the disease, considering all criteria (domains and parameters). The results allow the physician to evaluate the best clinical treatment for his patients, considering at the same time all relevant criteria, such as clinical effectiveness for all parameters and administration route. The use of the CMDA model yields a clear and complete indication of the best pharmaceutical treatment to use for patients, helping physicians to choose drugs with a complete set of information input into the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
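
    At its core, a weighted multi-criteria score of the kind described is a dot product of normalized criterion scores with consensus weights; the sketch below shows this aggregation step with hypothetical treatments, criteria, and weights.

    ```python
    import numpy as np

    def cmda_scores(scores, weights):
        """Weighted-sum sketch of a multi-criteria assessment: `scores`
        maps each treatment to its normalized (higher-is-better) score
        per criterion; `weights` are consensus weights, normalized here
        to sum to 1."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return {name: float(np.dot(vals, w)) for name, vals in scores.items()}

    # Hypothetical criteria: efficacy, safety, adherence, administration route.
    print(cmda_scores({"drug A": [0.9, 0.6, 0.7, 0.8],
                       "drug B": [0.6, 0.9, 0.8, 0.5]},
                      weights=[0.4, 0.3, 0.2, 0.1]))   # A: 0.76, B: 0.72
    ```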

  1. Modeling enzymatic hydrolysis of lignocellulosic substrates using confocal fluorescence microscopy I: filter paper cellulose.

    PubMed

    Luterbacher, Jeremy S; Moran-Mirabal, Jose M; Burkholder, Eric W; Walker, Larry P

    2015-01-01

    Enzymatic hydrolysis is one of the critical steps in depolymerizing lignocellulosic biomass into fermentable sugars for further upgrading into fuels and/or chemicals. However, many studies still rely on empirical trends to optimize enzymatic reactions. An improved understanding of enzymatic hydrolysis could allow research efforts to follow a rational design guided by an appropriate theoretical framework. In this study, we present a method to image cellulosic substrates with complex three-dimensional structure, such as filter paper, undergoing hydrolysis under conditions relevant to industrial saccharification processes (i.e., temperature of 50°C, using commercial cellulolytic cocktails). Fluorescence intensities resulting from confocal images were used to estimate parameters for a diffusion and reaction model. Furthermore, the observation of a relatively constant bound enzyme fluorescence signal throughout hydrolysis supported our modeling assumption regarding the structure of biomass during hydrolysis. The observed behavior suggests that pore evolution can be modeled as widening of infinitely long slits. The resulting model accurately predicts the concentrations of soluble carbohydrates obtained from independent saccharification experiments conducted in bulk, demonstrating its relevance to biomass conversion work. © 2014 Wiley Periodicals, Inc.

  2. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  3. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J.N., E-mail: jnshadi@sandia.gov; Department of Mathematics and Statistics, University of New Mexico; Smith, T.M.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  4. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling, is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-basedmore » computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.« less

  5. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE PAGES

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; ...

    2016-05-20

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling, is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-basedmore » computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.« less

  3. Why the impact of mechanical stimuli on stem cells remains a challenge.

    PubMed

    Goetzke, Roman; Sechi, Antonio; De Laporte, Laura; Neuss, Sabine; Wagner, Wolfgang

    2018-05-04

    Mechanical stimulation affects the growth and differentiation of stem cells. This may be used to guide lineage-specific cell fate decisions and therefore opens fascinating opportunities for stem cell biology and regenerative medicine. Several studies have demonstrated functional and molecular effects of mechanical stimulation, but at first sight these results often appear inconsistent. Comparison of such studies is hampered by a multitude of relevant parameters that act in concert. There are notorious differences between species, cell types, and culture conditions. Furthermore, the culture substrates used have complex features, such as surface chemistry, elasticity, and topography, and can vary from simple, flat materials to complex 3D scaffolds. Last but not least, mechanical forces can be applied with different frequency, amplitude, and strength. It is therefore a prerequisite to take all of these parameters into consideration when ascribing their specific functional relevance, and to modulate only one parameter at a time when the relevance of that parameter is addressed. Such research questions can only be investigated through interdisciplinary cooperation. In this review, we focus particularly on mesenchymal stem cells and pluripotent stem cells to discuss the relevant parameters that contribute to the kaleidoscope of mechanical stimulation of stem cells.

  4. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs, because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. However, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used within the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. The immediate benefits are: (1) the difficulty of finding good starting values for the analysis, a known problem for example in Restricted Maximum Likelihood (REML), is avoided; (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. A slight drawback is that priors are assigned to the elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
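
    A small sketch of the reparametrization idea: an unconstrained vector maps to a symmetric positive-definite covariance matrix through a Cholesky factor with an exponentiated diagonal. This is one common "modified Cholesky" choice; the paper's exact parametrization may differ.

```python
import numpy as np

# Log-Cholesky sketch: any theta in R^{d(d+1)/2} yields a valid
# symmetric positive-definite covariance matrix G = L L^T, so the
# estimation can run over unconstrained parameters.
def theta_to_cov(theta, d):
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.exp(np.diag(L))  # force positive diagonal
    return L @ L.T

def cov_to_theta(G):
    d = len(G)
    L = np.linalg.cholesky(G).copy()
    L[np.diag_indices(d)] = np.log(np.diag(L))  # store log-diagonal
    return L[np.tril_indices(d)]

G = np.array([[2.0, 0.5],
              [0.5, 1.0]])                      # toy (co)variance matrix
theta = cov_to_theta(G)                         # unconstrained parameters
print(np.allclose(theta_to_cov(theta, 2), G))   # True: round trip works
```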

  5. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty in the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of the target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type and location of data.
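
    A minimal sketch of the PCE-plus-Sobol building blocks for a hypothetical two-parameter model (the actual compaction model, priors, and discretization are far richer). With inputs uniform on [-1, 1] and an orthonormal Legendre basis, the variance decomposition reduces to sums over squared PCE coefficients.

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def phi(n, x):                      # orthonormal Legendre polynomial
    c = np.zeros(n + 1); c[n] = 1.0
    return legendre.legval(x, c) * np.sqrt(2 * n + 1)

f = lambda x1, x2: 1.0 + 2.0 * x1 + 0.5 * x2 + 0.3 * x1 * x2  # toy model

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = f(X[:, 0], X[:, 1])

degs = list(product(range(3), range(3)))                     # tensor basis
A = np.column_stack([phi(i, X[:, 0]) * phi(j, X[:, 1]) for i, j in degs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)                 # PCE fit

# total variance and first-order Sobol indices from the coefficients
var = sum(c**2 for (i, j), c in zip(degs, coef) if (i, j) != (0, 0))
S1 = sum(c**2 for (i, j), c in zip(degs, coef) if i > 0 and j == 0) / var
S2 = sum(c**2 for (i, j), c in zip(degs, coef) if j > 0 and i == 0) / var
print(f"first-order Sobol indices: S1={S1:.3f}, S2={S2:.3f}")
```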

  6. Are adverse effects incorporated in economic models? An initial review of current practice.

    PubMed

    Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N

    2009-12-01

    To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (< 4%) used the results of a meta-analysis from the systematic review of clinical effectiveness and none used only data from the review without further manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total, 30% of the models that included adverse effects used withdrawals related to drug toxicity and therefore might be interpreted as using withdrawals to capture adverse effects, but this was explicitly stated in only three reports. Of the 37 models that did not include adverse effects, 18 provided justification for this omission, most commonly lack of data; 19 appeared to make no explicit consideration of adverse effects in the model. There is an implicit assumption within modelling guidance that adverse effects are very important but there is a lack of clarity regarding how they should be dealt with and considered in modelling. In many cases a lack of clear reporting in the HTAs made it extremely difficult to ascertain what had actually been carried out in consideration of adverse effects. The main recommendation is for much clearer and explicit reporting of adverse effects, or their exclusion, in decision models and for explicit recognition in future guidelines that 'all relevant outcomes' should include some consideration of adverse events.

  7. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing

    2017-07-12

    Underwater wireless sensor networks (UWSNs) have become a hot new research area. However, owing to dynamic operating conditions and the harsh ocean environment, obtaining a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem. Sensor deployment, networking, and performance calculation for UWSNs are therefore challenging issues, and this paper centers on them, putting forward three relevant methods and models. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN comprises coverage, connectivity, durability and rapid-reactivity. In addition, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimizing the systematic performance of UWSNs is presented. Simulation results demonstrate that the proposed approach is feasible and efficient for networking and performance calculation of UWSNs.
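
    A toy sketch of body-centered cubic node placement for a deployment volume (plain BCC only; the paper's cross body-centered construction modifies this cell geometry, and all dimensions below are invented).

```python
import numpy as np

def bcc_nodes(extent, a):
    """Corner plus body-center nodes of a BCC lattice with cell size a."""
    ix = np.arange(0, extent[0] + a, a)
    iy = np.arange(0, extent[1] + a, a)
    iz = np.arange(0, extent[2] + a, a)
    corners = np.array(np.meshgrid(ix, iy, iz)).reshape(3, -1).T
    centers = corners + a / 2.0             # offset by half a cell
    inside = np.all(centers <= np.asarray(extent), axis=1)
    return np.vstack([corners, centers[inside]])

# hypothetical 100 m x 100 m x 50 m volume with 25 m cells
nodes = bcc_nodes(extent=(100.0, 100.0, 50.0), a=25.0)
print(f"{len(nodes)} sensor nodes deployed")
```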

  8. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. These signs and zeros constitute the structural influence matrix, whose (i, j) entry indicates the sign of the steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). An entry is structurally determinate if its sign does not depend on the choice of the parameters, and indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system's steady-state behaviour over the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system's steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
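
    For contrast with the structural algorithm, the brute-force baseline it improves upon can be sketched directly: sample feasible parameters, linearize a toy network at steady state, and check whether the sign pattern of -J^{-1} (the steady-state response to constant inputs) ever changes. The two-species network below is invented for illustration.

```python
import numpy as np

# Steady-state response to a constant input on variable j solves
# 0 = J dx + e_j, i.e. dx* = -J^{-1} e_j; record its sign per draw.
rng = np.random.default_rng(0)
signs = []
for _ in range(1000):
    k1, k2, k3 = rng.uniform(0.1, 10.0, size=3)   # feasible rate params
    # Jacobian of dx1/dt = -k1*x1 + k3*x2, dx2/dt = k1*x1 - (k2+k3)*x2
    J = np.array([[-k1, k3],
                  [k1, -(k2 + k3)]])
    signs.append(np.sign(-np.linalg.inv(J)))
signs = np.array(signs)
determinate = np.all(signs == signs[0], axis=0)
print("influence matrix sign:\n", signs[0])
print("structurally determinate entries:\n", determinate)
```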

  9. Quantum gravity fluctuations flatten the Planck-scale Higgs potential

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Hamada, Yuta; Lumma, Johannes; Yamada, Masatoshi

    2018-04-01

    We investigate asymptotic safety of a toy model of a singlet-scalar extension of the Higgs sector including two real scalar fields under the impact of quantum-gravity fluctuations. Employing functional renormalization group techniques, we search for fixed points of the system which provide a tentative ultraviolet completion of the system. We find that in a particular regime of the gravitational parameter space the canonically marginal and relevant couplings in the scalar sector—including the mass parameters—become irrelevant at the ultraviolet fixed point. The infrared potential for the two scalars that can be reached from that fixed point is fully predicted and features no free parameters. In the remainder of the gravitational parameter space, the values of the quartic couplings in our model are predicted. In light of these results, we discuss whether the singlet-scalar could be a dark-matter candidate. Furthermore, we highlight how "classical scale invariance" in the sense of a flat potential of the scalar sector at the Planck scale could arise as a consequence of asymptotic safety.

  10. Kepler Uniform Modeling of KOIs: MCMC Notes for Data Release 25

    NASA Technical Reports Server (NTRS)

    Hoffman, Kelsey L.; Rowe, Jason F.

    2017-01-01

    This document describes data products related to the reported planetary parameters and uncertainties for the Kepler Objects of Interest (KOIs), based on a Markov chain Monte Carlo (MCMC) analysis. Reported parameters, uncertainties, and data products can be found at the NASA Exoplanet Archive. The codes used for this data analysis are available on the Github website (Rowe 2016). The relevant paper for details of the calculations is Rowe et al. (2015). The main differences between the model fits discussed here and those in the DR24 catalogue are that the DR25 light curves were used in the analysis, our processing of the MAST light curves took into account different data flags, the number of chains calculated was doubled to 200,000, and the reported parameters are based on a damped least-squares fit, instead of the median value from the Markov chain or the chain with the lowest χ² as reported in the past.

  11. Variational models for discontinuity detection

    NASA Astrophysics Data System (ADS)

    Vitti, Alfonso; Battista Benciolini, G.

    2010-05-01

    The Mumford-Shah variational model produces a smooth approximation of the data and detects data discontinuities by solving a minimum problem involving an energy functional. The Blake-Zisserman model also permits the detection of discontinuities in the first derivative of the approximation. The latter can yield a quasi piece-wise linear approximation, whereas the Mumford-Shah model yields a quasi piece-wise constant approximation. The two models are well known in the mathematical literature and are widely adopted in computer vision for image segmentation. In Geodesy, the Blake-Zisserman model has been applied successfully to the detection of cycle-slips in linear combinations of GPS measurements. Few attempts to apply the model to time series of coordinates have been made so far. The problem of detecting discontinuities in time series of GNSS coordinates is well known, and its relevance increases as the quality of geodetic measurements, analysis techniques, models and products improves. The application of the Blake-Zisserman model appears reasonable and promising, owing to the model's ability to detect both position and velocity discontinuities in the same time series. The detection of position and velocity changes is of great interest in geophysics, where the discontinuity itself can be the object of primary relevance. In work on the realization of reference frames, detecting position and velocity discontinuities may help to define models that can handle non-linear motions. In this work the Mumford-Shah and Blake-Zisserman models are briefly presented; the treatment is carried out from a practical viewpoint rather than a theoretical one. A set of time series of GNSS coordinates has been processed, and the results are presented in order to highlight the capabilities and weaknesses of the variational approach. A first attempt to derive some indications for the automatic set-up of the model parameters has been made, and the underlying relation that could link the parameter values to the statistical properties of the data has been investigated.
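
    A minimal numerical sketch in this spirit: dynamic programming for a piecewise-constant (Potts-type) fit to a 1D coordinate series, which captures position jumps; the Blake-Zisserman model additionally penalizes kinks in the first derivative (velocity changes). The penalty value and test signal are invented.

```python
import numpy as np

def potts_segment(y, beta):
    """Optimal piecewise-constant fit paying `beta` per jump (O(n^2) DP)."""
    n = len(y)
    csum, csum2 = np.cumsum(y), np.cumsum(y**2)
    def sse(i, j):                 # residual of a constant fit on y[i:j]
        s = csum[j-1] - (csum[i-1] if i > 0 else 0.0)
        s2 = csum2[j-1] - (csum2[i-1] if i > 0 else 0.0)
        return s2 - s * s / (j - i)
    F = np.full(n + 1, np.inf); F[0] = -beta
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            cost = F[i] + beta + sse(i, j)
            if cost < F[j]:
                F[j], prev[j] = cost, i
    cuts, j = [], n                # backtrack the optimal changepoints
    while j > 0:
        cuts.append(prev[j]); j = prev[j]
    return sorted(cuts)[1:]        # interior changepoint indices

rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(50), 3.0 + np.zeros(50)]) + 0.2 * rng.normal(size=100)
print(potts_segment(y, beta=4.0))  # expect a single jump near index 50
```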

  12. Simplified rotor load models and fatigue damage estimates for offshore wind turbines.

    PubMed

    Muskulus, M

    2015-02-28

    The aim of rotor load models is to characterize and generate the thrust loads acting on an offshore wind turbine. Ideally, the rotor simulation can be replaced by time series from a model with only a few parameters and state variables. Such models are used extensively in control system design and, as a potentially new application area, in structural optimization of support structures. Different rotor load models are evaluated here for a jacket support structure in terms of fatigue lifetimes of relevant structural variables. All models were found to be lacking in accuracy, with differences of more than 20% in fatigue load estimates. The most accurate models were the use of an effective thrust coefficient determined from a regression analysis of dynamic thrust loads, and a novel stochastic model in state-space form. The stochastic model explicitly models the quasi-periodic components obtained from rotational sampling of turbulent fluctuations; its state variables follow a mean-reverting Ornstein-Uhlenbeck process. Although promising, more work is needed on how to determine the parameters of the stochastic model before accurate lifetime predictions can be obtained without comprehensive rotor simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
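
    As context for the last point, a mean-reverting Ornstein-Uhlenbeck process can be simulated exactly at a fixed time step; the reversion rate, mean, and volatility below are placeholders, not calibrated rotor values.

```python
import numpy as np

# Exact one-step OU update: X_{t+dt} = mu + a (X_t - mu) + s N(0, 1),
# with a = exp(-theta dt) and s the matching stationary-consistent noise.
theta, mu, sigma = 0.5, 500.0, 50.0   # reversion rate, mean, volatility
dt, n = 0.05, 2000
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))

rng = np.random.default_rng(3)
x = np.empty(n); x[0] = mu
for k in range(1, n):
    x[k] = mu + a * (x[k-1] - mu) + s * rng.normal()
print(f"sample mean {x.mean():.1f}, sample std {x.std():.1f}")
```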

  13. Modelling and simulation of heat pipes with TAIThermIR (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Winkelmann, Max E.

    2016-10-01

    For thermal camouflage one usually has to reduce the surface temperature of an object. Vehicles and installations with a combustion engine produce a lot of heat, resulting in hot spots on the surface that are highly conspicuous. Using heat pipes to transfer this heat more efficiently to another place on the surface might be a way to reduce those hot spots and the overall conspicuity. As a first approach, a model for the software TAIThermIR was developed to test which parameters of the heat pipes are relevant and what effects can be achieved. It is shown that the thermal resistivity of the contact zones is quite relevant, and that the thermal coupling of the engine (the heat source) determines whether the alteration of the thermal signature is large. Furthermore, the impact of using heat pipes in relation to the surface material is discussed. The influence of different weather scenarios on the change of signatures due to the use of heat pipes is of minor relevance and depends on the choice of the surface material. Finally, application issues for real systems are discussed.

  14. Exclusive queueing model including the choice of service windows

    NASA Astrophysics Data System (ADS)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We devised a logit-based choice algorithm for agents that considers the number of agents at, and the distance to, each service window. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail, and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems that include the choice of service windows and can be employed to optimize facility design and floor management.
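
    A toy sketch of a logit window-choice rule of the kind described; the linear-in-queue-and-distance utility and the weight values are assumptions, not the paper's exact specification.

```python
import numpy as np

def choice_probs(queues, distances, alpha, gamma):
    """Softmax over window utilities U_w = -alpha*queue_w - gamma*dist_w."""
    u = -alpha * np.asarray(queues, float) - gamma * np.asarray(distances, float)
    e = np.exp(u - u.max())            # numerically stabilized softmax
    return e / e.sum()

queues = [3, 1, 4]                      # agents already heading to/at w
distances = [2.0, 6.0, 4.0]             # cells from the entering agent
print(choice_probs(queues, distances, alpha=1.0, gamma=0.5))
```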

  15. Modeling Single Well Injection-Withdrawal (SWIW) Tests for Characterization of Complex Fracture-Matrix Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotte, F.P.; Doughty, C.; Birkholzer, J.

    2010-11-01

    The ability to reliably predict flow and transport in fractured porous rock is an essential condition for performance evaluation of geologic (underground) nuclear waste repositories. In this report, a suite of programs (the TRIPOLY code) for calculating and analyzing flow and transport in two-dimensional fracture-matrix systems is used to model single-well injection-withdrawal (SWIW) tracer tests. The SWIW test, a tracer test using one well, is proposed as a useful means of collecting data for site characterization, as well as for estimating parameters relevant to tracer diffusion and sorption. After some specific code adaptations, we numerically generated a complex fracture-matrix system for computation of steady-state flow and tracer advection and dispersion in the fracture network, along with solute exchange processes between the fractures and the porous matrix. We then conducted simulations for a hypothetical but workable SWIW test design and completed parameter sensitivity studies on three physical parameters of the rock matrix - namely porosity, diffusion coefficient, and retardation coefficient - in order to investigate their impact on the fracture-matrix solute exchange process. Hydraulic fracturing, or hydrofracking, is also modeled in this study, in two different ways: (1) by increasing the hydraulic aperture for flow in existing fractures and (2) by adding a new set of fractures to the field. The results of all these tests are analyzed by studying the population of matrix blocks, the tracer spatial distribution, and the breakthrough curves (BTCs) obtained, while performing mass-balance checks and taking care to avoid numerical pitfalls. This study clearly demonstrates the importance of matrix effects in the solute transport process, with the sensitivity studies illustrating the increased importance of the matrix in providing a retardation mechanism for radionuclides as the matrix porosity, diffusion coefficient, or retardation coefficient increases. Interestingly, model results before and after hydrofracking are insensitive to adding more fractures, while slightly more sensitive to the aperture increase, making SWIW tests a possible means of discriminating between these two potential hydrofracking effects. Finally, we investigate the possibility of inferring relevant information regarding the physical parameters of the fracture-matrix system from the BTCs obtained during SWIW testing.

  16. Assessment of reduced-order unscented Kalman filter for parameter identification in 1-dimensional blood flow models using experimental data.

    PubMed

    Caiazzo, A; Caforio, Federica; Montecinos, Gino; Muller, Lucas O; Blanco, Pablo J; Toro, Eleuterio F

    2016-10-25

    This work presents a detailed investigation of a parameter estimation approach based on the reduced-order unscented Kalman filter (ROUKF) in the context of 1-dimensional blood flow models. In particular, the main aims of this study are (1) to investigate the effects of using real measurements versus synthetic data for the estimation procedure (i.e., numerical results of the same in silico model, perturbed with noise) and (2) to identify potential difficulties and limitations of the approach in clinically realistic applications, in order to assess the applicability of the filter to such setups. For these purposes, the present numerical study is based on a recently published in vitro model of the arterial network, for which experimental flow and pressure measurements are available at a few selected locations. To mimic clinically relevant situations, we focus on the estimation of terminal resistances and arterial wall parameters related to vessel mechanics (Young's modulus and wall thickness) using few experimental observations (at most a single pressure or flow measurement per vessel). In all cases, we first perform a theoretical identifiability analysis on the basis of the generalized sensitivity function, and then compare the results of the ROUKF, using either synthetic or experimental data, with results obtained using reference parameters and with the available measurements. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e., the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the interesting area, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects such as trees, and compression or transmission artifacts. To enable the use of automatic change detection within the interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g., image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario, "road monitoring", has been defined, and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted, and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data can be considered realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
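
    A deliberately crude sketch of the change-mask step: thresholded differencing of two co-registered frames with a simple noise guard. The actual processing chain of Saur et al. involves registration and far more robust change extraction; all values here are invented.

```python
import numpy as np

def change_mask(frame_a, frame_b, thresh=30, min_pixels=25):
    """Threshold the absolute frame difference; discard tiny change maps."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    mask = diff > thresh
    # crude noise rejection: ignore maps with too few changed pixels
    return mask if mask.sum() >= min_pixels else np.zeros_like(mask)

a = np.zeros((64, 64), dtype=np.uint8)
b = a.copy(); b[20:30, 20:30] = 200      # a newly appeared object
print(change_mask(a, b).sum(), "changed pixels")
```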

  2. A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.

    PubMed

    Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan

    2016-03-01

    Mechanical behaviour of the heel pad, as a shock-attenuating interface during a foot strike, determines the loading on the musculoskeletal system during walking. Mathematical models that describe the force-deformation relationship of the heel pad structure can determine the mechanical behaviour of the heel pad under load. Hence, the purpose of this study was to propose a method of quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. Elastic energy and energy absorbed densities were calculated as the sum of and the difference between the energy input and energy returned densities, respectively. By fitting the energy function, derived from a nonlinear viscoelastic model, to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to be able to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
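
    The energy bookkeeping described above is straightforward to reproduce numerically; in this sketch the loading and unloading branches are synthetic placeholders for measured indentation data, and the sum/difference definitions follow the abstract.

```python
import numpy as np

strain = np.linspace(0.0, 0.4, 100)
stress_load = 50.0 * strain**2        # loading branch (kPa), synthetic
stress_unload = 35.0 * strain**2      # unloading branch (kPa), synthetic

e_input = np.trapz(stress_load, strain)        # area under loading
e_returned = np.trapz(stress_unload, strain)   # area under unloading
e_elastic = e_input + e_returned               # sum, per the paper
e_absorbed = e_input - e_returned              # difference (hysteresis)
print(f"input={e_input:.3f}  returned={e_returned:.3f}  "
      f"elastic={e_elastic:.3f}  absorbed={e_absorbed:.3f} (kPa)")
```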

  3. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    PubMed Central

    Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2015-01-01

    Blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters of the SVM and the feature subset selection are the most important factors affecting prediction accuracy. In most studies they are treated as two independent problems, but it has been shown that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and feature subset selection for SVM regression, and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is therefore a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among the properties relevant to BBB penetration, lipophilicity enhances penetration, while all the others are negatively correlated with it. PMID:26504797
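
    A compact sketch of the joint optimization idea: a GA chromosome couples a binary feature mask with log-scaled SVR kernel parameters, and fitness is cross-validated R². The data, operators, and settings are illustrative stand-ins (scikit-learn's SVR and cross_val_score are assumed available), not the study's implementation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                          # toy descriptors
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=120)  # toy log BB

n_feat, pop_size, n_gen = X.shape[1], 20, 15

def random_ind():
    # chromosome = (binary feature mask, log10 of (C, gamma))
    return (rng.integers(0, 2, n_feat), rng.uniform(-2, 2, size=2))

def fitness(ind):
    mask, logp = ind
    if mask.sum() == 0:
        return -np.inf
    model = SVR(C=10.0**logp[0], gamma=10.0**logp[1])
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

pop = [random_ind() for _ in range(pop_size)]
for _ in range(n_gen):
    parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
    children = []
    while len(children) < pop_size - len(parents):
        (m1, p1), (m2, p2) = [parents[rng.integers(len(parents))] for _ in range(2)]
        cut = rng.integers(1, n_feat)                 # one-point crossover
        mask = np.concatenate([m1[:cut], m2[cut:]])
        logp = (p1 + p2) / 2 + rng.normal(scale=0.1, size=2)  # blend + mutate
        flip = rng.random(n_feat) < 0.05              # bit-flip mutation
        children.append((np.where(flip, 1 - mask, mask), logp))
    pop = parents + children

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best[0]), "CV R^2:", round(fitness(best), 3))
```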

  4. Visualization of Space-Time Ambiguities to be Explored by NASA GEC Mission with a Critique of Synthesized Measurements for Different GEC Mission Scenarios

    NASA Technical Reports Server (NTRS)

    Sojka, Jan J.

    2003-01-01

    The Grant supported research addressing the question of how the NASA Solar Terrestrial Probes (STP) mission Geospace Electrodynamic Connections (GEC) will resolve space-time structures and collect sufficient information to solve the coupled thermosphere-ionosphere-magnetosphere dynamics and electrodynamics. The approach adopted was to develop a model of the ionosphere-thermosphere (I-T), with high resolution in both space and time, over the altitudes relevant to GEC, especially the deep-dipping phase. This I-T model was driven by a high-resolution model of magnetosphere-ionosphere (M-I) coupling electrodynamics. Such a model contains all the key parameters to be measured by GEC instrumentation, which in turn are the parameters required to resolve present-day problems in describing the energy and momentum coupling between the ionosphere-magnetosphere and ionosphere-thermosphere systems. This model database has been successfully created for one geophysical condition: winter, solar maximum, disturbed geophysical conditions, specifically a substorm. Using this data set, visualizations (movies) were created to contrast the dynamics of the different measurable parameters, specifically the rapidly varying magnetospheric electric field and auroral electron precipitation versus the slowly varying ionospheric F-region electron density and the rapidly responding E-region density.

  5. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system, then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
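
    A sketch of the "90% capture with 95% confidence" logic: propagate an ensemble of calibrated parameter draws through the efficiency model and find the smallest gas flow rate whose 5th-percentile predicted efficiency still reaches 90%. The efficiency curve below is a hypothetical stand-in for the upscaled 1-MW multiphase simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
k = rng.normal(1.0, 0.1, size=2000)        # posterior-like parameter draws

def efficiency(flow, k):
    # toy capture-efficiency curve; NOT the study's reactive flow model
    return 1.0 - np.exp(-k * flow / 40.0)

for flow in np.arange(20.0, 200.0, 1.0):   # candidate gas flow rates
    eff5 = np.percentile(efficiency(flow, k), 5)   # 95%-confidence floor
    if eff5 >= 0.90:
        print(f"minimum gas flow rate ~ {flow:.0f} (toy units)")
        break
```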

  6. Non-robust dynamic inferences from macroeconometric models: Bifurcation stratification of confidence regions

    NASA Astrophysics Data System (ADS)

    Barnett, William A.; Duzhak, Evgeniya Aleksandrovna

    2008-06-01

    Grandmont [J.M. Grandmont, On endogenous competitive business cycles, Econometrica 53 (1985) 995-1045] found that the parameter space of the most classical dynamic models is stratified into an infinite number of subsets supporting an infinite number of different kinds of dynamics, from monotonic stability at one extreme to chaos at the other extreme, and with many forms of multiperiodic dynamics in between. The econometric implications of Grandmont’s findings are particularly important, if bifurcation boundaries cross the confidence regions surrounding parameter estimates in policy-relevant models. Stratification of a confidence region into bifurcated subsets seriously damages robustness of dynamical inferences. Recently, interest in policy in some circles has moved to New-Keynesian models. As a result, in this paper we explore bifurcation within the class of New-Keynesian models. We develop the econometric theory needed to locate bifurcation boundaries in log-linearized New-Keynesian models with Taylor policy rules or inflation-targeting policy rules. Central results needed in this research are our theorems on the existence and location of Hopf bifurcation boundaries in each of the cases that we consider.
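
    A numerical sketch of locating such a boundary: sweep a parameter, compute the Jacobian eigenvalues of the linearized model, and bisect on the point where a complex pair crosses the imaginary axis. The 2x2 system below is a toy stand-in for a log-linearized New-Keynesian model, not one of the paper's theorems.

```python
import numpy as np

def jacobian(p):
    # toy linearization with eigenvalues (p - 1) +/- 2i: a complex pair
    # crosses the imaginary axis (Hopf-type crossing) at p = 1
    return np.array([[p - 1.0, -2.0],
                     [2.0,      p - 1.0]])

def max_real_part(p):
    return np.linalg.eigvals(jacobian(p)).real.max()

lo, hi = 0.0, 2.0                  # bracket: stable at lo, unstable at hi
for _ in range(50):                # bisection on the crossing point
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_real_part(mid) < 0 else (lo, mid)
print(f"bifurcation boundary near p = {0.5 * (lo + hi):.6f}")  # expect 1.0
```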

  7. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    PubMed

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, since we can consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.

  8. Inflationary generalized Chaplygin gas and dark energy in light of the Planck and BICEP2 experiments

    NASA Astrophysics Data System (ADS)

    Dinda, Bikash R.; Kumar, Sumit; Sen, Anjan A.

    2014-10-01

    In this work, we study an inflationary scenario in the presence of a generalized Chaplygin gas (GCG). We show that in Einstein gravity GCG is not a suitable candidate for inflation, but in a five-dimensional brane-world scenario it can work as a viable inflationary model. We calculate the relevant quantities n_s, r, and A_s related to the primordial scalar and tensor fluctuations, and using their recent bounds from Planck and BICEP2, we constrain the model parameters as well as the five-dimensional Planck mass. However, as a slow-roll inflationary model with a power-law-type scalar primordial power spectrum, GCG cannot resolve the tension between the results from BICEP2 and Planck with a concordance ΛCDM Universe. We show that by going beyond the concordance ΛCDM model and incorporating more general dark energy behavior, we may ease this tension. We also obtain constraints on n_s and r and the GCG model parameters using Planck+WP+BICEP2 data considering the CPL dark energy behavior.

  9. Characterizing Uncertainty In Electrical Resistivity Tomography Images Due To Subzero Temperature Variability

    NASA Astrophysics Data System (ADS)

    Herring, T.; Cey, E. E.; Pidlisecky, A.

    2017-12-01

    Time-lapse electrical resistivity tomography (ERT) is used to image changes in subsurface electrical conductivity (EC), e.g., due to a saline contaminant plume. Temperature variation also produces an EC response, which interferes with the signal of interest. Temperature compensation requires the temperature distribution and the relationship between EC and temperature, but this relationship at subzero temperatures is not well defined. The goal of this study is to examine how uncertainty in the subzero EC/temperature relationship manifests in temperature-corrected ERT images, especially with respect to relevant plume parameters (location, contaminant mass, etc.). First, a lab experiment was performed to determine the EC of fine-grained glass beads over a range of temperatures (-20° to 20° C) and saturations. The measured EC/temperature relationship was then used to add temperature effects to a hypothetical EC model of a conductive plume. Forward simulations yielded synthetic field data to which temperature corrections were applied. Varying the temperature/EC relationship used in the temperature correction and comparing the temperature-corrected ERT results to the synthetic model enabled a quantitative analysis of the error in plume parameters associated with temperature variability. Modeling possible scenarios in this way helps to establish the feasibility of different time-lapse ERT applications by quantifying the uncertainty associated with the parameter(s) of interest.
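
    For orientation, a common above-freezing compensation is a linear fractional change in EC per degree referenced to 25°C; the coefficient below, and especially its application below 0°C, are assumptions of this sketch rather than results of the study, whose point is precisely that the subzero relationship is poorly constrained.

```python
import numpy as np

def ec_at_25(sigma_t, temp_c, m=0.02):
    """Convert bulk EC measured at temp_c to an equivalent EC at 25 C,
    assuming a linear fractional change m per degree (assumed form)."""
    return sigma_t / (1.0 + m * (temp_c - 25.0))

sigma_measured = np.array([0.012, 0.010, 0.008])   # S/m, illustrative
temps = np.array([10.0, 2.0, -5.0])                # the -5 C entry sits in
print(ec_at_25(sigma_measured, temps))             # the ill-defined regime
```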

  10. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, however, aeroservoelastic data analysis has paid insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  11. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum absolute error; the relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the error-criterion surface. In an application to a series of alkanes, the specialized initialization significantly reduced the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Experimental constraints from flavour changing processes and physics beyond the Standard Model.

    PubMed

    Gersabeck, M; Gligorov, V V; Serra, N

    Flavour physics has a long tradition of paving the way for direct discoveries of new particles and interactions. Results over the last decade have placed stringent bounds on the parameter space of physics beyond the Standard Model. Early results from the LHC, and its dedicated flavour factory LHCb, have further tightened these constraints and reiterate the ongoing relevance of flavour studies. The experimental status of flavour observables in the charm and beauty sectors is reviewed, covering measurements of CP violation, neutral-meson mixing, and rare decays.

  13. Correlation effects in superconducting quantum dot systems

    NASA Astrophysics Data System (ADS)

    Pokorný, Vladislav; Žonda, Martin

    2018-05-01

    We study the effect of electron correlations on a system consisting of a single-level quantum dot with local Coulomb interaction attached to two superconducting leads. We use the single-impurity Anderson model with BCS superconducting baths to study the interplay between the proximity induced electron pairing and the local Coulomb interaction. We show how to solve the model using the continuous-time hybridization-expansion quantum Monte Carlo method. The results obtained for experimentally relevant parameters are compared with results of self-consistent second order perturbation theory as well as with the numerical renormalization group method.

  14. Stream Flow Prediction by Remote Sensing and Genetic Programming

    NASA Technical Reports Server (NTRS)

    Chang, Ni-Bin

    2009-01-01

    A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.

  15. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source, including the expansion, extraction, and part of the acceleration regions, while keeping the mesh fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell/Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection fields have been included, and a negative ion current density of j(H⁻) = 660 A/m² from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included, and a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.

  16. Tsunami propagation modelling - a sensitivity study

    NASA Astrophysics Data System (ADS)

    Dao, M. H.; Tkalich, P.

    2007-12-01

    The 2004 Indian Ocean Tsunami and its tragic consequences demonstrated a lack of relevant experience and preparedness among the affected coastal nations. After the event, the scientific and forecasting circles of the affected countries started capacity building to tackle similar problems in the future. Different approaches have been used for tsunami propagation, such as the Boussinesq and Nonlinear Shallow Water Equations (NSWE); these approximations were obtained by assuming different relative importance of the nonlinear, dispersion, and spatial gradient variation phenomena and terms. The paper describes a further development of the original TUNAMI-N2 model to take into account additional phenomena: astronomic tide, sea bottom friction, dispersion, Coriolis force, and spherical curvature. The code is modified to be suitable for operational forecasting, and the resulting version (TUNAMI-N2-NUS) is verified using test cases, results of other models, and real case scenarios. Using the 2004 Tsunami event as one of the scenarios, the paper examines the sensitivity of numerical solutions to the variation of different phenomena and parameters, and the results are analyzed and ranked accordingly.

  17. Laser-plasma interactions for fast ignition

    NASA Astrophysics Data System (ADS)

    Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.

    2014-05-01

    In the electron-driven fast-ignition (FI) approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to FI. Increases in computational and modelling capabilities, as well as algorithmic developments, have enhanced our ability to perform multi-dimensional particle-in-cell simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of the laser absorption fraction, the laser-generated electron spectra and divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity is considered, as well as the dependence on plasma parameters. Different numerical modelling approaches and configurations are addressed, providing an overview of the modelling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of the surrogacy of today's experiments for the full-scale FI problem.

  18. Neuromusculoskeletal Model Calibration Significantly Affects Predicted Knee Contact Forces for Walking

    PubMed Central

    Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.

    2016-01-01

    Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105

  19. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  20. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
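
    The compounding recipe in the two records above reduces to a short numerical procedure: estimate local variances on short windows, take their empirical distribution, and mix Gaussians over it. The sketch below, with an invented sinusoidally drifting variance and an arbitrary window length, is only meant to make the steps concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nonstationary series: locally Gaussian, slowly drifting variance.
n, window = 200_000, 500
local_sigma = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 20_000)
x = rng.normal(0.0, local_sigma)

# Empirical distribution of local variances from non-overlapping windows.
local_var = x[: n - n % window].reshape(-1, window).var(axis=1)

# Compound: draw a variance per sample, then a Gaussian value with it.
compound = rng.normal(0.0, np.sqrt(rng.choice(local_var, size=n)))

# The variance mixing shows up as excess kurtosis (a pure Gaussian has 3).
kurt = lambda y: ((y - y.mean()) ** 4).mean() / y.var() ** 2
print(f"kurtosis: data {kurt(x):.2f}, compound {kurt(compound):.2f}")
```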

  1. Relaxation limit of a compressible gas-liquid model with well-reservoir interaction

    NASA Astrophysics Data System (ADS)

    Solem, Susanne; Evje, Steinar

    2017-02-01

    This paper deals with the relaxation limit of a two-phase compressible gas-liquid model which contains a pressure-dependent well-reservoir interaction term of the form q (P_r - P) where q>0 is the rate of the pressure-dependent influx/efflux of gas, P is the (unknown) wellbore pressure, and P_r is the (known) surrounding reservoir pressure. The model can be used to study gas-kick flow scenarios relevant for various wellbore operations. One extreme case is when the wellbore pressure P is largely dictated by the surrounding reservoir pressure P_r. Formally, this model is obtained by deriving the limiting system as the relaxation parameter q in the full model tends to infinity. The main purpose of this work is to understand to what extent this case can be represented by a well-defined mathematical model for a fixed global time T>0. Well-posedness of the full model has been obtained in Evje (SIAM J Math Anal 45(2):518-546, 2013). However, as the estimates for the full model are dependent on the relaxation parameter q, new estimates must be obtained for the equilibrium model to ensure existence of solutions. By means of appropriate a priori assumptions and some restrictions on the model parameters, necessary estimates (low order and higher order) are obtained. These estimates that depend on the global time T together with smallness assumptions on the initial data are then used to obtain existence of solutions in suitable Sobolev spaces.

  2. Diversification versus specialization in complex ecosystems.

    PubMed

    Di Clemente, Riccardo; Chiarotti, Guido L; Cristelli, Matthieu; Tacchella, Andrea; Pietronero, Luciano

    2014-01-01

    By analyzing the distribution of revenues across the production sectors of quoted firms, we suggest a novel dimension that drives the firms' diversification process at the country level. Data show a non-trivial macro-regional clustering of the diversification process, which underlines the relevance of geopolitical environments in determining the microscopic dynamics of economic entities. These findings demonstrate the possibility of singling out, in complex ecosystems, those micro-features that emerge at macro-levels, which could be of particular relevance for decision-makers in selecting the appropriate parameters to be acted upon in order to achieve desirable results. The understanding of this micro-macro information exchange is further deepened through the introduction of a simplified dynamic model.

  3. Diversification versus Specialization in Complex Ecosystems

    PubMed Central

    Di Clemente, Riccardo; Chiarotti, Guido L.; Cristelli, Matthieu; Tacchella, Andrea; Pietronero, Luciano

    2014-01-01

    By analyzing the distribution of revenues across the production sectors of quoted firms, we suggest a novel dimension that drives the firms' diversification process at the country level. Data show a non-trivial macro-regional clustering of the diversification process, which underlines the relevance of geopolitical environments in determining the microscopic dynamics of economic entities. These findings demonstrate the possibility of singling out, in complex ecosystems, those micro-features that emerge at macro-levels, which could be of particular relevance for decision-makers in selecting the appropriate parameters to be acted upon in order to achieve desirable results. The understanding of this micro-macro information exchange is further deepened through the introduction of a simplified dynamic model. PMID:25384059

  4. Pinatubo Emulation in Multiple Models (POEMs): co-ordinated experiments in the ISA-MIP model intercomparison activity component of the SPARC Stratospheric Sulphur and its Role in Climate initiative (SSiRC)

    NASA Astrophysics Data System (ADS)

    Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina

    2016-04-01

    The World Climate Research Programme's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC involves an intercomparison, "ISA-MIP", of composition-climate models that simulate the stratospheric aerosol layer interactively. Within POEMs, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once the emulator is trained on the ensemble, a Monte Carlo simulation with the fast Gaussian emulator enables a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types, and in structural parameters of the models, to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the POEMs analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
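
    The emulator workflow sketched above (train a Gaussian process on a small perturbed-physics ensemble, then Monte Carlo the cheap surrogate) can be mocked up in a few lines. In this hedged sketch the "model" and its two uncertain parameters are toy stand-ins for an interactive stratospheric aerosol simulation, and the crude bin-conditioning estimate of first-order sensitivity is only a stand-in for a proper variance-based Sobol analysis.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_model(theta):
    """Toy stand-in for a volcanic-forcing run (SO2 mass, injection height)."""
    so2, height = theta
    return -2.0 * so2 * np.exp(-((height - 22.0) / 4.0) ** 2)

# "Perturbed physics ensemble": a small space-filling design of model runs.
lo, hi = [5.0, 16.0], [20.0, 28.0]                 # Tg SO2, km (invented ranges)
X = qmc.scale(qmc.LatinHypercube(d=2, seed=2).random(30), lo, hi)
y = np.array([toy_model(t) for t in X])

gp = GaussianProcessRegressor(ConstantKernel() * RBF([5.0, 3.0]),
                              normalize_y=True).fit(X, y)

# Monte Carlo on the cheap emulator: a crude variance-based first-order
# sensitivity estimate by conditioning on each parameter in turn.
U = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(5000), lo, hi)
total_var = gp.predict(U).var()
for i, name in enumerate(["SO2 mass", "injection height"]):
    edges = np.quantile(U[:, i], np.linspace(0, 1, 21))
    idx = np.digitize(U[:, i], edges[1:-1])          # 20 quantile bins
    cond_means = [gp.predict(U[idx == b]).mean() for b in range(20)]
    print(f"first-order sensitivity of {name}: {np.var(cond_means) / total_var:.2f}")
```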

  5. Lignin Depolymerization with Nitrate-Intercalated Hydrotalcite Catalysts

    DOE PAGES

    Kruger, Jacob S.; Cleveland, Nicholas S.; Zhang, Shuting; ...

    2016-01-13

    Hydrotalcites (HTCs) exhibit multiple adjustable parameters to tune catalytic activity, including interlayer anion composition, metal hydroxide layer composition, and catalyst preparation methods. Here we report the influence of several of these parameters on β-O-4 bond scission in a lignin model dimer, 2-phenoxy-1-phenethanol (PE), to yield phenol and acetophenone. We find that the presence of both basic and NO3– anions in the interlayer increases the catalyst activity 2- to 3-fold. In contrast, other anions or transition metals do not enhance catalytic activity in comparison to blank HTC. The catalyst is not active for C–C bond cleavage in lignin model dimers and has no effect on dimers without an α-OH group. Most importantly, the catalyst is highly active in the depolymerization of two process-relevant lignin substrates, producing a significant amount of low-molecular-weight aromatic species. The catalyst can be recycled until the NO3– anions are depleted, after which the activity can be restored by replenishing the NO3– reservoir and regenerating the hydrated HTC structure. These results demonstrate a route to selective lignin depolymerization in a heterogeneous system with an inexpensive, earth-abundant, commercially relevant, and easily regenerated catalyst.

  6. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter, and the freezing fraction. Other parameters dealing with the leading-edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced that can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and that some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether, and to what extent, ice shapes change when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.

  7. Myokit: A simple interface to cardiac cellular electrophysiology.

    PubMed

    Clerx, Michael; Collins, Pieter; de Lange, Enno; Volders, Paul G A

    2016-01-01

    Myokit is a new, powerful, and versatile software tool for modeling and simulation of cardiac cellular electrophysiology. Myokit consists of an easy-to-read modeling language, a graphical user interface, single- and multi-cell simulation engines, and a library of advanced analysis tools accessible through a Python interface. Models can be loaded from Myokit's native file format or imported from CellML. Model export is provided to C, MATLAB, CellML, CUDA, and OpenCL. Patch-clamp data can be imported and used to estimate model parameters. In this paper, we review existing tools to simulate the cardiac cellular action potential and find that current tools do not cater specifically to model development, and that there is a gap between easy-to-use but limited software and powerful tools that require strong programming skills from their users. We then describe Myokit's capabilities, focusing in detail on its model description language, simulation engines, and import/export facilities. Using three examples, we show how Myokit can be used for clinically relevant investigations, multi-model testing, and parameter estimation in Markov models, all with minimal programming effort from the user. In this way, Myokit bridges the gap between performance, versatility, and user-friendliness. Copyright © 2015 Elsevier Ltd. All rights reserved.
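
    A typical Myokit session follows the pattern below; the model file name and the variable label 'membrane.V' are assumptions (they vary by model), so consult the Myokit documentation for the authoritative API.

```python
import matplotlib.pyplot as plt
import myokit

# Load a model, pacing protocol, and (optional) embedded script from an .mmt file.
model, protocol, _ = myokit.load('example.mmt')

# Run a single-cell simulation for 1000 ms.
sim = myokit.Simulation(model, protocol)
log = sim.run(1000)

# Plot the action potential (variable name assumed; depends on the model).
plt.plot(log.time(), log['membrane.V'])
plt.xlabel('Time (ms)')
plt.ylabel('Membrane potential (mV)')
plt.show()
```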

  8. Skill and independence weighting for multi-model assessments

    DOE PAGES

    Sanderson, Benjamin M.; Wehner, Michael; Knutti, Reto

    2017-06-28

    We present a weighting strategy for use with the CMIP5 multi-model archive in the fourth National Climate Assessment, which considers both skill in the climatological performance of models over North America and the inter-dependency of models arising from common parameterizations or tuning practices. The method exploits information relating to the climatological mean state of a number of projection-relevant variables, as well as metrics representing long-term statistics of weather extremes. The weights, once computed, can be used to compute weighted means and significance information from an ensemble containing multiple initial-condition members from potentially co-dependent models of varying skill. Two parameters in the algorithm determine the degree to which model climatological skill and model uniqueness are rewarded; these parameters are explored and final values are defended for the assessment. The influence of model weighting on projected temperature and precipitation changes is found to be moderate, partly due to a compensating effect between model skill and uniqueness. However, more aggressive skill weighting and weighting by targeted metrics are found to have a more significant effect on inferred ensemble confidence in future patterns of change for a given projection.
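
    The combination of skill and uniqueness terms can be written down compactly. The sketch below follows the general form of such schemes (a Gaussian skill weight multiplied by a neighbour-count penalty); the distance values and the radii D_q and D_u are invented for demonstration and are not the assessment's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_models = 10
d_obs = rng.uniform(0.2, 1.5, n_models)        # model-to-observation distance (skill)
d_mod = rng.uniform(0.1, 2.0, (n_models, n_models))
d_mod = (d_mod + d_mod.T) / 2                  # symmetric inter-model distances
np.fill_diagonal(d_mod, 0.0)

def weights(d_obs, d_mod, D_q=0.8, D_u=0.5):
    """Skill term rewards closeness to observations; the uniqueness term
    down-weights models that have many close neighbours."""
    skill = np.exp(-(d_obs / D_q) ** 2)
    similarity = np.exp(-(d_mod / D_u) ** 2)
    np.fill_diagonal(similarity, 0.0)          # a model is not its own neighbour
    uniqueness = 1.0 / (1.0 + similarity.sum(axis=1))
    w = skill * uniqueness
    return w / w.sum()

print(np.round(weights(d_obs, d_mod), 3))
```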

  9. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.

  10. Adult Age Differences and the Role of Cognitive Resources in Perceptual–Motor Skill Acquisition: Application of a Multilevel Negative Exponential Model

    PubMed Central

    Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali

    2010-01-01

    The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
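
    For a single subject, the negative exponential learning function and its fit reduce to a few lines; the multilevel (random-effects) machinery of the study is omitted here, and the pursuit-rotor data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, gain, rate):
    """Performance = asymptote - gain * exp(-rate * (trial - 1))."""
    return asymptote - gain * np.exp(-rate * (trial - 1))

rng = np.random.default_rng(5)
trials = np.arange(1, 81)                                   # 80 pursuit-rotor trials
truth = neg_exp(trials, asymptote=0.75, gain=0.5, rate=0.06)
y = truth + rng.normal(0, 0.03, trials.size)                # noisy time-on-target

params, _ = curve_fit(neg_exp, trials, y, p0=[0.7, 0.4, 0.1])
print("asymptote, gain, rate:", np.round(params, 3))
```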

  11. Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.

    PubMed

    MacMartin, Douglas G; Tziperman, Eli

    2014-09-08

    Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
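
    Estimating such a transfer function from a pair of time series is straightforward with standard spectral estimators, H(f) = P_xy(f) / P_xx(f). The sketch below uses a synthetic first-order lag system in place of real wind-stress and thermocline data; gain and phase versus frequency then diagnose the underlying differential equation.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(6)
n, fs = 4096, 12.0                       # monthly data: 12 samples per year
x = rng.normal(size=n)                   # e.g. a wind-stress proxy
y = np.zeros(n)                          # e.g. a thermocline-depth proxy
for t in range(1, n):                    # first-order lag: y integrates x
    y[t] = 0.9 * y[t - 1] + 0.5 * x[t]

f, Pxy = csd(x, y, fs=fs, nperseg=512)   # cross-spectrum
_, Pxx = welch(x, fs=fs, nperseg=512)    # input power spectrum
H = Pxy / Pxx                            # complex-valued transfer function

print("low-frequency gain ~", round(abs(H[1]), 2),
      "; phase [rad] ~", round(float(np.angle(H[1])), 2))
```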

  12. Flux estimation of the FIFE planetary boundary layer (PBL) with 10.6 micron Doppler lidar

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1990-01-01

    A method is devised for calculating wind, momentum, and other flux parameters that characterize the planetary boundary layer (PBL) and thereby facilitate the calibration of spaceborne versus in situ flux estimates. Single Doppler lidar data are used to estimate the variance of the mean wind and the covariance related to the vertically pointing fluxes of horizontal momentum. The skewness of the vertical velocity and the range of kinetic energy dissipation are also estimated, and the surface heat flux is determined by means of a statistical Navier-Stokes equation. The conclusions show that the PBL structure combines both 'bottom-up' and 'top-down' processes, suggesting that the relevant parameters for the atmospheric boundary layer be revised. The conclusions are of significant interest to the modeling techniques used in General Circulation Models as well as to flux estimation.

  13. A Statistical Physicist's Approach to Biological Motion: From the Kinesin Walk to Muscle Contraction

    NASA Astrophysics Data System (ADS)

    Vicsek, Tamas

    1997-03-01

    It is demonstrated that a wide range of experimental results on biological motion can be successfully interpreted in terms of statistical-physics-motivated models that take into account the relevant microscopic details of motor proteins and allow analytic solutions. Two important examples are considered: i) the motion of a single kinesin molecule along microtubules inside individual cells, and ii) muscle contraction, which is a macroscopic phenomenon due to the collective action of a large number of myosin heads along actin filaments. i) Recently, individual two-headed kinesin molecules have been studied in in vitro motility assays, revealing a number of their peculiar transport properties. Here we propose a simple and robust model for the kinesin stepping process with elastically coupled Brownian heads that shows all of these properties. The analytic treatment of our model results in a very good fit to the experimental data and has practically no free parameters. ii) Myosin is an ATPase enzyme that converts the chemical energy stored in ATP molecules into mechanical work. During muscle contraction, the myosin cross-bridges attach to the actin filaments and exert force on them, yielding a relative sliding of the actin and myosin filaments. In this paper we present a simple mechanochemical model for the cross-bridge interaction involving the relevant kinetic data and providing simple analytic solutions for the mechanical properties of muscle contraction, such as the force-velocity relationship or the relative number of attached cross-bridges. So far, the only analytic formula that could be fitted to the measured force-velocity curves has been the well-known Hill equation, containing parameters lacking clear microscopic origin. The main advantages of our new approach are that it explicitly connects the mechanical data with the kinetic data and the concentrations of ATP and ATPase products, and that it leads to new analytic solutions which agree extremely well with a wide range of experimental curves, while the parameters of the corresponding expressions have well-defined microscopic meaning.

  14. In silico analysis of antibiotic-induced Clostridium difficile infection: Remediation techniques and biological adaptations

    PubMed Central

    Carlson, Jean M.

    2018-01-01

    In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and the transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments. PMID:29451873

  15. The C2HDM revisited

    NASA Astrophysics Data System (ADS)

    Fontes, Duarte; Mühlleitner, Margarete; Romão, Jorge C.; Santos, Rui; Silva, João P.; Wittbrodt, Jonas

    2018-02-01

    The complex two-Higgs doublet model is one of the simplest ways to extend the scalar sector of the Standard Model to include a new source of CP-violation. The model has been used as a benchmark model to search for CP-violation at the LHC and as a possible explanation for the matter-antimatter asymmetry of the Universe. In this work, we re-analyse in full detail the softly broken ℤ2-symmetric complex two-Higgs doublet model (C2HDM). We provide the code C2HDM_HDECAY implementing the C2HDM in the well-known HDECAY program which calculates the decay widths including the state-of-the-art higher order QCD corrections and the relevant off-shell decays. Using C2HDM_HDECAY together with the most relevant theoretical and experimental constraints, including electric dipole moments (EDMs), we review the parameter space of the model and discuss its phenomenology. In particular, we find cases where large CP-odd couplings to fermions are still allowed and provide benchmark points for these scenarios. We examine the prospects of discovering CP-violation at the LHC and show how theoretically motivated measures of CP-violation correlate with observables.

  16. In silico analysis of antibiotic-induced Clostridium difficile infection: Remediation techniques and biological adaptations.

    PubMed

    Jones, Eric W; Carlson, Jean M

    2018-02-01

    In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and the transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments.
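
    The gLV core of such a framework is a small ODE system, dx_i/dt = x_i (r_i + Σ_j A_ij x_j). The sketch below integrates a three-species toy community; the growth rates and interaction matrix are invented for illustration, not the mouse-derived parameters of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.6, 0.4, 0.3])              # intrinsic growth rates
A = np.array([[-1.0, -0.3, -0.2],          # A[i, j]: effect of species j on i
              [-0.2, -1.0, -0.4],
              [-0.1, -0.5, -1.0]])

def glv(t, x):
    """Generalized Lotka-Volterra right-hand side."""
    return x * (r + A @ x)

sol = solve_ivp(glv, (0, 50), [0.1, 0.1, 0.01], dense_output=True)
print("final abundances:", sol.y[:, -1].round(3))
```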

  17. Systems modelling methodology for the analysis of apoptosis signal transduction and cell death decisions.

    PubMed

    Rehm, Markus; Prehn, Jochen H M

    2013-06-01

    Systems biology and systems medicine, i.e. the application of systems biology in a clinical context, is becoming of increasing importance in biology, drug discovery and health care. Systems biology incorporates knowledge and methods that are applied in mathematics, physics and engineering, but may not be part of classical training in biology. We here provide an introduction to basic concepts and methods relevant to the construction and application of systems models for apoptosis research. We present the key methods relevant to the representation of biochemical processes in signal transduction models, with a particular reference to apoptotic processes. We demonstrate how such models enable a quantitative and temporal analysis of changes in molecular entities in response to an apoptosis-inducing stimulus, and provide information on cell survival and cell death decisions. We introduce methods for analyzing the spatial propagation of cell death signals, and discuss the concepts of sensitivity analyses that enable a prediction of network responses to disturbances of single or multiple parameters. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  19. Landslide model performance in a high resolution small-scale landscape

    NASA Astrophysics Data System (ADS)

    De Sy, V.; Schoorl, J. M.; Keesstra, S. D.; Jones, K. E.; Claessens, L.

    2013-05-01

    The frequency and severity of shallow landslides in New Zealand threaten life and property, both on- and off-site. The physically based shallow landslide model LAPSUS-LS is tested for its performance in simulating shallow landslide locations induced by a high-intensity rain event in a small-scale landscape. Furthermore, the effect of high-resolution digital elevation models on the performance was tested. The performance of the model was optimised by calibrating different parameter values. A satisfactory result was achieved with a high-resolution (1 m) DEM. Landslides, however, were generally predicted lower on the slope than the mapped erosion scars. This discrepancy could be due to i) inaccuracies in the DEM or in other model input data such as soil strength properties; ii) relevant processes for this environmental context that are not included in the model; or iii) the limited validity of the infinite-length assumption in the infinite slope stability model embedded in LAPSUS-LS. The trade-off between a correct prediction of landslides and a correct prediction of stable cells becomes increasingly worse with coarser resolutions, and model performance decreases mainly due to altered slope characteristics. The optimal parameter combinations differ per resolution. In this environmental context the 1 m resolution topography resembles the actual topography most closely, and landslide locations are better distinguished from stable areas than at coarser resolutions. More gain in model performance could be achieved by adding landslide process complexities and parameter heterogeneity of the catchment.

  20. A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.

    PubMed

    Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R

    2018-01-01

    There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.

  1. Beyond ideal magnetohydrodynamics: from fibration to 3+1 foliation

    NASA Astrophysics Data System (ADS)

    Andersson, N.; Hawke, I.; Dionysopoulou, K.; Comer, G. L.

    2017-06-01

    We consider a resistive multi-fluid framework from the 3+1 space-time foliation point of view, paying particular attention to issues relating to the use of multi-parameter equations of state and the associated inversion from evolved to primitive variables. We highlight relevant numerical issues that arise for general systems with relative flows. As an application of the new formulation, we consider a three-component system relevant for hot neutron stars. In this case we let the baryons (neutrons and protons) move together, but allow heat and electrons to exhibit relative flow. This reduces the problem to three momentum equations: overall energy-momentum conservation, a generalised Ohm’s law, and a heat equation. Our results provide a hierarchy of increasingly complex models and prepare the ground for new state-of-the-art simulations of relevant scenarios in relativistic astrophysics.

  2. Is cardiac toxicity a relevant issue in the radiation treatment of esophageal cancer?

    PubMed

    Beukema, Jannet C; van Luijk, Peter; Widder, Joachim; Langendijk, Johannes A; Muijs, Christina T

    2015-01-01

    In recent years several papers have been published on radiation-induced cardiac toxicity, especially in breast cancer patients. However, in esophageal cancer patients the radiation dose to the heart is usually markedly higher. To determine whether radiation-induced cardiac toxicity is also a relevant issue for this group, we conducted a review of the current literature. A literature search was performed in Medline for papers concerning cardiac toxicity in esophageal cancer patients treated with radiotherapy with or without chemotherapy. The overall crude incidence of symptomatic cardiac toxicity was as high as 10.8%. Toxicities corresponded with several dose-volume parameters of the heart. The most frequently reported complications were pericardial effusion, ischemic heart disease and heart failure. Cardiac toxicity is a relevant issue in the treatment of esophageal cancer. However, valid Normal Tissue Complication Probability models for esophageal cancer are not available at present. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Finding Relevant Parameters for the Thin-film Photovoltaic Cells Production Process with the Application of Data Mining Methods.

    PubMed

    Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas

    2017-09-01

    A data mining approach is proposed as a useful tool for the analysis of control parameters of the three-stage CIGSe photovoltaic cell production process, in order to find the variables that are most relevant to cell electrical parameters and efficiency. The analysed data set consists of stage duration times, heater power values, and temperatures of the element sources and the substrate - 14 variables per sample in total. The most relevant variables of the process were identified by random forest analysis with the application of the Boruta algorithm. 118 CIGSe samples, prepared at Institut des Matériaux Jean Rouxel, were analysed. The results are close to experimental knowledge of the CIGSe cell production process. They provide new evidence to guide the production parameters of new cells and further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
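
    The variable-relevance step can be reproduced with a random forest plus a Boruta wrapper; the sketch below uses the third-party BorutaPy implementation (an assumption — the authors' exact tooling is not stated) on simulated stand-ins for the 14 process variables and the efficiency target.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from boruta import BorutaPy  # third-party package 'boruta' (assumed available)

rng = np.random.default_rng(7)
X = rng.normal(size=(118, 14))                   # 118 samples, 14 process variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=118)  # "efficiency"

rf = RandomForestRegressor(n_estimators=500, random_state=0)
selector = BorutaPy(rf, n_estimators='auto', random_state=0)
selector.fit(X, y)                               # Boruta expects numpy arrays

print("relevant variables:", np.where(selector.support_)[0])
print("rankings:", selector.ranking_)
```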

  4. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
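
    A Latin hypercube design of the kind used for the calibration and uncertainty analysis can be generated directly with scipy; the three parameter names and ranges below are invented placeholders, not RHESSys parameters.

```python
from scipy.stats import qmc

# Hypothetical parameter names and ranges for illustration only.
names = ["max_photosynthesis_rate", "allocation_ratio", "drainage_coeff"]
lower = [10.0, 0.2, 0.01]
upper = [40.0, 0.8, 0.50]

sampler = qmc.LatinHypercube(d=3, seed=8)
design = qmc.scale(sampler.random(n=100), lower, upper)   # 100 parameter sets

for row in design[:3]:
    print(dict(zip(names, row.round(3))))
```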

  5. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a two-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types have been tested: i) averaged values for the wheel track and the undisturbed soil; ii) randomly distributed soil cells with normally distributed variability within the strata; and iii) randomly distributed soil cells with uniformly distributed variability within the strata. All three types of small-scale variability have been tested for j) isotropic gas diffusivity and jj) horizontally reduced gas diffusivity (by a constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity. It did not, however, consider spatial dependency of the variability, which could aggravate the effects further. To account for anisotropy and spatial variability in gas transport models, we recommend a) measuring soil-gas transport parameters spatially explicitly, including different directions, and b) using random-field stochastic models to assess the possible effects on gas-exchange models.
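
    The effect of combining anisotropy with random small-scale variability can be mimicked with a toy two-dimensional steady-state oxygen diffusion solver; the grid, consumption term, and diffusivity statistics below are all illustrative, not the study's parametrization.

```python
import numpy as np

rng = np.random.default_rng(9)
nz, ny = 30, 40                                   # depth x width grid (unit spacing)
Dv = 10 ** rng.normal(-6.0, 0.3, (nz, ny))        # lognormal vertical diffusivity
Dh = 0.3 * Dv                                     # anisotropy: reduced horizontal D
S = 2e-10                                         # uniform O2 consumption (toy units)

C = np.full((nz, ny), 0.21)
C[0, :] = 0.21                                    # atmospheric O2 fixed at the surface

# Gauss-Seidel sweeps of div(D grad C) = S with a fixed surface value and
# reflecting (no-flux) side and bottom boundaries.
for _ in range(3000):
    for i in range(1, nz):
        for j in range(ny):
            up, down = C[i - 1, j], C[min(i + 1, nz - 1), j]
            left, right = C[i, max(j - 1, 0)], C[i, min(j + 1, ny - 1)]
            wv, wh = Dv[i, j], Dh[i, j]
            C[i, j] = (wv * (up + down) + wh * (left + right) - S) / (2 * (wv + wh))

print("minimum O2 fraction in the profile:", round(float(C.min()), 4))
```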

  6. Two phase modeling of the influence of plastic strain on the magnetic and magnetostrictive behaviors of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Hubert, Olivier; Lazreg, Said

    2017-02-01

    A growing interest of the automotive industry in the use of high-performance steels is observed. These materials are obtained through complex manufacturing processes whose parameter fluctuations lead to strong variations in microstructure and mechanical properties. On-line magnetic non-destructive monitoring is a relevant response to this problem, but it requires fast models sensitive to the different parameters of the forming process. Plastic deformation is one of these important parameters. Indeed, ferromagnetic materials are known to be sensitive to stress application and especially to plastic strains. In this paper, a macroscopic approach using kinematic hardening is proposed to model this behavior, considering a plastically strained material as a two-phase system. The relationship between kinematic hardening and residual stress is defined in this framework. Since the stress fields are multiaxial, a uniaxial equivalent stress is calculated and introduced into the so-called magneto-mechanical multidomain model to represent the effect of plastic strain. The modeling approach is complemented by many experiments involving magnetic and magnetostrictive measurements. They are carried out with or without applied stress, using a dual-phase steel deformed to different levels. The main interest of this material is that the mechanically hard phase, the soft phase, and the kinematic hardening can be clearly identified through simple experiments. It is shown how this model can be extended to single-phase materials.

  7. A study on the predictability of acute lymphoblastic leukaemia response to treatment using a hybrid oncosimulator.

    PubMed

    Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S

    2018-02-06

    Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction requires a precise model parameterization. In cases where the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used so that the parameters of the model can be estimated for retrospective cases and predicted for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated when predicting the pre-phase treatment outcome in ALL. This has been achieved correctly for a significant percentage of patient cases tested (approx. 70% of patients). Moreover, the system is capable of declining to classify cases for which the results are not trustworthy enough. In that case, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and of VPH models in general.

  8. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is to correctly estimate the parameters of those methods based on time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of the dynamical system, represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.

  9. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters

    PubMed Central

    Nunez, Michael D.; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2016-01-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects. PMID:28435173

  10. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters.

    PubMed

    Nunez, Michael D; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2017-02-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects.
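
    The generative model underlying these fits is easy to simulate forward: evidence accumulates with a drift and Gaussian noise until it hits a boundary, and the non-decision time is added to the first-passage time. Parameter values in the sketch below are illustrative only.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=2000, dt=0.001, noise=1.0, seed=10):
    """Accumulate evidence from 0 toward +/- boundary/2; return choices and RTs."""
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for k in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary / 2:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[k] = 1 if x > 0 else 0
        rts[k] = ndt + t                  # add non-decision (preprocessing) time
    return choices, rts

choices, rts = simulate_ddm(drift=1.5, boundary=2.0, ndt=0.35)
print(f"accuracy {choices.mean():.2f}, median RT {np.median(rts):.2f} s")
```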

  11. B^0 → K^{*0} μ^+ μ^- decay in the aligned two-Higgs-doublet model

    NASA Astrophysics Data System (ADS)

    Hu, Quan-Yi; Li, Xin-Qiang; Yang, Ya-Dong

    2017-03-01

    In the aligned two-Higgs-doublet model, we perform a complete one-loop computation of the short-distance Wilson coefficients C_{7,9,10}^{(')}, which are the most relevant ones for b → s ℓ^+ ℓ^- transitions. It is found that, when the model parameter |σ_u| is much smaller than |σ_d|, the charged scalar contributes mainly to the chirality-flipped C_{9,10}^', with the corresponding effects being proportional to |σ_d|^2. Numerically, the charged-scalar effects fit into two categories: (A) C_{7,9,10}^{H±} are sizable, but C_{9,10}^{'H±} ≃ 0, corresponding to the (large |σ_u|, small |σ_d|) region; (B) C_7^{H±} and C_{9,10}^{'H±} are sizable, but C_{9,10}^{H±} ≃ 0, corresponding to the (small |σ_u|, large |σ_d|) region. Taking into account phenomenological constraints from the inclusive radiative decay B → X_s γ, as well as the latest model-independent global analysis of b → s ℓ^+ ℓ^- data, we obtain the much restricted parameter space of the model. We then study the impact of the allowed model parameters on the angular observables P_2 and P_5' of the B^0 → K^{*0} μ^+ μ^- decay, and we find that P_5' could be increased significantly, becoming consistent with the experimental data in case B.

  12. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  13. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
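
    A scripted experiment then takes only a few lines. The calls below follow the pynoddy tutorial examples as best we can reconstruct them (the history file name, event index, and property key are assumptions and depend on the model); check the package documentation on github before relying on them.

```python
import pynoddy
import pynoddy.history

# Load an existing Noddy kinematic history (file name assumed).
history = pynoddy.history.NoddyHistory("example_model.his")

# Perturb a kinematic property of one event, e.g. the slip of a fault
# (event index and property key depend on the history file).
history.events[2].properties["Slip"] = 1500.0
history.write_history("example_model_mod.his")

# Recompute the kinematic model for the modified history.
pynoddy.compute_model("example_model_mod.his", "example_output")
```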

  14. Anharmonic interatomic force constants and thermal conductivity from Grüneisen parameters: An application to graphene

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Gan, Chee Kwan

    2017-07-01

    Phonon-mediated thermal conductivity, which is of great technological relevance, arises fundamentally from anharmonic scattering in interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.
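
    The reverse-engineering idea starts from the basic finite-difference definition of a mode Grüneisen parameter, γ = -d ln ω / d ln V. The sketch below evaluates it from frequencies at slightly compressed and expanded volumes; the numbers are toys, not graphene phonons.

```python
import numpy as np

V0 = 100.0                       # equilibrium cell volume (arbitrary units)
eps = 0.01                       # +/- 1% volume change
volumes = np.array([V0 * (1 - eps), V0 * (1 + eps)])

# Phonon frequencies of a few modes at each volume (toy inputs; in practice
# these come from lattice-dynamics calculations at each deformation).
omega = np.array([[10.20, 9.80],     # mode 1: softens on expansion (gamma > 0)
                  [ 5.05, 4.95],
                  [ 2.96, 3.04]]).T  # mode 3: stiffens on expansion (gamma < 0)

dln_omega = np.diff(np.log(omega), axis=0)[0]
dln_V = np.diff(np.log(volumes))[0]
gamma = -dln_omega / dln_V
print("mode Grueneisen parameters:", gamma.round(2))
```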

  15. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This four-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination, relevant to a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the obtained parameter variability is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
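
    The rate descriptions mentioned above are of Michaelis-Menten (Monod) type. The sketch below shows one common form with a competitive-inhibition term, which is one way aqueous TCE or cis-DCE can suppress dechlorination of less-chlorinated compounds; the parameter names and values are illustrative, not the SABRE calibration results.

        def dechlorination_rate(C, X, k_max, K_s, C_inh=0.0, K_inh=float("inf")):
            """Dechlorination rate of substrate at concentration C by biomass X.

            C_inh / K_inh add competitive inhibition by, e.g., aqueous TCE or cis-DCE.
            """
            return k_max * X * C / (K_s * (1.0 + C_inh / K_inh) + C)

        # inhibition by TCE slows cis-DCE dechlorination:
        print(dechlorination_rate(C=0.1, X=1.0, k_max=2.0, K_s=0.05))   # uninhibited
        print(dechlorination_rate(C=0.1, X=1.0, k_max=2.0, K_s=0.05,
                                  C_inh=0.2, K_inh=0.05))               # inhibited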

  16. The Magnetar Model for Type I Superluminous Supernovae. I. Bayesian Analysis of the Full Multicolor Light-curve Sample with MOSFiT

    NASA Astrophysics Data System (ADS)

    Nicholl, Matt; Guillochon, James; Berger, Edo

    2017-11-01

    We use the new Modular Open Source Fitter for Transients to model 38 hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolor light curves with a magnetar spin-down model and present posterior distributions of magnetar and ejecta parameters. The color evolution can be fit with a simple absorbed blackbody. The medians (1σ ranges) for key parameters are spin period 2.4 ms (1.2-4 ms), magnetic field 0.8 × 10^14 G (0.2-1.8 × 10^14 G), ejecta mass 4.8 M⊙ (2.2-12.9 M⊙), and kinetic energy 3.9 × 10^51 erg (1.9-9.8 × 10^51 erg). This significantly narrows the parameter space compared to our uninformed priors, showing that although the magnetar model is flexible, the parameter space relevant to SLSNe is well constrained by existing data. The requirement that the instantaneous engine power be ~10^44 erg s^-1 at the light-curve peak necessitates either large rotational energy (P < 2 ms) or, more commonly, that the spin-down and diffusion timescales be well matched. We find no evidence for separate populations of fast- and slow-declining SLSNe, which instead form a continuum in light-curve widths and inferred parameters. Variations in the spectra are explained through differences in spin-down power and photospheric radii at maximum light. We find no significant correlations between model parameters and host galaxy properties. Comparing our posteriors to stellar evolution models, we show that SLSNe require rapidly rotating (fastest 10%) massive stars (≳20 M⊙), which is consistent with their observed rate. High mass, low metallicity, and likely binary interaction all serve to maintain the rapid rotation essential for magnetar formation. By reproducing the full set of light curves, our posteriors can inform photometric searches for SLSNe in future surveys.
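
    The engine input used in such fits is the standard dipole spin-down law, L(t) = (E_p/t_p)(1 + t/t_p)^−2. The sketch below evaluates it with commonly quoted approximate scalings for the rotational energy and spin-down time; the constants are rough and the numbers illustrative.

        def spin_down_luminosity(t_days, P_ms=2.4, B14=0.8):
            """Magnetar spin-down power in erg/s at time t (days) after explosion."""
            E_p = 2.0e52 / P_ms**2            # erg; rotational energy of a ~1.4 Msun NS
            t_p = 4.7 * B14**-2 * P_ms**2     # days; approximate dipole spin-down time
            t = float(t_days)
            return (E_p / (t_p * 86400.0)) / (1.0 + t / t_p) ** 2

        print("%.2e erg/s at 30 days" % spin_down_luminosity(30.0))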

  17. Reviewing the evidence to inform the population of cost-effectiveness models within health technology assessments.

    PubMed

    Kaltenthaler, Eva; Tappenden, Paul; Paisley, Suzy

    2013-01-01

    Health technology assessments (HTAs) typically require the development of a cost-effectiveness model, which necessitates the identification, selection, and use of other types of information beyond clinical effectiveness evidence to populate the model parameters. The reviewing activity associated with model development should be transparent and reproducible but can result in a tension between being both timely and systematic. Little procedural guidance exists in this area. The purpose of this article was to provide guidance, informed by focus groups, on what might constitute a systematic and transparent approach to reviewing information to populate model parameters. A focus group series was held with HTA experts in the United Kingdom including systematic reviewers, information specialists, and health economic modelers to explore these issues. Framework analysis was used to analyze the qualitative data elicited during focus groups. Suggestions included the use of rapid reviewing methods and the need to consider the trade-off between relevance and quality. The need for transparency in the reporting of review methods was emphasized. It was suggested that additional attention should be given to the reporting of parameters deemed to be more important to the model or where the preferred decision regarding the choice of evidence is equivocal. These recommendations form part of a Technical Support Document produced for the National Institute for Health and Clinical Excellence Decision Support Unit in the United Kingdom. It is intended that these recommendations will help to ensure a more systematic, transparent, and reproducible process for the review of model parameters within HTA.

  18. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    PubMed

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): Δt = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of the 602 beam deliveries (Δt = −0.49 ± 1.44 s), significantly more accurately than BDTs calculated using nominal timing parameters (Δt = −7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may provide guidance on how to effectively reduce BDT and may be used to identify deteriorating machine performance.
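
    A stripped-down version of such a timing model can be written in a few lines: BDT is the sum of layer-switch, spot-switch (magnet preparation plus scanning travel), and spot-delivery times. The sketch below uses the averaged values quoted above as defaults and deliberately ignores the spill charge limit and extraction overhead, so it is a simplification of the published model, not a reimplementation of it.

        def beam_delivery_time(layers, t_layer=1.91, t_magnet=1.93e-3,
                               v_x=5.9, v_y=19.3, spill_rate=8.7):
            """layers: list of energy layers, each a list of (x_m, y_m, mu) spots."""
            bdt = t_layer * (len(layers) - 1)              # energy-layer switches
            for layer in layers:
                x_prev, y_prev = layer[0][0], layer[0][1]  # scanning starts at first spot
                for x, y, mu in layer:
                    travel = max(abs(x - x_prev) / v_x, abs(y - y_prev) / v_y)
                    bdt += t_magnet + travel               # spot switch
                    bdt += mu / spill_rate                 # spot delivery
                    x_prev, y_prev = x, y
            return bdt

        plan = [[(0.00, 0.00, 2.0), (0.01, 0.00, 1.5)], [(0.00, 0.00, 1.8)]]
        print("predicted BDT: %.3f s" % beam_delivery_time(plan))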

  19. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    NASA Astrophysics Data System (ADS)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions were considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The effect of the parameter ranges on the meta-model accuracy was also investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a powerful tool to support the design of these structures. Furthermore, because it accommodates the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.

  20. Light weakly coupled axial forces: models, constraints, and projections

    DOE PAGES

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...

    2017-05-01

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π^0 and ⁸Be* decay.

  2. Prediction of the chromatographic retention of acid-base compounds in pH buffered methanol-water mobile phases in gradient mode by a simplified model.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2015-03-13

    Retention of ionizable analytes under gradient elution depends on the pH of the mobile phase, the pKa of the analyte, and their evolution along the programmed gradient. In previous work, a model depending on two fitting parameters was recommended because of its very favorable ratio of accuracy to required experimental work. It was developed using acetonitrile as the organic modifier and involves pKa modeling by means of equations that take into account the acidic functional group of the compound (carboxylic acid, protonated amine, etc.). In this work, the two-parameter predictive model is tested and validated using methanol as the organic modifier of the mobile phase, with several compounds of greater pharmaceutical relevance and structural complexity as test analytes. Overall, the predictions agreed well with the experimental data, showing that the model is applicable to a wide variety of acid-base compounds in mobile phases prepared with either acetonitrile or methanol.

  3. Transdisciplinary application of the cross-scale resilience model

    USGS Publications Warehouse

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  4. Sleep mechanisms: Sleep deprivation and detection of changing levels of consciousness

    NASA Technical Reports Server (NTRS)

    Dement, W. C.; Barchas, J. D.

    1972-01-01

    An attempt was made to obtain information relevant to assessing the need to sleep and to make up for lost sleep. Physiological and behavioral parameters were used as measures. Sleep deprivation in a restricted environment, the derivation of EEG data relevant to determining sleepiness, and the development of the Stanford Sleepiness Scale are discussed.

  5. The predicted influence of climate change on lesser prairie-chicken reproductive parameters

    USGS Publications Warehouse

    Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.

  7. Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
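
    For concreteness, the sketch below computes AIC values and the Akaike weights that underlie AIC-based model averaging; the log-likelihoods and parameter counts are illustrative.

        import numpy as np

        def aic(log_likelihood, n_params):
            return 2.0 * n_params - 2.0 * log_likelihood

        def akaike_weights(aic_values):
            d = np.asarray(aic_values) - np.min(aic_values)   # AIC differences
            w = np.exp(-0.5 * d)
            return w / w.sum()                                # weights sum to 1

        # e.g. three substitution models with fitted log-likelihoods and parameter counts
        aics = [aic(-2512.3, 1), aic(-2498.7, 5), aic(-2497.9, 9)]
        print(akaike_weights(aics))   # weight each model for model-averaged estimates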

  8. Coupled wake boundary layer model of windfarms

    NASA Astrophysics Data System (ADS)

    Stevens, Richard; Gayme, Dennice; Meneveau, Charles

    2014-11-01

    We present a coupled wake boundary layer (CWBL) model that describes the distribution of the power output in a windfarm. The model couples the traditional, industry-standard wake expansion/superposition approach with a top-down model for the overall windfarm boundary layer structure. Wake models capture the effect of turbine positioning, while the top-down approach represents the interaction between the windturbine wakes and the atmospheric boundary layer. Each portion of the CWBL model requires specification of a parameter that is unknown a priori. The wake model requires the wake expansion rate, whereas the top-down model requires the effective spanwise turbine spacing within which the model's momentum balance is relevant. The wake expansion rate is obtained by matching the mean velocity at the turbine from both approaches, while the effective spanwise turbine spacing is determined from the wake model. Coupling of the constitutive components of the CWBL model is achieved by iterating these parameters until convergence is reached. We show that the CWBL model predictions compare more favorably with large eddy simulation results than those made with either the wake or top-down model in isolation, and that the model can be applied successfully to the Horns Rev and Nysted windfarms. This work was supported by the `Fellowships for Young Energy Scientists' (YES!) programme of the Foundation for Fundamental Research on Matter, which is supported by NWO, and by NSF Grant #1243482.
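
    The coupling described above is a fixed-point iteration over the two a priori unknown parameters. The following schematic sketch shows the structure of such a loop; the component models here are toy stand-ins (illustrative shapes only), not the actual CWBL wake and top-down formulations.

        def couple_cwbl(u_wake, u_top, spacing, k_w=0.05, s_eff=4.0,
                        relax=0.5, tol=1e-10, max_iter=10000):
            """Iterate k_w (wake expansion rate) and s_eff (effective spanwise
            spacing) until both components predict the same hub velocity."""
            for _ in range(max_iter):
                du = u_top(s_eff) - u_wake(k_w, s_eff)
                # increase k_w if the wake model predicts too strong a deficit
                k_new = k_w * (1.0 + relax * du / u_top(s_eff))
                s_new = spacing(k_new)
                if abs(k_new - k_w) < tol and abs(s_new - s_eff) < tol:
                    return k_new, s_new
                k_w, s_eff = k_new, s_new
            raise RuntimeError("CWBL coupling did not converge")

        # toy stand-ins for the two component models
        u_wake = lambda k, s: 8.0 * (1.0 - 0.3 / (1.0 + 10.0 * k))
        u_top = lambda s: 7.5
        spacing = lambda k: 4.0
        print(couple_cwbl(u_wake, u_top, spacing))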

  9. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

    Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo Excitation Method (PEM) is an effective method for calculating random vibration. Because of the complicated, coupled random vibration during rocket or shuttle launch, a new staged white-noise mathematical model is deduced from the practical launch environment. This model is applied with PEM to the specific structure of a Time of Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained at the flight acceptance test level. Accounting for the stiffness of the fixture structure, random vibration experiments are conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without damage, and the key dynamical parameters of the ToFC are obtained. The comparison shows that the revised PEM agrees with the random vibration experiments in both dynamical parameters and responses, with a maximum error within 9%. The sources of error are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.

  10. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A_4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ_4^R symmetry. We calculate the soft terms and identify the relevant high scale input parameters, and investigate the resulting predictions for the low scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.

  11. Constraining extended scalar sectors at the LHC and beyond

    NASA Astrophysics Data System (ADS)

    Ilnicka, Agnieszka; Robens, Tania; Stefaniak, Tim

    2018-04-01

    We give a brief overview of beyond the Standard Model (BSM) theories with an extended scalar sector and their phenomenological status in the light of recent experimental results. We discuss the relevant theoretical and experimental constraints, and show their impact on the allowed parameter space of two specific models: the real scalar singlet extension of the Standard Model (SM) and the Inert Doublet Model. We emphasize the importance of the LHC measurements, both the direct searches for additional scalar bosons, as well as the precise measurements of properties of the Higgs boson of mass 125 GeV. We show the complementarity of these measurements to electroweak and dark matter observables.

  12. Constraining new physics models with isotope shift spectroscopy

    NASA Astrophysics Data System (ADS)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
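
    A King-plot analysis of this kind reduces, in its simplest form, to a linear regression of modified isotope shifts and an inspection of the residuals. The sketch below illustrates this with made-up numbers; real analyses use more isotope pairs and transitions and propagate the measurement uncertainties.

        import numpy as np

        # inverse-mass differences mu = 1/m_A - 1/m_A' for three isotope pairs (1/amu)
        mu = np.array([2.85e-4, 2.75e-4, 2.66e-4])
        shift_1 = np.array([1.21e9, 1.17e9, 1.13e9])   # isotope shifts, transition 1 (Hz)
        shift_2 = np.array([0.83e9, 0.80e9, 0.77e9])   # isotope shifts, transition 2 (Hz)

        x = shift_1 / mu                 # modified isotope shifts
        y = shift_2 / mu
        F, K = np.polyfit(x, y, 1)       # King line: y = F*x + K
        residual = y - (F * x + K)       # nonlinearity -> possible new physics signal
        print("slope F =", F, " max nonlinearity:", np.max(np.abs(residual)))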

  13. Chaotic behavior of a spin-glass model on a Cayley tree

    NASA Astrophysics Data System (ADS)

    da Costa, F. A.; de Araújo, J. M.; Salinas, S. R.

    2015-06-01

    We investigate the phase diagram of a spin-1 Ising spin-glass model on a Cayley tree. According to early work of Thompson and collaborators, this problem can be formulated in terms of a set of nonlinear discrete recursion relations along the branches of the tree. Physically relevant solutions correspond to the attractors of these mapping equations. In the limit of infinite coordination of the tree, and for some choices of the model parameters, we make contact with findings for the phase diagram of more recently investigated versions of the Blume-Emery-Griffiths spin-glass model. In addition to the anticipated phases, we numerically characterize the existence of modulated and chaotic structures.

  14. Ablation of clinically relevant kidney tissue volumes by high-intensity focused ultrasound: Preliminary results of standardized ex-vivo investigations.

    PubMed

    Häcker, Axel; Peters, Kristina; Knoll, Thomas; Marlinghaus, Ernst; Alken, Peter; Jenne, Jürgen W; Michel, Maurice Stephan

    2006-11-01

    To investigate strategies to achieve confluent kidney-tissue ablation by high-intensity focused ultrasound (HIFU). Our model of the perfused ex-vivo porcine kidney was used. Tissue ablation was performed with an experimental HIFU device (Storz Medical, Kreuzlingen, Switzerland). Lesion-to-lesion interaction was investigated by varying the lesion distance (5 to 2.5 mm), generator power (300, 280, and 260 W), cooling time (10, 20, and 30 seconds), and exposure time (4, 3, and 2 seconds). The lesion rows were analyzed grossly and by histologic examination (hematoxylin-eosin and nicotinamide adenine dinucleotide staining). It was possible to achieve complete homogeneous ablation of a clinically relevant tissue volume but only by meticulous adjustment of the exposure parameters. Minimal changes in these parameters caused changes in lesion formation with holes within the lesions and lesion-to-lesion interaction. Our preliminary results show that when using this new device, HIFU can ablate a large tissue volume homogeneously in perfused ex-vivo porcine tissue under standardized conditions with meticulous adjustment of exposure parameters. Further investigations in vivo are necessary to test whether large tissue volumes can be ablated completely and reliably despite the influence of physiologic tissue and organ movement.

  15. EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Koepke, Mark

    2008-07-01

    The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between both basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric field, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scale of several thousands of kilometers to micro-scale of a few meters to kilometers. Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. 
Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts and, thus, the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log-log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue, Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.

  16. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, and its implementation is illustrated using different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including model parameter uncertainties in the estimation of the FRFs is demonstrated. In this sense, the present framework constitutes a powerful tool for the robust design and prediction of PEH performance.
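
    First-order Sobol indices of the kind reported above can be estimated with a Saltelli-type Monte Carlo scheme. The sketch below implements the standard estimator with a toy stand-in for the harvester response as a function of (elastic modulus, density, piezoelectric-layer thickness); the bounds and the response function are illustrative only.

        import numpy as np

        def sobol_first_order(model, bounds, n=4096, seed=0):
            """First-order Sobol indices via the Saltelli (2010) estimator."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            d = len(lo)
            A = rng.uniform(lo, hi, (n, d))
            B = rng.uniform(lo, hi, (n, d))
            yA = np.array([model(x) for x in A])
            yB = np.array([model(x) for x in B])
            var = np.var(np.concatenate([yA, yB]))
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                       # resample only coordinate i
                yABi = np.array([model(x) for x in ABi])
                S[i] = np.mean(yB * (yABi - yA)) / var
            return S

        toy = lambda x: x[0] * x[2] ** 3 / x[1]           # stiffness-like toy response
        print(sobol_first_order(toy, bounds=[(60e9, 70e9), (7.6e3, 7.9e3), (1e-4, 3e-4)]))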

  17. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, where runoff data are sparse. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the different available data sources have to be screened for their information content, e.g. whether they contain information on mean values or on spatial or temporal variability, and whether for the entire catchment or only for sub-catchments. In a second step, the information content is mapped to the model components that represent these processes. Each data source is then used to calibrate the corresponding subset of model parameters, while the remaining parameters are left unchanged. This mapping is repeated for the other available data sources. In this study, the gauged spring discharge (GSD) method, flash-flood observations, and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g, and the water table fluctuation (WTF) method is used to validate the model; results obtained with a priori parameter values from the literature serve as a benchmark. The estimated recharge rates of the calibrated model deviate by less than ±10% from the WTF-derived estimates; larger differences appear in years with high uncertainty in the rainfall input data. During validation, the calibrated model performs better than the model run with a priori parameter values only, which tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. Overestimating groundwater recharge, and hence the available water resources, clearly endangers reliable water-resource management in a water-scarce region. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.

  18. Sensitivity of predicted scaling and permeability in Enhanced Geothermal Systems to Thermodynamic Data and Activity Models

    NASA Astrophysics Data System (ADS)

    Hingerl, Ferdinand F.; Wagner, Thomas; Kulik, Dmitrii A.; Kosakowski, Georg; Driesner, Thomas; Thomsen, Kaj

    2010-05-01

    A consortium of research groups from ETH Zurich, EPF Lausanne, the Paul Scherrer Institut and the University of Bonn collaborates in a comprehensive program of basic research on key aspects of Enhanced Geothermal Systems (EGSs). As part of this GEOTHERM project (www.geotherm.ethz.ch), we concentrate on the fundamental investigation of thermodynamic models suitable for describing fluid-rock interactions at geothermal conditions. Predictions of fluid-rock interaction in EGSs still face several major challenges: slight variations in the input thermodynamic and kinetic parameters may result in significant differences in the predicted mineral solubilities and stable assemblages. Realistic modeling of mineral precipitation in turn has implications for our understanding of the permeability evolution of the geothermal reservoir, as well as for scaling in technical installations. In order to reasonably model an EGS, thermodynamic databases and activity models must be tailored to geothermal conditions. We therefore implemented the Pitzer formalism in the GEMS code; this is the standard model for computing thermodynamic excess properties of brines at elevated temperatures and pressures. This model, however, depends on a vast number of interaction parameters, which are to a substantial extent unknown. Furthermore, its high-order polynomial temperature interpolation makes extrapolation unreliable, if not impossible. As an alternative, we additionally implemented the EUNIQUAC activity model. EUNIQUAC requires fewer empirical fit parameters (only binary interaction parameters are needed) and uses simpler, more stable temperature and pressure extrapolations. This results in faster computation, which is of crucial importance when performing coupled long-term simulations of geothermal reservoirs. To achieve better performance under geothermal conditions, we are currently partly reformulating EUNIQUAC and refitting the existing parameter set. First results of the Pitzer-EUNIQUAC benchmark applied to relevant aqueous solutions at elevated temperature, pressure, and ionic strength will be presented.

  19. Integrated modelling of crop production and nitrate leaching with the Daisy model.

    PubMed

    Manevski, Kiril; Børgesen, Christen D; Li, Xiaoxin; Andersen, Mathias N; Abrahamsen, Per; Hu, Chunsheng; Hansen, Søren

    2016-01-01

    An integrated modelling strategy was designed and applied to the Soil-Vegetation-Atmosphere Transfer model Daisy for simulation of crop production and nitrate leaching under a pedo-climatic and agronomic environment different from that of the model's original parameterisation. The points of significance and caution in the strategy are:
    • Model preparation should include detailed field data, owing to the high complexity of the soil and crop processes simulated with a process-based model, and should reflect the study objectives. Including interactions between parameters in a sensitivity analysis better accounts for the impacts of measured variables on the outputs.
    • Model evaluation on several independent data sets increases robustness, at least on coarser time scales such as month or year. It produces a valuable platform for adapting the model to new crops or for improving the existing parameter set. On a daily time scale, validation for highly dynamic variables such as soil water transport remains challenging.
    • Model application is demonstrated with relevance for scientists and regional managers.
    The integrated modelling strategy is applicable to other process-based models similar to Daisy. It is envisaged that the strategy establishes the model as a useful research and decision-making tool, and that it increases knowledge transferability, reproducibility and traceability.

  20. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible, integrative approach to manage large datasets and automatically estimate parameters. With the increase of experimental time series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters efficiently is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing users to compare and evaluate different data modeling techniques. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data volume and model complexity.
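
    As an example of the kind of fit BGFit automates, the sketch below fits the modified (Zwietering) Gompertz model to a synthetic growth curve with SciPy; the data and starting values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, mu_m, lam):
            """Zwietering form: A = plateau, mu_m = max growth rate, lam = lag time."""
            return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

        t = np.linspace(0.0, 24.0, 25)   # hours
        y = gompertz(t, 1.8, 0.35, 4.0) + np.random.default_rng(1).normal(0.0, 0.02, t.size)

        params, cov = curve_fit(gompertz, t, y, p0=(1.5, 0.2, 2.0))
        print("A, mu_m, lam =", params)  # estimated plateau, rate and lag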

  1. Exact solutions of a two parameter flux model and cryobiological applications.

    PubMed

    Benson, James D; Chicone, Carmen C; Critser, John K

    2005-06-01

    Solute-solvent transmembrane flux models are used throughout the biological sciences, with applications in plant biology, cryobiology (transplantation and transfusion medicine), as well as circulatory and kidney physiology. Using a standard two-parameter differential equation model of solute and solvent transmembrane flux described by Jacobs [The simultaneous measurement of cell permeability to water and to dissolved substances, J. Cell. Comp. Physiol. 2 (1932) 427-444], we determine the functions that describe the intracellular water volume and moles of intracellular solute for every time t and every set of initial conditions. Here, we provide several novel biophysical applications of this theory to important biological problems. These include using this result to calculate the value of cell volume excursion maxima and minima along with the time at which they occur, a novel result that is of significant relevance to the addition and removal of permeating solutes during cryopreservation. We also present a methodology that produces extremely accurate sum-of-squares estimates when fitting data for cellular permeability parameter values. Finally, we show that this theory allows a significant increase in both accuracy and speed of finite element methods for multicellular volume simulations, which has critical clinical biophysical applications in cryosurgical approaches to cancer treatment.
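
    The underlying Jacobs-type two-parameter model is a pair of coupled ordinary differential equations for water volume and intracellular permeating solute. The sketch below integrates a normalised form numerically and reports the volume-excursion minimum discussed above; all quantities are non-dimensional and illustrative, and the paper's exact solutions of course make this numerical step unnecessary.

        import numpy as np
        from scipy.integrate import solve_ivp

        def jacobs_2p(t, y, b_w, b_s, M_n, M_s, n_n):
            """y = (V, n): normalised water volume and moles of permeating solute.
            b_w ~ Lp*A*RT and b_s ~ Ps*A lump the two fitted permeability parameters."""
            V, n = y
            dV = -b_w * ((M_n + M_s) - (n_n + n) / V)   # osmotic water flux
            dn = b_s * (M_s - n / V)                    # permeating-solute flux
            return [dV, dn]

        # abrupt exposure to a permeating cryoprotectant (all quantities normalised)
        sol = solve_ivp(jacobs_2p, (0.0, 200.0), [1.0, 0.0],
                        args=(0.1, 0.05, 0.3, 1.0, 0.3), dense_output=True)
        t = np.linspace(0.0, 200.0, 400)
        V = sol.sol(t)[0]
        print("volume minimum %.3f at t = %.1f" % (V.min(), t[V.argmin()]))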

  2. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and the flow parameters. We use published data from several laboratory experiments covering a broad range of conditions to obtain: (a) for the mean flow, an equation that matches the accuracy of other predictors from the recent literature while having a less complex structure, and (b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and to develop not only purely empirical but also "hybrid" models, coupling results from machine learning methodologies into physics-based models.

  3. The Effects of School Holidays on Transmission of Varicella Zoster Virus, England and Wales, 1967–2008

    PubMed Central

    Jackson, Charlotte; Mangtani, Punam; Fine, Paul; Vynnycky, Emilia

    2014-01-01

    Background: Changes in children's contact patterns between termtime and school holidays affect the transmission of several respiratory-spread infections. Transmission of varicella zoster virus (VZV), the causative agent of chickenpox, has also been linked to the school calendar in several settings, but temporal changes in the proportion of young children attending childcare centres may have influenced this relationship. Methods: We used two modelling methods (a simple difference equations model and a Time series Susceptible Infectious Recovered (TSIR) model) to estimate fortnightly values of a contact parameter (the per capita rate of effective contact between two specific individuals), using GP consultation data for chickenpox in England and Wales from 1967-2008. Results: The estimated contact parameters were 22-31% lower during the summer holiday than during termtime. The relationship between the contact parameter and the school calendar did not change markedly over the years analysed. Conclusions: In England and Wales, reductions in contact between children during the school summer holiday lead to a reduction in the transmission of VZV. These estimates are relevant for predicting how closing schools and nurseries may affect an outbreak of an emerging respiratory-spread pathogen. PMID:24932994
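
    The TSIR update referred to above scales the expected number of new cases in the next time step as β_t·S_t·I_t^α/N. The following sketch simulates one school year at fortnightly resolution with a contact parameter reduced by 25% during a summer holiday; all values are illustrative, not the fitted England-and-Wales estimates.

        import numpy as np

        def tsir_simulate(beta, S0, I0, N, births_per_step, alpha=0.97, seed=0):
            """beta: per-fortnight contact parameters, one per simulated step."""
            rng = np.random.default_rng(seed)
            S, I = S0, I0
            series = []
            for b in beta:
                lam = b * S * I**alpha / N          # expected new cases
                I = rng.poisson(lam)
                S = max(S + births_per_step - I, 0)
                series.append(I)
            return series

        term, holiday = 32.0, 32.0 * 0.75           # ~25% lower contact in holidays
        beta = [term] * 10 + [holiday] * 3 + [term] * 13   # 26 fortnights
        print(tsir_simulate(beta, S0=2_000_000, I0=5_000, N=50_000_000,
                            births_per_step=12_000))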

  4. Charge relaxation and dynamics in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2006-08-01

    Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) for the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower-bound values for the materials parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant Coulomb interactions.
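
    The Kohlrausch law referred to above is φ(t) = exp[−(t/τ)^β]. The sketch below fits this stretched-exponential form to a synthetic photocurrent decay; the amplitude, time constant, and stretching index are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def kww(t, phi0, tau, beta):
            """Kohlrausch (stretched exponential): phi0 * exp(-(t/tau)**beta)."""
            return phi0 * np.exp(-(t / tau) ** beta)

        t = np.logspace(-6, 0, 60)                  # seconds
        rng = np.random.default_rng(3)
        signal = kww(t, 1.0, 1e-3, 0.6) * (1.0 + rng.normal(0.0, 0.01, t.size))

        p, _ = curve_fit(kww, t, signal, p0=(1.0, 5e-4, 0.5))
        print("tau = %.2e s, stretching index beta = %.2f" % (p[1], p[2]))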

  6. Parameter-Space Survey of Linear G-mode and Interchange in Extended Magnetohydrodynamics

    DOE PAGES

    Howell, E. C.; Sovinec, C. R.

    2017-09-11

    The extended magnetohydrodynamic stability of interchange modes is studied in two configurations. In slab geometry, a local dispersion relation for the gravitational interchange mode (g-mode) with three different extensions of the MHD model [P. Zhu, et al., Phys. Rev. Lett. 101, 085005 (2008)] is analyzed. Our results delineate where drifts stabilize the g-mode with gyroviscosity alone and with a two-fluid Ohm's law alone. Including the two-fluid Ohm's law produces an ion drift wave that interacts with the g-mode. This interaction then gives rise to a second instability at finite k_y. A second instability is also observed in numerical extended MHD computations of linear interchange in cylindrical screw-pinch equilibria, the second configuration. Particularly with incomplete models, this mode limits the regions of stability for physically realistic conditions. But applying a consistent two-temperature extended MHD model that includes the diamagnetic heat flux density q⃗_* makes the onset of the second mode occur at larger Hall parameter. For conditions relevant to the SSPX experiment [E.B. Hooper, Plasma Phys. Controlled Fusion 54, 113001 (2012)], significant stabilization is observed for Suydam parameters as large as unity (D_s ≲ 1).

  7. Constructing an everywhere and locally relevant predictive model of the West-African critical zone

    NASA Astrophysics Data System (ADS)

    Hector, B.; Cohard, J. M.; Pellarin, T.; Maxwell, R. M.; Cappelaere, B.; Demarty, J.; Grippa, M.; Kergoat, L.; Lebel, T.; Mamadou, O.; Mougin, E.; Panthou, G.; Peugeot, C.; Vandervaere, J. P.; Vischel, T.; Vouillamoz, J. M.

    2017-12-01

    Considering water resources and hydrologic hazards, West Africa is among the regions most vulnerable to both climatic change (e.g. the observed intensification of precipitation) and anthropogenic change. With a population growth rate of about 3% per year, the region experiences rapid land use changes and increased pressure on surface water and groundwater resources, with observed consequences on the hydrological cycle (water table rise resulting from the Sahelian paradox, increase in flood occurrence, etc.). Managing large hydrosystems (such as transboundary aquifers or river basins like the Niger) requires anticipating such changes. However, the region significantly lacks the observations needed to construct and validate critical zone (CZ) models able to predict future hydrologic regimes, and it comprises hydrosystems that span strong environmental gradients (e.g. geological, climatic, ecological) with markedly different dominant hydrological processes. We address these issues by constructing a high-resolution (1 km²) regional-scale, physically-based model using ParFlow-CLM, which allows modeling a wide range of processes without prior knowledge of their relative dominance. Our approach combines modeling at multiple scales, from local to meso and regional, within the same theoretical framework. Local- and meso-scale models are evaluated against the rich AMMA-CATCH CZ observation database, which covers three supersites with contrasting environments in Benin (lat. 9.8°N), Niger (lat. 13.3°N) and Mali (lat. 15.3°N). At the regional scale, the lack of relevant maps of soil hydrodynamic parameters is addressed using remote sensing data assimilation. Our first results show the model's ability to reproduce the known dominant hydrological processes (runoff generation, evapotranspiration, groundwater recharge, etc.) across the major West African regions and allow us to conduct virtual experiments to explore the impact of global changes on the hydrosystems. This approach is a first step toward the construction of a reference model for studying regional CZ sensitivity to global changes; it will help identify the required prior parameters and support the construction of meta-models for deeper investigation of interactions within the CZ.

  8. The physiological equivalent temperature - a universal index for the biometeorological assessment of the thermal environment

    NASA Astrophysics Data System (ADS)

    Höppe, P.

    With considerably increased coverage of weather information in the news media in recent years in many countries, there is also more demand for data that are applicable and useful for everyday life. Both the perception of the thermal component of weather and the choice of clothing appropriate for thermal comfort result from the integral effects of all meteorological parameters relevant for heat exchange between the body and its environment. Regulatory physiological processes can affect the relative importance of meteorological parameters; e.g. wind velocity becomes more important when the body is sweating. In order to take all these factors into account, it is necessary to use a heat-balance model of the human body. The physiological equivalent temperature (PET) is based on the Munich Energy-balance Model for Individuals (MEMI), which models the thermal conditions of the human body in a physiologically relevant way. PET is defined as the air temperature at which, in a typical indoor setting (without wind and solar radiation), the heat budget of the human body is balanced with the same core and skin temperature as under the complex outdoor conditions to be assessed. In this way PET enables a layperson to compare the integral effects of complex thermal conditions outside with his or her own experience indoors. On hot summer days with direct solar irradiation, for example, the PET value may be more than 20 K higher than the air temperature, while on a windy winter day it may be up to 15 K lower.

  9. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    NASA Astrophysics Data System (ADS)

    Heidari, Manoutchehr; Moench, Allen

    1997-05-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.
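
    A sketch of the bounded nonlinear least-squares idea is below. To stay self-contained it uses the simpler confined-aquifer Theis solution as a stand-in for the Neuman partially penetrating well model actually used in the study; the pumping rate, radius, and synthetic drawdown data are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import exp1          # Theis well function W(u) = E1(u)

Q, r = 0.01, 30.0                       # pumping rate (m^3/s), observation radius (m)

def theis_drawdown(t, T, S):
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(1, 5, 40)               # time since pumping started (s)
rng = np.random.default_rng(1)
obs = theis_drawdown(t, 5e-3, 1e-4) * (1 + 0.02 * rng.normal(size=t.size))

def residuals(p):
    return theis_drawdown(t, *p) - obs

# Bounds confine transmissivity T and storativity S to a plausible range.
fit = least_squares(residuals, x0=[1e-2, 1e-3],
                    bounds=([1e-5, 1e-6], [1.0, 0.5]))
print("T = %.2e m^2/s, S = %.2e" % tuple(fit.x))
```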

  10. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    USGS Publications Warehouse

    Heidari, M.; Moench, A.

    1997-01-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.

  11. A computational kinetic model of diffusion for molecular systems.

    PubMed

    Teo, Ivan; Schulten, Klaus

    2013-09-28

    Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, surrounding electrostatic field, and location. In order to account for the energetics and mobility of solutes in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion based on a Markov state model framework. Prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
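
    The core idea, diffusion as a continuous-time Markov jump process between grid cells, can be sketched on a uniform 1-D grid (the paper uses adaptive Voronoi cells). Hop rates of D/dx^2 between neighbours define a rate matrix whose matrix exponential propagates the occupation probabilities; all numbers below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

D, dx, n = 1.0, 0.1, 50                # diffusion coefficient, cell size, cell count
k = D / dx**2                          # nearest-neighbour hop rate

# Generator matrix: off-diagonal hop rates, diagonal fixed by conservation.
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = Q[i + 1, i] = k
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.zeros(n)
p0[n // 2] = 1.0                       # particle starts in the middle cell
p = p0 @ expm(Q * 0.05)                # occupation probabilities at t = 0.05
cells = np.arange(n)
spread = np.sqrt(np.sum((cells - n // 2) ** 2 * p))
print(f"total probability = {p.sum():.6f}, spread = {spread:.2f} cells")
```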

  12. 3D geometric modeling and simulation of laser propagation through turbulence with plenoptic functions

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Nelson, William; Davis, Christopher C.

    2014-10-01

    Plenoptic functions are functions that preserve all the necessary light field information of optical events. Theoretical work has demonstrated that geometry-based plenoptic functions can serve equally well in place of the traditional wave propagation equation known as the "scalar stochastic Helmholtz equation". However, in addressing problems of 3D turbulence simulation, the dominant methods using phase screen models have limitations both in explaining the choice of parameters (on the transverse plane) in real-world measurements and in finding proper correlations between neighboring phase screens (the Markov assumption breaks down). Though possible corrections to phase screen models are still promising, the equivalent geometric approach based on plenoptic functions begins to show some advantages. In these geometric approaches, a continuous wave problem is reduced to discrete trajectories of rays. This is convenient for parallel computing and guarantees conservation of energy. Besides the pairwise independence of the simulated rays, the assigned refractive index grids can be directly tested against temperature measurements from tiny thermoprobes combined with other parameters such as humidity level and wind speed. Furthermore, without loss of generality, one can break the causal chain in phase screen models by defining regional refractive centers that allow less-affected rays to propagate through directly. As a result, our work shows that the 3D geometric approach serves as an efficient and accurate method for assessing relevant turbulence problems with inputs of several environmental measurements and reasonable guesses (such as C_n^2 levels). This approach will facilitate analysis and possible corrections in lateral wave propagation problems, such as image de-blurring, prediction of laser propagation over long ranges, and improvement of free space optic communication systems. In this paper, the plenoptic function model and the relevant parallel computing algorithm are presented, and their primary results and applications are demonstrated.
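
    The geometric picture described here can be caricatured in a few lines: each ray independently accumulates small angular kicks from transverse refractive-index gradients along the path, which is what makes the approach trivially parallel. This toy sketch is not the authors' plenoptic code; the random gradients stand in for temperature-derived index fluctuations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rays, n_steps, dz = 1000, 200, 1.0      # rays, propagation steps, step (m)
grad_rms = 1e-6                           # RMS transverse dn/dy per step (1/m)

y = np.zeros(n_rays)                      # transverse ray positions (m)
theta = np.zeros(n_rays)                  # ray angles (rad)
for _ in range(n_steps):
    dn_dy = grad_rms * rng.normal(size=n_rays)  # local index gradient sample
    theta += dn_dy * dz                         # small-angle deflection
    y += theta * dz                             # advance transverse position

print("RMS beam wander after %.0f m: %.2e m" % (n_steps * dz, y.std()))
```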

  13. Hydro-abrasive erosion on coated Pelton runners: Partial calibration of the IEC model based on measurements in HPP Fieschertal

    NASA Astrophysics Data System (ADS)

    Felix, D.; Abgottspon, A.; Albayrak, I.; Boes, R. M.

    2016-11-01

    At medium- and high-head hydropower plants (HPPs) on sediment-laden rivers, hydro-abrasive erosion of hydraulic turbines is a major economic issue. For optimization of such HPPs, there is an interest in equations to predict erosion depths. A semi-empirical equation suitable for engineering practice is proposed in the relevant guideline of the International Electrotechnical Commission (IEC 62364). However, for Pelton turbines no numerical values of the model's calibration parameters have been available so far. Within the scope of a research project at the high-head HPP Fieschertal, Switzerland, the particle load and the erosion on the buckets of two hard-coated 32 MW Pelton runners have been measured since 2012. Based on three years of field data, the numerical values of a group of calibration parameters of the IEC erosion model were determined for five application cases: (i) reduction of splitter height, (ii) increase of splitter width and (iii) increase of cut-out depth due to erosion of mainly base material, as well as erosion of coating on (iv) the splitter crests and (v) inside the buckets. Further laboratory and field investigations are recommended to quantify the effects of individual parameters as well as to improve, generalize and validate erosion models for uncoated and coated Pelton turbines.

  14. A geometrically controlled rigidity transition in a model for confluent 3D tissues

    NASA Astrophysics Data System (ADS)

    Merkel, Matthias; Manning, M. Lisa

    2018-02-01

    The origin of rigidity in disordered materials is an outstanding open problem in statistical physics. Previously, a class of 2D cellular models has been shown to undergo a rigidity transition controlled by a mechanical parameter that specifies cell shapes. Here, we generalize this model to 3D and find a rigidity transition that is similarly controlled by the preferred surface area S_0: the model is solid-like below a dimensionless surface area of s_0 ≡ S_0/V̄^(2/3) ≈ 5.413, with V̄ being the average cell volume, and fluid-like above this value. We demonstrate that, unlike jamming in soft spheres, residual stresses are necessary to create rigidity. These stresses occur precisely when cells are unable to obtain their desired geometry, and we conjecture that there is a well-defined minimal surface area possible for disordered cellular structures. We show that the behavior of this minimal surface induces a linear scaling of the shear modulus with the control parameter at the transition point, which is different from the scaling observed in particulate matter. The existence of such a minimal surface may be relevant for biological tissues and foams, and helps explain why cell shapes are a good structural order parameter for rigidity transitions in biological tissues.
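
    The reported rigidity criterion reduces to a one-line check once the preferred surface area and average cell volume are known. A minimal sketch, with illustrative numbers rather than simulation output:

```python
S0 = 85.0          # preferred cell surface area (arbitrary units^2)
V_BAR = 64.0       # average cell volume (arbitrary units^3)
S0_CRIT = 5.413    # critical dimensionless surface area from the 3D model

s0 = S0 / V_BAR ** (2.0 / 3.0)           # dimensionless shape index
phase = "fluid-like" if s0 > S0_CRIT else "solid-like"
print(f"s0 = {s0:.3f} -> tissue is {phase}")
```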

  15. RELATIVISTIC MHD SIMULATIONS OF COLLISION-INDUCED MAGNETIC DISSIPATION IN POYNTING-FLUX-DOMINATED JETS/OUTFLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Wei; Zhang, Bing; Li, Hui

    We perform 3D relativistic ideal magnetohydrodynamics (MHD) simulations to study collisions between high-σ (Poynting-flux-dominated, PFD) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable PFD jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvénic rate, with an efficiency of around 35%. Detailed analyses show that this dissipation is mostly facilitated by collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result: the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or of most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets invoked by several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. Our results give support to proposed astrophysical models that invoke significant magnetic energy dissipation in PFD jets, such as the internal collision-induced magnetic reconnection and turbulence model for gamma-ray bursts, and the reconnection-triggered mini-jet model for active galactic nuclei. The simulation movies are shown at http://www.physics.unlv.edu/∼deng/simulation1.html.

  16. Sensitivity of boundary layer variables to PBL schemes over the central Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Xu, L.; Liu, H.; Wang, L.; Du, Q.; Liu, Y.

    2017-12-01

    Planetary boundary layer (PBL) parameterization schemes play a critical role in numerical weather prediction and research. They describe the physical processes associated with the exchange of momentum, heat and humidity between the land surface and the atmosphere. In this study, two non-local (YSU and ACM2) and two local (MYJ and BouLac) PBL parameterization schemes in the Weather Research and Forecasting (WRF) model have been tested over the central Tibetan Plateau with regard to their capability to model boundary layer parameters relevant for surface energy exchange. The model performance has been evaluated against measurements from the Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III). Simulated meteorological parameters and turbulence fluxes have been compared with observations through standard statistical measures. Model results show acceptable behavior, but no particular scheme produces the best performance for all locations and parameters. All PBL schemes underestimate near-surface air temperatures over the Tibetan Plateau. By investigating the surface energy budget components, the results suggest that downward longwave radiation and sensible heat flux are the main factors causing the lower near-surface temperature. Because downward longwave radiation and sensible heat flux are respectively affected by atmospheric moisture and land-atmosphere coupling, improvements in the water vapor distribution and land-atmosphere energy exchange are important for a better representation of PBL physical processes over the central Tibetan Plateau.
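
    The "standard statistical measures" used in evaluations like this one typically include the mean bias, root-mean-square error, and correlation between simulated and observed series. A minimal sketch with placeholder numbers (e.g. 2-m air temperature from one PBL scheme):

```python
import numpy as np

obs = np.array([281.2, 283.5, 286.1, 284.0, 280.7])   # observations (K)
sim = np.array([280.1, 282.2, 285.0, 283.1, 279.5])   # one PBL scheme (K)

bias = np.mean(sim - obs)                  # negative value => cold bias
rmse = np.sqrt(np.mean((sim - obs) ** 2))
corr = np.corrcoef(sim, obs)[0, 1]
print(f"bias = {bias:+.2f} K, RMSE = {rmse:.2f} K, r = {corr:.3f}")
```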

  17. An analytical poroelastic model for ultrasound elastography imaging of tumors

    NASA Astrophysics Data System (ADS)

    Tauhidul Islam, Md; Chaudhry, Anuj; Unnikrishnan, Ginu; Reddy, J. N.; Righetti, Raffaella

    2018-01-01

    The mechanical behavior of biological tissues has been studied using a number of mechanical models. Due to their relatively high fluid content and mobility, many biological tissues have been modeled as poroelastic materials. Diseases such as cancers are known to alter the poroelastic response of a tissue. Tissue poroelastic properties such as compressibility, interstitial permeability and fluid pressure also play a key role in the assessment of cancer treatments and in improved therapies. At the present time, however, only a limited number of poroelastic models for soft tissues are retrievable in the literature, and those available are not directly applicable to tumors as they typically refer to uniform tissues. In this paper, we report an analytical poroelastic model for a non-uniform tissue under stress relaxation. Displacement, strain and fluid pressure fields in a cylindrical poroelastic sample containing a cylindrical inclusion are computed during stress relaxation. Finite element simulations are then used to validate the proposed theoretical model. Statistical analysis demonstrates that the proposed analytical model matches the finite element results with less than 0.5% error. The availability of the analytical model and solutions presented in this paper may be useful for estimating diagnostically relevant poroelastic parameters such as interstitial permeability and fluid pressure, and, in general, for a better interpretation of clinically relevant ultrasound elastography results.

  18. A computer model for liquid jet atomization in rocket thrust chambers

    NASA Astrophysics Data System (ADS)

    Giridharan, M. G.; Lee, J. G.; Krishnan, A.; Yang, H. Q.; Ibrahim, E.; Chuech, S.; Przekwas, A. J.

    1991-12-01

    The process of atomization has been used as an efficient means of burning liquid fuels in rocket engines, gas turbine engines, internal combustion engines, and industrial furnaces. Despite its widespread application, this complex hydrodynamic phenomenon has not been well understood, and predictive models for this process are still in their infancy. The difficulty in simulating the atomization process arises from the relatively large number of parameters that influence it, including the details of the injector geometry, liquid and gas turbulence, and the operating conditions. In this study, numerical models are developed from first principles, to quantify factors influencing atomization. For example, the surface wave dynamics theory is used for modeling the primary atomization and the droplet energy conservation principle is applied for modeling the secondary atomization. The use of empirical correlations has been minimized by shifting the analyses to fundamental levels. During applications of these models, parametric studies are performed to understand and correlate the influence of relevant parameters on the atomization process. The predictions of these models are compared with existing experimental data. The main tasks of this study were the following: development of a primary atomization model; development of a secondary atomization model; development of a model for impinging jets; development of a model for swirling jets; and coupling of the primary atomization model with a CFD code.

  19. Multiscale Modeling of Hematologic Disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedosov, Dmitry A.; Pivkin, Igor; Pan, Wenxiao

    Parasitic infectious diseases and other hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells (RBCs). Such changes can disrupt blood flow and even brain perfusion, as in the case of cerebral malaria. Modeling of these hematologic disorders requires a seamless multiscale approach, where blood cells and blood flow in the entire arterial tree are represented accurately using physiologically consistent parameters. In this chapter, we present a computational methodology based on dissipative particle dynamics (DPD) which models RBCs as well as whole blood in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to small arteries and can also be used to model RBCs down to spectrin level. To this end, we present two complementary mathematical models for RBCs and describe a systematic procedure for extracting the relevant input parameters from optical tweezers and microfluidic experiments for single RBCs. We then use these validated RBC models to predict the behavior of whole healthy blood and compare with experimental results. The same procedure is applied to modeling malaria, and results for infected single RBCs and whole blood are presented.

  20. Mapping (dis)agreement in hydrologic projections

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.

    2018-03-01

    Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.
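
    The sign-agreement mapping described above boils down to checking, per basin, whether all model-chain combinations project the same direction of change. A minimal sketch with made-up numbers for a single basin:

```python
import numpy as np

# Projected change in mean annual runoff (%) for 2070-2100 vs 1985-2008,
# one entry per climate-model / hydrologic-model / parameter-set combination.
delta_runoff = np.array([-8.1, -3.4, 2.2, -5.0, 0.7, -6.3])

frac_decrease = np.mean(delta_runoff < 0)
sign_agreement = frac_decrease in (0.0, 1.0)
print(f"{frac_decrease:.0%} of combinations project a decrease; "
      f"full sign agreement: {sign_agreement}")
```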

  1. Using multiple group modeling to test moderators in meta-analysis.

    PubMed

    Schoemann, Alexander M

    2016-12-01

    Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
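
    A minimal sketch of what a multiple-group random-effects analysis estimates per moderator level, using plain DerSimonian-Laird pooling rather than the SEM/MLM machinery the paper describes; the effect sizes and sampling variances are made-up illustration data.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooled effect and tau^2 for effects y with variances v."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star), tau2

groups = {
    "moderator level A": (np.array([0.30, 0.45, 0.18, 0.52]),
                          np.array([0.02, 0.03, 0.01, 0.04])),
    "moderator level B": (np.array([0.05, 0.12, -0.02]),
                          np.array([0.02, 0.02, 0.03])),
}
for name, (y, v) in groups.items():
    mu, tau2 = dersimonian_laird(y, v)
    print(f"{name}: pooled effect = {mu:.3f}, tau^2 = {tau2:.4f}")
```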

  2. PubMed Interact: an Interactive Search Application for MEDLINE/PubMed

    PubMed Central

    Muin, Michael; Fontelo, Paul; Ackerman, Michael

    2006-01-01

    Online search and retrieval systems are important resources for medical literature research. Progressive Web 2.0 technologies provide opportunities to improve search strategies and the user experience. Using PHP, Document Object Model (DOM) manipulation and Asynchronous JavaScript and XML (Ajax), PubMed Interact provides greater functionality, so that users can refine search parameters with ease and interact with the search results to retrieve and display relevant information and related articles. PMID:17238658
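
    For readers who want to reproduce the retrieval side in a few lines, the sketch below queries NCBI's public E-utilities esearch endpoint. The original application was built in PHP with Ajax on top of PubMed; this Python stand-in is not its code, and the search term is arbitrary.

```python
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "physiological equivalent temperature",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]
print("total hits:", result["count"])
print("first PMIDs:", result["idlist"])
```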

  3. Mathematical models of human paralyzed muscle after long-term training.

    PubMed

    Law, L A Frey; Shields, R K

    2007-01-01

    Spinal cord injury (SCI) results in major musculoskeletal adaptations, including muscle atrophy, faster contractile properties, increased fatigability, and bone loss. The use of functional electrical stimulation (FES) provides a method to prevent paralyzed muscle adaptations in order to sustain force-generating capacity. Mathematical muscle models may be able to predict optimal activation strategies during FES; however, muscle properties further adapt with long-term training. The purpose of this study was to compare the accuracy of three muscle models, one linear and two nonlinear, for predicting paralyzed soleus muscle force after exposure to long-term FES training. Further, we contrasted the findings between the trained and untrained limbs. The three models' parameters were best fit to a single force train in the trained soleus muscle (N=4). Nine additional force trains (test trains) were predicted for each subject using the developed models. Model errors between predicted and experimental force trains were determined, including specific muscle force properties. The mean overall error was greatest for the linear model (15.8%) and least for the nonlinear Hill-Huxley-type model (7.8%). No significant error differences were observed between the trained and untrained limbs, although model parameter values were significantly altered with training. This study confirmed that nonlinear models most accurately predict both trained and untrained paralyzed muscle force properties. Moreover, the optimized model parameter values were responsive to the relative physiological state of the paralyzed muscle (trained versus untrained). These findings are relevant for the design and control of neuro-prosthetic devices for those with SCI.

  4. Estimating state-transition probabilities for unobservable states using capture-recapture/resighting data

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.

    2002-01-01

    Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.

  5. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties can have a large effect on the resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
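
    The emulation idea, a cheap statistical surrogate trained on a limited set of forward runs, can be sketched with an off-the-shelf Gaussian process. The stand-in "simulator" below is a cheap analytic function, not NAME, and all settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(theta):
    """Cheap stand-in for one forward run of an expensive transport model."""
    return np.sin(3.0 * theta) + 0.5 * theta

rng = np.random.default_rng(3)
X_train = rng.uniform(0.0, 2.0, size=(12, 1))   # only 12 training runs
y_train = simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(X_train, y_train)
mean, std = gp.predict(np.array([[0.7], [1.4]]), return_std=True)
for theta, m, s in zip((0.7, 1.4), mean, std):
    print(f"theta = {theta:.1f}: emulated output = {m:.3f} +/- {s:.3f}")
```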

  6. Physical properties of asteroids derived from a novel approach to modeling of optical lightcurves and WISE thermal infrared data

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Delbo, Marco; Ali-Lagoa, Victor; Carry, Benoit

    2014-11-01

    Convex shape models and spin vectors of asteroids are now routinely derived from their disk-integrated lightcurves by the lightcurve inversion method of Kaasalainen et al. (2001, Icarus 153, 37). These shape models can then be used in combination with thermal infrared data and a thermophysical model to derive other physical parameters - size, albedo, macroscopic roughness and thermal inertia of the surface. In this classical two-step approach, the shape and spin parameters are kept fixed during the thermophysical modeling, in which the emitted thermal flux is computed from the surface temperature obtained by solving a 1-D heat diffusion equation in sub-surface layers. A novel method of simultaneous inversion of optical and infrared data was presented by Durech et al. (2012, LPI Contribution No. 1667, id.6118). The new algorithm uses the same convex shape representation as the lightcurve inversion but optimizes all relevant physical parameters simultaneously (including the shape, size, rotation vector, thermal inertia, albedo, surface roughness, etc.), which leads to a better fit to the thermal data and a reliable estimation of model uncertainties. We applied this method to selected asteroids using their optical lightcurves from archives and thermal infrared data observed by the Wide-field Infrared Survey Explorer (WISE) satellite. We will (i) show several examples of how well our model fits both optical and infrared data, (ii) discuss the uncertainty of derived parameters (namely the thermal inertia), (iii) compare results obtained with the two-step approach to those obtained by our method, (iv) discuss the advantages of this simultaneous approach with respect to the classical two-step approach, and (v) highlight the possibility of applying this approach to the tens of thousands of asteroids for which enough WISE and optical data exist.
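
    The thermophysical core mentioned here, a 1-D subsurface heat diffusion equation driven at the surface, can be sketched with an explicit finite-difference scheme. The sinusoidal surface temperature below is a crude stand-in for the full insolation boundary condition, and the thermal parameters are illustrative rather than fitted asteroid values.

```python
import numpy as np

kappa = 1e-8                   # thermal diffusivity (m^2/s)
dz, nz = 0.002, 40             # layer thickness (m), number of layers
P = 6 * 3600.0                 # rotation period (s)
dt = 0.4 * dz**2 / kappa       # explicit time step within the CFL limit

T = np.full(nz, 250.0)         # initial temperature profile (K)
steps = int(5 * P / dt)        # run several rotations to approach a cycle
probe = 5                      # layer ~1 cm deep (about one thermal skin depth)
Tmin, Tmax = np.inf, -np.inf
for n in range(steps):
    T[0] = 250.0 + 30.0 * np.sin(2 * np.pi * n * dt / P)  # surface driver
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]              # insulated lower boundary
    if n >= steps - int(P / dt):                          # final rotation only
        Tmin, Tmax = min(Tmin, T[probe]), max(Tmax, T[probe])
print(f"temperature swing at {probe * dz * 100:.0f} cm depth: {Tmax - Tmin:.2f} K")
```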

  7. Modeling Longitudinal Dynamics in the Fermilab Booster Synchrotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostiguy, Jean-Francois; Bhat, Chandra; Lebedev, Valeri

    2016-06-01

    The PIP-II project will replace the existing 400 MeV linac with a new, CW-capable, 800 MeV superconducting one. With respect to current operations, a 50% increase in beam intensity in the rapid cycling Booster synchrotron is expected. Booster batches are combined in the Recycler ring; this process limits the allowed longitudinal emittance of the extracted Booster beam. To suppress eddy currents, the Booster has no beam pipe; magnets are evacuated, exposing the beam to core laminations, and this has a substantial impact on the longitudinal impedance. Noticeable longitudinal emittance growth is already observed at transition crossing. Operation at higher intensity will likely necessitate mitigation measures. We describe systematic efforts to construct a predictive model for current operating conditions. A longitudinal-only code including a laminated-wall impedance model, space charge effects, and feedback loops is developed. Parameter validation is performed using detailed measurements of relevant beam, rf and control parameters. An attempt is made to benchmark the code at operationally favorable machine settings.

  8. A COMPREHENSIVE INSIGHT ON OCULAR PHARMACOKINETICS

    PubMed Central

    Agrahari, Vibhuti; Mandal, Abhirup; Agrahari, Vivek; Trinh, Hoang My; Joseph, Mary; Ray, Animikh; Hadji, Hicheme; Mitra, Ranjana; Pal, Dhananjay; Mitra, Ashim K.

    2017-01-01

    The eye is a distinctive organ with protective anatomy and physiology. Several pharmacokinetic compartment models of ocular drug delivery have been developed to describe the absorption, distribution and elimination of ocular drugs in the eye. Determining pharmacokinetic parameters in ocular tissues is a major challenge because of the complex anatomy and dynamic physiological barriers of the eye. In this review, the pharmacokinetics of these compartments is discussed for different drugs, delivery systems and routes of administration, including factors affecting intraocular bioavailability. Factors such as pre-corneal fluid drainage, drug binding to tear proteins, systemic drug absorption, corneal factors, melanin binding, and drug metabolism render ocular delivery challenging and are elaborated in this manuscript. Several compartment models developed in ocular drug delivery to study pharmacokinetic parameters are discussed. There are several transporters present in both the anterior and posterior segments of the eye which play a significant role in ocular pharmacokinetics; these are summarized briefly. Moreover, several ocular pharmacokinetic animal models and relevant studies are reviewed and discussed, in addition to the pharmacokinetics of various ocular formulations. PMID:27798766
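
    A generic two-compartment sketch (precorneal tear film feeding the aqueous humor) illustrates the kind of ODE system such compartment models reduce to. This is not one of the specific reviewed models, and the first-order rate constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_drain, k_abs, k_elim = 0.5, 0.01, 0.02   # per-minute rate constants

def rhs(t, y):
    tear, aqueous = y
    d_tear = -(k_drain + k_abs) * tear           # drainage dominates absorption
    d_aqueous = k_abs * tear - k_elim * aqueous  # corneal uptake, then elimination
    return [d_tear, d_aqueous]

sol = solve_ivp(rhs, (0.0, 240.0), [100.0, 0.0], dense_output=True)
for ti in (0.0, 10.0, 60.0, 240.0):
    tear, aqueous = sol.sol(ti)
    print(f"t = {ti:5.0f} min: tear film = {tear:8.3f}, aqueous = {aqueous:6.3f}")
```

    Because drainage is far faster than corneal uptake (k_drain >> k_abs), only a small fraction of the dose reaches the aqueous humor, which mirrors the low intraocular bioavailability discussed in the review.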

  9. Reconstruction of normal forms by learning informed observation geometries from data.

    PubMed

    Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G

    2017-09-19

    The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.

  10. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoxu

    2018-04-01

    To handle the varied orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the enterprise's warehousing units, improving warehouse logistics and speeding up order processing. In addition, relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, which are well suited to this problem, are studied in particular. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is used as an example to demonstrate the effectiveness of the parameter optimization.

  11. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support the search for the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows realistic simulations to be performed with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. Very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different electron-to-NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  12. Building a database for statistical characterization of ELMs on DIII-D

    NASA Astrophysics Data System (ADS)

    Fritch, B. J.; Marinoni, A.; Bortolon, A.

    2017-10-01

    Edge localized modes (ELMs) are bursty instabilities which occur in the edge region of H-mode plasmas and have the potential to damage in-vessel components of future fusion machines by exposing the divertor region to large energy and particle fluxes during each ELM event. While most ELM studies focus on average quantities (e.g. energy loss per ELM), this work investigates the statistical distributions of ELM characteristics as a function of plasma parameters. A semi-automatic algorithm is being used to create a database documenting the trigger times of tens of thousands of ELMs in DIII-D discharges in scenarios relevant to ITER, thus allowing statistically significant analysis. Probability distributions of inter-ELM periods and energy losses will be determined and related to relevant plasma parameters such as density, stored energy, and current in order to constrain models and improve estimates of the expected inter-ELM periods and sizes, both of which must be controlled in future reactors. Work supported in part by US DoE under the Science Undergraduate Laboratory Internships (SULI) program, DE-FC02-04ER54698 and DE-FG02-94ER54235.
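
    A minimal sketch of the detection step such a semi-automatic algorithm might start from: find rising-edge threshold crossings of a D-alpha-like signal and collect inter-ELM periods. The synthetic signal and threshold are placeholders; a production detector would add amplitude and debouncing criteria.

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-4                                     # sample interval (s)
sig = 0.1 * rng.random(20_000)                # 2 s of baseline noise
for t0 in np.cumsum(rng.uniform(0.01, 0.03, size=80)):  # synthetic ELM times
    i = int(t0 / dt)
    if i + 50 < sig.size:
        sig[i:i + 50] += np.linspace(1.0, 0.0, 50)      # sawtooth-like burst

threshold = 0.5
above = sig > threshold
triggers = np.flatnonzero(~above[:-1] & above[1:]) * dt  # rising edges (s)
periods = np.diff(triggers)
print(f"{triggers.size} ELMs detected, median inter-ELM period = "
      f"{np.median(periods) * 1e3:.1f} ms")
```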

  13. Slice cultures of the imprinting-relevant forebrain area MNH of the domestic chick: quantitative characterization of neuronal morphology.

    PubMed

    Hofmann, H; Braun, K

    1995-05-26

    The persistence of morphological features of neurons in slice cultures of the imprinting-relevant forebrain area MNH (mediorostral neostriatum and hyperstriatum ventrale) of the domestic chick was analysed at 7, 14, 21 and 28 days in vitro. After explantation and culturing, the neurons in vitro have larger soma areas, longer and more extensively branched dendritic trees, and lower spine frequencies compared to neurons in vivo. During the analyzed culturing period, the parameters soma area, total and mean dendritic length, number of dendrites, number of dendritic nodes per dendrite and per neuron, as well as the spine densities in different dendritic segments, showed no significant differences between early and late periods. The total dendritic length and the number of dendritic nodes per neuron were highly correlated in every age group, indicating regular ramification during dendritic growth. Since these morphological parameters remain stable during the first 4 weeks in vitro, this culture system may provide a suitable model for investigating experimentally induced morphological changes.

  14. A delay differential model of ENSO variability: parametric instability and the distribution of extremes

    NASA Astrophysics Data System (ADS)

    Zaliapin, I.; Ghil, M.; Thompson, S.

    2007-12-01

    We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. Descriptive and metric stability analyses of the model are performed in a complete 3D space of its physically relevant parameters. The existence of two regimes, stable and unstable, is reported. The domains of the regimes are separated by a sharp neutral curve in the parameter space. The detailed structure of the neutral curve becomes very complicated (possibly fractal), and individual trajectories within the unstable region become highly complex (possibly chaotic), as the atmosphere-ocean coupling increases. In the unstable regime, spontaneous transitions in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values occur for purely periodic, seasonal forcing. This indicates (via the continuous dependence theorem) the existence of numerous unstable solutions responsible for the complex dynamics of the system. In the stable regime, only periodic solutions are found. Our results illustrate the roles of the distinct parameters of ENSO variability, such as the strength of seasonal forcing versus the atmosphere-ocean coupling, and the propagation period of oceanic waves across the tropical Pacific. The model reproduces, among other phenomena, the Devil's bleachers (caused by period locking) documented in other ENSO models, such as nonlinear PDEs and GCMs, as well as in certain observations. We expect such behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
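
    The model class, delayed negative feedback plus seasonal forcing, can be integrated with a fixed-step Euler scheme and a history buffer. This sketch uses the generic form dT/dt = -b*tanh(kappa*T(t - tau)) + a*cos(2*pi*t) with illustrative parameter values; it is not the authors' exact system.

```python
import numpy as np

a, b, kappa, tau = 1.0, 2.0, 5.0, 0.65   # forcing, feedback, gain, delay (yr)
dt, t_end = 1e-3, 40.0
n_delay = int(tau / dt)

T = np.zeros(int(t_end / dt) + n_delay)  # flat history prepended for t < 0
for i in range(n_delay, T.size - 1):
    t = (i - n_delay) * dt
    T[i + 1] = T[i] + dt * (-b * np.tanh(kappa * T[i - n_delay])
                            + a * np.cos(2 * np.pi * t))

last_decade = T[-int(10.0 / dt):]
print(f"extreme values over the final decade: min {last_decade.min():.2f}, "
      f"max {last_decade.max():.2f}")
```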

  15. Mathematics as a conduit for translational research in post-traumatic osteoarthritis.

    PubMed

    Ayati, Bruce P; Kapitanov, Georgi I; Coleman, Mitchell C; Anderson, Donald D; Martin, James A

    2017-03-01

    Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a "conduit of translation." The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model presently predicts qualitative responses, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:566-572, 2017.

  16. A comparison of random draw and locally neutral models for the avifauna of an English woodland.

    PubMed

    Dolman, Andrew M; Blackburn, Tim M

    2004-06-03

    Explanations for patterns observed in the structure of local assemblages are frequently sought with reference to interactions between species, and between species and their local environment. However, analyses of null models, where non-interactive local communities are assembled from regional species pools, have demonstrated that much of the structure of local assemblages remains in simulated assemblages where local interactions have been excluded. Here we compare the ability of two null models to reproduce the breeding bird community of Eastern Wood, a 16-hectare woodland in England, UK. A random draw model, in which there is complete annual replacement of the community by immigrants from the regional pool, is compared to a locally neutral community model, in which there are two additional parameters describing the proportion of the community replaced annually (per capita death rate) and the proportion of individuals recruited locally rather than as immigrants from the regional pool. Both the random draw and locally neutral model are capable of reproducing with significant accuracy several features of the observed structure of the annual Eastern Wood breeding bird community, including species relative abundances, species richness and species composition. The two additional parameters present in the neutral model result in a qualitatively more realistic representation of the Eastern Wood breeding bird community, particularly of its dynamics through time. The fact that these parameters can be varied, allows for a close quantitative fit between model and observed communities to be achieved, particularly with respect to annual species richness and species accumulation through time. The presence of additional free parameters does not detract from the qualitative improvement in the model and the neutral model remains a model of local community structure that is null with respect to species differences at the local scale. The ability of this locally neutral model to describe a larger number of woodland bird communities with either little variation in its parameters or with variation explained by features local to the woods themselves (such as the area and isolation of a wood) will be a key subsequent test of its relevance.
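
    The locally neutral model as described has a direct simulation reading: each year a fraction of the community (the per-capita death rate) dies and is replaced either by local recruits or by immigrants drawn from the regional relative-abundance pool. A minimal sketch with an illustrative regional pool and parameter values:

```python
import numpy as np

rng = np.random.default_rng(5)
J, death_rate, m = 200, 0.3, 0.2          # community size, annual death rate,
                                          # immigrant fraction of recruits
species = np.arange(30)                   # regional pool of 30 species
pool_p = np.linspace(0.15, 0.01, 30)
pool_p /= pool_p.sum()                    # regional relative abundances

community = rng.choice(species, size=J, p=pool_p)
for year in range(100):
    dead = rng.random(J) < death_rate
    n_dead = int(dead.sum())
    from_pool = rng.random(n_dead) < m    # which replacements are immigrants
    community[dead] = np.where(
        from_pool,
        rng.choice(species, size=n_dead, p=pool_p),   # immigrants
        rng.choice(community[~dead], size=n_dead))    # local recruits
print("species richness after 100 years:", np.unique(community).size)
```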

  17. Laser-driven magnetic reconnection in the multi-plasmoid regime

    NASA Astrophysics Data System (ADS)

    Totorica, Samuel; Abel, Tom; Fiuza, Frederico

    2017-10-01

    Magnetic reconnection is a promising candidate mechanism for accelerating the nonthermal particles associated with explosive astrophysical phenomena. Laboratory experiments are starting to probe multi-plasmoid regimes of relevance for particle acceleration. We have performed two- and three-dimensional particle-in-cell (PIC) simulations to explore particle acceleration for parameters relevant to laser-driven reconnection experiments. We have extended our previous work to explore particle acceleration in larger system sizes. Our results show the transition to plasmoid-dominated acceleration associated with the merging and contraction of plasmoids that further extend the maximum energy of the power-law tail of the particle distribution. Furthermore, we have modeled Coulomb collisions and will discuss the influence of collisionality on the plasmoid formation, dynamics, and particle acceleration.

  18. Estimation of sojourn time in chronic disease screening without data on interval cases.

    PubMed

    Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W

    2000-03-01

    Estimation of the sojourn time in the preclinical detectable period in disease screening, or of transition rates for the natural history of chronic disease, usually relies on interval cases (cases diagnosed between screens). However, ascertaining such cases may be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models against data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
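
    The three-state progressive model's headline quantity has a simple closed-form reading under exponential sojourn times: with rates lambda1 (disease-free to preclinical) and lambda2 (preclinical to clinical), the mean sojourn time is 1/lambda2, and the expected preclinical prevalence at a screen is approximately incidence times duration. A minimal sketch, not the paper's likelihood machinery, with illustrative rates:

```python
lambda1 = 0.002      # preclinical incidence (per woman-year), illustrative
lambda2 = 1 / 1.9    # progression rate chosen so that MST = 1.9 years

mst = 1.0 / lambda2                      # mean sojourn time (years)
prevalence = lambda1 / lambda2           # prevalence ~ incidence * duration
print(f"mean sojourn time = {mst:.2f} y, "
      f"expected preclinical prevalence = {prevalence:.4f}")
```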

  19. Appropriate evidence sources for populating decision analytic models within health technology assessment (HTA): a systematic review of HTA manuals and health economic guidelines.

    PubMed

    Zechmeister-Koss, Ingrid; Schnell-Inderst, Petra; Zauner, Günther

    2014-04-01

    An increasing number of evidence sources are relevant for populating decision analytic models. What is needed is detailed methodological advice on which type of data is to be used for what type of model parameter. We aim to identify standards in health technology assessment manuals and economic (modeling) guidelines on appropriate evidence sources and on the role different types of data play within a model. Documents were identified via a call among members of the International Network of Agencies for Health Technology Assessment and by hand search. We included documents from Europe, the United States, Canada, Australia, and New Zealand as well as transnational guidelines written in English or German. We systematically summarized in a narrative manner information on appropriate evidence sources for model parameters, their advantages and limitations, data identification methods, and data quality issues. A large variety of evidence sources for populating models are mentioned in the 28 documents included. They comprise research- and non-research-based sources. Valid and less appropriate sources are identified for informing different types of model parameters, such as clinical effect size, natural history of disease, resource use, unit costs, and health state utility values. Guidelines do not provide structured and detailed advice on this issue. The article does not include information from guidelines in languages other than English or German, and the information is not tailored to specific modeling techniques. The usability of guidelines and manuals for modeling could be improved by addressing the issue of evidence sources in a more structured and comprehensive format.

  20. Parameterisation of Biome BGC to assess forest ecosystems in Africa

    NASA Astrophysics Data System (ADS)

    Gautam, Sishir; Pietsch, Stephan A.

    2010-05-01

    African forest ecosystems are an important environmental and economic resource. Several studies show that tropical forests are critical to society as economic, environmental and societal resources. Tropical forests are carbon dense and thus play a key role in climate change mitigation. Unfortunately, the response of tropical forests to environmental change is largely unknown owing to insufficient spatially extensive observations. In developing regions like Africa, where long-term records of forest management are unavailable, the process-based ecosystem simulation model BIOME-BGC could be a suitable tool for explaining forest ecosystem dynamics. This ecosystem simulation model uses descriptive input parameters to establish the physiology, biochemistry, structure, and allocation patterns within vegetation functional types, or biomes. Undocumented parameters are currently the major limitation to regional modelling of African forest ecosystems at larger scales. This study was conducted to document input parameters of BIOME-BGC for major natural tropical forests in the Congo basin. Based on the available literature and field measurements, updated values for turnover and mortality, allometry, carbon-to-nitrogen ratios, allocation of plant material to labile, cellulose, and lignin pools, tree morphology and other relevant factors were assigned. Daily climate input data for the model applications were generated using the statistical weather generator MarkSim. The forest was inventoried at various sites, and soil samples from corresponding stands across Gabon were collected. Carbon and nitrogen in the collected soil samples were determined by soil analysis. The observed tree volume, soil carbon and soil nitrogen were then compared with the simulated model outputs to evaluate model performance. Furthermore, simulations using Congo Basin-specific parameters and generalised BIOME-BGC parameters for tropical evergreen broadleaved tree species were executed and the results compared. Once the model was optimised for forests in the Congo basin, it was validated against observed tree volume, soil carbon and soil nitrogen from a set of independent plots.

  1. Spatial capture-recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sollmann, Rahel; Gardner, Beth

    2013-01-01

    Spatial Capture-Recapture provides a revolutionary extension of traditional capture-recapture methods for studying animal populations using data from live trapping, camera trapping, DNA sampling, acoustic sampling, and related field methods. This book is a conceptual and methodological synthesis of spatial capture-recapture modeling. As a comprehensive how-to manual, this reference contains detailed examples of a wide range of relevant spatial capture-recapture models for inference about population size and spatial and temporal variation in demographic parameters. Practicing field biologists studying animal populations will find this book to be a useful resource, as will graduate students and professionals in ecology, conservation biology, and fisheries and wildlife management.
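
    A core ingredient of most spatial capture-recapture models is a distance-dependent detection function linking an animal's latent activity centre to trap-specific encounter probabilities. A minimal sketch of the widely used half-normal form (the baseline detection p0 and spatial scale sigma are illustrative assumptions, not values from the book):

        import numpy as np

        def halfnormal_detection(activity_center, traps, p0=0.3, sigma=0.5):
            """Half-normal SCR detection: p = p0 * exp(-d^2 / (2 sigma^2))."""
            d = np.linalg.norm(traps - activity_center, axis=1)
            return p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

        traps = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # trap grid (km)
        print(halfnormal_detection(np.array([0.4, 0.3]), traps))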

  2. Models of dyadic social interaction.

    PubMed Central

    Griffin, Dale; Gonzalez, Richard

    2003-01-01

    We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analysis' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382
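
    One standard index of dyadic interdependence in this literature, descending from Pearson's within-organism similarity work, is the pairwise (double-entry) intraclass correlation. A minimal sketch, not necessarily the authors' exact estimator:

        import numpy as np

        def pairwise_icc(x1, x2):
            """Double-entry intraclass correlation: each dyad (a, b)
            enters the data twice, as (a, b) and as (b, a)."""
            x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
            left = np.concatenate([x1, x2])
            right = np.concatenate([x2, x1])
            return np.corrcoef(left, right)[0, 1]

        # Hypothetical satisfaction scores for five dyads
        print(pairwise_icc([4, 5, 3, 4, 2], [5, 5, 2, 4, 3]))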

  3. Effects of Intergranular Gas Bubbles on Thermal Conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. Chockalingam; Paul C. Millett; M. R. Tonks

    2012-11-01

    Model microstructures obtained from phase-field simulations are used to study the effective heat transfer across bicrystals with stationary grain boundary bubble populations. We find that the grain boundary coverage, irrespective of the intergranular bubble radii, is the most relevant parameter to the thermal resistance, which we use to derive effective Kapitza resistances that depend on the grain boundary coverage and the Kapitza resistance of the intact grain boundary. We propose a model to predict thermal conductivity as a function of porosity, grain size, Kapitza resistance of the intact grain boundary, and grain boundary bubble coverage.
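
    The paper's coverage-dependent closure is not reproduced here, but the textbook series-resistance estimate for a grain of size d whose boundary carries Kapitza resistance R_K conveys the basic scaling (a generic formula, not the authors' model):

        \[
        k_{\mathrm{eff}} \;=\; \frac{k_0}{1 + k_0 R_K / d},
        \]

    where k_0 is the conductivity with an intact boundary; increasing bubble coverage effectively raises R_K and thereby lowers k_eff.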

  4. The impact of pesticides on oxidative stress level in human organism and their activity as an endocrine disruptor.

    PubMed

    Jabłońska-Trypuć, Agata; Wołejko, Elżbieta; Wydro, Urszula; Butarewicz, Andrzej

    2017-07-03

    Pesticides cause serious environmental and health problems for both humans and animals. The aim of this review is to discuss selected herbicides and fungicides with regard to their mode of action and their influence on basic oxidative stress parameters and endocrine disruption properties tested in selected cell cultures in vitro. Because animal studies are subject to numerous difficulties, cell cultures are an excellent experimental model reflecting human exposure to different pesticides through all relevant routes. This experimental model can be used to monitor aggregate and cumulative pesticide exposures.

  5. Modelling iron mismanagement in neurodegenerative disease in vitro: paradigms, pitfalls, possibilities & practical considerations.

    PubMed

    Healy, Sinead; McMahon, Jill M; FitzGerald, Una

    2017-11-01

    Although aberrant metabolism and deposition of iron have been associated with aging and neurodegeneration, the contribution of iron to neuropathology is unclear. Well-designed model systems that are suited to studying the putative pathological effects of iron are likely to be essential if such unresolved details are to be clarified. In this review, we have evaluated the utility and effectiveness of the reductionist in vitro platform for studying the molecular mechanisms putatively underlying iron perturbations in neurodegenerative disease. The expression and function of iron metabolism proteins in glia and neurons, and the extent to which this iron regulatory system is replicated in in vitro models, are comprehensively described, followed by an appraisal of the inherent suitability of different in vitro and ex vivo models that have been, or might be, used for iron loading. Next, we have identified and critiqued the relevant experimental parameters that have been used in in vitro iron loading experiments, including the choice of iron reagent, relevant iron loading concentrations and supplementation with serum or ascorbate, and propose optimal iron loading conditions. Finally, we have provided a synthesis of the differential iron accumulation and toxicity in glia and neurons across reported iron loading paradigms. In summary, this review amalgamates the findings and paradigms of published reports modelling iron loading in monocultures and discusses the limitations and discrepancies of such work, in order to propose a robust, relevant and reliable model of iron loading for future investigations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. General two-species interacting Lotka-Volterra system: Population dynamics and wave propagation

    NASA Astrophysics Data System (ADS)

    Zhu, Haoqi; Wang, Mao-Xiang; Lai, Pik-Yin

    2018-05-01

    The population dynamics of two interacting species modeled by the Lotka-Volterra (LV) model with general parameters that can promote or suppress the other species is studied. It is found that the properties of the two species' isoclines determine the interaction of species, leading to six regimes in the phase diagram of interspecies interaction; i.e., there are six different interspecific relationships described by the LV model. Four regimes allow for nontrivial species coexistence, among which it is found that three of them are stable, namely, weak competition, mutualism, and predator-prey scenarios can lead to win-win coexistence situations. The Lyapunov function for general nontrivial two-species coexistence is also constructed. Furthermore, in the presence of spatial diffusion of the species, the dynamics can lead to steady wavefront propagation and can alter the population map. Propagating wavefront solutions in one dimension are investigated analytically and by numerical solutions. The steady wavefront speeds are obtained analytically via nonlinear dynamics analysis and verified by numerical solutions. In addition to the inter- and intraspecific interaction parameters, the intrinsic speed parameters of each species play a decisive role in species populations and wave properties. In some regimes, both species can copropagate with the same wave speeds in a finite range of parameters. Our results are further discussed in the light of possible biological relevance and ecological implications.
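
    For concreteness, the general two-species LV dynamics can be written in logistic form, with the signs of the interspecific coefficients selecting competition, mutualism, or a predator-prey regime. A minimal sketch of the population dynamics (coefficients illustrative; the wave-propagation analysis additionally requires diffusion terms):

        import numpy as np
        from scipy.integrate import solve_ivp

        r = np.array([1.0, 0.8])      # intrinsic speed parameters of each species
        a = np.array([[1.0, 0.5],     # a[i][j]: effect of species j on species i
                      [0.4, 1.0]])    # (negative off-diagonals would mean promotion)

        def lv(t, u):
            return r * u * (1.0 - a @ u)

        sol = solve_ivp(lv, (0.0, 50.0), [0.1, 0.2])
        print(sol.y[:, -1])           # weak competition: stable coexistence point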

  7. Role of intraspecific competition in the coexistence of mobile populations in spatially extended ecosystems.

    PubMed

    Yang, Rui; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2010-06-01

    Evolutionary-game based models of nonhierarchical, cyclically competing populations have become paradigmatic for addressing the fundamental problem of species coexistence in spatially extended ecosystems. We study the role of intraspecific competition and find that it can strongly promote coexistence at high individual mobility, in the sense that stable coexistence can arise in a parameter regime where extinction would occur without the competition. The critical value of the competition rate beyond which coexistence is induced is found to be independent of the mobility. We derive a theoretical model based on nonlinear partial differential equations to predict the critical competition rate and the boundaries between the coexistence and extinction regions in a relevant parameter space. We also investigate pattern formation and well-mixed spatiotemporal population dynamics to gain further insights into our findings. (c) 2010 American Institute of Physics.

  8. A model for acoustic vaporization dynamics of a bubble/droplet system encapsulated within a hyperelastic shell.

    PubMed

    Lacour, Thomas; Guédra, Matthieu; Valier-Brasier, Tony; Coulouvrat, François

    2018-01-01

    Nanodroplets have promising medical applications such as contrast imaging, embolotherapy, or targeted drug delivery. Their functions can be mechanically activated by means of focused ultrasound inducing a phase change of the inner liquid, known as the acoustic droplet vaporization (ADV) process. In this context, a four-phase (vapor + liquid + shell + surrounding environment) model of ADV is proposed. Attention is especially devoted to the mechanical properties of the encapsulating shell, incorporating the well-known strain-softening behavior of a Mooney-Rivlin material adapted to the very large deformations of soft, nearly incompressible materials. Various responses to ultrasound excitation are illustrated, depending on the linear and nonlinear mechanical shell properties and the acoustical excitation parameters. Different classes of ADV outcomes are exhibited, and a relevant threshold ensuring complete vaporization of the inner liquid layer is defined. The dependence of this threshold on acoustical, geometrical, and mechanical parameters is also provided.
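
    The incompressible Mooney-Rivlin strain-energy density referred to above is conventionally written as

        \[
        W \;=\; C_{10}\,(\bar I_1 - 3) \;+\; C_{01}\,(\bar I_2 - 3),
        \]

    where \bar I_1 and \bar I_2 are the first and second invariants of the isochoric left Cauchy-Green deformation tensor and C_{10}, C_{01} are material constants; this is the generic form, and the paper's specific parameter values and large-deformation adaptation are not reproduced here.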

  9. Communication: Vibrational relaxation of CO(1Σ) in collision with Ar(1S) at temperatures relevant to the hypersonic flight regime.

    PubMed

    Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus

    2017-03-21

    Vibrational energy relaxation (VER) of diatomics following collisions with the surrounding medium is an important elementary process for modeling high-temperature gas flow. VER is characterized by two parameters: the vibrational relaxation time τ_vib and the state relaxation rates. Here the vibrational relaxation of CO(ν=0←ν=1) in Ar is considered for validating a computational approach to determine the vibrational relaxation time parameter (pτ_vib) using an accurate, fully dimensional potential energy surface. For lower temperatures, comparison with experimental data shows very good agreement, whereas at higher temperatures (up to 25 000 K), comparisons with an empirically modified model due to Park confirm its validity for CO in Ar. Additionally, the calculations provide insight into the importance of Δν>1 transitions that are ignored in typical applications of the Landau-Teller framework.
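
    The paper extracts pτ_vib from an accurate potential energy surface; as a point of reference, the empirical Millikan-White correlation commonly used as a baseline in this regime can be sketched as follows (the Park high-temperature correction mentioned above is omitted, and the CO-Ar constants are standard textbook values, not the paper's results):

        import numpy as np

        def millikan_white_ptau(T, mu, theta_v):
            """Millikan-White correlation, p*tau_vib in atm*s.
            T: temperature (K); mu: reduced collision mass (amu);
            theta_v: characteristic vibrational temperature (K)."""
            A = 1.16e-3 * np.sqrt(mu) * theta_v ** (4.0 / 3.0)
            return np.exp(A * (T ** (-1.0 / 3.0) - 0.015 * mu ** 0.25) - 18.42)

        mu_co_ar = 28.0 * 40.0 / (28.0 + 40.0)  # reduced mass of CO-Ar (amu)
        theta_v_co = 3122.0                     # CO vibrational temperature (K)
        for T in (1000.0, 5000.0, 25000.0):
            print(T, millikan_white_ptau(T, mu_co_ar, theta_v_co))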

  10. Financial Crisis: A New Measure for Risk of Pension Fund Portfolios

    PubMed Central

    Cadoni, Marinella; Melis, Roberta; Trudda, Alessandro

    2015-01-01

    It has been argued that pension funds should have limitations on their asset allocation, based on the risk profile of the different financial instruments available on the financial markets. This issue proves to be highly relevant at times of market crisis, when a regulation establishing limits to risk taking for pension funds could prevent defaults. In this paper we present a framework for evaluating the risk level of a single financial instrument or a portfolio. By assuming that the log asset returns can be described by a multifractional Brownian motion, we evaluate the risk using the time-dependent Hurst parameter H(t), which models volatility. To provide a measure of the risk, we model the Hurst parameter as a random variable with a mixture-of-betas distribution. We prove the efficacy of the methodology by implementing it on financial instruments and portfolios with different risk levels. PMID:26086529
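
    A time-dependent Hurst exponent can be estimated in many ways; a minimal sketch that regresses the log of mean absolute increments on log lag within a sliding window (an illustrative estimator, not the authors' beta-mixture procedure):

        import numpy as np

        def local_hurst(x, window=200, lags=(1, 2, 4, 8)):
            """Rough H(t): within each window, fit the self-similarity
            scaling log E|x(t+k) - x(t)| ~ H * log k."""
            x = np.asarray(x, float)
            loglags = np.log(lags)
            H = np.full(len(x), np.nan)
            for t in range(window, len(x)):
                seg = x[t - window:t]
                m = [np.mean(np.abs(seg[k:] - seg[:-k])) for k in lags]
                H[t] = np.polyfit(loglags, np.log(m), 1)[0]
            return H

        log_price = np.cumsum(np.random.normal(size=2000))  # synthetic path
        print(np.nanmean(local_hurst(log_price)))  # near 0.5 for Brownian motion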

  12. The comparative immunology of wild and laboratory mice, Mus musculus domesticus

    PubMed Central

    Abolins, Stephen; King, Elizabeth C.; Lazarou, Luke; Weldon, Laura; Hughes, Louise; Drescher, Paul; Raynes, John G.; Hafalla, Julius C. R.; Viney, Mark E.; Riley, Eleanor M.

    2017-01-01

    The laboratory mouse is the workhorse of immunology, used as a model of mammalian immune function, but how well immune responses of laboratory mice reflect those of free-living animals is unknown. Here we comprehensively characterize serological, cellular and functional immune parameters of wild mice and compare them with laboratory mice, finding that wild mouse cellular immune systems are, comparatively, in a highly activated (primed) state. Associations between immune parameters and infection suggest that high level pathogen exposure drives this activation. Moreover, wild mice have a population of highly activated myeloid cells not present in laboratory mice. By contrast, in vitro cytokine responses to pathogen-associated ligands are generally lower in cells from wild mice, probably reflecting the importance of maintaining immune homeostasis in the face of intense antigenic challenge in the wild. These data provide a comprehensive basis for validating (or not) laboratory mice as a useful and relevant immunological model system. PMID:28466840

  13. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data, since no sensor system is perfect. With these considerations in mind, the algorithms herein are designed to treat both uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. Here, the questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Areas of current and future analysis that indicate the interplay between error analysis and possible truncations of the state and parameter spaces are highlighted.

  14. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a given shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
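
    The fatality-rate model lends itself to a compact sketch: the rate at intensity S is Phi(ln(S/theta)/beta), and expected deaths are the exposure-weighted sum over intensity levels (theta, beta, and the exposure vector below are illustrative, not PAGER's calibrated values):

        import numpy as np
        from scipy.stats import norm

        def fatality_rate(S, theta, beta):
            """Two-parameter lognormal CDF of shaking intensity S."""
            return norm.cdf(np.log(S / theta) / beta)

        mmi = np.array([6.0, 7.0, 8.0, 9.0])               # intensity levels
        exposed = np.array([500000, 200000, 50000, 5000])  # hypothetical exposure
        theta, beta = 13.0, 0.2                            # illustrative parameters
        print(np.sum(exposed * fatality_rate(mmi, theta, beta)))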

  15. Observable gravitational waves in pre-big bang cosmology: an update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperini, M., E-mail: gasperini@ba.infn.it

    In the light of the recent results concerning CMB observations and GW detection we address the question of whether it is possible, in a self-consistent inflationary framework, to simultaneously generate a spectrum of scalar metric perturbations in agreement with Planck data and a stochastic background of primordial gravitational radiation compatible with the design sensitivity of aLIGO/Virgo and/or eLISA. We suggest that this is possible in a string cosmology context, for a wide region of the parameter space of the so-called pre-big bang models. We also discuss the associated values of the tensor-to-scalar ratio relevant to the CMB polarization experiments. We conclude that future, cross-correlated results from CMB observations and GW detectors will be able to confirm or disprove pre-big bang models and, in any case, will impose new significant constraints on the basic string theory/cosmology parameters.

  16. An Overview of the GIS Weasel

    USGS Publications Warehouse

    Viger, Roland J.

    2008-01-01

    This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user's ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.

  17. Simulation of drive of mechanisms, working in specific conditions

    NASA Astrophysics Data System (ADS)

    Ivanovskaya, A. V.; Rybak, A. T.

    2018-05-01

    This paper presents a method, distinct from conventional approaches, for determining the dynamic loads on a lifting drive device such as a ship windlass. For such devices, operation of the drive under special conditions is typical: different environments, the influence of hydrometeorological factors, a high level of vibration, variability of loading, etc. Hoisting devices working in such conditions are not covered by the standard; nevertheless, relevant studies concern the permissible operating parameters of drives of this kind. As an example, the paper studies a deck lifting device, the windlass. To construct a model, the windlass is represented by a rod of variable cross-section, and a mathematical model of the longitudinal oscillations of such a rod is obtained. Analytic dependencies are also derived for the natural frequencies of the lowest oscillation modes, which are necessary and form the basis for evaluating the operating parameters of this type of device.

  18. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. The model predicts that population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of IBM parameter settings.
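
    A minimal sketch of one such discrete time step, with independent binomial birth, death, and movement events on a 1-D lattice (all rates are hypothetical; edges are absorbing for simplicity):

        import numpy as np

        rng = np.random.default_rng(1)

        def step(n, birth=0.05, death=0.02, move=0.1, K=50):
            """One IBM update: density-dependent births, deaths, then
            unbiased nearest-neighbour movement of a binomial fraction."""
            births = rng.binomial(n, np.clip(birth * (1 - n / K), 0.0, 1.0))
            deaths = rng.binomial(n, death)
            n = n + births - deaths
            movers = rng.binomial(n, move)
            left = rng.binomial(movers, 0.5)
            n = n - movers
            n[:-1] += left[1:]                 # movers stepping left
            n[1:] += (movers - left)[:-1]      # movers stepping right
            return n

        n = np.zeros(200, dtype=int)
        n[100] = 10                            # seed population mid-domain
        for _ in range(500):
            n = step(n)
        print(n.sum(), (n > 0).sum())          # total population, occupied sites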

  19. Characterization of land use in urban areas by radar imagery

    NASA Astrophysics Data System (ADS)

    Codjia, Claude

    This study aims to test the relevance of medium- and high-resolution SAR images for characterizing types of land use in urban areas. To this end, we rely on textural approaches based on second-order statistics; specifically, we look for the texture parameters most relevant for discriminating urban objects. We used Radarsat-1 in fine mode and Radarsat-2 in fine mode (HH, dual and quad polarization) and in ultrafine mode (HH polarization). The land-use classes sought were dense building, medium-density building, low-density building, industrial and institutional buildings, low-density vegetation, dense vegetation, and water. We selected nine texture parameters for analysis, first grouped into families according to their mathematical definitions. The similarity/dissimilarity parameters include homogeneity, contrast, the inverse difference moment, and dissimilarity; the disorder parameters are entropy and the angular second moment; the standard deviation and correlation are dispersion parameters; and the mean forms a family of its own. Experience shows that certain combinations of texture parameters from different families yield good classification results, while others produce kappa values of little interest. Furthermore, although using several texture parameters improves classification, performance plateaus beyond three parameters; the correlations computed between the textures and their principal axes confirm these results. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model was developed based on the radar cross-section (RCS). A radar simulation based on the digital surface model of the environment allowed us to extract the building backscatter zones and to analyze the related backscatter. We were thus able to devise a strategy for compensating cardinal effects based solely on the responses of objects according to their orientation relative to the radar's plane of illumination, and a compensation algorithm based on the radar cross-section proved appropriate. Examples of the application of this algorithm to HH-polarized RADARSAT-2 images are presented. Applying the algorithm to RADARSAT-1 and RADARSAT-2 images with HH, HV, VH, and VV polarizations yielded considerable gains, enabling some automation (classification and segmentation) of radar image processing, a higher level of quality in visual interpretation, and the elimination of most classification errors due to cardinal effects.
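
    The second-order texture measures named above derive from grey-level co-occurrence matrices (GLCM). A minimal sketch with scikit-image (recent API; entropy is not a built-in property, so it is computed directly; the random patch stands in for a real SAR window):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)

        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=32, symmetric=True, normed=True)

        features = {name: graycoprops(glcm, name).mean()
                    for name in ("homogeneity", "contrast", "dissimilarity",
                                 "correlation", "ASM")}
        # Entropy per (distance, angle) slice, then averaged
        features["entropy"] = float(
            np.mean(-np.sum(glcm * np.log(glcm + 1e-12), axis=(0, 1))))
        print(features)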

  20. Mechanisms for pattern specificity of deep-brain stimulation in Parkinson’s disease

    PubMed Central

    Mato, Germán; Dellavale, Damián

    2017-01-01

    Deep brain stimulation (DBS) has become a widely used technique for treating advanced stages of neurological and psychiatric illness. In the case of motor disorders related to basal ganglia (BG) dysfunction, several mechanisms of action for the DBS therapy have been identified which might be involved simultaneously or in sequence. However, the identification of a common key mechanism underlying the clinically relevant DBS configurations has remained elusive due to the inherent complexity of the interaction between the electrical stimulation and the neural tissue, and the intricate circuital structure of the BG-thalamocortical network. In this work, it is shown that the clinically relevant range for both the frequency and the intensity of the electrical stimulation pattern is an emergent property of the BG anatomy at the system level that can be addressed using mean-field descriptive models of the BG network. Moreover, it is shown that the activity resetting mechanism elicited by electrical stimulation provides a natural explanation for the ineffectiveness of irregular (i.e., aperiodic) stimulation patterns, which has been commonly observed in previously reported pathophysiology models of Parkinson's disease. Using analytical and numerical techniques, these results have been reproduced in both cases: 1) a reduced mean-field model that can be thought of as an elementary building block capable of capturing the underlying fundamentals of the relevant loops constituting the BG-thalamocortical network, and 2) a detailed model constituted by the direct and hyperdirect loops including the one-dimensional spatial structure of the BG nuclei. We found that the optimal ranges for the essential parameters of the stimulation patterns can be understood without taking into account biophysical details of the relevant structures. PMID:28813460

  1. Latent Growth and Dynamic Structural Equation Models.

    PubMed

    Grimm, Kevin J; Ram, Nilam

    2018-05-07

    Latent growth models make up a class of methods to study within-person change-how it progresses, how it differs across individuals, what are its determinants, and what are its consequences. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examining sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed and include inherently nonlinear growth models, derivative specification of growth models, and latent change score models to study stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.
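
    In conventional notation, the linear latent growth model at the core of this literature can be written as (a generic statement, not the review's exact specification)

        \[
        y_{ti} \;=\; \eta_{0i} + \eta_{1i}\,\lambda_t + \varepsilon_{ti},
        \qquad
        (\eta_{0i},\, \eta_{1i})^{\top} \sim \mathcal{N}(\boldsymbol{\mu},\, \Psi),
        \]

    where λ_t encodes the time scores (e.g., 0, 1, 2, ...), η_0i and η_1i are the person-specific intercept and slope, and Ψ captures individual differences in change; the inherently nonlinear and latent change score models discussed above generalize this template.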

  2. Limiting the effective mass and new physics parameters from 0νββ

    NASA Astrophysics Data System (ADS)

    Awasthi, Ram Lal; Dasgupta, Arnab; Mitra, Manimala

    2016-10-01

    In the light of the recent results from KamLAND-Zen (KLZ) and GERDA Phase-II, we update the bounds on the effective mass and the new physics parameters relevant for neutrinoless double beta decay (0νββ). In addition to the light Majorana neutrino exchange, we analyze beyond-standard-model contributions that arise in left-right symmetry and R-parity violating supersymmetry. The improved limit from KLZ constrains the effective mass of light neutrino exchange down to the sub-eV regime, 0.06 eV. Using the correlation between the 136Xe and 76Ge half-lives, we show that the KLZ limit individually rules out the positive claim of observation of 0νββ for all nuclear matrix element compilations. For left-right symmetry and R-parity violating supersymmetry, the KLZ bound implies a factor-of-2 improvement in the limits on the effective mass and the new physics parameters. Future ton-scale experiments such as nEXO will further constrain these models and, in particular, will rule out the standard as well as the Type-II dominated LRSM inverted hierarchy scenario.
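
    For the standard light-neutrino exchange, the quantities constrained by these half-life limits are (standard definitions, reproduced for orientation)

        \[
        m_{\beta\beta} \;=\; \Big|\sum_{i=1}^{3} U_{ei}^{2}\, m_i \Big|,
        \qquad
        \big[T_{1/2}^{0\nu}\big]^{-1} \;=\; G^{0\nu}\,\big|M^{0\nu}\big|^{2}\,
        \frac{m_{\beta\beta}^{2}}{m_e^{2}},
        \]

    where U is the PMNS mixing matrix, G^{0ν} the phase-space factor, and M^{0ν} the nuclear matrix element whose compilation dependence the abstract refers to.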

  3. The solution of private problems for optimization heat exchangers parameters

    NASA Astrophysics Data System (ADS)

    Melekhin, A.

    2017-11-01

    The topic is relevant to the problem of saving resources in building heating systems. To address this problem, we have developed an integrated research method for optimizing heat exchanger parameters. The method solves a multicriteria optimization problem with nonlinear-programming software, using as input an array of temperatures obtained by thermography. The author has developed a mathematical model of the heat exchange process on the heat-transfer surfaces of the apparatus and solved the multicriteria optimization problem, checking the model's adequacy against an experimental stand with visualization of thermal fields. The work determines an optimal range of controlled parameters influencing the heat exchange process, for minimal metal consumption and maximum heat output of the finned heat exchanger; establishes the regularities of the heat exchange process, yielding generalized dependencies for the temperature distribution on the heat-release surface of the heat exchangers; and demonstrates the convergence of the results calculated from the theoretical dependencies with the solution of the mathematical model.

  4. Thermomechanical Simulation of the Splashing of Ceramic Droplets on a Rigid Substrate

    NASA Astrophysics Data System (ADS)

    Bertagnolli, Mauro; Marchese, Maurizio; Jacucci, Gianni; St. Doltsinis, Ioannis; Noelting, Swen

    1997-05-01

    Finite element simulation techniques have been applied to the spreading process of single ceramic liquid droplets impacting on a flat cold surface under plasma-spraying conditions. The goal of the present investigation is to predict the geometrical form of the splat as a function of technological process parameters, such as initial temperature and velocity, and to follow the thermal field developing in the droplet up to solidification. A non-linear finite element programming system has been utilized in order to model the complex physical phenomena involved in the present impact process. The Lagrangian description of the motion of the viscous melt in the drops, as constrained by surface tension and the developing contact with the target, has been coupled to an analysis of transient thermal phenomena accounting also for the solidification of the material. The present study covers a parameter spectrum drawn from experimental data of technological relevance. The significance of process parameters for the most pronounced physical phenomena is discussed, as are the consequences of the modelling choices. We also consider the issue of solidification and touch on the effect of partially unmelted material.

  5. Two healing lengths in a two-band GL-model with quadratic terms: Numerical results

    NASA Astrophysics Data System (ADS)

    Macias-Medri, A. E.; Rodríguez-Núñez, J. J.

    2018-05-01

    A two-band Ginzburg-Landau model with quartic-order interactions in the presence of a single vortex is studied in this work. Interactions of second (quadratic, with coupling parameter γ) and fourth (quartic, with coupling parameter γ˜) order between the two superconducting order parameters (f_i with i = 1, 2) are incorporated in a functional. Terms beyond quadratic gradient contributions are neglected in the corresponding minimized free energy. The system of coupled equations is solved by numerical methods to obtain the f_i profiles, where our starting point was the calculation of the superconducting critical temperature T_c. With this at hand, we evaluate f_i and the magnetic field along the z-axis, B_0, as functions of γ, γ˜, the radial distance r/λ_1(0), and the temperature T, for T ≈ T_c. The self-consistent equations allow us to compute λ (penetration depth) and the healing lengths of f_i (L_hi with i = 1, 2) as functions of T, γ and γ˜. At the end, relevant discussions about type-1.5 superconductivity in the compounds we have studied are presented.

  6. Passive control of thermoacoustic oscillations with adjoint methods

    NASA Astrophysics Data System (ADS)

    Aguilar, Jose; Juniper, Matthew

    2017-11-01

    Strict pollutant regulations are driving gas turbine manufacturers to develop devices that operate under lean premixed conditions, which produce less NOx but encourage thermoacoustic oscillations. These are a form of unstable combustion that arise due to the coupling between the acoustic field and the fluctuating heat release in a combustion chamber. In such devices, in which safety is paramount, thermoacoustic oscillations must be eliminated passively, rather than through feedback control. The ideal way to eliminate thermoacoustic oscillations is by subtly changing the shape of the device. To achieve this, one must calculate the sensitivity of each unstable thermoacoustic mode to every geometric parameter. This is prohibitively expensive with standard methods, but is relatively cheap with adjoint methods. In this study we first present low-order network models as a tool to model and study the thermoacoustic behaviour of combustion chambers. Then we compute the continuous adjoint equations and the sensitivities to relevant parameters. With this, we run an optimization routine that modifies the parameters in order to stabilize all the resonant modes of a laboratory combustor rig.
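
    The economy of the adjoint approach rests on a first-order eigenvalue sensitivity formula; in generic form (our notation, not the authors' exact equations), for a nonlinear eigenproblem N(λ; p) q = 0 with adjoint eigenvector q†,

        \[
        \frac{\partial \lambda}{\partial p}
        \;=\;
        -\,\frac{\big\langle q^{\dagger},\, (\partial N / \partial p)\, q \big\rangle}
                {\big\langle q^{\dagger},\, (\partial N / \partial \lambda)\, q \big\rangle},
        \]

    so a single adjoint solve yields the sensitivity of a thermoacoustic mode to every geometric parameter at once, which is what makes the optimization loop affordable.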

  7. Anisotropic Rabi model

    NASA Astrophysics Data System (ADS)

    Xie, Qiong-Tao; Cui, Shuai; Cao, Jun-Peng; Amico, Luigi; Fan, Heng

    2014-04-01

    We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: The Hamiltonian system breaks the parity symmetry; the rotating and counterrotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counterrotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of the model. In this way, we provide a long-sought solution of a cascade of models with immediate relevance in different physical fields, including (i) quantum optics, a two-level atom in single-mode cross-electric and magnetic fields; (ii) solid-state physics, electrons in semiconductors with Rashba and Dresselhaus spin-orbit coupling; and (iii) mesoscopic physics, Josephson-junction flux-qubit quantum circuits.
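
    A common way to write the Hamiltonian of such a generalized Rabi model is (our notation; the paper's phase convention may differ)

        \[
        H \;=\; \omega\, a^{\dagger} a \;+\; \frac{\Delta}{2}\,\sigma_z
        \;+\; g_1\big(a\,\sigma_+ + a^{\dagger}\sigma_-\big)
        \;+\; g_2\big(e^{i\varphi}\, a^{\dagger}\sigma_+ + e^{-i\varphi}\, a\,\sigma_-\big),
        \]

    with the rotating and counterrotating interactions governed by the two couplings g_1 and g_2 and the extra phase φ on the counterrotating terms; g_1 = g_2 with φ = 0 recovers the isotropic Rabi model, and g_2 = 0 the Jaynes-Cummings limit.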

  8. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

    Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.

  9. Section 4. The GIS Weasel User's Manual

    USGS Publications Warehouse

    Viger, Roland J.; Leavesley, George H.

    2007-01-01

    INTRODUCTION The GIS Weasel was designed to aid in the preparation of spatial information for input to lumped and distributed parameter hydrologic or other environmental models. The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to a user's model and to generate parameters from those maps. The operation of the GIS Weasel does not require the user to be a GIS expert, only that the user have an understanding of the spatial information requirements of the environmental simulation model being used. The GIS Weasel software system uses a GIS-based graphical user interface (GUI), the C programming language, and external scripting languages. The software will run on any computing platform where ArcInfo Workstation (version 8.0.2 or later) and the GRID extension are accessible. The user controls the processing of the GIS Weasel by interacting with menus, maps, and tables. The purpose of this document is to describe the operation of the software. This document is not intended to describe the usage of this software in support of any particular environmental simulation model. Such guides are published separately.

  10. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
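
    The diagonalisation step underlying such scans is standard at tree level; a minimal sketch for the neutralino sector in the (bino, wino, higgsino_d, higgsino_u) basis (input values illustrative, and loop corrections ignored):

        import numpy as np

        mZ, sw2 = 91.19, 0.231                  # Z mass (GeV), sin^2(theta_W)
        sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)

        def neutralino_spectrum(M1, M2, mu, tan_beta):
            """Eigen-decomposition of the tree-level 4x4 neutralino mass matrix."""
            b = np.arctan(tan_beta)
            sb, cb = np.sin(b), np.cos(b)
            M = np.array([
                [M1, 0.0, -mZ * cb * sw,  mZ * sb * sw],
                [0.0, M2,  mZ * cb * cw, -mZ * sb * cw],
                [-mZ * cb * sw,  mZ * cb * cw, 0.0, -mu],
                [ mZ * sb * sw, -mZ * sb * cw, -mu, 0.0]])
            vals, vecs = np.linalg.eigh(M)      # real symmetric matrix
            order = np.argsort(np.abs(vals))    # sort by physical mass
            return vals[order], vecs[:, order]  # signs encode relative CP phases

        masses, mixing = neutralino_spectrum(M1=500.0, M2=600.0,
                                             mu=150.0, tan_beta=10.0)
        print(np.abs(masses))   # two higgsino-like light states near |mu|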

  11. A Characteristic Dose Model for Historical Internal Dose Reconstruction in the Framework of the IAEC Compensation Programme.

    PubMed

    Kravchik, T; Abraham, A; Israeli, M; Yahel, E

    2017-04-25

    A model was developed at the Nuclear Research Centre Negev (NRCN) to assess historical doses from internal exposures by a relatively fast and simple procedure. These assessments are needed in the framework of a compensation programme for the Israeli Atomic Energy Commission (IAEC) workers, which were diagnosed for cancer diseases. This compensation programme was recently recommended by a public committee to avoid lengthy court procedures. The developed model is based on the recorded doses from external exposures of all the workers at the NRCN, who were divided into groups representing their different working environments. Each group of workers was characterised by three parameters: working period, working areas and occupation. The model uses several conservative assumptions in order to calculate the doses to various body organs in certain years, which are relevant to the calculation of the probability of causation (POC). The POC value serves as a main parameter in the compensation programme. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. CalFitter: a web server for analysis of protein thermal denaturation data.

    PubMed

    Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri

    2018-05-14

    Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.

  13. The dynamical core of the Aeolus 1.0 statistical-dynamical atmosphere model: validation and parameter optimization

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim

    2018-02-01

    We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical-dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
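
    The tuning step can be sketched generically: simulated annealing perturbs the parameter vector and accepts uphill moves with a temperature-dependent Metropolis probability. The cost function and schedule below are hypothetical stand-ins for the model-to-reanalysis error actually minimized:

        import numpy as np

        rng = np.random.default_rng(42)

        def anneal(cost, x0, steps=5000, T0=1.0, cooling=0.999, scale=0.1):
            """Minimize `cost` by Metropolis-accepted random perturbations."""
            x = np.asarray(x0, float)
            fx, T = cost(x), T0
            best_x, best_f = x.copy(), fx
            for _ in range(steps):
                y = x + rng.normal(scale=scale, size=x.shape)
                fy = cost(y)
                if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
                    x, fx = y, fy
                    if fx < best_f:
                        best_x, best_f = x.copy(), fx
                T *= cooling                    # geometric cooling schedule
            return best_x, best_f

        # Toy stand-in for the model-error cost over dynamical-core parameters
        rosenbrock = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
        print(anneal(rosenbrock, [-1.0, 2.0]))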

  14. Multisite Ion Model in Concentrated Solutions of Divalent Cations (MgCl2 and CaCl2): Osmotic Pressure Calculations

    PubMed Central

    2015-01-01

    Accurate force field parameters for ions are essential for meaningful simulation studies of proteins and nucleic acids. Currently accepted models of ions, especially divalent ions, do not necessarily reproduce the right physiological behavior of Ca2+ and Mg2+ ions. Saxena and Sept (J. Chem. Theory Comput. 2013, 9, 3538–3542) described a model, called the multisite ion model, where instead of treating the ion as an isolated sphere, the charge was split into multiple sites with partial charge. This model provided accurate inner-shell coordination of the ion with biomolecules and predicted better free energies for proteins and nucleic acids. Here, we expand and refine the multisite model to describe the behavior of divalent ions in concentrated MgCl2 and CaCl2 electrolyte solutions, eliminating the unusual ion–ion pairing and clustering of ions which occurred in the original model. We calibrate and improve the parameters of the multisite model by matching the osmotic pressure of concentrated solutions of MgCl2 to the experimental values and then use these parameters to test the behavior of CaCl2 solutions. We find that the concentrated solutions of both divalent ions exhibit the experimentally observed behavior with correct osmotic pressure, the presence of solvent-separated ion pairs instead of direct ion pairs, and no aggregation of ions. The improved multisite model for Mg2+ and Ca2+ can be used in classical simulations of biomolecules at physiologically relevant salt concentrations. PMID:25482831

  15. Ground water flow modeling with sensitivity analyses to guide field data collection in a mountain watershed

    USGS Publications Warehouse

    Johnson, Raymond H.

    2007-01-01

    In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities determine the most important input parameter(s) along with the location of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data in determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce the time and money spent collecting the most relevant model calibration data.
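
    The scaled sensitivities referred to here follow Hill's (1998) definitions (standard forms, reproduced for orientation):

        \[
        dss_{ij} \;=\; \frac{\partial y_i'}{\partial b_j}\; b_j\; \omega_i^{1/2},
        \qquad
        css_j \;=\; \left[\frac{1}{ND} \sum_{i=1}^{ND} dss_{ij}^{\,2}\right]^{1/2},
        \]

    where y_i' is the i-th simulated equivalent of an observation with weight ω_i, b_j is the j-th parameter, and ND is the number of observations; a large css_j flags a parameter the data can support, and large dss_ij flags observations worth collecting.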

  16. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions enforced by each model type on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, assumptions about the disease process, and the choice of time advance.
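
    For concreteness, the EBM side of such a comparison is typically a compartmental ODE system; a minimal SIR sketch (parameters illustrative), whose ABM counterpart would instead track stochastic state changes of individual agents:

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, gamma, N = 0.3, 0.1, 10000.0   # transmission, recovery, population

        def sir(t, y):
            S, I, R = y
            newly_infected = beta * S * I / N
            return [-newly_infected, newly_infected - gamma * I, gamma * I]

        sol = solve_ivp(sir, (0.0, 200.0), [N - 1.0, 1.0, 0.0], max_step=1.0)
        print(f"peak infected: {sol.y[1].max():.0f}")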

  18. A flexible, interactive software tool for fitting the parameters of neuronal models.

    PubMed

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  19. Towards a consensus-based biokinetic model for green microalgae - The ASM-A.

    PubMed

    Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy

    2016-10-15

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs requires process models. Several models with different complexities have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model - identified based on a literature review and using new experimental data - accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e. Droop model) and decay of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode. Identifiability of the model was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations as well as the phosphorus storage using a set of average parameter values estimated with the experimental data. A statistical analysis of simulation and measured data suggests that culture history and substrate availability can introduce significant variability on parameter values for predicting the reaction rates for bulk nitrate and the intracellularly stored nitrogen state-variables, thereby requiring scenario specific model calibration. ASM-A was identified using standard cultivation medium and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
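
    The Droop (cell-quota) kinetics invoked above take the standard form

        \[
        \mu \;=\; \mu_{\max}\left(1 - \frac{Q_{\min}}{Q}\right),
        \]

    where Q is the intracellular nutrient quota and Q_min the minimum quota at which growth ceases; this is the generic Droop expression, while ASM-A's full rate equations and stoichiometry follow the ASM matrix notation.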
