Sample records for rate model parameters

  1. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources worldwide, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of the chemical reactions and the limited available data. To find a method to obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and to obtain more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the modeling prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that we could efficiently improve the chemical reactive rate parameters with the data assimilation method via the EnKF, and at the same time improve the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
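
    As an illustration of the joint parameter-and-state update that an ensemble Kalman filter performs, the sketch below calibrates a single rate constant for a hypothetical first-order decay reaction observed through noisy concentration data. The column model, urea hydrolysis chemistry, and all numbers are illustrative assumptions, not values from this record; the clipping step only gestures at the constrained-EnKF idea.

    ```python
    # Minimal EnKF update of an augmented state [log_k, c] for a hypothetical
    # first-order decay model; not the urea hydrolysis transport model itself.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ens, dt, obs_err = 100, 1.0, 0.02
    true_k, c0 = 0.3, 1.0

    def forecast(log_k, c):
        """Propagate concentration one step under first-order decay (illustrative)."""
        return c * np.exp(-np.exp(log_k) * dt)

    log_k = rng.normal(np.log(0.1), 0.5, n_ens)   # uncertain rate-parameter ensemble
    c = np.full(n_ens, c0)

    for step in range(1, 11):
        c = forecast(log_k, c)                                    # forecast step
        obs = c0 * np.exp(-true_k * dt * step) + rng.normal(0, obs_err)
        d = obs + rng.normal(0, obs_err, n_ens)                   # perturbed observations
        A = np.vstack([log_k, c])                                 # augmented ensemble, 2 x N
        Ap = A - A.mean(axis=1, keepdims=True)
        H = np.array([[0.0, 1.0]])                                # only concentration is observed
        S = H @ Ap
        K = (Ap @ S.T) / (S @ S.T + (n_ens - 1) * obs_err**2)     # Kalman gain, 2 x 1
        A = A + K @ (d - H @ A)                                   # joint update of parameter and state
        log_k, c = A[0], np.clip(A[1], 0.0, c0)                   # crude physical constraint

    print(f"estimated k = {np.exp(log_k).mean():.3f} (true {true_k})")
    ```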

  2. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, even though many traditional nonlinear parameter estimation methods exist, such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show its superior performance over the Gauss-Newton method.
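
    The core trick the abstract describes, that both the numerator and denominator of a rational rate are linear in the parameters, can be illustrated on a Michaelis-Menten rate. The sketch below cross-multiplies to obtain an ordinary linear least squares problem; it uses plain (unweighted) least squares as a stand-in, since the paper's specific weight matrix is not reproduced here.

    ```python
    # Estimate (Vmax, Km) of v = Vmax*S/(Km + S) by rewriting the rational rate as a
    # problem linear in the parameters: v*Km - Vmax*S = -v*S.
    import numpy as np

    rng = np.random.default_rng(1)
    Vmax_true, Km_true = 2.0, 0.5
    S = np.linspace(0.05, 5.0, 40)
    v = Vmax_true * S / (Km_true + S) * (1 + 0.02 * rng.normal(size=S.size))

    A = np.column_stack([v, -S])          # columns multiply [Km, Vmax]
    b = -v * S
    Km_est, Vmax_est = np.linalg.lstsq(A, b, rcond=None)[0]
    print(f"Vmax ~ {Vmax_est:.3f}, Km ~ {Km_est:.3f}")
    ```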

  3. Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2015-12-01

    The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step to be able to use these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect the ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.

  4. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. They were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distributions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set the parameters. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
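
    As a small illustration of one of the distribution models named above, the sketch below fits a lognormal rain-rate model to synthetic rain-rate samples and evaluates exceedance probabilities of the kind used in attenuation prediction. The data and thresholds are invented; the CCIR archive, the two-component model, and the Moupfuma model are not reproduced.

    ```python
    # Fit a lognormal rain-rate distribution and evaluate exceedance probabilities
    # P(R > r); data are synthetic, not the CCIR or Darwin gauge records.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    rain = rng.lognormal(mean=1.0, sigma=1.2, size=5000)        # rain rates, mm/h

    mu, sigma = np.log(rain).mean(), np.log(rain).std(ddof=1)   # lognormal fit on log data
    for r in (5.0, 25.0, 50.0):                                 # thresholds, mm/h
        p = 1.0 - stats.norm.cdf((np.log(r) - mu) / sigma)
        print(f"P(R > {r:4.0f} mm/h) ~ {p:.4f}")
    ```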

  5. Roll paper pilot. [mathematical model for predicting pilot rating of aircraft in roll task

    NASA Technical Reports Server (NTRS)

    Naylor, F. R.; Dillow, J. D.; Hannen, R. A.

    1973-01-01

    A mathematical model for predicting the pilot rating of an aircraft in a roll task is described. The model includes: (1) the lateral-directional aircraft equations of motion; (2) a stochastic gust model; (3) a pilot model with two free parameters; and (4) a pilot rating expression that is a function of rms roll angle and the pilot lead time constant. The pilot gain and lead time constant are selected to minimize the pilot rating expression. The pilot parameters are then adjusted to provide a 20% stability margin, and the adjusted pilot parameters are used to compute a roll paper pilot rating of the aircraft/gust configuration. The roll paper pilot rating was computed for 25 aircraft/gust configurations. A range of actual ratings from 2 to 9 was encountered, and the roll paper pilot ratings agree quite well with the actual ratings. In addition, there is good correlation between predicted and measured rms roll angle.

  6. Understanding which parameters control shallow ascent of silicic effusive magma

    NASA Astrophysics Data System (ADS)

    Thomas, Mark E.; Neuberg, Jurgen W.

    2014-11-01

    The estimation of the magma ascent rate is key to predicting volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes of such parameters to monitoring data is an essential step to be able to use these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. We show that variability in the rate of low frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator for changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are amongst the dominant controlling variables, but the single most important parameter is the volatile content (assumed to be water only). Modeling this parameter in the range of reported values causes changes in the calculated ascent velocities of up to 800%.

  7. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
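
    The bias-corrected, transformed-linear approach can be sketched as a log-log regression followed by a back-transformation correction. The correction used below is Duan's smearing estimator, chosen here as one common option; the study's exact correction, data, and flow-duration computation are not reproduced.

    ```python
    # Sketch: sediment rating curve C = a*Q^b fitted in log space, with a smearing
    # bias correction applied before back-transforming; synthetic data throughout.
    import numpy as np

    rng = np.random.default_rng(3)
    Q = rng.lognormal(3.0, 0.8, 200)                              # discharge
    C = 0.05 * Q**1.4 * np.exp(rng.normal(0, 0.4, Q.size))        # concentration

    X = np.column_stack([np.ones_like(Q), np.log(Q)])
    coef, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
    resid = np.log(C) - X @ coef
    smear = np.exp(resid).mean()                                  # Duan's smearing factor

    def rating(q):
        return smear * np.exp(coef[0]) * q**coef[1]               # bias-corrected curve

    load = (rating(Q) * Q).mean()                                 # flow-duration style mean load
    print(f"exponent b ~ {coef[1]:.2f}, bias factor ~ {smear:.3f}, mean load ~ {load:.1f}")
    ```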

  8. Hierarchical mark-recapture models: a framework for inference about demographic processes

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2004-01-01

    The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly described in the model.

  9. Reconstruction of interaction rate in holographic dark energy

    NASA Astrophysics Data System (ADS)

    Mukherjee, Ankan

    2016-11-01

    The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate has been reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.

  10. The Dissolution Behavior of Borosilicate Glasses in Far-From Equilibrium Conditions

    DOE PAGES

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; ...

    2018-02-10

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. As a result, the higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.
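
    For context, a transition-state-theory style rate law of the general form commonly used for glass dissolution can be written down directly; the sketch below uses placeholder coefficients, not the fitted SON68/ISG/AFCI parameters, and omits any residual-rate or affinity refinements.

    ```python
    # Generic TST-type dissolution rate law often used for borosilicate glass:
    #   r = k0 * 10**(eta*pH) * exp(-Ea/(R*T)) * (1 - (Q/K)**sigma)
    # All parameter values are illustrative placeholders.
    import numpy as np

    R = 8.314  # J/(mol K)

    def dissolution_rate(T_K, pH, Q_over_K=0.0, k0=1.0e6, eta=0.4, Ea=80e3, sigma=1.0):
        """Forward rate in arbitrary units; Q/K = 0 is the dilute, far-from-equilibrium limit."""
        return k0 * 10**(eta * pH) * np.exp(-Ea / (R * T_K)) * (1.0 - Q_over_K**sigma)

    for T_C in (23, 40, 70, 90):
        print(f"T = {T_C:2d} C, pH 9: relative forward rate = {dissolution_rate(T_C + 273.15, 9.0):.3e}")
    ```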

  11. The dissolution behavior of borosilicate glasses in far-from equilibrium conditions

    NASA Astrophysics Data System (ADS)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; Ryan, Joseph V.; Asmussen, R. Matthew

    2018-04-01

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH (22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. The higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.

  12. The Dissolution Behavior of Borosilicate Glasses in Far-From Equilibrium Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. As a result, the higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.

  13. Integrative neural networks model for prediction of sediment rating curve parameters for ungauged basins

    NASA Astrophysics Data System (ADS)

    Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.

    2015-12-01

    One of the most uncertain modeling tasks in hydrology is the prediction of ungauged stream sediment load and concentration statistics. This study presents integrated artificial neural networks (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the input parameters used include: soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on the randomly selected 2/3 of the dataset of 94 gauged streams in Ontario, Canada and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality for ungauged basins.

  14. Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    PubMed

    Du, Bin; Zielinski, Daniel C; Kavvas, Erol S; Dräger, Andreas; Tan, Justin; Zhang, Zhen; Ruggiero, Kayla E; Arzumanyan, Garri A; Palsson, Bernhard O

    2016-06-06

    The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question. In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order of magnitude differences among fluxes and concentrations in the network were greatly influential on the network dynamics. We further identified reaction features such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, which make certain reactions more suitable for rate law approximations. Overall, our work generally supports the use of approximate rate laws when building large scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data is available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
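
    To make the contrast between two of the rate-law choices above concrete, the sketch below integrates a toy reaction S -> P under a Michaelis-Menten rate law and under a first-order mass-action approximation matched to the low-substrate limit. The toy system and constants are assumptions for illustration, not the red blood cell network used in the study.

    ```python
    # Michaelis-Menten vs. first-order mass-action kinetics for a toy reaction S -> P.
    import numpy as np
    from scipy.integrate import solve_ivp

    Vmax, Km = 1.0, 0.2
    k_mass = Vmax / Km                      # mass-action constant matched at low substrate

    def mm(t, y):                           # Michaelis-Menten rate law
        s, p = y
        v = Vmax * s / (Km + s)
        return [-v, v]

    def mass_action(t, y):                  # mass-action approximation
        s, p = y
        v = k_mass * s
        return [-v, v]

    t_eval = np.linspace(0, 5, 6)
    for name, rhs in (("Michaelis-Menten", mm), ("mass action", mass_action)):
        sol = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=t_eval)
        print(name, np.round(sol.y[0], 3))  # substrate trajectory
    ```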

  15. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  16. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

    We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
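
    The Metropolis-Hastings machinery mentioned above can be shown on a deliberately simplified stand-in: a single survival probability with a binomial likelihood and a flat prior. The full CJS likelihood, the bivariate survival/birth-rate distribution, and the paper's candidate-generation scheme are not reproduced; the counts are hypothetical.

    ```python
    # Random-walk Metropolis-Hastings for a survival probability phi, using a
    # binomial stand-in for the CJS likelihood; counts are hypothetical.
    import numpy as np

    rng = np.random.default_rng(4)
    released, survived = 200, 148

    def log_post(phi):
        if not 0.0 < phi < 1.0:
            return -np.inf                       # flat prior on (0, 1)
        return survived * np.log(phi) + (released - survived) * np.log(1.0 - phi)

    phi, chain = 0.5, []
    for _ in range(20000):
        prop = phi + rng.normal(0, 0.05)         # random-walk candidate
        if np.log(rng.uniform()) < log_post(prop) - log_post(phi):
            phi = prop
        chain.append(phi)

    chain = np.array(chain[5000:])               # discard burn-in
    print(f"posterior mean phi ~ {chain.mean():.3f} (raw fraction {survived/released:.3f})")
    ```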

  17. Shock Layer Radiation Modeling and Uncertainty for Mars Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth

    2012-01-01

    A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5×10⁻⁵ to 5×10⁻⁴ kg/m³ and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the radiative heating for the Mars Pathfinder probe is predicted to be nearly 20 W/cm². In contrast to previous studies, this value is shown to be significant relative to the convective heating.
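
    The total-uncertainty figure quoted above follows from a root-sum-square combination of the flowfield and radiation parametric uncertainties; the short check below reproduces the quoted endpoints approximately, using the percentages stated in the abstract.

    ```python
    # Root-sum-square combination of flowfield and radiation uncertainty percentages,
    # using the endpoint values quoted in the abstract.
    import math

    cases = {"low end": (50.0, 18.0), "high end": (200.0, 167.0)}
    for name, (flow_pct, rad_pct) in cases.items():
        print(f"{name}: sqrt({flow_pct}^2 + {rad_pct}^2) = {math.hypot(flow_pct, rad_pct):.0f}%")
    ```

    This gives roughly 53% and 261%, consistent with the 50 to 260% range stated above.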

  18. COMPARISON OF IN VIVO DERIVED AND SCALED IN VITRO METABOLIC RATE CONSTANTS FOR SOME VOLATILE ORGANIC COMPOUNDS (VOCS)

    EPA Science Inventory

    The reliability of physiologically based pharmacokinetic (PBPK) models is directly related to the accuracy of the metabolic rate parameters used as model inputs. When metabolic rate parameters derived from in vivo experiments are unavailable, they can be estimated from in vitro d...

  19. Dependence of subject-specific parameters for a fast helical CT respiratory motion model on breathing rate: an animal study

    NASA Astrophysics Data System (ADS)

    O'Connell, Dylan; Thomas, David H.; Lamb, James M.; Lewis, John H.; Dou, Tai; Sieren, Jered P.; Saylor, Melissa; Hofmann, Christian; Hoffman, Eric A.; Lee, Percy P.; Low, Daniel A.

    2018-02-01

    The aim was to determine whether the parameters relating lung tissue displacement to a breathing surrogate signal in a previously published respiratory motion model vary with the rate of breathing during image acquisition. An anesthetized pig was imaged using multiple fast helical scans to sample the breathing cycle with simultaneous surrogate monitoring. Three datasets were collected while the animal was mechanically ventilated with different respiratory rates: 12 bpm (breaths per minute), 17 bpm, and 24 bpm. Three sets of motion model parameters describing the correspondences between surrogate signals and tissue displacements were determined. The model error was calculated individually for each dataset, as well as for pairs of parameters and surrogate signals from different experiments. The values of one model parameter, a vector field denoted α which related tissue displacement to surrogate amplitude, determined for each experiment were compared. The mean model error of the three datasets was 1.00 ± 0.36 mm with a 95th percentile value of 1.69 mm. The mean error computed from all combinations of parameters and surrogate signals from different datasets was 1.14 ± 0.42 mm with a 95th percentile of 1.95 mm. The mean difference in α over all pairs of experiments was 4.7% ± 5.4%, and the 95th percentile was 16.8%. The mean angle between pairs of α was 5.0 ± 4.0 degrees, with a 95th percentile of 13.2 degrees. The motion model parameters were largely unaffected by changes in the breathing rate during image acquisition. The mean error associated with mismatched sets of parameters and surrogate signals was 0.14 mm greater than the error achieved when using parameters and surrogate signals acquired with the same breathing rate, while maximum respiratory motion was 23.23 mm on average.

  20. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
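
    The random-effects idea argued for above can be illustrated with a minimal empirical-Bayes calculation: noisy annual survival estimates are shrunk toward a common mean by an amount governed by estimated hyperparameters. The numbers are simulated, and a quasi-likelihood or fully Bayesian treatment of the kind the paper discusses would refine this.

    ```python
    # Empirical-Bayes shrinkage of noisy annual survival estimates toward a
    # hyperparameter mean; a minimal stand-in for the random effects models discussed.
    import numpy as np

    rng = np.random.default_rng(5)
    true_mu, true_tau, se, years = 0.6, 0.05, 0.08, 20
    truth = rng.normal(true_mu, true_tau, years)        # true annual survival rates
    est = rng.normal(truth, se)                         # noisy annual estimates

    mu_hat = est.mean()
    tau2_hat = max(est.var(ddof=1) - se**2, 0.0)        # between-year variance (method of moments)
    w = tau2_hat / (tau2_hat + se**2)                   # shrinkage weight
    shrunk = mu_hat + w * (est - mu_hat)

    print(f"shrinkage weight {w:.2f}")
    print("rmse raw   :", round(float(np.sqrt(np.mean((est - truth)**2))), 3))
    print("rmse shrunk:", round(float(np.sqrt(np.mean((shrunk - truth)**2))), 3))
    ```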

  1. Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

    DOE PAGES

    Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...

    2013-02-23

    We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in the predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
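
    The effect of parameter correlation on a predicted rate constant can be sketched with a simple Monte Carlo comparison: sampling (ln A, Ea) independently versus with a strong positive correlation of the kind produced by fitting Arrhenius expressions to data. The nominal values, uncertainties, and correlation below are illustrative, and the temperature exponent is omitted for brevity.

    ```python
    # Propagate uncertainty in Arrhenius parameters k(T) = A*exp(-Ea/(R*T)),
    # comparing independent vs. correlated sampling of (ln A, Ea); values illustrative.
    import numpy as np

    R, T = 8.314, 1200.0
    rng = np.random.default_rng(6)
    mean = np.array([np.log(1.0e13), 150e3])        # nominal (ln A, Ea)
    sd = np.array([0.5, 10e3])

    def ln_k_spread(corr, n=20000):
        cov = np.diag(sd**2)
        cov[0, 1] = cov[1, 0] = corr * sd[0] * sd[1]
        lnA, Ea = rng.multivariate_normal(mean, cov, size=n).T
        return (lnA - Ea / (R * T)).std()

    for corr in (0.0, 0.9):
        print(f"corr = {corr:3.1f}: std of ln k = {ln_k_spread(corr):.2f}")
    ```

    With a positive correlation the two parameters partly compensate, so the spread in ln k shrinks; ignoring such correlations would misstate the predicted uncertainty.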

  2. The Role of Parvalbumin, Sarcoplasmatic Reticulum Calcium Pump Rate, Rates of Cross-Bridge Dynamics, and Ryanodine Receptor Calcium Current on Peripheral Muscle Fatigue: A Simulation Study

    PubMed Central

    Neumann, Verena

    2016-01-01

    A biophysical model of the excitation-contraction pathway, which has previously been validated for slow-twitch and fast-twitch skeletal muscles, is employed to investigate key biophysical processes leading to peripheral muscle fatigue. Special emphasis hereby is on investigating how the model's original parameter sets can be interpolated such that realistic behaviour with respect to contraction time and fatigue progression can be obtained for a continuous distribution of the model's parameters across the muscle units, as found for the functional properties of muscles. The parameters are divided into 5 groups describing (i) the sarcoplasmatic reticulum calcium pump rate, (ii) the cross-bridge dynamics rates, (iii) the ryanodine receptor calcium current, (iv) the rates of binding of magnesium and calcium ions to parvalbumin and corresponding dissociations, and (v) the remaining processes. The simulations reveal that the first two parameter groups are sensitive to contraction time but not fatigue, the third parameter group affects both considered properties, and the fourth parameter group is only sensitive to fatigue progression. Hence, within the scope of the underlying model, further experimental studies should investigate parvalbumin dynamics and the ryanodine receptor calcium current to enhance the understanding of peripheral muscle fatigue. PMID:27980606

  3. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
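
    A toy version of the statistic makes the definitions concrete: for phase-known recombination counts, LOD(theta) = log10[L(theta)/L(0.5)], and the maximum LOD is its maximum over a grid of recombination fractions. The penetrance and phenocopy parameters central to the paper's comparison are not modeled in this sketch, and the counts are hypothetical.

    ```python
    # Toy maximum LOD score for phase-known linkage data: r recombinants in n meioses.
    import numpy as np

    n, r = 40, 8                                     # hypothetical counts
    theta = np.linspace(0.001, 0.5, 500)
    lod = r * np.log10(theta) + (n - r) * np.log10(1 - theta) - n * np.log10(0.5)

    i = lod.argmax()
    print(f"max LOD = {lod[i]:.2f} at theta = {theta[i]:.3f}")
    ```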

  4. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  5. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  6. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear viscoelastic material using the three-element Zener model available in the PAMCRASH™ explicit finite element software. In the GA based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short term shear modulus and long term shear modulus) are thus identified at strain rates of 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscles. Extracted optimal parameters from this study are comparable with reported parameters in the literature. Bulk modulus and short term shear modulus are found to be more influential in predicting the stress-strain response than long term shear modulus for the considered strain rates. Variations within the set of parameters identified at different strain rates indicate the need for a new or improved material model, capable of capturing the strain rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.

  7. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those from the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
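
    One way to picture such a spectrum, under stated assumptions, is the secant apparent modulus of a one-arm Generalized Maxwell (Zener) model loaded at constant strain rate and read off at a fixed strain. The paper's E_app is derived from the tangent modulus, so the closed form below is an approximation of the idea rather than the authors' exact function, and all parameter values are invented.

    ```python
    # Secant apparent modulus vs. strain rate for a one-arm Generalized Maxwell model
    # at constant strain rate, evaluated at strain eps_star, with a curve_fit recovery
    # of (E_inf, E1, tau1) from synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    def e_app(rate, E_inf, E1, tau1, eps_star=0.1):
        sigma = E_inf * eps_star + E1 * tau1 * rate * (1 - np.exp(-eps_star / (tau1 * rate)))
        return sigma / eps_star

    rng = np.random.default_rng(7)
    rates = np.logspace(-3, 1, 12)                                    # strain rates, 1/s
    data = e_app(rates, 5.0, 10.0, 0.5) * (1 + 0.02 * rng.normal(size=rates.size))

    popt, _ = curve_fit(e_app, rates, data, p0=[1.0, 5.0, 0.1], bounds=(1e-6, np.inf))
    print("fitted (E_inf, E1, tau1):", np.round(popt, 2))
    ```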

  8. Reconstruction of interaction rate in holographic dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in

    2016-11-01

    The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate has been reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.

  9. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques such as descent optimization methods can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related, with a shorter memory lag than the QT interval.

  10. Quadratic semiparametric Von Mises calculus

    PubMed Central

    Robins, James; Li, Lingling; Tchetgen, Eric

    2009-01-01

    We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a rate slower than n^(-1/2). In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487

  11. Bringing metabolic networks to life: convenience rate law and thermodynamic constraints

    PubMed Central

    Liebermeister, Wolfram; Klipp, Edda

    2006-01-01

    Background: Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results: We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion: Convenience kinetics can be used to translate a biochemical network – manually or automatically – into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
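
    The rate law itself is compact enough to write down; for a reaction A + B <=> C it takes the form below, following the convenience kinetics convention of normalizing each concentration by its Michaelis-like constant. The numerical values are placeholders for illustration.

    ```python
    # Convenience rate law for A + B <=> C; parameter values are illustrative.
    def convenience_rate(a, b, c, E, kcat_f, kcat_r, Ka, Kb, Kc):
        """a, b, c: concentrations; E: enzyme level; Ka, Kb, Kc: Michaelis-like constants."""
        at, bt, ct = a / Ka, b / Kb, c / Kc                  # normalized concentrations
        numerator = kcat_f * at * bt - kcat_r * ct
        denominator = (1 + at) * (1 + bt) + (1 + ct) - 1
        return E * numerator / denominator

    print(convenience_rate(a=1.0, b=0.5, c=0.1, E=0.01,
                           kcat_f=100.0, kcat_r=10.0, Ka=0.2, Kb=0.3, Kc=0.5))
    ```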

  12. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast the mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because, apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  13. Quality of traffic flow on urban arterial streets and its relationship with safety.

    PubMed

    Dixit, Vinayak V; Pande, Anurag; Abdel-Aty, Mohamed; Das, Abhishek; Radwan, Essam

    2011-09-01

    The two-fluid model for vehicular traffic flow explains the traffic on arterials as a mix of stopped and running vehicles. It describes the relationship between the vehicles' running speed and the fraction of running vehicles. The two parameters of the model essentially represent 'free flow' travel time and the level of interaction among vehicles, and may be used to evaluate urban roadway networks and urban corridors with partially limited access. These parameters are influenced not only by the roadway characteristics but also by behavioral aspects of the driver population, e.g., aggressiveness. Two-fluid models are estimated for eight arterial corridors in Orlando, FL for this study. The parameters of the two-fluid model were used to evaluate corridor level operations and the correlations of these parameters with rates of crashes having different types/severity. Significant correlations were found between two-fluid parameters and rear-end and angle crash rates. The rate of severe crashes was also found to be significantly correlated with the model parameter signifying inter-vehicle interactions. While there is need for further analysis, the findings suggest that the two-fluid model parameters may have potential as surrogate measures for traffic safety on urban arterial streets. Copyright © 2011 Elsevier Ltd. All rights reserved.
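
    The two-fluid relation can be fitted from trip-time data with a single log-log regression; the sketch below uses the standard form T_r = T_m^(1/(n+1)) * T^(n/(n+1)) on synthetic travel times, where T_m stands for the free-flow travel time per unit distance and n for the level of inter-vehicle interaction. The data are invented, not the Orlando corridors.

    ```python
    # Fit two-fluid model parameters (T_m, n) from running time T_r vs. total trip
    # time T per unit distance, using log-log regression on synthetic data.
    import numpy as np

    rng = np.random.default_rng(8)
    T_m_true, n_true = 2.0, 1.5                           # min/mile, dimensionless
    T = rng.uniform(2.5, 10.0, 120)                       # total trip time per mile
    T_r = (T_m_true**(1 / (n_true + 1)) * T**(n_true / (n_true + 1))
           * np.exp(rng.normal(0, 0.03, T.size)))         # running time per mile, with noise

    slope, intercept = np.polyfit(np.log(T), np.log(T_r), 1)
    n_est = slope / (1 - slope)                           # slope = n/(n+1)
    T_m_est = np.exp(intercept * (n_est + 1))             # intercept = ln(T_m)/(n+1)
    print(f"n ~ {n_est:.2f}, T_m ~ {T_m_est:.2f} min/mile")
    ```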

  14. Sensitivity analysis of the parameters of an HIV/AIDS model with condom campaign and antiretroviral therapy

    NASA Astrophysics Data System (ADS)

    Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy

    2017-12-01

    In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which a condom campaign and antiretroviral therapy are both important for disease management. We calculate the effective reproduction number using the next generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. A sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with the asymptomatic infective, the progression rate from the asymptomatic infective to the pre-AIDS infective, the transmission rate for contact with the pre-AIDS infective, the ARV therapy rate, the proportion of the susceptible receiving the condom campaign and the proportion of the pre-AIDS infective receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
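
    A typical way to report such results is the normalized forward sensitivity index, S_p = (dR_e/dp)*(p/R_e), evaluated here by finite differences. The reproduction-number expression below is a generic placeholder, not the HIV/AIDS model of the paper, so only the mechanics of the calculation carry over.

    ```python
    # Normalized forward sensitivity indices of a placeholder effective reproduction
    # number with respect to its parameters, via central finite differences.
    def R_e(p):
        return (1 - p["eps"]) * p["beta"] / p["gamma"]   # generic SIR-style placeholder

    params = {"beta": 0.4, "gamma": 0.1, "eps": 0.3}     # transmission, removal, condom efficacy
    base = R_e(params)
    for name, value in params.items():
        h = 1e-6 * value
        up, down = dict(params), dict(params)
        up[name] = value + h
        down[name] = value - h
        dR = (R_e(up) - R_e(down)) / (2 * h)
        print(f"S_{name} = {dR * value / base:+.2f}")
    ```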

  15. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
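
    The identification problem can be sketched as simulating the Bouc-Wen restoring force for candidate parameters and minimizing the squared error against measured force. SciPy's differential evolution is used below purely as a stand-in optimizer, since the Modified Firefly Algorithm itself is not reproduced; the excitation, bounds, and "measurements" are synthetic.

    ```python
    # Bouc-Wen hysteresis simulation and a least-squares identification objective,
    # minimized here with differential evolution as a stand-in for the firefly method.
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0, 10, 501)
    x = 0.05 * np.sin(2 * np.pi * 0.5 * t)            # imposed displacement
    dx = np.gradient(x, t)
    dt = t[1] - t[0]

    def bouc_wen_force(params):
        alpha, k, A, beta, gamma, n = params
        z = np.zeros_like(x)
        for i in range(1, len(t)):                    # explicit Euler on the z evolution law
            dz = (A * dx[i-1] - beta * abs(dx[i-1]) * abs(z[i-1])**(n - 1) * z[i-1]
                  - gamma * dx[i-1] * abs(z[i-1])**n)
            z[i] = z[i-1] + dz * dt
        return alpha * k * x + (1 - alpha) * k * z

    true = (0.4, 1000.0, 1.0, 50.0, 50.0, 1.5)
    measured = bouc_wen_force(true) + np.random.default_rng(9).normal(0, 0.1, t.size)

    def objective(p):
        return np.sum((bouc_wen_force(p) - measured)**2)

    bounds = [(0, 1), (100, 5000), (0.5, 2), (1, 200), (1, 200), (1, 3)]
    result = differential_evolution(objective, bounds, maxiter=20, seed=0, polish=False)
    print("identified:", np.round(result.x, 2), " true:", true)
    ```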

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.

    In far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this paper is to study the effects of these variables on three model waste glasses (SON68, ISG, AFCI). To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(RT) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. The results from these tests were then used to parameterize a kinetic rate model based on transition state theory. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Discrepancies in the absolute dissolution rates as compared to those obtained using other test methods are discussed. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies. The results were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), which is capable of providing a robust uncertainty analysis. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, the effect of glass composition on the rate parameter values could possibly be obtained. This would allow for the possibility of predicting the forward dissolution rate of a glass based solely on its composition.
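    A sketch of the kind of transition-state-theory rate law that is typically parameterized from SPFT data is shown below: far from equilibrium the chemical-affinity term is close to one, so the forward rate reduces to r_f = k0 * 10^(eta*pH) * exp(-Ea/(R*T)), which is linear in log10(rate) versus pH and 1/T. All parameter values and data here are hypothetical.

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    def forward_rate(pH, T_K, log10_k0, eta, Ea):
        """Forward dissolution rate, r_f = k0 * 10**(eta*pH) * exp(-Ea/(R*T));
        far from equilibrium the affinity term is ~1 and is omitted."""
        return 10.0 ** log10_k0 * 10.0 ** (eta * pH) * np.exp(-Ea / (R * T_K))

    def fit_rate_parameters(pH, T_K, rates):
        """Linear multivariate regression of log10(rate) on pH and 1/T:
        log10 r = log10 k0 + eta*pH - Ea/(ln(10)*R*T)."""
        X = np.column_stack([np.ones_like(pH), pH, 1.0 / T_K])
        coef, *_ = np.linalg.lstsq(X, np.log10(rates), rcond=None)
        log10_k0, eta, slope = coef
        Ea = -slope * np.log(10.0) * R          # J/mol
        return log10_k0, eta, Ea

    # Hypothetical SPFT conditions (pH at room temperature, T in K)
    pH  = np.array([9.0, 10.0, 11.0, 12.0, 9.0, 11.0])
    T_K = np.array([296.0, 296.0, 296.0, 296.0, 363.0, 363.0])
    rates = forward_rate(pH, T_K, log10_k0=7.9, eta=0.4, Ea=80e3)
    print(fit_rate_parameters(pH, T_K, rates))   # recovers the input parameters
    ```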

  17. The heuristic value of redundancy models of aging.

    PubMed

    Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon

    2015-11-01

    Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight in the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
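    A minimal sketch of a redundancy (reliability-theory) mortality model follows: the organism is treated as m blocks in series, each containing n redundant elements that fail independently at a constant rate k; with n = 1 the hazard is age-independent, while larger n produces the steep, Gompertz-like rise in mortality at early ages. The parameter values are illustrative, not fitted to the Drosophila data.

    ```python
    import numpy as np

    def mortality_rate(t, m, n, k):
        """Hazard of m independent blocks in series, each made of n redundant
        elements with constant failure rate k. Block survival
        S(t) = 1 - (1 - exp(-k t))**n, density f(t) = n k exp(-k t) *
        (1 - exp(-k t))**(n-1), and the organism hazard is m * f / S."""
        q = 1.0 - np.exp(-k * t)          # probability a single element has failed
        f = n * k * np.exp(-k * t) * q ** (n - 1)
        S = 1.0 - q ** n
        return m * f / S

    ages = np.array([20.0, 40.0, 60.0, 80.0])
    for n in (1, 3, 5):                   # more redundancy -> steeper actuarial aging
        print(n, mortality_rate(ages, m=1000, n=n, k=1e-3))
    ```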

  18. Refining Reproductive Parameters for Modelling Sustainability and Extinction in Hunted Primate Populations in the Amazon

    PubMed Central

    Bowler, Mark; Anderson, Matt; Montes, Daniel; Pérez, Pedro; Mayor, Pedro

    2014-01-01

    Primates are frequently hunted in Amazonia. Assessing the sustainability of hunting is essential to conservation planning. The most-used sustainability model, the ‘Production Model’, and more recent spatial models, rely on basic reproductive parameters for accuracy. These parameters are often crudely estimated. To date, parameters used for the Amazon’s most-hunted primate, the woolly monkey (Lagothrix spp.), come from captive populations in the 1960s, when captive births were rare. Furthermore, woolly monkeys have since been split into five species. We provide reproductive parameters calculated by examining the reproductive organs of female Poeppig’s woolly monkeys (Lagothrix poeppigii), collected by hunters as part of their normal subsistence activity. Production was 0.48–0.54 young per female per year, with an interbirth interval of 22.3 to 25.2 months, similar to parameters from captive populations. However, breeding was seasonal, which imposes limits on the maximum reproductive rate attainable. We recommend the use of spatial models over the Production Model, since they are less sensitive to error in estimated reproductive rates. Further refinements to reproductive parameters are needed for most primate taxa. Methods like ours verify the suitability of captive reproductive rates for sustainability analysis and population modelling for populations under differing conditions of hunting pressure and seasonality. Without such research, population modelling is based largely on guesswork. PMID:24714614

  19. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.

  20. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
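    The two-level structure can be mimicked by a forward simulation, sketched below in a deliberately simplified form: latent infections per year are Poisson, and each still-undiagnosed person is tested in each subsequent year with probability equal to the testing rate, giving a geometric diagnosis delay. The sketch ignores progression to AIDS, the temporal priors and the inference step, and all numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_surveillance(lmbda, p_test, n_years):
        """Level 1: latent HIV infections in year t ~ Poisson(lmbda[t]).
        Level 2: an undiagnosed infected person tests with probability p_test
        in each later year, so the diagnosis delay is geometric; people not
        diagnosed by the final year remain undiagnosed."""
        diagnoses = np.zeros(n_years, dtype=int)
        undiagnosed = 0
        for t in range(n_years):
            n_inf = rng.poisson(lmbda[t])                    # level 1
            delays = rng.geometric(p_test, size=n_inf) - 1   # 0, 1, 2, ... years
            diag_years = t + delays
            diagnoses += np.bincount(diag_years[diag_years < n_years],
                                     minlength=n_years)
            undiagnosed += int(np.sum(diag_years >= n_years))
        return diagnoses, undiagnosed

    lmbda = np.full(10, 40000.0)     # hypothetical annual incidence intensity
    print(simulate_surveillance(lmbda, p_test=0.3, n_years=10))
    ```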

  1. Identification and synthetic modeling of factors affecting American black duck populations

    USGS Publications Warehouse

    Conroy, Michael J.; Miller, Mark W.; Hines, James E.

    2002-01-01

    We reviewed the literature on factors potentially affecting the population status of American black ducks (Anas rubripes). Our review suggests that there is some support for the influence of 4 major, continental-scope factors in limiting or regulating black duck populations: 1) loss in the quantity or quality of breeding habitats; 2) loss in the quantity or quality of wintering habitats; 3) harvest; and 4) interactions (competition, hybridization) with mallards (Anas platyrhynchos) during the breeding and/or wintering periods. These factors were used as the basis of an annual life cycle model in which reproduction rates and survival rates were modeled as functions of the above factors, with parameters of the model describing the strength of these relationships. Variation in the model parameter values allows for consideration of scientific uncertainty as to the degree each of these factors may be contributing to declines in black duck populations, and thus allows for the investigation of the possible effects of management (e.g., habitat improvement, harvest reductions) under different assumptions. We then used available, historical data on black duck populations (abundance, annual reproduction rates, and survival rates) and possible driving factors (trends in breeding and wintering habitats, harvest rates, and abundance of mallards) to estimate model parameters. Our estimated reproduction submodel included parameters describing negative density feedback of black ducks, positive influence of breeding habitat, and negative influence of mallard densities; our survival submodel included terms for positive influence of winter habitat on reproduction rates, and negative influences of black duck density (i.e., compensation to harvest mortality). Individual models within each group (reproduction, survival) involved various combinations of these factors, and each was given an information theoretic weight for use in subsequent prediction. The reproduction model with highest AIC weight (0.70) predicted black duck age ratios increasing as a function of decreasing mallard abundance and increasing acreage of breeding habitat; all models considered involved negative density dependence for black ducks. The survival model with highest AIC weight (0.51) predicted nonharvest survival increasing as a function of increasing acreage of wintering habitat and decreasing harvest rates (additive mortality); models involving compensatory mortality effects received ≈0.12 total weight, vs. 0.88 for additive models. We used the combined model, together with our historical data set, to perform a series of 1-year population forecasts, similar to those that might be performed under adaptive management. Initial model forecasts over-predicted observed breeding populations by ≈25%. Least-squares calibration reduced the bias to ≈0.5% (underprediction). After calibration, model-averaged predictions over the 16 alternative models (4 reproduction × 4 survival, weighted by AIC model weights) explained 67% of the variation in annual breeding population abundance for black ducks, suggesting that it might have utility as a predictive tool in adaptive management. We investigated the effects of statistical uncertainty in parameter values on predicted population growth rates for the combined annual model, via sensitivity analyses. Parameter sensitivity varied in relation to the parameter values over the estimated confidence intervals, and in relation to harvest rates and mallard abundance.
Forecasts of black duck abundance were extremely sensitive to variation in parameter values for the coefficients for breeding and wintering habitat effects. Model-averaged forecasts of black duck abundance were also sensitive to changes in harvest rate and mallard abundance, with rapid declines in black duck abundance predicted for a range of harvest rates and mallard abundance higher than current levels of either factor, but easily envisaged, particularly given current rates of growth for mallard populations. Because of concerns about sensitivity to habitat coefficients, and particularly in light of deficiencies in the historical data used to estimate these parameters, we developed a simplified model that excludes habitat effects. We also developed alternative models involving a calibration adjustment for reproduction rates, survival rates, or neither. Calibration of survival rates performed best (AIC weight 0.59, % BIAS = -0.280, R2=0.679), with reproduction calibration somewhat inferior (AIC weight 0.41, % BIAS = -0.267, R2=0.672); models without calibration received virtually no AIC weight and were discarded. We recommend that the simplified model set (4 biological models × 2 alternative calibration factors) be retained as the best working set of alternative models for research and management. Finally, we provide some preliminary guidance for the development of adaptive harvest management for black ducks, using our working set of models.
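    The model-averaging step described above can be reproduced with Akaike weights, w_i proportional to exp(-Delta_i/2) with Delta_i = AIC_i - min_j AIC_j; the sketch below applies them to hypothetical one-year forecasts from four candidate models.

    ```python
    import numpy as np

    def akaike_weights(aic):
        """Akaike weights: w_i = exp(-0.5*Delta_i) / sum_j exp(-0.5*Delta_j)."""
        delta = np.asarray(aic) - np.min(aic)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    def model_averaged_forecast(forecasts, aic):
        """Weight each model's one-year forecast by its Akaike weight."""
        return float(np.dot(akaike_weights(aic), forecasts))

    # Hypothetical one-year breeding-population forecasts (thousands of birds)
    forecasts = np.array([385.0, 402.0, 377.0, 395.0])
    aic       = np.array([1012.3, 1010.1, 1015.8, 1011.0])
    print(akaike_weights(aic), model_averaged_forecast(forecasts, aic))
    ```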

  2. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    PubMed

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential for the assessment of internal doses and a radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr); one was reported by the International Commission on Radiological Protection (ICRP), and one was developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the model parameter with the most influence on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper large intestine contributes substantially at later times, i.e., after day 300. The alimentary tract absorption factor (fA) mostly influences the integral of radioactivity of Zr in most source organs after ingestion.

  3. Models for estimating photosynthesis parameters from in situ production profiles

    NASA Astrophysics Data System (ADS)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work we argue that the choice of the primary production model should reflect the available data and these models should be data driven regarding parameter estimation.
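    As an example of the kind of photosynthesis-irradiance function referred to here, the sketch below fits a saturating, photoinhibition-free form P(I) = Pm * (1 - exp(-alpha*I/Pm)) to hypothetical production-versus-irradiance data, recovering the initial slope alpha and the assimilation number Pm.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pi_curve(I, alpha, Pm):
        """Saturating photosynthesis-irradiance function without photoinhibition:
        P(I) = Pm * (1 - exp(-alpha * I / Pm))."""
        return Pm * (1.0 - np.exp(-alpha * I / Pm))

    # Hypothetical biomass-normalized production measurements
    I = np.array([5.0, 20.0, 50.0, 100.0, 200.0, 400.0, 800.0])   # irradiance
    P = np.array([0.4, 1.5, 3.0, 4.4, 5.4, 5.8, 6.0])             # production
    (alpha_hat, Pm_hat), _ = curve_fit(pi_curve, I, P, p0=[0.05, 6.0])
    print(alpha_hat, Pm_hat)
    ```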

  4. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  5. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    PubMed

    Folly, Walter Sydney Dutra

    2011-01-01

    Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.

  6. Autoxidation of jet fuels: Implications for modeling and thermal stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heneghan, S.P.; Chin, L.P.

    1995-05-01

    The study and modeling of jet fuel thermal deposition are dependent on an understanding of, and ability to model, the oxidation chemistry. Global modeling of jet fuel oxidation is complicated by several facts. First, liquid jet fuels are hard to heat rapidly and fuels may begin to oxidize during the heat-up phase. Non-isothermal conditions can be accounted for, but the evaluation of temperature versus time is difficult. Second, the jet fuels are a mixture of many compounds that may oxidize at different rates. Third, jet fuel oxidation may be autoaccelerating through the decomposition of the oxidation products. Attempts to model the deposition of jet fuels in two different flowing systems showed the inadequacy of a simple two-parameter global Arrhenius oxidation rate constant. Discarding previous assumptions about the form of the global rate constants results in a four-parameter model (which accounts for autoacceleration). This paper discusses the source of the rate constant form and the meaning of each parameter. One of these parameters is associated with the pre-exponential of the autoxidation chain length. This value is expected to vary inversely with thermal stability. We calculate the parameters for two different fuels and discuss the implications for the thermal and oxidative stability of the fuels. Finally, we discuss the effect of non-Arrhenius behavior on current deposition modeling efforts.

  7. Effects of reaction-kinetic parameters on modeling reaction pathways in GaN MOVPE growth

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Zuo, Ran; Zhang, Guoyi

    2017-11-01

    In the modeling of the reaction-transport process in GaN MOVPE growth, the selection of kinetic parameters (activation energy Ea and pre-exponential factor A) for gas reactions is quite uncertain, which causes uncertainties in both the gas reaction path and the growth rate. In this study, numerical modeling of the reaction-transport process for GaN MOVPE growth in a vertical rotating disk reactor is conducted with varying kinetic parameters for the main reaction paths. By comparing the molar concentrations of the major Ga-containing species and the growth rates, the effects of the kinetic parameters on the gas reaction paths are determined. The results show that, depending on the values of the kinetic parameters, the gas reaction path may be dominated either by the adduct/amide formation path, by the TMG pyrolysis path, or by both. Although the reaction path varies with different kinetic parameters, the predicted growth rates change only slightly because the total transport rate of Ga-containing species to the substrate changes only slightly with reaction path. This explains why previous authors using different chemical models predicted growth rates close to the experimental values. By varying the pre-exponential factor for the amide trimerization, it is found that the more trimers that are formed, the further the predicted growth rates fall below the experimental value, which indicates that trimers are poor growth precursors because of the thermal diffusion effect caused by the high temperature gradient. The order of the contributions of the major species to the growth rate is found to be: pyrolysis species > amides > trimers. The study also shows that radical reactions have little effect on the gas reaction path because of the generation and depletion of H radicals in the chain reactions when NH2 is considered as the end species.

  8. Knowledge transmission model with differing initial transmission and retransmission process

    NASA Astrophysics Data System (ADS)

    Wang, Haiying; Wang, Jun; Small, Michael

    2018-10-01

    Knowledge transmission is a cyclic dynamic diffusion process. The rate of acceptance of knowledge differs depending on whether or not the recipient has previously held the knowledge. In this paper, the knowledge transmission process is divided into an initial and a retransmission procedure, each with its own transmission and self-learning parameters. Based on an epidemic spreading model, we propose a naive-evangelical-agnostic (VEA) knowledge transmission model and derive mean-field equations to describe the dynamics of knowledge transmission in homogeneous networks. Theoretical analysis identifies a criterion for the persistence of knowledge, i.e., the reproduction number R0 depends on the smaller of the effective parameters of the initial and retransmission processes. Moreover, the final size of evangelical individuals is related only to the retransmission process parameters. Numerical simulations validate the theoretical analysis. Furthermore, the simulations indicate that increasing the initial transmission parameters, including the first-transmission and self-learning rates of naive individuals, can effectively accelerate knowledge transmission but has no effect on the final size of evangelical individuals. In contrast, the retransmission parameters, including the retransmission and self-learning rates of agnostic individuals, have a significant effect on the outcome of knowledge transmission, i.e., the larger these parameters, the greater the final density of evangelical individuals.
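    A minimal mean-field sketch of a three-compartment naive (V) / evangelical (E) / agnostic (A) system with separate initial-transmission and retransmission parameters is given below; the equations and rate values are illustrative assumptions and may differ from the paper's exact model.

    ```python
    from scipy.integrate import solve_ivp

    def vea_rhs(t, y, beta1, alpha1, beta2, alpha2, delta):
        """beta1, alpha1: initial transmission and self-learning rates (naive);
        beta2, alpha2: retransmission and self-learning rates (agnostic);
        delta: forgetting rate from evangelical back to agnostic."""
        V, E, A = y
        dV = -beta1 * V * E - alpha1 * V
        dA = delta * E - beta2 * A * E - alpha2 * A
        dE = -dV - dA                     # total density is conserved
        return [dV, dE, dA]

    sol = solve_ivp(vea_rhs, (0.0, 200.0), [0.99, 0.01, 0.0],
                    args=(0.4, 0.01, 0.2, 0.005, 0.05))
    print(sol.y[:, -1])                   # final densities of V, E, A
    ```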

  9. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua E.; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    2016-09-01

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
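    Independently of the stochastic ensemble approach used here, stiff rate equations of this kind are commonly handled with implicit integrators; the toy sketch below (a three-compartment caricature with widely separated rate constants, not the paper's full model) shows the standard SciPy route.

    ```python
    from scipy.integrate import solve_ivp

    def rhs(t, y, k_slow, k_fast):
        """Toy healthy -> apoptotic -> necrotic cascade; the large ratio
        between k_fast and k_slow is what makes the system stiff."""
        H, A, N = y
        return [-k_slow * H,
                k_slow * H - k_fast * A,
                k_fast * A]

    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0], args=(0.05, 50.0),
                    method="Radau")       # implicit solver suited to stiff ODEs
    print(sol.y[:, -1])                   # final fractions of H, A, N
    ```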

  10. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.

  11. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model where individuals can be infected repeatedly is proposed in this paper. The infection rate decreases with the number of times an individual has been infected before. This phenomenon may be caused by immunity or heightened alertness of individuals. We introduce a new parameter, called the decay factor, to quantify the decrease in infection rate. Through this parameter, our model bridges the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. In addition, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
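    A well-mixed Monte Carlo caricature of such a model is sketched below: an individual's reinfection probability is scaled by decay**k after k previous infections, so decay = 1 behaves like SIS and decay = 0 like SIR. Birth and death are omitted and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(N=10000, beta0=0.3, decay=0.6, gamma=0.2, steps=200, i0=0.01):
        """Discrete-time, well-mixed simulation with infection probability
        beta0 * decay**k * (infected fraction) for an individual infected k
        times before; infected individuals recover with probability gamma."""
        infected = rng.random(N) < i0            # currently infectious?
        times_infected = infected.astype(int)    # per-individual infection count
        prevalence = []
        for _ in range(steps):
            frac_inf = infected.mean()
            p_inf = beta0 * decay ** times_infected * frac_inf
            new_inf = (~infected) & (rng.random(N) < p_inf)
            recovered = infected & (rng.random(N) < gamma)
            infected = (infected | new_inf) & ~recovered
            times_infected += new_inf
            prevalence.append(infected.mean())
        return np.array(prevalence)

    print(simulate()[-1])        # final infected density
    ```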

  12. Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model

    NASA Astrophysics Data System (ADS)

    Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd

    2017-09-01

    Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation have been attracting significant interest from researchers in recent years because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among all forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses only on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform using Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained using these forecasted parameter values. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality rates. The results indicate that both models provide better results for the male population. However, for the elderly female population, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
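    For the Lee-Carter part, a minimal sketch of the standard fitting route is shown below: ln m(x,t) = a_x + b_x*k_t is estimated by an SVD of the centred log-rates, and k_t is forecast as a random walk with drift (the ARIMA(0,1,0)-with-drift special case often used with this model). The death-rate matrix is synthetic, not the Malaysian data.

    ```python
    import numpy as np

    def fit_lee_carter(m):
        """Fit ln m(x,t) = a_x + b_x * k_t by SVD of the centred log-rates;
        m is an (ages x years) matrix of central death rates."""
        log_m = np.log(m)
        a = log_m.mean(axis=1)
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()               # normalise so sum(b) = 1
        k = s[0] * Vt[0] * U[:, 0].sum()          # keeps the product b*k unchanged
        return a, b, k

    def forecast_k(k, horizon):
        """Random-walk-with-drift forecast of the period index k_t."""
        drift = (k[-1] - k[0]) / (len(k) - 1)
        return k[-1] + drift * np.arange(1, horizon + 1)

    # Synthetic death-rate matrix: 5 age groups x 20 years of improving mortality
    rng = np.random.default_rng(2)
    m = np.exp(np.linspace(-7, -2, 5)[:, None]
               - 0.02 * np.arange(20) + 0.01 * rng.standard_normal((5, 20)))
    a, b, k = fit_lee_carter(m)
    log_m_future = a[:, None] + b[:, None] * forecast_k(k, 10)[None, :]
    print(np.exp(log_m_future)[:, 0])             # forecast rates one year ahead
    ```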

  13. A maximum entropy fracture model for low and high strain-rate fracture in Tin-Silver-Copper alloys

    NASA Astrophysics Data System (ADS)

    Chan, Dennis K.

    SnAgCu solder alloys exhibit significant rate-dependent constitutive behavior. Solder joints made of these alloys exhibit failure modes that are also rate-dependent. Solder joints are an integral part of microelectronic packages and are subjected to a wide variety of loading conditions which range from thermo-mechanical fatigue to impact loading. Consequently, there is a need for a non-empirical rate-dependent failure theory that is able to accurately predict fracture in these solder joints. In the present thesis, various failure models are first reviewed. However, these models are typically empirical or not valid for solder joints due to limiting assumptions such as elastic behavior. Here, the development and validation of a maximum entropy fracture model (MEFM) valid for low strain-rate fracture in SnAgCu solders are presented. To this end, work on characterizing SnAgCu solder behavior at low strain-rates using a specially designed tester to estimate parameters for constitutive models is presented. Next, the maximum entropy fracture model is reviewed. This failure model uses a single damage accumulation parameter and relates the risk of fracture to accumulated inelastic dissipation. A methodology is presented to extract this model parameter through a custom-built microscale mechanical tester for Sn3.8Ag0.7Cu solder. This single parameter is used to numerically simulate fracture in two solder joints with entirely different geometries. The simulations are compared to experimentally observed fracture in these same packages. Following the simulations of fracture at low strain rate, the constitutive behavior of solder alloys across nine decades of strain rate, characterized through MTS compression tests and split-Hopkinson bar tests, is presented. Preliminary work on using orthogonal machining as a novel technique of material characterization at high strain rates is also presented. The resultant data from the MTS compression and split-Hopkinson bar tests are used to demonstrate the localization of stress to the interface of solder joints at high strain rates. The MEFM is further extended to predict failure in brittle materials. Such an extension allows for fracture prediction within intermetallic compounds (IMCs) in solder joints. It has been experimentally observed that the failure mode shifts from bulk solder to the IMC layer with increasing loading rates. The extension of the MEFM would allow for prediction of the fracture mode within the solder joint under different loading conditions. A fracture model capable of predicting failure modes at higher strain rates is necessary, as mobile electronics are becoming ubiquitous. Mobile devices are prone to being dropped, which can induce loading rates within solder joints that are much larger than those experienced under thermo-mechanical fatigue. A range of possible damage accumulation parameters for Cu6Sn5 is determined for the MEFM. A value within the aforementioned range is used to demonstrate the increasing likelihood of IMC fracture in solder joints with larger loading rates. The thesis is concluded with remarks about ongoing work, which includes determining a more accurate damage accumulation parameter for the Cu6Sn5 IMC and using machining as a technique for extracting failure parameters for the MEFM.

  14. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Prediction and Computation of Corrosion Rates of A36 Mild Steel in Oilfield Seawater

    NASA Astrophysics Data System (ADS)

    Paul, Subir; Mondal, Rajdeep

    2018-04-01

    Several parameters control the corrosion rate and life of steel structures, and they vary across different oceans and seawaters as well as with depth. While the effect of a single parameter on corrosion behavior is known, the conjoint effects of multiple parameters and the interrelationships among the variables are complex. Millions of experiments would be required to understand the mechanism of corrosion failure. Statistical modeling such as an ANN is one solution that can reduce the amount of experimentation. An ANN model was developed using 170 sets of experimental data for A36 mild steel in simulated seawater, varying the corrosion-influencing parameters SO4^2-, Cl^-, HCO3^-, CO3^2-, CO2, O2, pH and temperature as inputs and the corrosion current as output. About 60% of the experimental data were used to train the model, 20% for testing and 20% for validation. The model was developed by programming in Matlab. The model predicted the corrosion rate correctly for 80% of the validation data. Corrosion rates predicted by the ANN model are displayed in 3D graphics, which show many interesting phenomena of the conjoint effects of multiple variables that might suggest new ways of mitigating corrosion by simply modifying the chemistry of the constituents. The model could also predict the corrosion rates of some real systems.
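    A sketch of an equivalent ANN regression setup is shown below, written in Python with scikit-learn rather than the Matlab implementation used in the paper; the data are synthetic stand-ins for the 170 experimental records, with the same 60/20/20 split.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in: 8 water-chemistry inputs (SO4, Cl, HCO3, CO3, CO2,
    # O2, pH, temperature) and a corrosion-current output.
    rng = np.random.default_rng(3)
    X = rng.uniform(size=(170, 8))
    y = 0.5 * X[:, 1] + 0.3 * X[:, 5] - 0.2 * X[:, 6] + 0.05 * rng.standard_normal(170)

    X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.6, random_state=0)
    X_te, X_va, y_te, y_va = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                     random_state=0))
    ann.fit(X_tr, y_tr)
    print("validation R^2:", ann.score(X_va, y_va))
    ```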

  16. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
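    The core of the history-matching step is an implausibility measure of the form I(x) = |z - E[f(x)]| / sqrt(Var_em(x) + Var_md + Var_obs), with candidate rate-parameter settings discarded when I(x) exceeds roughly 3; a minimal sketch with hypothetical emulator outputs follows.

    ```python
    import numpy as np

    def implausibility(z, mu_em, var_em, var_md, var_obs):
        """History-matching implausibility for a single output: the distance
        between the observation z and the emulator mean, standardised by the
        emulator, model-discrepancy and observation-error variances."""
        return np.abs(z - mu_em) / np.sqrt(var_em + var_md + var_obs)

    # Hypothetical emulator evaluations at five candidate parameter settings
    mu_em  = np.array([1.8, 2.4, 3.1, 2.0, 2.7])
    var_em = np.array([0.05, 0.02, 0.10, 0.04, 0.03])
    I = implausibility(z=2.5, mu_em=mu_em, var_em=var_em, var_md=0.04, var_obs=0.01)
    print(I, I < 3.0)     # settings with I < 3 are retained as non-implausible
    ```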

  17. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
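    The trade-off described here can be illustrated with the simplest possible constant-gain estimator (a generic illustration, not the paper's Bayesian filter calibration): a larger learning rate shortens the convergence time but inflates the steady-state error variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def track(eta, theta0=5.0, steps=2000, noise=1.0):
        """Constant-gain update x <- x + eta*(y - x) of a fixed parameter
        theta0 observed in noise. Returns the step at which the estimate first
        comes within 5% of theta0 and the steady-state error variance."""
        x, history = 0.0, []
        for _ in range(steps):
            y = theta0 + noise * rng.standard_normal()
            x += eta * (y - x)
            history.append(x)
        history = np.array(history)
        convergence_step = int(np.argmax(np.abs(history - theta0) < 0.05 * theta0))
        return convergence_step, float(history[steps // 2:].var())

    for eta in (0.01, 0.1, 0.5):
        print(eta, track(eta))   # larger eta: faster convergence, larger variance
    ```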

  18. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  19. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.

  20. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    USGS Publications Warehouse

    Soong, David T.; Over, Thomas M.

    2015-01-01

    Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.

  1. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can provide high accuracy and suitability for parameter identification without using the open circuit voltage.
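    For context, a plain recursive least squares update with a forgetting factor for an ARX-type battery model is sketched below; the paper's bias-corrected variant (CRLS) and its incremental-analysis regressor differ in detail, and the sample data are hypothetical.

    ```python
    import numpy as np

    class RLS:
        """Standard recursive least squares with forgetting factor lam."""
        def __init__(self, n_params, lam=0.99, delta=1e3):
            self.theta = np.zeros(n_params)
            self.P = delta * np.eye(n_params)
            self.lam = lam

        def update(self, phi, y):
            phi = np.asarray(phi, dtype=float)
            Pphi = self.P @ phi
            gain = Pphi / (self.lam + phi @ Pphi)
            self.theta = self.theta + gain * (y - phi @ self.theta)
            self.P = (self.P - np.outer(gain, Pphi)) / self.lam
            return self.theta

    # ARX(1,1) regressor for terminal voltage: [V_{k-1}, I_k, I_{k-1}]
    rls = RLS(n_params=3)
    V_prev, I_prev = 3.60, 0.0
    for V_k, I_k in [(3.59, 1.0), (3.58, 1.2), (3.57, 1.1), (3.57, 0.9)]:  # sample data
        theta = rls.update([V_prev, I_k, I_prev], V_k)
        V_prev, I_prev = V_k, I_k
    print(theta)
    ```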

  2. Morbidity Rate Prediction of Dengue Hemorrhagic Fever (DHF) Using the Support Vector Machine and the Aedes aegypti Infection Rate in Similar Climates and Geographical Areas

    PubMed Central

    Kesorn, Kraisak; Ongruk, Phatsavee; Chompoosri, Jakkrawarn; Phumee, Atchara; Thavara, Usavadee; Tawatsin, Apiwat; Siriyasatien, Padet

    2015-01-01

    Background In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. Methods and Findings Areas with high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of the following three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larvae mosquito infection rates were significantly positively associated with the morbidity rate. Thus, the increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and the prediction performance increased when those predictors were integrated into a predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% when used on the test set data, and these parameters led to the highest performance compared to state-of-the-art forecasting models. Conclusions The infection rates of the Ae. aegypti female mosquitoes and larvae improved the morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and mean absolute error (MAE). PMID:25961289
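    A minimal sketch of an SVM regression with the RBF kernel for the morbidity rate is shown below, using scikit-learn's SVR for illustration; the entomological features and data are synthetic, not the Thai surveillance data.

    ```python
    import numpy as np
    from sklearn.metrics import mean_absolute_error
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Hypothetical monthly records: [female mosquito infection rate,
    # larval infection rate, previous-month morbidity] -> morbidity rate.
    rng = np.random.default_rng(5)
    X = rng.uniform(size=(60, 3))
    y = 40 * X[:, 0] + 25 * X[:, 1] + 10 * X[:, 2] + rng.normal(0.0, 2.0, 60)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X[:48], y[:48])                  # train on the earlier months
    pred = model.predict(X[48:])               # forecast the held-out months
    print("MAE:", mean_absolute_error(y[48:], pred))
    ```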

  3. Modeling of Impression Testing to Obtain Mechanical Properties of Lead-Free Solders Microelectronic Interconnects

    DTIC Science & Technology

    2005-12-01

    hardening exponent and Cimp is the impression strain-rate hardening coefficient. The strain-rate hardening exponent m is a parameter that is related to the creep

  4. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    PubMed

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
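    A generic target-cell-limited formulation of the kind of model discussed here is sketched below, with a time-varying infection rate k(t); the equations and the parameter values are illustrative assumptions, not necessarily the exact model or estimates of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def hiv_rhs(t, y, lam, rho, delta, Nvir, c, k_of_t):
        """Uninfected cells T, infected cells I, free virus V.
        lam: source/proliferation rate, rho: death rate of uninfected cells,
        delta: death rate of infected cells, Nvir: virions per infected cell,
        c: virion clearance, k_of_t: time-varying infection rate."""
        T, I, V = y
        k = k_of_t(t)
        return [lam - rho * T - k * T * V,
                k * T * V - delta * I,
                Nvir * delta * I - c * V]

    # Infection rate partially suppressed by a therapy whose efficacy wanes
    k_of_t = lambda t: 2.4e-8 * (1.0 - 0.8 * np.exp(-0.01 * t))
    sol = solve_ivp(hiv_rhs, (0.0, 200.0), [1e6, 1e3, 1e5],
                    args=(1e4, 0.01, 0.7, 1000.0, 3.0, k_of_t), max_step=0.5)
    print(sol.y[:, -1])
    ```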

  5. Syndromes of collateral-reported psychopathology for ages 18-59 in 18 Societies

    PubMed Central

    Ivanova, Masha Y.; Achenbach, Thomas M.; Rescorla, Leslie A.; Turner, Lori V.; Árnadóttir, Hervör Alma; Au, Alma; Caldas, J. Carlos; Chaalal, Nebia; Chen, Yi Chuen; da Rocha, Marina M.; Decoster, Jeroen; Fontaine, Johnny R.J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Kim, Young Ah; Leung, Patrick; Liu, Jianghong; Malykh, Sergey; Marković, Jasminka; Oh, Kyung Ja; Petot, Jean-Michel; Samaniego, Virginia C.; Silvares, Edwiges Ferreira de Mattos; Šimulionienė, Roma; Šobot, Valentina; Sokoli, Elvisa; Sun, Guiju; Talcott, Joel B.; Vázquez, Natalia; Zasępa, Ewa

    2017-01-01

    The purpose was to advance research and clinical methodology for assessing psychopathology by testing the international generalizability of an 8-syndrome model derived from collateral ratings of adult behavioral, emotional, social, and thought problems. Collateral informants rated 8,582 18–59-year-old residents of 18 societies on the Adult Behavior Checklist (ABCL). Confirmatory factor analyses tested the fit of the 8-syndrome model to ratings from each society. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all societies, while secondary indices (Tucker Lewis Index, Comparative Fit Index) showed acceptable to good fit for 17 societies. Factor loadings were robust across societies and items. Of the 5,007 estimated parameters, 4 (0.08%) were outside the admissible parameter space, but 95% confidence intervals included the admissible space, indicating that the 4 deviant parameters could be due to sampling fluctuations. The findings are consistent with previous evidence for the generalizability of the 8-syndrome model in self-ratings from 29 societies, and support the 8-syndrome model for operationalizing phenotypes of adult psychopathology from multi-informant ratings in diverse societies. PMID:29399019

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golubev, A.; Balashov, Y.; Mavrin, S.

    The washout coefficient Λ is widely used as a parameter in washout models. These models describe the overall washout of HTO by rain with a first-order kinetic equation, in which Λ depends on the type of rain event, the rain intensity and the empirical parameters a and b. The washout coefficient is a macroscopic parameter, and in this paper we consider its relationship with the microscopic rate K of HTO isotopic exchange between atmospheric humidity and drops of rainwater. We show that the empirical parameters a and b can be expressed through the rain event characteristics using the relationships between molecular impact rate, rain intensity and specific rain water content, while the washout coefficient Λ can be expressed through the exchange rate K, rain intensity, raindrop diameter and terminal raindrop velocity.
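
    A common empirical parameterization (assumed here) expresses the washout coefficient as a power law in rain intensity, Λ = a·I^b, which then drives first-order depletion of the air concentration. The sketch below uses placeholder values of a and b, not those derived in the study.

        import numpy as np

        # Common empirical parameterization of the washout coefficient (assumed here):
        #   Lambda = a * I**b,  with rain intensity I in mm/h and empirical a, b.
        # First-order washout of the air concentration C: dC/dt = -Lambda * C.
        a, b = 1.0e-4, 0.7           # placeholder empirical parameters
        I = 5.0                      # rain intensity, mm/h (illustrative)
        Lambda = a * I**b            # washout coefficient, 1/s

        C0, t = 1.0, np.linspace(0.0, 3600.0, 7)   # one hour of rain
        C = C0 * np.exp(-Lambda * t)
        print(Lambda, C[-1])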

  7. Correction for photobleaching in dynamic fluorescence microscopy: application in the assessment of pharmacokinetic parameters in ultrasound-mediated drug delivery

    NASA Astrophysics Data System (ADS)

    Derieppe, M.; Bos, C.; de Greef, M.; Moonen, C.; de Senneville, B. Denis

    2016-01-01

    We have previously demonstrated the feasibility of monitoring ultrasound-mediated uptake of a hydrophilic model drug in real time with dynamic confocal fluorescence microscopy. In this study, we evaluate and correct the impact of photobleaching to improve the accuracy of pharmacokinetic parameter estimates. To model photobleaching of the fluorescent model drug SYTOX Green, a photobleaching process was added to the current two-compartment model describing cell uptake. After collection of the uptake profile, a second acquisition was performed when SYTOX Green was equilibrated, to evaluate the photobleaching rate experimentally. Photobleaching rates up to 5.0 10-3 s-1 were measured when applying power densities up to 0.2 W.cm-2. By applying the three-compartment model, the model drug uptake rate of 6.0 10-3 s-1 was measured independent of the applied laser power. The impact of photobleaching on uptake rate estimates measured by dynamic fluorescence microscopy was evaluated. Subsequent compensation improved the accuracy of pharmacokinetic parameter estimates in the cell population subjected to sonopermeabilization.

  8. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are treated as random variables. We develop both the Bayesian inference and the updating rules, which are used to build a probabilistic forecasting model. We apply the framework to the Basel 2006 fluid-induced seismicity case study to show that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
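
    The sketch below illustrates the core rate assumption with a toy nonhomogeneous Poisson process whose rate is a background term plus a term proportional to the injection rate, simulated by thinning; the injection profile, background rate and proportionality constant are invented for illustration and are not the calibrated parameters of the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def injection_rate(t):
            # Hypothetical injection profile (m^3/day): ramp up, hold, shut-in.
            return np.interp(t, [0, 2, 6, 6.001, 10], [0, 500, 500, 0, 0])

        def seismicity_rate(t, mu=0.1, beta=0.02):
            # Induced rate proportional to the injected fluid rate, plus background mu.
            return mu + beta * injection_rate(t)

        def simulate_nhpp(t_end, rate, rate_max):
            """Simulate event times of a nonhomogeneous Poisson process by thinning."""
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate_max)
                if t > t_end:
                    return np.array(times)
                if rng.uniform() < rate(t) / rate_max:
                    times.append(t)

        events = simulate_nhpp(10.0, seismicity_rate, rate_max=0.1 + 0.02 * 500)
        print(len(events), "events over 10 days")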

  9. Effects of host social hierarchy on disease persistence.

    PubMed

    Davidson, Ross S; Marion, Glenn; Hutchings, Michael R

    2008-08-07

    The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to simple birth and death process is presented and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections. Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.
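
    A minimal sketch of the general idea, assuming an SI model split into hierarchy levels in which dominance parameters scale the birth rates and the movement (dominance-challenge) rates; the functional forms and values below are illustrative and are not the authors' model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # SI dynamics in a population split into hierarchy levels. Dominance parameters
        # d[k] scale the birth rate of level k and the movement between levels
        # (illustrative functional forms, not the published model).
        n_levels = 3
        d = np.array([1.0, 0.7, 0.4])        # dominance parameters per level
        beta, b0, mu, m0 = 0.5, 0.2, 0.1, 0.05

        def rhs(t, y):
            S, I = y[:n_levels], y[n_levels:]
            N = S + I
            dS, dI = np.zeros(n_levels), np.zeros(n_levels)
            for k in range(n_levels):
                dS[k] += b0 * d[k] * N[k] - mu * S[k] - beta * S[k] * I.sum() / N.sum()
                dI[k] += beta * S[k] * I.sum() / N.sum() - mu * I[k]
                for j in range(n_levels):        # dominance challenges: movement
                    if j != k:
                        dS[k] += m0 * d[j] * S[j] - m0 * d[k] * S[k]
                        dI[k] += m0 * d[j] * I[j] - m0 * d[k] * I[k]
            return np.concatenate([dS, dI])

        y0 = np.concatenate([np.full(n_levels, 100.0), np.full(n_levels, 1.0)])
        sol = solve_ivp(rhs, (0.0, 50.0), y0)
        print(sol.y[n_levels:, -1])   # infected in each level at t = 50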

  10. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    PubMed Central

    Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas

    2009-01-01

    Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e. g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
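
    The sketch below shows steps (2) and (3) in miniature for a single reaction: assign an approximate Michaelis-Menten rate law and calibrate its parameters against measurements (synthetic here) by nonlinear least squares; it is a toy illustration, not the benchmarked valine/leucine network or one of the eight optimization strategies.

        import numpy as np
        from scipy.optimize import curve_fit

        # Step 2: assign an approximate rate law, here irreversible Michaelis-Menten.
        def michaelis_menten(s, v_max, k_m):
            return v_max * s / (k_m + s)

        # Step 3: calibrate the parameters against measurements (synthetic data here).
        rng = np.random.default_rng(0)
        s = np.linspace(0.1, 10.0, 25)                       # substrate concentrations
        v_obs = michaelis_menten(s, 2.0, 1.5) + rng.normal(0.0, 0.05, s.size)

        (v_max_hat, k_m_hat), cov = curve_fit(michaelis_menten, s, v_obs, p0=[1.0, 1.0])
        print(v_max_hat, k_m_hat)      # estimated kinetic parameters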

  11. FY2016 ILAW Glass Corrosion Testing with the Single-Pass Flow-Through Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Asmussen, Robert M.; Parruzot, Benjamin PG

    The inventory of immobilized low-activity waste (ILAW) produced at the Hanford Tank Waste Treatment and Immobilization Plant (WTP) will be disposed of at the near-surface, on-site Integrated Disposal Facility (IDF). When groundwater comes into contact with the waste form, the glass will corrode and radionuclides will be released into the near-field environment. Because the release of the radionuclides is dependent on the dissolution rate of the glass, it is important that the performance assessment (PA) model accounts for the dissolution rate of the glass as a function of various chemical conditions. To accomplish this, an IDF PA model based on Transition State Theory (TST) can be employed. The model is able to account for changes in temperature, exposed surface area, and pH of the contacting solution as well as the effect of silicon concentrations in solution, specifically the activity of orthosilicic acid (H4SiO4), whose concentration is directly linked to the glass dissolution rate. In addition, the IDF PA model accounts for the alkali-ion exchange process as sodium is leached from the glass and into solution. The effect of temperature, pH, H4SiO4 activity, and the rate of ion-exchange can be parameterized and implemented directly into the PA rate law model. The rate law parameters are derived from laboratory tests with the single-pass flow-through (SPFT) method. To date, rate law parameters have been determined for seven ILAW glass compositions; additional rate law parameters for a wider range of compositions will supplement the existing body of data for PA maintenance activities. The data provided in this report can be used by ILAW glass scientists to further the understanding of ILAW glass behavior, by IDF PA modelers to use the rate law parameters in PA modeling efforts, and by Department of Energy (DOE) contractors and decision makers as they assess the IDF PA program.
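
    A commonly used TST-based rate law for glass dissolution has the form r = k·10^(η·pH)·exp(−Ea/(RT))·[1 − (Q/K)^σ]; the sketch below assumes that form, approximates the ion activity product Q by the H4SiO4 activity, and uses placeholder coefficients rather than SPFT-derived parameters.

        import numpy as np

        R = 8.314  # J/(mol K)

        def dissolution_rate(T, pH, a_H4SiO4,
                             k0=1.0e4, eta=0.4, Ea=60.0e3, K=1.0e-3, sigma=1.0):
            """TST-style glass dissolution rate law (placeholder parameter values).

            r = k0 * 10**(eta*pH) * exp(-Ea/(R*T)) * (1 - (Q/K)**sigma),
            with the ion activity product Q approximated by the H4SiO4 activity.
            """
            affinity = 1.0 - (a_H4SiO4 / K) ** sigma
            return k0 * 10.0 ** (eta * pH) * np.exp(-Ea / (R * T)) * affinity

        # Rate drops as the solution approaches silica saturation.
        for a in (1.0e-6, 1.0e-4, 9.0e-4):
            print(a, dissolution_rate(T=298.15, pH=9.0, a_H4SiO4=a))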

  12. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

    COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.

  13. Global sensitivity analysis of the BSM2 dynamic influent disturbance scenario generator.

    PubMed

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper presents the results of a global sensitivity analysis (GSA) of a phenomenological model that generates dynamic wastewater treatment plant (WWTP) influent disturbance scenarios. This influent model is part of the Benchmark Simulation Model (BSM) family and creates realistic dry/wet weather files describing diurnal, weekend and seasonal variations through the combination of different generic model blocks, i.e. households, industry, rainfall and infiltration. The GSA is carried out by combining Monte Carlo simulations and standardized regression coefficients (SRC). Cluster analysis is then applied, classifying the influence of the model parameters into strong, medium and weak. The results show that the method is able to decompose the variance of the model predictions (R² > 0.9) satisfactorily, thus identifying the model parameters with strongest impact on several flow rate descriptors calculated at different time resolutions. Catchment size (PE) and the production of wastewater per person equivalent (QperPE) are two parameters that strongly influence the yearly average dry weather flow rate and its variability. Wet weather conditions are mainly affected by three parameters: (1) the probability of occurrence of a rain event (Llrain); (2) the catchment size, incorporated in the model as a parameter representing the conversion from mm rain·day⁻¹ to m³·day⁻¹ (Qpermm); and, (3) the quantity of rain falling on permeable areas (aH). The case study also shows that in both dry and wet weather conditions the SRC ranking changes when the time scale of the analysis is modified, thus demonstrating the potential to identify the effect of the model parameters on the fast/medium/slow dynamics of the flow rate. The paper ends with a discussion on the interpretation of GSA results and of the advantages of using synthetic dynamic flow rate data for WWTP influent scenario generation. This section also includes general suggestions on how to apply the proposed methodology to any influent generator to adapt the created time series to a modeller's demands.
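
    The sketch below reproduces the GSA workflow on a deliberately simple stand-in for the influent generator: Monte Carlo sampling of a few parameters, a toy flow-rate model, and standardized regression coefficients from a linear fit; the parameter names echo those above, but the model and ranges are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000

        # Monte Carlo sampling of three illustrative influent-generator parameters.
        PE     = rng.uniform(2.0e4, 1.0e5, n)   # catchment size, person equivalents
        QperPE = rng.uniform(0.10, 0.25, n)     # wastewater production, m3/(PE*day)
        infil  = rng.uniform(0.0, 0.3, n)       # infiltration fraction

        # Toy model output: average dry-weather flow rate (m3/day).
        Q = PE * QperPE * (1.0 + infil) + rng.normal(0.0, 200.0, n)

        # Standardized regression coefficients (SRC) from a multilinear fit.
        X = np.column_stack([PE, QperPE, infil])
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (Q - Q.mean()) / Q.std()
        src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r2 = 1.0 - np.sum((ys - Xs @ src) ** 2) / np.sum(ys ** 2)
        print(dict(zip(["PE", "QperPE", "infil"], np.round(src, 3))), round(r2, 3))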

  14. Modeling sugar cane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-01-01

    Agro-Land Surface Models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, a particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of Agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte-Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting climate-mediated different sensitivities of modeled sugar cane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.

  15. Modeling sugarcane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-06-01

    Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugarcane cultivation in Australia and Brazil. The ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugarcane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.

  16. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
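
    A minimal sketch of the LN cascade itself: the input current is passed through a linear temporal filter and a static nonlinearity to produce an instantaneous firing-rate estimate. The exponential filter timescale and the threshold-quadratic nonlinearity below are generic placeholders, not the analytically derived functions of the paper.

        import numpy as np

        # Linear-nonlinear (LN) cascade: filter the input, then apply a static nonlinearity.
        dt, T = 1e-3, 2.0
        t = np.arange(0.0, T, dt)
        rng = np.random.default_rng(3)
        current = 0.5 + 0.2 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)

        # Linear stage: exponential filter with an assumed timescale (placeholder).
        tau = 20e-3
        kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
        kernel /= kernel.sum()
        filtered = np.convolve(current, kernel)[: t.size]

        # Static nonlinearity: threshold-quadratic mapping to a firing rate (placeholder).
        def nonlinearity(x, gain=200.0, threshold=0.4):
            return gain * np.clip(x - threshold, 0.0, None) ** 2

        rate = nonlinearity(filtered)     # instantaneous firing rate, spikes/s
        print(rate.mean(), rate.max())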

  17. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.

  18. Considering inventory distributions in a stochastic periodic inventory routing system

    NASA Astrophysics Data System (ADS)

    Yadollahi, Ehsan; Aghezzaf, El-Houssaine

    2017-07-01

    Dealing with the stochasticity of parameters is one of the critical issues in business and industry. Supply chain planners have difficulties in forecasting stochastic parameters of a distribution system. Customer demand rates during the lead time are one such parameter. In addition, holding a large inventory at the retailers is costly and inefficient. To cover the uncertainty in forecast demand rates, researchers have proposed the use of safety stock to avoid stock-outs. However, finding the precise level of safety stock depends on forecasting the statistical distribution of demand rates and their variations in different settings across the planning horizon. In this paper the demand rate distributions and their parameters are taken into account for each time period in a stochastic periodic IRP. An analysis of the resulting statistical distributions of the inventory and safety stock levels is provided to measure the effects of the input parameters on the output indicators. Different values of the coefficient of variation are applied to the customers' demand rates in the optimization model. The outcome of the deterministic equivalent model of SPIRP is simulated in the form of an illustrative case.

  19. CO2 fluxes and ecosystem dynamics at five European treeless peatlands - merging data and process oriented modelling

    NASA Astrophysics Data System (ADS)

    Metzger, C.; Jansson, P.-E.; Lohila, A.; Aurela, M.; Eickenscheidt, T.; Belelli-Marchesini, L.; Dinsmore, K. J.; Drewer, J.; van Huissteden, J.; Drösler, M.

    2014-06-01

    The carbon dioxide (CO2) exchange of five different peatland systems across Europe with a wide gradient in landuse intensity, water table depth, soil fertility and climate was simulated with the process oriented CoupModel. The aim of the study was to find out to what extent CO2 fluxes measured at different sites, can be explained by common processes and parameters implemented in the model. The CoupModel was calibrated to fit measured CO2 fluxes, soil temperature, snow depth and leaf area index (LAI) and resulting differences in model parameters were analysed. Finding site independent model parameters would mean that differences in the measured fluxes could be explained solely by model input data: water table, meteorological data, management and soil inventory data. The model, utilizing a site independent configuration for most of the parameters, captured seasonal variability in the major fluxes well. Parameters that differed between sites included the rate of soil organic decomposition, photosynthetic efficiency, and regulation of the mobile carbon (C) pool from senescence to shooting in the next year. The largest difference between sites was the rate coefficient for heterotrophic respiration. Setting it to a common value would lead to underestimation of mean total respiration by a factor of 2.8 up to an overestimation by a factor of 4. Despite testing a wide range of different responses to soil water and temperature, heterotrophic respiration rates were consistently lowest on formerly drained sites and highest on the managed sites. Substrate decomposability, pH and vegetation characteristics are possible explanations for the differences in decomposition rates. Applying common parameter values for the timing of plant shooting and senescence, and a minimum temperature for photosynthesis, had only a minor effect on model performance, even though the gradient in site latitude ranged from 48° N (South-Germany) to 68° N (northern Finland). This was also true for common parameters defining the moisture and temperature response for decomposition. CoupModel is able to describe measured fluxes at different sites or under different conditions, providing that the rate of soil organic decomposition, photosynthetic efficiency, and the regulation of the mobile carbon (C) pool are estimated from available information on specific soil conditions, vegetation and management of the ecosystems.

  20. Variable frame rate transmission - A review of methodology and application to narrow-band LPC speech coding

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.

    1982-04-01

    The variable frame rate (VFR) transmission methodology developed, implemented, and tested in the years 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters extracted from the input speech at a fixed frame rate is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames that lie in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
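
    A minimal sketch of the first VFR approach described above: a frame of parameters is transmitted only when it differs sufficiently from the last transmitted frame. The Euclidean distance measure and the threshold are illustrative assumptions.

        import numpy as np

        def vfr_select(frames, threshold):
            """Return indices of frames to transmit: a frame is sent only when its
            parameters have changed sufficiently since the last transmitted frame.
            (Euclidean distance and threshold are illustrative choices.)"""
            sent = [0]                               # always transmit the first frame
            for i in range(1, len(frames)):
                if np.linalg.norm(frames[i] - frames[sent[-1]]) > threshold:
                    sent.append(i)
            return sent

        rng = np.random.default_rng(4)
        lpc = np.cumsum(rng.normal(0.0, 0.05, size=(200, 10)), axis=0)  # slowly varying parameters
        sent = vfr_select(lpc, threshold=0.5)
        print(f"transmitted {len(sent)} of {len(lpc)} frames")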

  1. A calibration protocol of a one-dimensional moving bed bioreactor (MBBR) dynamic model for nitrogen removal.

    PubMed

    Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P

    2012-01-01

    This work suggests a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model in nitrification treatment. The study deals with the MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated ones obtained with default parameters has been carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with observed data. Thus, the nitrification rate and the oxygen transfer into the biofilm are overestimated. Secondly, a sensitivity analysis was carried out for parameters and for COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. Then a calibration protocol of the MBBR dynamic model was proposed. It was successfully tested on data recorded at a pilot-scale plant and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.

  2. Experiments and modelling of rate-dependent transition delay in a stochastic subcritical bifurcation

    NASA Astrophysics Data System (ADS)

    Bonciolini, Giacomo; Ebi, Dominik; Boujo, Edouard; Noiray, Nicolas

    2018-03-01

    Complex systems exhibiting critical transitions when one of their governing parameters varies are ubiquitous in nature and in engineering applications. Despite a vast literature focusing on this topic, there are few studies dealing with the effect of the rate of change of the bifurcation parameter on the tipping points. In this work, we consider a subcritical stochastic Hopf bifurcation under two scenarios: the bifurcation parameter is first changed in a quasi-steady manner and then, with a finite ramping rate. In the latter case, a rate-dependent bifurcation delay is observed and exemplified experimentally using a thermoacoustic instability in a combustion chamber. This delay increases with the rate of change. This leads to a state transition of larger amplitude compared with the one that would be experienced by the system with a quasi-steady change of the parameter. We also bring experimental evidence of a dynamic hysteresis caused by the bifurcation delay when the parameter is ramped back. A surrogate model is derived in order to predict the statistic of these delays and to scrutinize the underlying stochastic dynamics. Our study highlights the dramatic influence of a finite rate of change of bifurcation parameters upon tipping points, and it pinpoints the crucial need of considering this effect when investigating critical transitions.
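
    The qualitative mechanism can be sketched on the amplitude equation of a subcritical Hopf normal form with additive noise, ramping the bifurcation parameter at a finite rate; the coefficients, noise level and tipping threshold below are illustrative, not the experimentally identified surrogate model.

        import numpy as np

        rng = np.random.default_rng(5)

        def ramped_hopf(ramp_rate, mu0=-0.5, mu1=3.0, dt=1e-2, noise=0.02):
            """Euler-Maruyama integration of the subcritical-Hopf amplitude equation
            dr = (mu(t)*r + r**3 - r**5) dt + noise dW, with mu ramped at a finite rate.
            Returns the value of mu at which the amplitude first exceeds a threshold."""
            r, mu = 0.05, mu0
            while mu < mu1:
                mu += ramp_rate * dt
                r += (mu * r + r**3 - r**5) * dt + noise * np.sqrt(dt) * rng.standard_normal()
                r = abs(r)
                if r > 1.0:            # large-amplitude (tipped) state reached
                    return mu
            return np.nan

        # Faster ramps tip at larger mu, i.e. a larger rate-dependent bifurcation delay.
        for rate in (0.01, 0.1, 1.0):
            print(rate, np.nanmean([ramped_hopf(rate) for _ in range(20)]))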

  3. Experiments and modelling of rate-dependent transition delay in a stochastic subcritical bifurcation

    PubMed Central

    Noiray, Nicolas

    2018-01-01

    Complex systems exhibiting critical transitions when one of their governing parameters varies are ubiquitous in nature and in engineering applications. Despite a vast literature focusing on this topic, there are few studies dealing with the effect of the rate of change of the bifurcation parameter on the tipping points. In this work, we consider a subcritical stochastic Hopf bifurcation under two scenarios: the bifurcation parameter is first changed in a quasi-steady manner and then, with a finite ramping rate. In the latter case, a rate-dependent bifurcation delay is observed and exemplified experimentally using a thermoacoustic instability in a combustion chamber. This delay increases with the rate of change. This leads to a state transition of larger amplitude compared with the one that would be experienced by the system with a quasi-steady change of the parameter. We also bring experimental evidence of a dynamic hysteresis caused by the bifurcation delay when the parameter is ramped back. A surrogate model is derived in order to predict the statistic of these delays and to scrutinize the underlying stochastic dynamics. Our study highlights the dramatic influence of a finite rate of change of bifurcation parameters upon tipping points, and it pinpoints the crucial need of considering this effect when investigating critical transitions. PMID:29657803

  4. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of “stiff” equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
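
    A minimal sketch, assuming a simple live/infected/apoptotic/necrotic structure with widely separated rate constants, which makes the system stiff and calls for an implicit solver; the equations and values are invented for illustration and do not reproduce the published model or its ensemble approach.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Minimal rate-equation sketch (illustrative structure and rates, not the
        # published model): live cells L become infected I, which die by apoptosis A
        # or necrosis N at very different rates, making the system stiff.
        k_inf, k_apo, k_nec, k_clear = 5.0e-6, 2.0e-1, 1.0e-3, 5.0

        def rhs(t, y):
            L, I, A, N, V = y
            return [-k_inf * L * V,
                    k_inf * L * V - (k_apo + k_nec) * I,
                    k_apo * I,
                    k_nec * I,
                    100.0 * (k_apo + k_nec) * I - k_clear * V]   # virions released and cleared

        y0 = [1.0e6, 0.0, 0.0, 0.0, 1.0e3]
        sol = solve_ivp(rhs, (0.0, 72.0), y0, method="BDF", rtol=1e-8, atol=1.0)
        print(sol.y[:, -1])   # populations at 72 h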

  5. Comparing basal area growth models, consistency of parameters, and accuracy of prediction

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2002-01-01

    We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...

  6. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with the COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the process for the given microwave power level shows that the ambient temperature, the effective gas diffusivity and the evaporation rate constant each have a significant effect on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change in value until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  7. Phenomenological Constitutive Modeling of High-Temperature Flow Behavior Incorporating Individual and Coupled Effects of Processing Parameters in Super-austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra

    2018-05-01

    A novel constitutive model has been developed for predicting flow responses of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). Further, the predictability of this new model has been compared with the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is not well suited for flow prediction, as it exhibits a very high (~36%) average absolute error (δ) and a low (~0.92) correlation coefficient (R). In contrast, the M-ZA model demonstrates a relatively lower δ (~13%) and higher R (~0.96) for flow prediction. The incorporation of couplings of processing parameters in the M-ZA model leads to better predictions than the JC model. However, the flow analyses of the studied alloy have revealed additional synergistic influences of strain and strain rate as well as of strain, temperature and strain rate, apart from those considered in the M-ZA model. Hence, the new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters and a `strain-shifting' parameter. The proposed model predicts the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99) of prediction.
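
    For reference, the JC model mentioned above has the standard form σ = (A + B·ε^n)·(1 + C·ln(ε̇/ε̇₀))·(1 − T*^m); the sketch below evaluates it with placeholder coefficients, not those fitted for the super-austenitic steel.

        import numpy as np

        def johnson_cook(strain, strain_rate, T,
                         A=300.0, B=500.0, n=0.3, C=0.02, m=1.0,
                         eps_dot_ref=1.0, T_ref=1173.0, T_melt=1673.0):
            """Standard Johnson-Cook flow stress (MPa); coefficients are placeholders."""
            T_star = (T - T_ref) / (T_melt - T_ref)            # homologous temperature
            return ((A + B * strain**n)
                    * (1.0 + C * np.log(strain_rate / eps_dot_ref))
                    * (1.0 - T_star**m))

        # Flow stress over the strain-rate range quoted above, at 1273 K and 0.3 strain.
        for rate in (0.001, 0.1, 1.0):
            print(rate, johnson_cook(strain=0.3, strain_rate=rate, T=1273.0))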

  8. Estimating Bat and Bird Mortality Occurring at Wind Energy Turbines from Covariates and Carcass Searches Using Mixture Models

    PubMed Central

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144

  9. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    PubMed

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.

  10. Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship

    NASA Astrophysics Data System (ADS)

    Viswanathan, M. N.

    1984-02-01

    Recharge levels of unconfined aquifers recharged entirely by rainfall are determined by developing a model for the aquifer that estimates the water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence the recharge are not only assumed to be time dependent, but each parameter is also allowed its own rate of time dependence. Such a model is solved by a recursive least-squares method, with the variable-rate parameter variation incorporated through a random walk model. Field tests conducted at the Tomago Sandbeds, Newcastle, Australia, showed that assuming variable rates of time dependence for the recharge parameters produced better estimates of water-table levels than constant recharge parameters. Considerable recharge due to rainfall occurred on the very day of the rainfall, whereas the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly upon the intensity and history of rainfall. Isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on the water-table levels.
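
    A minimal sketch of recursive least squares with random-walk parameter variation (equivalently, a Kalman filter with per-parameter state noise), which allows each recharge parameter its own drift rate; the regression structure, noise levels and synthetic data are illustrative, not the Tomago Sandbeds records.

        import numpy as np

        def rls_random_walk(H, y, q_diag, r=1.0):
            """Recursive least squares with random-walk parameter variation.

            H : (T, p) regressors (e.g. recent rainfall and past water-table levels)
            y : (T,)   observed water-table levels
            q_diag : per-parameter random-walk variances (different drift rates allowed)
            """
            p = H.shape[1]
            theta, P = np.zeros(p), np.eye(p) * 100.0
            Q = np.diag(q_diag)
            history = []
            for h, obs in zip(H, y):
                P = P + Q                              # parameters drift as a random walk
                k = P @ h / (h @ P @ h + r)            # gain
                theta = theta + k * (obs - h @ theta)  # update estimate
                P = P - np.outer(k, h) @ P
                history.append(theta.copy())
            return np.array(history)

        # Synthetic example: water table responds to same-day rainfall with a slowly
        # changing coefficient (illustrative data, not field records).
        rng = np.random.default_rng(6)
        T = 200
        rain = rng.gamma(0.5, 10.0, T)
        coef = 0.02 + 0.0001 * np.arange(T)
        level = np.empty(T)
        level[0] = 10.0
        for t in range(1, T):
            level[t] = 0.95 * level[t - 1] + coef[t] * rain[t] + rng.normal(0.0, 0.01)
        H = np.column_stack([np.roll(level, 1), rain])
        H[0, 0] = level[0]
        est = rls_random_walk(H, level, q_diag=[1e-8, 1e-6])
        print(est[-1])   # final estimates of persistence and recharge coefficients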

  11. A computer program (MODFLOWP) for estimating parameters of a transient, three-dimensional ground-water flow model using nonlinear regression

    USGS Publications Warehouse

    Hill, Mary Catherine

    1992-01-01

    This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that is also documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.

  12. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to provide reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  13. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion, and satisfactory agreement between these expressions and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. Because the characteristics of the incubation time criterion of yield are independent of the loading history and are connected with the structural and temporal features of the plastic deformation process, the incubation-time approach offers an advantage over the empirical models and provides an effective and convenient equation for determining the yield strength over a wider range of strain rates.

  14. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  15. Sperm function and assisted reproduction technology

    PubMed Central

    MAAß, GESA; BÖDEKER, ROLF‐HASSO; SCHEIBELHUT, CHRISTINE; STALF, THOMAS; MEHNERT, CLAAS; SCHUPPE, HANS‐CHRISTIAN; JUNG, ANDREAS; SCHILL, WOLF‐BERNHARD

    2005-01-01

    The evaluation of different functional sperm parameters has become a tool in andrological diagnosis. These assays determine the sperm's capability to fertilize an oocyte. It also appears that sperm functions and semen parameters are interrelated and interdependent. Therefore, the question arose whether a given laboratory test or a battery of tests can predict the outcome in in vitro fertilization (IVF). One‐hundred and sixty‐one patients who underwent an IVF treatment were selected from a database of 4178 patients who had been examined for male infertility 3 months before or after IVF. Sperm concentration, motility, acrosin activity, acrosome reaction, sperm morphology, maternal age, number of transferred embryos, embryo score, fertilization rate and pregnancy rate were determined. In addition, logistic regression models to describe fertilization rate and pregnancy were developed. All the parameters in the models were dichotomized and intra‐ and interindividual variability of the parameters were assessed. Although the sperm parameters showed good correlations with IVF when correlated separately, the only essential parameter in the multivariate model was morphology. The enormous intra‐ and interindividual variability of the values was striking. In conclusion, our data indicate that the andrological status at the end of the respective treatment does not necessarily represent the status at the time of IVF. Despite a relatively low correlation coefficient in the logistic regression model, it appears that among the parameters tested, the most reliable parameter to predict fertilization is normal sperm morphology. (Reprod Med Biol 2005; 4: 7–30) PMID:29699207

  16. Predictive control of thermal state of blast furnace

    NASA Astrophysics Data System (ADS)

    Barbasova, T. A.; Filimonova, A. A.

    2018-05-01

    The work describes the structure of a model for predictive control of the thermal state of a blast furnace. The proposed model contains the following input parameters: coke rate; theoretical combustion temperature, comprising natural gas consumption, blasting temperature, humidity, oxygen and blast furnace cooling water; and blast furnace gas utilization rate. The output parameter is the cast iron temperature. The cast iron temperature predictions were obtained after identification using the Hammerstein-Wiener model. The solution of the cast iron temperature stabilization problem is provided for the calculated values of the process parameters in the target area of the respective blast furnace operation mode.

  17. A Statistical Atrioventricular Node Model Accounting for Pathway Switching During Atrial Fibrillation.

    PubMed

    Henriksson, Mikael; Corino, Valentina D A; Sornmo, Leif; Sandberg, Frida

    2016-09-01

    The atrioventricular (AV) node plays a central role in atrial fibrillation (AF), as it influences the conduction of impulses from the atria into the ventricles. In this paper, the statistical dual pathway AV node model, previously introduced by us, is modified so that it accounts for atrial impulse pathway switching even if the preceding impulse did not cause a ventricular activation. The proposed change in model structure implies that the number of model parameters subjected to maximum likelihood estimation is reduced from five to four. The model is evaluated using the data acquired in the RATe control in atrial fibrillation (RATAF) study, involving 24-h ECG recordings from 60 patients with permanent AF. When fitting the models to the RATAF database, similar results were obtained for both the present and the previous model, with a median fit of 86%. The results show that the parameter estimates characterizing refractory period prolongation exhibit considerably lower variation when using the present model, a finding that may be ascribed to fewer model parameters. The new model maintains the capability to model RR intervals, while providing more reliable parameter estimates. The model parameters are expected to convey novel clinical information, and may be useful for predicting the effect of rate control drugs.

  18. Compartmental analysis of [11C]flumazenil kinetics for the estimation of ligand transport rate and receptor distribution using positron emission tomography.

    PubMed

    Koeppe, R A; Holthoff, V A; Frey, K A; Kilbourn, M R; Kuhl, D E

    1991-09-01

    The in vivo kinetic behavior of [11C]flumazenil ([11C]FMZ), a non-subtype-specific central benzodiazepine antagonist, is characterized using compartmental analysis with the aim of producing an optimized data acquisition protocol and tracer kinetic model configuration for the assessment of [11C]FMZ binding to benzodiazepine receptors (BZRs) in human brain. The approach presented is simple, requiring only a single radioligand injection. Dynamic positron emission tomography data were acquired on 18 normal volunteers using a 60- to 90-min sequence of scans and were analyzed with model configurations that included a three-compartment, four-parameter model; a three-compartment, three-parameter model with a fixed value for free plus nonspecific binding; and a two-compartment, two-parameter model. Statistical analysis indicated that a four-parameter model did not yield significantly better fits than a three-parameter model. Goodness of fit was improved for three- versus two-parameter configurations in regions with low receptor density, but not in regions with moderate to high receptor density. Thus, a two-compartment, two-parameter configuration was found to adequately describe the kinetic behavior of [11C]FMZ in human brain, with stable estimates of the model parameters obtainable from as little as 20-30 min of data. Pixel-by-pixel analysis yields functional images of transport rate (K1) and ligand distribution volume (DV), and thus provides independent estimates of ligand delivery and BZR binding.
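
    For illustration only (not the authors' code): the two-compartment, two-parameter configuration described above is the one-tissue model dCt/dt = K1*Cp(t) - k2*Ct(t) with distribution volume DV = K1/k2, and it can be fitted to a tissue time-activity curve once a plasma input function Cp is assumed. A minimal Python sketch with synthetic data:

```python
# Illustrative sketch (not the authors' code): fitting a two-compartment,
# two-parameter (one-tissue) model dCt/dt = K1*Cp(t) - k2*Ct(t) to a tissue
# time-activity curve, assuming a known plasma input function Cp.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)                  # minutes (assumed sampling)
Cp = 10.0 * t * np.exp(-t / 3.0)             # hypothetical plasma input function

def tissue_curve(t, K1, k2):
    """Solve dCt/dt = K1*Cp(t) - k2*Ct with Ct(0) = 0."""
    Cp_of_t = lambda tau: np.interp(tau, t, Cp)
    dCt = lambda Ct, tau: K1 * Cp_of_t(tau) - k2 * Ct
    return odeint(dCt, 0.0, t).ravel()

# Synthetic "measured" tissue data with noise, for demonstration only
rng = np.random.default_rng(0)
Ct_obs = tissue_curve(t, 0.3, 0.1) + rng.normal(0, 0.2, t.size)

(K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, Ct_obs, p0=[0.2, 0.05])
print(f"K1 = {K1_hat:.3f}, DV = K1/k2 = {K1_hat / k2_hat:.2f}")
```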

  19. On the identification of cohesive parameters for printed metal-polymer interfaces

    NASA Astrophysics Data System (ADS)

    Heinrich, Felix; Langner, Hauke H.; Lammering, Rolf

    2017-05-01

    The mechanical behavior of printed electronics on fiber reinforced composites is investigated. A methodology based on cohesive zone models is employed, considering interfacial strengths, stiffnesses and critical strain energy release rates. A double cantilever beam test and an end notched flexure test are carried out to experimentally determine critical strain energy release rates under fracture modes I and II. Numerical simulations are performed in Abaqus 6.13 to model both tests. Applying the simulations, an inverse parameter identification is run to determine the full set of cohesive parameters.

  20. Parameter uncertainty analysis of a biokinetic model of caesium

    DOE PAGES

    Li, W. B.; Klein, W.; Blanchardon, Eric; ...

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters for the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implications of the larger uncertainty factor of 43 in whole-body retention at later times (after Day 500) for the estimated equivalent and effective doses will be explored in subsequent work within the EURADOS framework.
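
    A minimal sketch of the kind of Monte Carlo propagation summarized above, using the paper's definition of the uncertainty factor UF = sqrt(97.5th percentile / 2.5th percentile); the retention function and parameter distributions below are hypothetical stand-ins, not the Leggett model:

```python
# Minimal sketch (assumed workflow): propagate parameter uncertainty by
# Monte Carlo and summarize the spread of a model prediction as an
# uncertainty factor UF = sqrt(P97.5 / P2.5), as defined in the abstract.
import numpy as np

rng = np.random.default_rng(42)

def whole_body_retention(t_days, k_fast, k_slow, f_fast):
    """Hypothetical two-exponential retention function (illustration only)."""
    return f_fast * np.exp(-k_fast * t_days) + (1 - f_fast) * np.exp(-k_slow * t_days)

# Assumed lognormal/uniform uncertainties on the transfer-rate parameters
n = 10_000
k_fast = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)    # 1/day
k_slow = rng.lognormal(mean=np.log(0.007), sigma=0.3, size=n)  # 1/day
f_fast = rng.uniform(0.1, 0.3, size=n)

for t in (1, 10, 100, 1000):
    pred = whole_body_retention(t, k_fast, k_slow, f_fast)
    p2_5, p97_5 = np.percentile(pred, [2.5, 97.5])
    print(f"day {t:4d}: UF = {np.sqrt(p97_5 / p2_5):.2f}")
```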

  1. Finite hedging in field theory models of interest rates

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We use path integrals to calculate hedge parameters and efficacy of hedging in a quantum field theory generalization of the Heath, Jarrow, and Morton [Robert Jarrow, David Heath, and Andrew Morton, Econometrica 60, 77 (1992)] term structure model, which parsimoniously describes the evolution of imperfectly correlated forward rates. We calculate, within the model specification, the effectiveness of hedging over finite periods of time, and obtain the limiting case of instantaneous hedging. We use empirical estimates for the parameters of the model to show that a low-dimensional hedge portfolio is quite effective.

  2. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    Backpropagation is one type of artificial neural network training algorithm: a network trained on a given architecture can produce correct outputs for inputs that are similar to, but not identical with, those presented during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, as it determines the speed of learning for a given network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only after a very long time. This study was therefore carried out to determine a suitable learning rate for the backpropagation algorithm. The LVQ learning rate model is one of the schemes used to set the learning rate in the LVQ algorithm; here this LVQ model is modified and applied to backpropagation. The experimental results show that applying the modified LVQ learning rate model to the backpropagation algorithm makes the learning process faster (fewer epochs).
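
    The abstract does not give the exact form of the LVQ-derived learning-rate modification, so the sketch below only illustrates the general idea under an assumed LVQ-style linear decay of the learning rate inside a plain backpropagation loop:

```python
# Illustrative sketch only: a one-hidden-layer network trained by plain
# backpropagation in which the learning rate is decayed each epoch in an
# LVQ-like way, lr = alpha0 * (1 - epoch/epochs). The exact modification
# used in the paper is not given in the abstract; this decay is an assumption.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)   # toy binary target

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

alpha0, epochs = 0.5, 300
for epoch in range(epochs):
    lr = alpha0 * (1 - epoch / epochs)       # assumed LVQ-style linear decay
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d2 = (out - y) * out * (1 - out)         # backward pass (squared error)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

print("final training error:", float(np.mean((out - y) ** 2)))
```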

  3. Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus

    NASA Astrophysics Data System (ADS)

    Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha

    2018-02-01

    A kinetic study of the pyrolysis process of Parthenium hysterophorus is carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters such as the activation energy E and the frequency factor A using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and the model-fitting Coats-Redfern method. The results of the thermal decomposition process divide the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis and passive pyrolysis. The DTG thermograms show that increasing the heating rate shifts the temperature peaks at the maximum weight-loss rate towards a higher temperature regime. The results are compared with the Coats-Redfern (integral) method; the experimental results show that the kinetic parameter values obtained from the model-free methods are in good agreement, whereas the results obtained with the Coats-Redfern model at different heating rates are less consistent, although the diffusion models provide a good fit to the experimental data.
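
    As a reminder of how the Kissinger method mentioned above works in practice: plotting ln(beta/Tp^2) against 1/Tp for the DTG peak temperatures Tp measured at different heating rates beta gives a straight line with slope -E/R. A minimal sketch with hypothetical peak temperatures (not the paper's data):

```python
# Minimal sketch of the Kissinger method: ln(beta / Tp^2) = ln(A*R/E) - E/(R*Tp),
# so a linear fit of ln(beta/Tp^2) versus 1/Tp yields E from the slope and A
# from the intercept. The peak temperatures below are hypothetical.
import numpy as np

R = 8.314                                    # gas constant, J/(mol K)
beta = np.array([5.0, 10.0, 20.0, 40.0])     # heating rates, K/min
Tp = np.array([583.0, 596.0, 610.0, 625.0])  # DTG peak temperatures, K

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
E = -slope * R                               # activation energy, J/mol
A = (E / R) * np.exp(intercept)              # frequency factor, 1/min
print(f"E = {E / 1000:.1f} kJ/mol, A = {A:.3e} 1/min")
```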

  4. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

    In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores, giving both better and worse scores.

  5. A note on `a replenishment policy for items with price-dependent demand, time-proportional deterioration and no shortages'

    NASA Astrophysics Data System (ADS)

    Shah, Nita H.; Soni, Hardik N.; Gupta, Jyoti

    2014-08-01

    In a recent paper, Begum et al. (2012, International Journal of Systems Science, 43, 903-910) established a pricing and replenishment policy for an inventory system with a price-sensitive demand rate, a time-proportional deterioration rate following a three-parameter Weibull distribution, and no shortages. In their model formulation, it is observed that the retailer's stock level reaches zero before deterioration occurs. Consequently, the model reduces to a traditional inventory model with a price-sensitive demand rate and no shortages. Hence, the main purpose of this note is to modify and present the complete model formulation of Begum et al. (2012). The proposed model is validated by a numerical example and a sensitivity analysis of the parameters is carried out.

  6. A new universal dynamic model to describe eating rate and cumulative intake curves123

    PubMed Central

    Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B

    2017-01-01

    Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to compare the quality of the model's fit to the data with that of the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4 compared with 581.6 ± 563.5 for the quadratic model, or a 43% improvement in fit. Individual comparison demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could, compared with as much as they felt comfortable with, the maximal intake parameter was approximately doubled. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others only produced a satiation phase. Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377

  7. Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures

    PubMed Central

    Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.

    2016-01-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038

  8. Soil and vegetation parameter uncertainty on future terrestrial carbon sinks

    NASA Astrophysics Data System (ADS)

    Kothavala, Z.; Felzer, B. S.

    2013-12-01

    We examine the role of the terrestrial carbon cycle in a changing climate at the centennial scale using an intermediate complexity Earth system climate model that includes the effects of dynamic vegetation and the global carbon cycle. We present a series of ensemble simulations to evaluate the sensitivity of simulated terrestrial carbon sinks to three key model parameters: (a) The temperature dependence of soil carbon decomposition, (b) the upper temperature limits on the rate of photosynthesis, and (c) the nitrogen limitation of the maximum rate of carboxylation of Rubisco. We integrated the model in fully coupled mode for a 1200-year spin-up period, followed by a 300-year transient simulation starting at year 1800. Ensemble simulations were conducted varying each parameter individually and in combination with other variables. The results of the transient simulations show that terrestrial carbon uptake is very sensitive to the choice of model parameters. Changes in net primary productivity were most sensitive to the upper temperature limit on the rate of photosynthesis, which also had a dominant effect on overall land carbon trends; this is consistent with previous research that has shown the importance of climatic suppression of photosynthesis as a driver of carbon-climate feedbacks. Soil carbon generally decreased with increasing temperature, though the magnitude of this trend depends on both the net primary productivity changes and the temperature dependence of soil carbon decomposition. Vegetation carbon increased in some simulations, but this was not consistent across all configurations of model parameters. Comparing to global carbon budget observations, we identify the subset of model parameters which are consistent with observed carbon sinks; this serves to narrow considerably the future model projections of terrestrial carbon sink changes in comparison with the full model ensemble.

  9. Development of a new model for batch sedimentation and application to secondary settling tanks design.

    PubMed

    Karamisheva, Ralica D; Islam, M A

    2005-01-01

    Assuming that settling takes place in two zones (a constant rate zone and a variable rate zone), a model using four parameters accounting for the nature of the water-suspension system has been proposed for describing batch sedimentation processes. The sludge volume index (SVI) has been expressed in terms of these parameters. Some disadvantages of the SVI application as a design parameter have been pointed out, and it has been shown that a relationship between zone settling velocity and sludge concentration is more consistent for describing the settling behavior and for design of settling tanks. The permissible overflow rate has been related to the technological parameters of secondary settling tank by simple working equations. The graphical representations of these equations could be used to optimize the design and operation of secondary settling tanks.

  10. Biomedical progress rates as new parameters for models of economic growth in developed countries.

    PubMed

    Zhavoronkov, Alex; Litovchenko, Maria

    2013-11-08

    While the doubling of life expectancy in developed countries during the 20th century can be attributed mostly to decreases in child mortality, the trillions of dollars spent on biomedical research by governments, foundations and corporations over the past sixty years are also yielding longevity dividends in both working and retired population. Biomedical progress will likely increase the healthy productive lifespan and the number of years of government support in the old age. In this paper we introduce several new parameters that can be applied to established models of economic growth: the biomedical progress rate, the rate of clinical adoption and the rate of change in retirement age. The biomedical progress rate is comprised of the rejuvenation rate (extending the productive lifespan) and the non-rejuvenating rate (extending the lifespan beyond the age at which the net contribution to the economy becomes negative). While staying within the neoclassical economics framework and extending the overlapping generations (OLG) growth model and assumptions from the life cycle theory of saving behavior, we provide an example of the relations between these new parameters in the context of demographics, labor, households and the firm.

  11. Biomedical Progress Rates as New Parameters for Models of Economic Growth in Developed Countries

    PubMed Central

    Zhavoronkov, Alex; Litovchenko, Maria

    2013-01-01

    While the doubling of life expectancy in developed countries during the 20th century can be attributed mostly to decreases in child mortality, the trillions of dollars spent on biomedical research by governments, foundations and corporations over the past sixty years are also yielding longevity dividends in both working and retired population. Biomedical progress will likely increase the healthy productive lifespan and the number of years of government support in the old age. In this paper we introduce several new parameters that can be applied to established models of economic growth: the biomedical progress rate, the rate of clinical adoption and the rate of change in retirement age. The biomedical progress rate is comprised of the rejuvenation rate (extending the productive lifespan) and the non-rejuvenating rate (extending the lifespan beyond the age at which the net contribution to the economy becomes negative). While staying within the neoclassical economics framework and extending the overlapping generations (OLG) growth model and assumptions from the life cycle theory of saving behavior, we provide an example of the relations between these new parameters in the context of demographics, labor, households and the firm. PMID:24217179

  12. Multistate modelling extended by behavioural rules: An application to migration.

    PubMed

    Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias

    2017-10-01

    We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available. Parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while sticking to the general framework of multistate modelling. Our simulations show that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.

  13. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    PubMed

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion, as well as the stability of emerging response style patterns as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations regarding diffusion model parameters, including response styles, non-decision times and information accumulation.

  14. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  15. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692

  16. Detecting influential observations in nonlinear regression modeling of groundwater flow

    USGS Publications Warehouse

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
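
    A small linear-regression analogue (not the groundwater model itself) showing how Cook's D and DFBETAS are computed in practice; the statsmodels influence diagnostics are used here on synthetic data purely to illustrate the statistics discussed above:

```python
# Sketch (linear-regression analogue): computing Cook's D and DFBETAS with
# statsmodels. The paper applies these diagnostics to a nonlinear groundwater
# model; here synthetic linear data stand in purely to show the statistics.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(50, 2)))
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=50)
y[10] += 4.0                                   # plant one influential outlier

infl = OLSInfluence(sm.OLS(y, X).fit())
cooks_d, _ = infl.cooks_distance               # one value per observation
dfbetas = infl.dfbetas                         # influence on each parameter
worst = int(np.argmax(cooks_d))
print("most influential observation:", worst)
print("its DFBETAS:", np.round(dfbetas[worst], 2))
```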

  17. Effects of Uncertainties in Hydrological Modelling. A Case Study of a Mountainous Catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin

    2016-04-01

    The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences the parameter estimation, streamflow predictions and model evaluation. In particular we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve where the measurements of the highest and lowest streamflows are excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation distributed HBV model operating on daily time steps combined with a Bayesian formulation and the MCMC routine Dream for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, predictions and evaluation. The results showed that including uncertainty in the precipitation and temperature input has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using incorrect rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using an incorrect rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving low variance in streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging since both precipitation inputs and streamflow observations have a pronounced systematic component in their uncertainties.

  18. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper wash-off occurring from copper roofs results in environmental hazards. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff. It was shown that copper could be removed to a high extent. A model was developed to describe this removal process. The model was based on the Two Region Model (TRM), extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated based on data gained with specific column experiments according to maximum sensitivity for each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates during extended periods of time.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
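
    For the near-ignorance version of the imprecise Dirichlet model, the lower and upper posterior expectations of each alpha-factor have a simple closed form in the event counts and the learning parameter s; a minimal sketch with hypothetical common-cause event counts (the exact elicitation used in the paper may differ):

```python
# Minimal sketch of imprecise-Dirichlet bounds for alpha-factors: with counts
# n_k of events involving k components, total N and learning parameter s, the
# posterior expectation of alpha_k (near-ignorance case) lies between
# n_k / (N + s) and (n_k + s) / (N + s). Counts below are hypothetical.
import numpy as np

n = np.array([30, 4, 1])        # hypothetical counts for alpha_1..alpha_3
N = n.sum()

for s in (1, 10):               # learning-parameter range discussed in the paper
    lower = n / (N + s)
    upper = (n + s) / (N + s)
    print(f"s = {s:2d}: lower = {np.round(lower, 3)}, upper = {np.round(upper, 3)}")
```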

  20. Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.

    PubMed

    Robinson, John D; Hall, David W; Wares, John P

    2013-05-01

    Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
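
    A minimal sketch of plain ABC rejection sampling, the core of the approach described above; the toy simulator, priors and summary statistics below are assumptions for illustration, not the authors' metapopulation coalescent pipeline:

```python
# Minimal sketch of ABC rejection sampling (not the authors' pipeline): draw
# parameters from the prior, simulate summary statistics, and keep the draws
# whose simulated statistics fall closest to the observed ones in Euclidean
# distance. The toy "simulator" and summaries below are assumptions.
import numpy as np

rng = np.random.default_rng(7)
obs_stats = np.array([0.12, 0.55])           # hypothetical observed summaries

def simulate_stats(extinction_rate, migration_rate):
    """Toy stand-in for a metapopulation coalescent simulator."""
    het = 0.6 * migration_rate / (migration_rate + extinction_rate + 0.1)
    fst = extinction_rate / (extinction_rate + migration_rate + 0.05)
    return np.array([fst, het]) + rng.normal(0, 0.02, 2)

n_sims = 50_000
e = rng.uniform(0.0, 1.0, n_sims)            # prior on extinction rate
m = rng.uniform(0.0, 2.0, n_sims)            # prior on migration rate
dist = np.array([np.linalg.norm(simulate_stats(ei, mi) - obs_stats)
                 for ei, mi in zip(e, m)])

keep = np.argsort(dist)[: n_sims // 100]     # retain the closest 1%
print("posterior mean extinction rate:", e[keep].mean())
```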

  1. Global sensitivity analysis for identifying important parameters of nitrogen nitrification and denitrification under model uncertainty and scenario uncertainty

    NASA Astrophysics Data System (ADS)

    Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong

    2018-06-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that, for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to the soil moisture status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change depends strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution while reducing possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not of temperature, plays a predominant role in controlling the nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen models.
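
    A sketch of how first-order variance-based (Sobol) sensitivity indices can be estimated with a pick-and-freeze scheme; the toy reaction-rate model and parameter ranges below are assumptions standing in for the nitrogen model, not the paper's setup:

```python
# Sketch of variance-based (first-order Sobol) sensitivity analysis using the
# pick-and-freeze estimator S_i = mean(Y_B * (Y_ABi - Y_A)) / Var(Y). The toy
# reduction-function model below is an assumption, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)

def rate(params):
    """Toy actual-rate model: K_den * f_T(Tr, Q10) * f_m(m)."""
    K_den, Tr, Q10, m = params.T
    f_T = Q10 ** ((20.0 - Tr) / 10.0)      # temperature reduction function
    f_m = 0.6 ** m                         # moisture reduction function
    return K_den * f_T * f_m

def sample(n):
    return np.column_stack([rng.uniform(0.05, 0.5, n),    # K_den
                            rng.uniform(10.0, 30.0, n),   # Tr
                            rng.uniform(1.5, 3.0, n),     # Q10
                            rng.uniform(0.5, 2.0, n)])    # m

n = 50_000
A, B = sample(n), sample(n)
Y_A, Y_B = rate(A), rate(B)
var_Y = np.var(np.concatenate([Y_A, Y_B]))

for i, name in enumerate(["K_den", "Tr", "Q10", "m"]):
    AB = A.copy(); AB[:, i] = B[:, i]      # freeze all columns except i
    S_i = np.mean(Y_B * (rate(AB) - Y_A)) / var_Y
    print(f"S_{name} = {S_i:.2f}")
```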

  2. Modeling of Mitochondria Bioenergetics Using a Composable Chemiosmotic Energy Transduction Rate Law: Theory and Experimental Validation

    PubMed Central

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html. PMID:21931590

  3. Modeling of mitochondria bioenergetics using a composable chemiosmotic energy transduction rate law: theory and experimental validation.

    PubMed

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

    Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html.

  4. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve takes the form Q = a (Z - Z0)^b sqrt(S), with Z the water surface elevation and S its slope, both obtained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 under various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and the rating curve parameters are then determined for each river section with an MCMC algorithm. The twin experiments show that a rating curve formulation including the water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. It is shown that the river bed elevation Z0 is systematically well identified, with relative errors on the order of a few percent. Eventually, these altimetry-based rating curves provide morphological parameters of river reaches that can be used as inputs to hydraulic models and a priori information that could be useful for SWOT inversion algorithms.
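
    A minimal sketch (ordinary least squares rather than the MCMC used in the study) of fitting the rating curve Q = a (Z - Z0)^b sqrt(S) to altimetric stages, slopes and rated discharges; the data below are synthetic:

```python
# Minimal sketch (not the authors' MCMC): least-squares fit of the rating
# curve Q = a * (Z - Z0)^b * sqrt(S) to water stages, slopes and rated
# discharges. The sample data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
Z = rng.uniform(12.0, 20.0, 80)                # water surface elevation, m
S = rng.uniform(2e-5, 8e-5, 80)                # water surface slope
Q_true = 35.0 * (Z - 10.0) ** 1.6 * np.sqrt(S)
Q_obs = Q_true * rng.lognormal(0.0, 0.05, 80)  # "rated" discharges with noise

def rating(X, a, b, Z0):
    Z, S = X
    return a * (Z - Z0) ** b * np.sqrt(S)

(a, b, Z0), _ = curve_fit(rating, (Z, S), Q_obs, p0=[10.0, 1.5, 11.0],
                          bounds=([0.0, 0.5, 0.0], [1e4, 3.0, np.min(Z) - 0.01]))
print(f"a = {a:.1f}, b = {b:.2f}, Z0 = {Z0:.2f} m")
```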

  5. Simultaneous estimation of local-scale and flow path-scale dual-domain mass transfer parameters using geoelectrical monitoring

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.

    2013-01-01

    Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of fluid conductivity and geophysical data, demonstrating multiple scales of mass transfer parameters can simultaneously be estimated. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating greater range than earlier work using temporal moments and a Lagrangian-based Damköhler number. The introduced Eulerian-based Damköhler is useful for estimating tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.

  6. Estimation of body temperature rhythm based on heart activity parameters in daily life.

    PubMed

    Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park

    2014-01-01

    Body temperature contains valuable health-related information such as the circadian rhythm and the menstrual cycle. Previous studies have also found that the body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature with existing devices during daily life is not easy because they are invasive, intrusive, or expensive. Therefore, technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we newly identified that the model can be applied to body temperature monitoring in daily life. We also found that the normalized mean heart rate (nMHR) and the frequency-domain parameters of heart rate variability showed better performance than the other parameters. Although the model should be validated with a larger number of subjects and additional algorithms should be considered to decrease the accumulated estimation error, we could verify the usefulness of this approach. Through this study, we expect to be able to monitor core body temperature and circadian rhythm from a simple heart rate monitor.

  7. Can arsenic occurrence rate in bedrock aquifers be predicted?

    USGS Publications Warehouse

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg L–1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg L–1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.
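
    A sketch of the type of logistic-regression exceedance model described above, fitted with statsmodels to synthetic data; the predictors, coefficients and the geology indicator are hypothetical, not the Maine dataset:

```python
# Sketch of a stepwise-style logistic model for arsenic exceedance: a binary
# response (As > 10 ug/L) regressed on water chemistry and a bedrock-geology
# indicator. All data and coefficients below are synthetic assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 500
pH = rng.normal(7.5, 0.8, n)
do = rng.lognormal(0.5, 0.6, n)               # dissolved oxygen, mg/L
geology = rng.integers(0, 2, n)               # 1 = arsenic-prone unit (assumed)
logit_p = -14.0 + 1.8 * pH - 0.4 * do + 1.2 * geology
exceed = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([pH, do, geology]))
fit = sm.Logit(exceed, X).fit(disp=False)
print(fit.params)                             # fitted log-odds coefficients
print("predicted occurrence rate:", fit.predict(X).mean())
```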

  8. On the Collisionless Asymmetric Magnetic Reconnection Rate

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Hsin; Hesse, M.; Cassak, P. A.; Shay, M. A.; Wang, S.; Chen, L.-J.

    2018-04-01

    A prediction of the steady state reconnection electric field in asymmetric reconnection is obtained by maximizing the reconnection rate as a function of the opening angle made by the upstream magnetic field on the weak magnetic field (magnetosheath) side. The prediction is within a factor of 2 of the widely examined asymmetric reconnection model (Cassak & Shay, 2007, https://doi.org/10.1063/1.2795630) in the collisionless limit, and they scale the same over a wide parameter regime. The previous model had the effective aspect ratio of the diffusion region as a free parameter, which simulations and observations suggest is on the order of 0.1, but the present model has no free parameters. In conjunction with the symmetric case (Liu et al., 2017, https://doi.org/10.1103/PhysRevLett.118.085101), this work further suggests that this nearly universal number 0.1, essentially the normalized fast-reconnection rate, is a geometrical factor arising from maximizing the reconnection rate within magnetohydrodynamic-scale constraints.

  9. Can arsenic occurrence rates in bedrock aquifers be predicted?

    PubMed Central

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 µg L−1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 µg L−1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology. PMID:22260208

  10. Predicting temperature drop rate of mass concrete during an initial cooling period using genetic programming

    NASA Astrophysics Data System (ADS)

    Bhattarai, Santosh; Zhou, Yihong; Zhao, Chunju; Zhou, Huawei

    2018-02-01

    Thermal cracking of concrete dams depends upon the rate at which the concrete is cooled (temperature drop rate per day) within an initial cooling period during the construction phase. Thus, in order to control thermal cracking of such structures, the temperature rise due to the heat of hydration of cement should be dropped at a suitable rate. In this study, an attempt has been made to formulate the relation between the cooling rate of mass concrete, the passage of time (age of concrete) and the water cooling parameters: flow rate and inlet temperature of the cooling water. Data measured during the summer seasons (April-August, 2009 to 2012) at a recently constructed high concrete dam were used to derive a prediction model with the help of the Genetic Programming (GP) software “Eureqa”. The coefficient of determination (R) and the mean square error (MSE) were used to evaluate the performance of the model; their values are 0.8855 and 0.002961, respectively. A sensitivity analysis was performed to evaluate the relative impact of the input parameters on the target parameter. Further, when the proposed model was tested with an independent dataset not included in the analysis, the results obtained from the GP model were close to the real field data.

  11. A Bayesian estimation of a stochastic predator-prey model of economic fluctuations

    NASA Astrophysics Data System (ADS)

    Dibeh, Ghassan; Luchinsky, Dmitry G.; Luchinskaya, Daria D.; Smelyanskiy, Vadim N.

    2007-06-01

    In this paper, we develop a Bayesian framework for the empirical estimation of the parameters of one of the best known nonlinear models of the business cycle: The Marx-inspired model of a growth cycle introduced by R. M. Goodwin. The model predicts a series of closed cycles representing the dynamics of labor's share and the employment rate in the capitalist economy. The Bayesian framework is used to empirically estimate a modified Goodwin model. The original model is extended in two ways. First, we allow for exogenous periodic variations of the otherwise steady growth rates of the labor force and productivity per worker. Second, we allow for stochastic variations of those parameters. The resultant modified Goodwin model is a stochastic predator-prey model with periodic forcing. The model is then estimated using a newly developed Bayesian estimation method on data sets representing growth cycles in France and Italy during the years 1960-2005. Results show that inference of the parameters of the stochastic Goodwin model can be achieved. The comparison of the dynamics of the Goodwin model with the inferred values of parameters demonstrates quantitative agreement with the growth cycle empirical data.
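
    The sketch below integrates a Goodwin-type (Lotka-Volterra-like) growth cycle with periodic forcing and multiplicative noise, to illustrate the class of stochastic predator-prey model described above. The equations, function name, and all parameter values are illustrative assumptions, not the paper's modified model or its estimates for France and Italy.

    ```python
    import numpy as np

    def simulate_goodwin_like(u0, v0, dt=0.01, n_steps=20_000, seed=0,
                              alpha=0.02, beta=0.01, sigma_cap=3.0,
                              gamma=0.5, rho=0.6,
                              forcing_amp=0.05, forcing_period=16.0, noise=0.002):
        """Euler-Maruyama sketch of a Goodwin-type growth cycle with periodic
        forcing and stochastic parameter variation. u = labor share,
        v = employment rate. All values are placeholders."""
        rng = np.random.default_rng(seed)
        u, v = np.empty(n_steps), np.empty(n_steps)
        u[0], v[0] = u0, v0
        for k in range(1, n_steps):
            t = k * dt
            # exogenous periodic variation of the otherwise steady growth rate
            a_t = alpha * (1.0 + forcing_amp * np.sin(2.0 * np.pi * t / forcing_period))
            du = u[k-1] * (rho * v[k-1] - gamma) * dt
            dv = v[k-1] * ((1.0 - u[k-1]) / sigma_cap - a_t - beta) * dt
            # stochastic variation enters as multiplicative noise
            du += noise * u[k-1] * np.sqrt(dt) * rng.standard_normal()
            dv += noise * v[k-1] * np.sqrt(dt) * rng.standard_normal()
            u[k], v[k] = u[k-1] + du, v[k-1] + dv
        return u, v

    u, v = simulate_goodwin_like(u0=0.85, v0=0.90)
    print("labor share range:", round(u.min(), 3), "-", round(u.max(), 3))
    print("employment rate range:", round(v.min(), 3), "-", round(v.max(), 3))
    ```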

  12. Modelling tourists arrival using time varying parameter

    NASA Astrophysics Data System (ADS)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers’ attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali over the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Because tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all of the predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA forecasts used for the predictor values, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
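
    A minimal sketch of the time varying parameter idea, assuming random-walk regression coefficients filtered with a standard Kalman recursion. The noise variances, the synthetic series, and the helper name tvp_kalman are assumptions for illustration, not the KOR/WON/INFKR/INFID series or the exact specification of the study.

    ```python
    import numpy as np

    def tvp_kalman(y, X, q=1e-3, r=1.0):
        """Kalman filter for a regression with random-walk coefficients:
            y_t = x_t' beta_t + e_t,   beta_t = beta_{t-1} + w_t.
        q and r are the state and observation noise variances (taken as known
        here; in practice they are estimated, e.g. by maximum likelihood)."""
        n, k = X.shape
        beta = np.zeros(k)               # state estimate
        P = np.eye(k) * 1e3              # diffuse initial covariance
        Q = np.eye(k) * q
        betas = np.zeros((n, k))
        for t in range(n):
            x = X[t]
            P = P + Q                                   # predict
            S = x @ P @ x + r                           # innovation variance
            K = P @ x / S                               # Kalman gain
            beta = beta + K * (y[t] - x @ beta)         # update state
            P = P - np.outer(K, x) @ P                  # update covariance
            betas[t] = beta
        return betas

    # Illustrative use with synthetic monthly data (placeholders, not the Bali series).
    rng = np.random.default_rng(42)
    n = 72
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
    true_beta = np.cumsum(rng.normal(scale=0.05, size=(n, 4)), axis=0) + [5.0, 1.0, -0.5, 0.3]
    y = np.sum(X * true_beta, axis=1) + rng.normal(scale=0.5, size=n)
    betas = tvp_kalman(y, X, q=3e-3, r=0.25)
    print("final coefficient estimates:", np.round(betas[-1], 2))
    ```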

  13. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    ERIC Educational Resources Information Center

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  14. Sensitivity analysis of pulse pileup model parameter in photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2017-03-01

    Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the parameter values becomes more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracies of the deadtime, the height of the pulse's negative tail, and the timing to the end of the pulse are more important than those of most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibrations.

  15. Calibration and validation of a general infiltration model

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.

  16. Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.

    PubMed

    Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang

    2016-01-01

    The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5K/min to 80K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for 10, 20 and 60K/min heating rates, providing excellent fits to experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80K/min) beyond those used to generate them. Finally, the predicted results based on optimized parameters were contrasted with those based on the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.
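
    A sketch of the three-component parallel first-order (Arrhenius) decomposition scheme fitted to a thermogravimetric mass-loss curve. Differential evolution is used here as a stand-in for Shuffled Complex Evolution (SCE-UA implementations exist in packages such as spotpy), and all kinetic values are placeholders rather than the optimized beech parameters.

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid
    from scipy.optimize import differential_evolution

    R_GAS = 8.314  # J / (mol K)

    def residual_mass(params, T, beta):
        """Normalized residual mass for a three-component (hemicellulose,
        cellulose, lignin) parallel first-order scheme at constant heating
        rate beta [K/s]. params = (logA1, E1, c1, logA2, E2, c2, logA3, E3, c3)."""
        params = np.asarray(params, dtype=float)
        logA, E, c = params[0::3], params[1::3], params[2::3]
        mass = np.ones_like(T)
        for j in range(3):
            # For first-order kinetics the conversion has a closed form in terms
            # of the temperature integral, evaluated here by cumulative trapezoids.
            integral = cumulative_trapezoid(np.exp(-E[j] / (R_GAS * T)), T, initial=0.0)
            alpha = 1.0 - np.exp(-(10.0 ** logA[j]) / beta * integral)
            mass -= c[j] * alpha
        return mass

    # Synthetic "experimental" TG curve at 20 K/min (placeholder kinetics, not beech).
    T = np.linspace(450.0, 900.0, 200)
    true = (7.0, 110e3, 0.25, 14.0, 190e3, 0.45, 3.0, 80e3, 0.15)
    data = residual_mass(true, T, beta=20.0 / 60.0)

    def objective(p):
        return np.sum((residual_mass(p, T, beta=20.0 / 60.0) - data) ** 2)

    bounds = [(2.0, 18.0), (60e3, 250e3), (0.0, 0.6)] * 3
    result = differential_evolution(objective, bounds, seed=1, maxiter=60, tol=1e-10)
    print("best objective value:", result.fun)
    # A and E are strongly correlated (kinetic compensation); fitting several
    # heating rates simultaneously, as in the study, tightens the estimates.
    print("best-fit parameters:", np.round(result.x, 2))
    ```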

  17. Modeling the biomechanical and injury response of human liver parenchyma under tensile loading.

    PubMed

    Untaroiu, Costin D; Lu, Yuan-Chiao; Siripurapu, Sundeep K; Kemper, Andrew R

    2015-01-01

    The rapid advancement in computational power has made human finite element (FE) models one of the most efficient tools for assessing the risk of abdominal injuries in a crash event. In this study, specimen-specific FE models were employed to quantify material and failure properties of human liver parenchyma using an FE optimization approach. Uniaxial tensile tests were performed on 34 parenchyma coupon specimens prepared from two fresh human livers. Each specimen was tested to failure at one of four loading rates (0.01 s−1, 0.1 s−1, 1 s−1, and 10 s−1) to investigate the effects of rate dependency on the biomechanical and failure response of liver parenchyma. Each test was simulated by prescribing the end displacements of specimen-specific FE models based on the corresponding test data. The parameters of a first-order Ogden material model were identified for each specimen by an FE optimization approach while simulating the pre-tear loading region. The mean material model parameters were then determined for each loading rate from the characteristic averages of the stress-strain curves, and a stochastic optimization approach was utilized to determine the standard deviations of the material model parameters. A hyperelastic material model using a tabulated formulation for rate effects showed good predictions in terms of tensile material properties of human liver parenchyma. Furthermore, the tissue tearing was numerically simulated using a cohesive zone modeling (CZM) approach. A layer of cohesive elements was added at the failure location, and the CZM parameters were identified by fitting the post-tear force-time history recorded in each test. The results show that the proposed approach is able to capture both the biomechanical and failure response, and accurately model the overall force-deflection response of liver parenchyma over a large range of tensile loading rates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a non-Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
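
    For the Coulomb case summarized above, the clock advance is simply the stress step divided by the constant background stressing rate and is independent of when the step is applied; a minimal illustration with arbitrary units:

    ```python
    def coulomb_clock_advance(stress_step, stressing_rate):
        """Clock advance for a simple Coulomb failure model: a static stress step
        delta_tau added to constant loading at rate tau_dot moves the failure
        time forward by delta_tau / tau_dot, independent of the step time t0."""
        return stress_step / stressing_rate

    # Example: a 0.1 MPa coseismic stress step on a fault loaded at 0.01 MPa/yr
    # advances failure by 10 years, whether the step occurs early or late in the cycle.
    print(coulomb_clock_advance(stress_step=0.1, stressing_rate=0.01), "years")
    ```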

  19. An information propagation model considering incomplete reading behavior in microblog

    NASA Astrophysics Data System (ADS)

    Su, Qiang; Huang, Jiajia; Zhao, Xiande

    2015-02-01

    Microblog is one of the most popular communication channels on the Internet, and has already become the third largest source of news and public opinions in China. Although researchers have studied information propagation in microblogs using epidemic models, previous studies have not considered the incomplete reading behavior of microblog users, so these models do not fit real situations well. In this paper, we propose an improved model, Microblog-Susceptible-Infected-Removed (Mb-SIR), for information propagation that explicitly considers the user's incomplete reading behavior. We also tested the effectiveness of the model using real data from Sina Microblog. We demonstrate that the new proposed model is more accurate in describing information propagation in microblogs. In addition, we investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removal rate, through numerical simulations. The simulation results show that, compared with the other parameters, the reading rate plays the most influential role in information propagation performance in microblogs.
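
    The sketch below illustrates the qualitative idea of incomplete reading in an SIR-type compartment model by scaling the infection term with a reading rate. It is a toy model built on stated assumptions, not the Mb-SIR equations or the Sina Microblog calibration of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def reading_limited_sir(t, y, read_rate, spread_rate, removal_rate):
        """SIR-type spreading in which a susceptible user must actually read a
        message before it can 'infect' them, so the effective transmission
        rate is read_rate * spread_rate (illustrative assumption)."""
        S, I, R = y
        new_informed = read_rate * spread_rate * S * I
        return [-new_informed, new_informed - removal_rate * I, removal_rate * I]

    y0 = [0.999, 0.001, 0.0]
    for read_rate in (0.2, 0.5, 1.0):   # fraction of received posts actually read
        sol = solve_ivp(reading_limited_sir, (0.0, 60.0), y0,
                        args=(read_rate, 0.6, 0.1), max_step=0.5)
        final_informed = 1.0 - sol.y[0, -1]
        print(f"read rate {read_rate}: final informed fraction = {final_informed:.2f}")
    ```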

  20. A size-structured model of bacterial growth and reproduction.

    PubMed

    Ellermeyer, S F; Pilyugin, S S

    2012-01-01

    We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.

  1. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  2. Multiplicity Control in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  3. Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.

    PubMed

    Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu

    2015-05-01

    Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that highly correlate with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise [Formula: see text] test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of three tissue types. The discrimination error rate was low among all tested combinations of three tissue types. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters were able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.

  4. Characterization of elastic-viscoplastic properties of an AS4/PEEK thermoplastic composite

    NASA Technical Reports Server (NTRS)

    Yoon, K. J.; Sun, C. T.

    1991-01-01

    The elastic-viscoplastic properties of an AS4/PEEK (APC-2) thermoplastic composite were characterized at 24 C (75 F) and 121 C (250 F) by using a one-parameter viscoplasticity model. To determine the strain-rate effects, uniaxial tension tests were performed on unidirectional off-axis coupon specimens with different monotonic strain rates. A modified Bodner and Partom's model was also used to describe the viscoplasticity of the thermoplastic composite. The experimental results showed that viscoplastic behavior can be characterized quite well using the one-parameter overstress viscoplasticity model.

  5. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may cause distortion of the Poisson statistic of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid model obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter; i.e. whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for paralyzable and hybrid models.
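
    A minimal Monte Carlo sketch (not the GMSIM code) of how the interval-distribution statistics can be compared across a perfect detector and the nonparalyzable and paralyzable dead time models; the count rate and dead time below are arbitrary placeholders.

    ```python
    import numpy as np
    from scipy import stats

    def recorded_intervals(true_rate, dead_time, model, n_events=100_000, seed=0):
        """Intervals between *recorded* counts for a detector with dead time.
        model: 'perfect', 'nonparalyzable', or 'paralyzable'."""
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0 / true_rate, n_events))
        if model == "perfect":
            recorded = arrivals
        elif model == "nonparalyzable":
            recorded, last = [], -np.inf
            for t in arrivals:
                if t - last >= dead_time:      # detector is live again
                    recorded.append(t)
                    last = t
            recorded = np.array(recorded)
        elif model == "paralyzable":
            # A count is recorded only if no true event occurred within the
            # preceding dead time (every event extends the dead period).
            gaps = np.diff(arrivals)
            recorded = arrivals[1:][gaps >= dead_time]
        else:
            raise ValueError(model)
        return np.diff(recorded)

    for model in ("perfect", "nonparalyzable", "paralyzable"):
        iv = recorded_intervals(true_rate=1e4, dead_time=1e-4, model=model)
        print(f"{model:15s} observed rate = {1.0 / iv.mean():7.0f}  "
              f"skewness = {stats.skew(iv):.2f}  "
              f"excess kurtosis = {stats.kurtosis(iv):.2f}")
    ```

    For the perfect detector the intervals are exponential (skewness 2, excess kurtosis 6), and the nonparalyzable case reproduces those values because its intervals are simply shifted exponentials, consistent with the comparison described above.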

  6. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.

  7. Syndromes of Self-Reported Psychopathology for Ages 18-59 in 29 Societies.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Tumer, Lori V; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R J; Funabiki, Yasuko; Guðmundsson, Halldór S; Harder, Valerie S; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B; Vazquez, Natalia; Zasepa, Ewa

    2015-06-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.

  8. Information spreading dynamics in hypernetworks

    NASA Astrophysics Data System (ADS)

    Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong

    2018-04-01

    Contact patterns and spreading strategies fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, individuals are affected by all of their neighbors across different social relationships. Here, we develop a mathematical approach to depict the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One model is based on global transmission, corresponding to the RP strategy. The other is based on local transmission, corresponding to the CP strategy. These models degenerate into complex network models for a special parameter value. Thus hypernetwork models extend the traditional models and are more realistic. Further, we discuss the impact of parameters on the models, including the structure parameters of the hypernetwork, the spreading rate, the recovery rate, and the information seed. Propagation time and the density of informed nodes can reveal the overall trend of information dissemination. Comparing these two models, we find that there is no spreading threshold in RP, while there exists a spreading threshold in CP.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorgensen, S.

    Testing the behavior of metals in extreme environments is not always feasible, so material scientists use models to try to predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].
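
    For reference, the Johnson-Cook flow stress has the well-known form sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot_0)) * (1 - T*^m), with T* the homologous temperature. The sketch below evaluates it with placeholder constants, not the parameters optimized against the Lassila et al. data.

    ```python
    import numpy as np

    def johnson_cook_stress(strain, strain_rate, T, A, B, n, C, m,
                            ref_strain_rate=1.0, T_room=293.0, T_melt=1793.0):
        """Johnson-Cook flow stress [Pa]:
        (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot_0)) * (1 - T*^m),
        where T* = (T - T_room) / (T_melt - T_room)."""
        T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
        return ((A + B * strain**n)
                * (1.0 + C * np.log(strain_rate / ref_strain_rate))
                * (1.0 - T_star**m))

    # Stress-strain curves at two strain rates (all constants are illustrative only).
    eps = np.linspace(1e-3, 0.5, 50)
    for rate in (1.0, 1e3):   # 1/s
        sigma = johnson_cook_stress(eps, rate, T=600.0,
                                    A=350e6, B=275e6, n=0.36, C=0.022, m=1.0)
        print(f"strain rate {rate:g}/s: flow stress at eps=0.5 is {sigma[-1]/1e6:.0f} MPa")
    ```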

  10. Peritoneal fluid transport in CAPD patients with different transport rates of small solutes.

    PubMed

    Sobiecka, Danuta; Waniewski, Jacek; Weryński, Andrzej; Lindholm, Bengt

    2004-01-01

    Continuous ambulatory peritoneal dialysis (CAPD) patients with high peritoneal solute transport rate often have inadequate peritoneal fluid transport. It is not known whether this inadequate fluid transport is due solely to a too rapid fall of osmotic pressure, or if the decreased effectiveness of fluid transport is also a contributing factor. Our objective was to analyze fluid transport parameters and the effectiveness of dialysis fluid osmotic pressure in the induction of fluid flow in CAPD patients with different small solute transport rates. 44 CAPD patients were placed in low (n = 6), low-average (n = 13), high-average (n = 19), and high (n = 6) transport groups according to a modified peritoneal equilibration test (PET). The study involved a 6-hour peritoneal dialysis dwell with 2 L 3.86% glucose dialysis fluid for each patient. Radioisotopically labeled serum albumin was added as a volume marker. The fluid transport parameters (osmotic conductance and fluid absorption rate) were estimated using three mathematical models of fluid transport: (1) Pyle model (model P), which describes ultrafiltration rate as an exponential function of time; (2) model OS, which is based on the linear relationship of ultrafiltration rate and overall osmolality gradient between dialysis fluid and blood; and (3) model G, which is based on the linear relationship between ultrafiltration rate and glucose concentration gradient between dialysis fluid and blood. Diffusive mass transport coefficients (K(BD)) for glucose, urea, creatinine, potassium, and sodium were estimated using the modified Babb-Randerson-Farrell model. The high transport group had significantly lower dialysate volume and glucose and osmolality gradients between dialysate and blood, but significantly higher K(BD) for small solutes compared with the other transport groups. Osmotic conductance, fluid absorption rate, and initial ultrafiltration rate did not differ among the transport groups for model OS and model P. Model G yielded unrealistic values of fluid transport parameters that differed from those estimated by models OS and P. The K(BD) values for small solutes were significantly different among the groups, and did not correlate with fluid transport parameters for model OS. The difference in fluid transport between the different transport groups was due only to the differences in the rate of disappearance of the overall osmotic pressure of the dialysate, which was a combined result of the transport rate of glucose and other small solutes. Although the glucose gradient is the major factor influencing ultrafiltration rate, other solutes, such as urea, are also of importance. The counteractive effect of plasma small solutes on transcapillary ultrafiltration was found to be especially notable in low transport patients. Thus, glucose gradient alone should not be considered the only force that shapes the ultrafiltration profile during peritoneal dialysis. We did not find any correlations between diffusive mass transport coefficients for small solutes and fluid transport parameters such as osmotic conductance or fluid and volume marker absorption. We may thus conclude that the pathway(s) for fluid transport appears to be partly independent from the pathway(s) for small solute transport, which supports the hypothesis of different pore types for fluid and solute transport.

  11. Biodegradation modelling of a dissolved gasoline plume applying independent laboratory and field parameters

    NASA Astrophysics Data System (ADS)

    Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.

    2000-12-01

    Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
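
    A minimal batch-system sketch of the dual-Monod kinetics referred to above (substrate, electron acceptor, biomass), showing how limited oxygen availability caps the degradation that laboratory-fitted zero- or first-order rates would otherwise overpredict. The rate constants and stoichiometry are illustrative assumptions, not the BIO3D Borden parameter set.

    ```python
    from scipy.integrate import solve_ivp

    def dual_monod_batch(t, y, k_max, K_s, K_o, Y, b, gamma_o):
        """Substrate S, oxygen O, and biomass M (mg/L) under dual-Monod kinetics."""
        S, O, M = y
        O = max(O, 0.0)
        growth = k_max * M * S / (K_s + S) * O / (K_o + O)   # biomass growth rate
        dS = -growth / Y                  # substrate consumed per unit growth
        dO = -gamma_o * growth / Y        # oxygen consumed stoichiometrically
        dM = growth - b * M               # growth minus first-order decay
        return [dS, dO, dM]

    sol = solve_ivp(dual_monod_batch, (0.0, 100.0), [20.0, 8.0, 0.1],
                    args=(0.5, 1.0, 0.2, 0.4, 0.01, 3.0), max_step=0.5)
    print("remaining substrate after 100 d: %.2f mg/L" % sol.y[0, -1])
    print("remaining oxygen: %.2f mg/L (degradation stalls once O2 is exhausted)"
          % sol.y[1, -1])
    ```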

  12. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
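
    As one concrete example of a photosynthesis-temperature model parameterised directly by Pmax and Topt, the sketch below fits a Gaussian-shaped response to hypothetical data. The functional form and numbers are assumptions for illustration; they are not one of the twelve published models or the seagrass datasets evaluated in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def photosynthesis_temperature(T, P_max, T_opt, width):
        """Gaussian-shaped P-T response with directly interpretable parameters."""
        return P_max * np.exp(-((T - T_opt) / width) ** 2)

    # Hypothetical photosynthesis measurements (placeholders, not the seagrass data).
    T_obs = np.array([15.0, 20.0, 25.0, 28.0, 31.0, 34.0, 37.0, 40.0])
    P_obs = np.array([2.1, 3.4, 4.6, 5.1, 5.0, 4.2, 2.8, 1.1])

    popt, pcov = curve_fit(photosynthesis_temperature, T_obs, P_obs, p0=[5.0, 30.0, 8.0])
    perr = np.sqrt(np.diag(pcov))
    print("P_max = %.2f +/- %.2f, T_opt = %.1f +/- %.1f degC"
          % (popt[0], perr[0], popt[1], perr[1]))
    ```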

  13. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    PubMed

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  14. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123

  15. Colloid Transport in Saturated Porous Media: Elimination of Attachment Efficiency in a New Colloid Transport Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.

    A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.

  16. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company increases the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using OptQuest of the Oracle Crystal Ball software. The results of the optimization indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
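
    A hedged sketch of the weighted-sum idea for trading off production cost against environmental impact over cutting speed and feed rate. The cost and impact expressions are invented placeholders; the paper's cost model, eco-indicator 99 conversion, and constraints such as cutting force and surface roughness are not reproduced here.

    ```python
    from scipy.optimize import minimize

    def production_cost(x):
        """Placeholder cost: machining time falls as cutting speed v (m/min) and
        feed rate f (mm/rev) rise, while a Taylor-type tool-wear term rises."""
        v, f = x
        machining_time = 1.0e3 / (v * f)
        tool_cost = 1.0e-5 * v**2.5 * f
        return 0.5 * machining_time + tool_cost

    def environmental_impact(x):
        """Placeholder impact proxy based on machining energy use."""
        v, f = x
        return 2.0e3 / (v * f) + 0.02 * v

    def weighted_objective(x, w=0.5):
        # Simple weighted-sum scalarisation of the two objectives.
        return w * production_cost(x) + (1.0 - w) * environmental_impact(x)

    bounds = [(50.0, 300.0), (0.05, 0.5)]      # cutting-speed and feed-rate limits
    res = minimize(weighted_objective, x0=[100.0, 0.2], bounds=bounds)
    print("optimal cutting speed %.0f m/min, feed rate %.2f mm/rev" % tuple(res.x))
    ```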

  17. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented by a mathematical model in the form of a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. Newly reported influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact rate, proportional to the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
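
    A minimal sketch of estimating a contact-rate parameter by regularized least squares against incidence data from an SIR model. Here solve_ivp replaces the FEM/Euler discretisation of the paper, the data are synthetic placeholders rather than the CDC series, and the Tikhonov penalty is one simple choice of regularization.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize_scalar

    def daily_new_infections(beta, gamma, S0, I0, days):
        """Daily new infections from a simple SIR model (fractions of population)."""
        def rhs(t, y):
            S, I, R = y
            return [-beta * S * I, beta * S * I - gamma * I, gamma * I]
        sol = solve_ivp(rhs, (0.0, days), [S0, I0, 0.0], t_eval=np.arange(days + 1))
        return -np.diff(sol.y[0])          # drop in susceptibles per day

    # Synthetic "observed" incidence (placeholder data, not the CDC series).
    rng = np.random.default_rng(0)
    obs = daily_new_infections(0.35, 0.2, 0.999, 0.001, 60)
    obs = obs * (1.0 + 0.1 * rng.standard_normal(obs.size))

    def regularized_loss(beta, lam=1e-3, beta_prior=0.3):
        model = daily_new_infections(beta, 0.2, 0.999, 0.001, 60)
        # Tikhonov-style penalty pulls the estimate toward a prior guess.
        return np.sum((model - obs) ** 2) + lam * (beta - beta_prior) ** 2

    fit = minimize_scalar(regularized_loss, bounds=(0.05, 1.0), method="bounded")
    print("estimated contact-rate parameter beta = %.3f" % fit.x)
    ```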

  18. Understanding the mechanisms of solid-water reactions through analysis of surface topography.

    PubMed

    Bandstra, Joel Z; Brantley, Susan L

    2015-12-01

    The topography of a reactive surface contains information about the reactions that form or modify the surface and, therefore, it should be possible to characterize reactivity using topography parameters such as surface area, roughness, or fractal dimension. As a test of this idea, we consider a two-dimensional (2D) lattice model for crystal dissolution and examine a suite of topography parameters to determine which may be useful for predicting rates and mechanisms of dissolution. The model is based on the assumption that the reactivity of a surface site decreases with the number of nearest neighbors. We show that the steady-state surface topography in our model system is a function of, at most, two variables: the ratio of the rate of loss of sites with two neighbors versus three neighbors (d(2)/d(3)) and the ratio of the rate of loss of sites with one neighbor versus three neighbors (d(1)/d(3)). This means that relative rates can be determined from two parameters characterizing the topography of a surface provided that the two parameters are independent of one another. It also means that absolute rates cannot be determined from measurements of surface topography alone. To identify independent sets of topography parameters, we simulated surfaces from a broad range of d(1)/d(3) and d(2)/d(3) and computed a suite of common topography parameters for each surface. Our results indicate that the fractal dimension D and the average spacing between steps, E[s], can serve to uniquely determine d(1)/d(3) and d(2)/d(3) provided that sufficiently strong correlations exist between the steps. Sufficiently strong correlations exist in our model system when D>1.5 (which corresponds to D>2.5 for real 3D reactive surfaces). When steps are uncorrelated, surface topography becomes independent of step retreat rate and D is equal to 1.5. Under these conditions, measures of surface topography are not independent and any single topography parameter contains all of the available mechanistic information about the surface. Our results also indicate that root-mean-square roughness cannot be used to reliably characterize the surface topography of fractal surfaces because it is an inherently noisy parameter for such surfaces with the scale of the noise being independent of length scale.

  19. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.

  20. Modular Aero-Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Guo, Ten-Huei

    2006-01-01

    The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.

  1. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.

  2. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  3. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.

  4. Optimization and Prediction of Ultimate Tensile Strength in Metal Active Gas Welding.

    PubMed

    Ampaiboon, Anusit; Lasunon, On-Uma; Bubphachot, Bopit

    2015-01-01

    We investigated the effect of welding parameters on the ultimate tensile strength of structural steel, ST37-2, welded by Metal Active Gas welding. A fractional factorial design was used to determine the significance of six parameters: wire feed rate, welding voltage, welding speed, travel angle, tip-to-work distance, and shielding gas flow rate. A regression model to predict ultimate tensile strength was developed. Finally, we verified the optimized process parameters experimentally. We achieved an optimum tensile strength of 558 MPa. Wire feed rate (19 m/min) had the greatest effect, followed by tip-to-work distance (7 mm), welding speed (200 mm/min), welding voltage (30 V), and travel angle (60°). A shielding gas flow rate of 10 L/min was slightly better but had little effect in the 10-20 L/min range. Tests showed that our regression model was able to predict the ultimate tensile strength to within 4%.

  5. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  6. An Indirect System Identification Technique for Stable Estimation of Continuous-Time Parameters of the Vestibulo-Ocular Reflex (VOR)

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Wallin, Ragnar; Boyle, Richard D.

    2013-01-01

    The vestibulo-ocular reflex (VOR) is a well-known dual mode bifurcating system that consists of slow and fast modes associated with nystagmus and saccade, respectively. Estimation of continuous-time parameters of nystagmus and saccade models are known to be sensitive to estimation methodology, noise and sampling rate. The stable and accurate estimation of these parameters are critical for accurate disease modelling, clinical diagnosis, robotic control strategies, mission planning for space exploration and pilot safety, etc. This paper presents a novel indirect system identification method for the estimation of continuous-time parameters of VOR employing standardised least-squares with dual sampling rates in a sparse structure. This approach permits the stable and simultaneous estimation of both nystagmus and saccade data. The efficacy of this approach is demonstrated via simulation of a continuous-time model of VOR with typical parameters found in clinical studies and in the presence of output additive noise.

  7. Linear models of activation cascades: analytical solutions and coarse-graining of delayed signal transduction

    PubMed Central

    Desikan, Radhika

    2016-01-01

    Cellular signal transduction usually involves activation cascades, the sequential activation of a series of proteins following the reception of an input signal. Here, we study the classic model of weakly activated cascades and obtain analytical solutions for a variety of inputs. We show that in the special but important case of optimal gain cascades (i.e. when the deactivation rates are identical) the downstream output of the cascade can be represented exactly as a lumped nonlinear module containing an incomplete gamma function with real parameters that depend on the rates and length of the cascade, as well as parameters of the input signal. The expressions obtained can be applied to the non-identical case when the deactivation rates are random to capture the variability in the cascade outputs. We also show that cascades can be rearranged so that blocks with similar rates can be lumped and represented through our nonlinear modules. Our results can be used both to represent cascades in computational models of differential equations and to fit data efficiently, by reducing the number of equations and parameters involved. In particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we show how the obtained nonlinear modules can be used instead of delay differential equations to model delays in signal transduction. PMID:27581482
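
    For the identical-deactivation-rate ("optimal gain") case described above, the lumped step response can be written directly with a regularized lower incomplete gamma function. The sketch below is a minimal rendering of that closed form under the standard weakly activated cascade equations dx_i/dt = a*x_{i-1} - b*x_i with a constant step input applied at t = 0; the parameter values in the example call are illustrative.

    ```python
    # Minimal sketch of the identical-rate cascade: for a constant step input u,
    # the n-th species is u * (a/b)**n * P(n, b*t), where P is the regularized
    # lower incomplete gamma function. Parameter values below are illustrative.
    import numpy as np
    from scipy.special import gammainc  # regularized lower incomplete gamma P(n, x)

    def cascade_step_response(t, n, a, b, u=1.0):
        """Output of the n-th cascade species for a step input of height u."""
        return u * (a / b) ** n * gammainc(n, b * t)

    t = np.linspace(0.0, 20.0, 201)
    y = cascade_step_response(t, n=4, a=1.0, b=1.0)  # n may be non-integer when fitted
    ```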

  8. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    NASA Astrophysics Data System (ADS)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  9. Inference of directional selection and mutation parameters assuming equilibrium.

    PubMed

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. A fully-stochasticized, age-structured population model for population viability analysis of fish: Lower Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.

    2017-01-01

    We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
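
    As a minimal sketch of the variance partitioning described above (parameter variance drawn once per iteration, temporal variance drawn each time step, individual variance left implicit), the function below layers the two levels of noise on a generic Leslie-matrix projection. The matrix structure and the lognormal noise model are assumptions for illustration, not the published pallid sturgeon parameterization.

    ```python
    # Minimal sketch of layering parameter variance (one draw per iteration) and
    # temporal variance (one draw per time step) in an age-structured projection.
    import numpy as np

    def project(mean_survival, mean_fecundity, n0, years, iters,
                cv_param=0.1, cv_temporal=0.2, seed=0):
        rng = np.random.default_rng(seed)
        n_age = len(mean_survival)
        trajectories = np.empty((iters, years + 1))
        for it in range(iters):
            # parameter variance: uncertainty about the mean rates themselves
            s_it = np.asarray(mean_survival) * rng.lognormal(0.0, cv_param, n_age)
            f_it = np.asarray(mean_fecundity) * rng.lognormal(0.0, cv_param, n_age)
            n = np.array(n0, dtype=float)
            trajectories[it, 0] = n.sum()
            for t in range(1, years + 1):
                # temporal variance: year-to-year environmental fluctuation
                s_t = np.clip(s_it * rng.lognormal(0.0, cv_temporal, n_age), 0.0, 1.0)
                f_t = f_it * rng.lognormal(0.0, cv_temporal, n_age)
                A = np.zeros((n_age, n_age))
                A[0, :] = f_t                                              # fecundities
                A[np.arange(1, n_age), np.arange(n_age - 1)] = s_t[:-1]    # survival
                A[-1, -1] += s_t[-1]                                       # plus-group survival
                n = A @ n
                trajectories[it, t] = n.sum()
        return trajectories
    ```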

  11. Model-Based Approach to Predict Adherence to Protocol During Antiobesity Trials.

    PubMed

    Sharma, Vishnu D; Combes, François P; Vakilynejad, Majid; Lahu, Gezim; Lesko, Lawrence J; Trame, Mirjam N

    2018-02-01

    Development of antiobesity drugs is continuously challenged by high dropout rates during clinical trials. The objective was to develop a population pharmacodynamic model that describes the temporal changes in body weight, considering disease progression, lifestyle intervention, and drug effects. Markov modeling (MM) was applied for quantification and characterization of responders and nonresponders as key drivers of dropout rates, to ultimately support clinical trial simulations and the outcome in terms of trial adherence. Subjects (n = 4591) from 6 Contrave® trials were included in this analysis. An indirect-response model developed by van Wart et al was used as a starting point. Inclusion of drug effect was dose driven, using a population dose- and time-dependent pharmacodynamic (DTPD) model. Additionally, a population-pharmacokinetic parameter- and data (PPPD)-driven model was developed using the final DTPD model structure and final parameter estimates from a previously developed population pharmacokinetic model based on available Contrave® pharmacokinetic concentrations. Last, MM was developed to predict transition rate probabilities among responder, nonresponder, and dropout states, driven by the pharmacodynamic effect resulting from the DTPD or PPPD model. Covariates included in the models were diabetes mellitus and race. The linked DTPD-MM and PPPD-MM were able to predict transition rates among responder, nonresponder, and dropout states well. The analysis concluded that body-weight change is an important factor influencing dropout rates, and the MM showed that overall a DTPD model-driven approach provides a reasonable prediction of clinical trial outcome probabilities, similar to a pharmacokinetic-driven approach. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.

  12. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.

  13. Modeling As(III) oxidation and removal with iron electrocoagulation in groundwater.

    PubMed

    Li, Lei; van Genuchten, Case M; Addy, Susan E A; Yao, Juanjuan; Gao, Naiyun; Gadgil, Ashok J

    2012-11-06

    Understanding the chemical kinetics of arsenic during electrocoagulation (EC) treatment is essential for a deeper understanding of arsenic removal using EC under a variety of operating conditions and solution compositions. We describe a highly constrained, simple chemical dynamic model of As(III) oxidation and As(III,V), Si, and P sorption for the EC system using model parameters extracted from some of our experimental results and previous studies. Our model predictions agree well with both data extracted from previous studies and our observed experimental data over a broad range of operating conditions (charge dosage rate) and solution chemistry (pH, co-occurring ions) without free model parameters. Our model provides insights into why higher pH and lower charge dosage rate (Coulombs/L/min) facilitate As(III) removal by EC and sheds light on the debate in the recent published literature regarding the mechanism of As(III) oxidation during EC. Our model also provides practically useful estimates of the minimum amount of iron required to remove 500 μg/L As(III) to <50 μg/L. Parameters measured in this work include the ratio of rate constants for Fe(II) and As(III) reactions with Fe(IV) in synthetic groundwater (k1/k2 = 1.07) and the apparent rate constant of Fe(II) oxidation with dissolved oxygen at pH 7 (k_app = 10^0.22 M^-1 s^-1).
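
    Using the apparent rate constant reported above (k_app = 10^0.22 M^-1 s^-1 at pH 7), the sketch below integrates a Fe(II) oxidation time course, assuming the rate is first order in each of Fe(II) and dissolved O2 at fixed pH and a 4:1 Fe(II):O2 stoichiometry; the initial concentrations are illustrative assumptions, and this is not the authors' full EC model.

    ```python
    # Minimal sketch: Fe(II) oxidation by dissolved oxygen with the reported k_app,
    # assuming rate = k_app * [Fe(II)] * [O2] at fixed pH 7.
    from scipy.integrate import solve_ivp

    K_APP = 10 ** 0.22       # M^-1 s^-1, apparent rate constant at pH 7 (from the abstract)

    def rhs(t, y):
        fe2, o2 = y
        rate = K_APP * fe2 * o2          # M/s
        return [-rate, -0.25 * rate]     # 4 Fe(II) oxidized per O2 consumed

    y0 = [1e-4, 2.5e-4]                  # [Fe(II)]0, [O2]0 in M (illustrative)
    sol = solve_ivp(rhs, (0.0, 3600.0), y0, max_step=10.0)
    print(f"Fe(II) remaining after 1 h: {sol.y[0, -1]:.2e} M")
    ```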

  14. A dynamical model on deposit and loan of banking: A bifurcation analysis

    NASA Astrophysics Data System (ADS)

    Sumarti, Novriana; Hasmi, Abrari Noor

    2015-09-01

    A dynamical model, a technique that uses mathematical equations, can determine the observed state, for example bank profits, for all future times based on the current state. It also shows whether small changes in the state of the system create small or large changes in the future, depending on the model. In this research we develop a dynamical system of the form dD/dt = f(D, L, r_D, r_L, r), dL/dt = g(D, L, r_D, r_L, r), where D and r_D are the volume of deposit and its rate, L and r_L are the volume of loan and its rate, and r is the interbank market rate. The model requires parameters that give connections between two variables or between two derivative functions. In this paper we simulate the model for several parameter values. We perform bifurcation analysis on the dynamics of the system in order to identify the parameters that control its stability behaviour. The results show that the system has a limit cycle for small values of the loan interest rate, so the deposit and loan volumes fluctuate and oscillate strongly. If the loan interest rate is too high, the loan volume decreases and eventually vanishes, and the system converges to its carrying capacity.
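
    A minimal sketch of the kind of bifurcation scan described above: integrate the deposit/loan system over a range of loan rates r_L and inspect the late-time behaviour of each trajectory. Because the abstract does not give the functional forms of f and g, they are left as caller-supplied functions; the window and step settings are illustrative.

    ```python
    # Minimal sketch: scan r_L, integrate the system, and flag oscillatory vs. decaying behaviour.
    from scipy.integrate import solve_ivp

    def scan_loan_rate(f, g, rD, r, rL_values, y0=(1.0, 1.0), t_end=500.0):
        results = {}
        for rL in rL_values:
            rhs = lambda t, y: [f(y[0], y[1], rD, rL, r), g(y[0], y[1], rD, rL, r)]
            sol = solve_ivp(rhs, (0.0, t_end), y0, max_step=1.0)
            tail = sol.y[:, sol.t > 0.8 * t_end]        # late-time window
            results[rL] = {
                "loan_range": float(tail[1].max() - tail[1].min()),  # large => sustained oscillation
                "loan_final": float(tail[1, -1]),
            }
        return results
    ```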

  15. Application of response surface methodology and semi-mechanistic model to optimize fluoride removal using crushed concrete in a fixed-bed column.

    PubMed

    Gu, Bon-Wun; Lee, Chang-Gu; Park, Seong-Jik

    2018-03-01

    The aim of this study was to investigate the removal of fluoride from aqueous solutions by using crushed concrete fines as a filter medium under varying conditions of pH 3-7, flow rate of 0.3-0.7 mL/min, and filter depth of 10-20 cm. The performance of fixed-bed columns was evaluated on the basis of the removal ratio (Re), uptake capacity (qe), degree of sorbent used (DoSU), and sorbent usage rate (SUR) obtained from breakthrough curves (BTCs). Three widely used semi-mechanistic models, that is, Bohart-Adams, Thomas, and Yoon-Nelson models, were applied to simulate the BTCs and to derive the design parameters. The Box-Behnken design of response surface methodology (RSM) was used to elucidate the individual and interactive effects of the three operational parameters on the column performance and to optimize these parameters. The results demonstrated that pH is the most important factor in the performance of fluoride removal by a fixed-bed column. The flow rate had a significant negative influence on Re and DoSU, and the effect of filter depth was observed only in the regression model for DoSU. Statistical analysis indicated that the model attained from the RSM study is suitable for describing the semi-mechanistic model parameters.
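
    Two of the semi-mechanistic breakthrough models named above have simple closed forms that can be fitted to breakthrough-curve data (e.g. with scipy.optimize.curve_fit). The sketch below writes them as plain functions; units must be kept consistent across the rate constants, c0, m, Q and t, and no column data from the study are reproduced here.

    ```python
    # Minimal sketch of the Thomas and Yoon-Nelson breakthrough-curve models.
    import numpy as np

    def thomas(t, k_th, q0, c0, m, Q):
        """C/C0 for the Thomas model: k_th rate constant, q0 uptake capacity,
        c0 influent concentration, m sorbent mass, Q flow rate."""
        return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * c0 * t))

    def yoon_nelson(t, k_yn, tau):
        """C/C0 for the Yoon-Nelson model: k_yn rate constant,
        tau = time to 50% breakthrough."""
        return 1.0 / (1.0 + np.exp(k_yn * (tau - t)))
    ```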

  16. Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu

    2017-10-01

    In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), the high-density network of artificial hydraulic fractures makes the fluid flow around fracture tips behave as non-linear flow. Moreover, the production behaviors of different artificial hydraulic fractures also differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining the source function with the boundary element method. The model is first validated against both an analytical model and a simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate-decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate-decline characteristics of MFHWs; parameters related to the MFHWs, such as fracture conductivity and length, can also affect the rate characteristics. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips. Therefore, our model can be used to predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behavior at different fracture stages. Compared to numerical and analytical methods, this model not only reduces the computational burden but also maintains high accuracy.

  17. Plant growth and respiration re-visited: maintenance respiration defined – it is an emergent property of, not a separate process within, the system – and why the respiration : photosynthesis ratio is conservative

    PubMed Central

    Thornley, John H. M.

    2011-01-01

    Background and Aims Plant growth and respiration still has unresolved issues, examined here using a model. The aims of this work are to compare the model's predictions with McCree's observation-based respiration equation which led to the ‘growth respiration/maintenance respiration paradigm’ (GMRP) – this is required to give the model credibility; to clarify the nature of maintenance respiration (MR) using a model which does not represent MR explicitly; and to examine algebraic and numerical predictions for the respiration:photosynthesis ratio. Methods A two-state variable growth model is constructed, with structure and substrate, applicable on plant to ecosystem scales. Four processes are represented: photosynthesis, growth with growth respiration (GR), senescence giving a flux towards litter, and a recycling of some of this flux. There are four significant parameters: growth efficiency, rate constants for substrate utilization and structure senescence, and fraction of structure returned to the substrate pool. Key Results The model can simulate McCree's data on respiration, providing an alternative interpretation to the GMRP. The model's parameters are related to parameters used in this paradigm. MR is defined and calculated in terms of the model's parameters in two ways: first during exponential growth at zero growth rate; and secondly at equilibrium. The approaches concur. The equilibrium respiration:photosynthesis ratio has the value of 0·4, depending only on growth efficiency and recycling fraction. Conclusions McCree's equation is an approximation that the model can describe; it is mistaken to interpret his second coefficient as a maintenance requirement. An MR rate is defined and extracted algebraically from the model. MR as a specific process is not required and may be replaced with an approach from which an MR rate emerges. The model suggests that the respiration:photosynthesis ratio is conservative because it depends on two parameters only whose values are likely to be similar across ecosystems. PMID:21948663

  18. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
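
    A minimal sketch of the corrections discussed above, using statsmodels: a Poisson GLM with a log person-time offset, the same fit with robust (sandwich) standard errors, and a negative binomial alternative, plus a crude Pearson-based dispersion check. X, deaths and person_time are placeholders for the piecewise-exponential design matrix and data; this is not the authors' relative-survival code.

    ```python
    # Minimal sketch: Poisson GLM with offset, robust-SE refit, and negative binomial fit.
    import numpy as np
    import statsmodels.api as sm

    def fit_excess_rate_models(X, deaths, person_time):
        offset = np.log(person_time)
        poisson = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=offset).fit()
        robust = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=offset).fit(cov_type="HC0")
        negbin = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(), offset=offset).fit()
        # crude overdispersion check: Pearson chi-square / residual df >> 1
        dispersion = poisson.pearson_chi2 / poisson.df_resid
        return {"poisson": poisson, "robust": robust, "negbin": negbin,
                "dispersion": dispersion}
    ```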

  19. Utilizing a one-dimensional multispecies model to simulate the nutrient reduction and biomass structure in two types of H2-based membrane-aeration biofilm reactors (H2-MBfR): model development and parametric analysis.

    PubMed

    Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui

    2016-02-01

    In this study, a one-dimensional multispecies model (ODMSM) was utilized to simulate NO3(-)-N and ClO4(-) reduction performance in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3(-)-N/ClO4(-) loading rates, H2 partial pressure, etc.). Before the simulation, we conducted a sensitivity analysis of some key parameters that would fluctuate under different environmental conditions; we then used the experimental data to calibrate the more sensitive parameters μ1 and μ2 (maximum specific growth rates of denitrification bacteria and perchlorate reduction bacteria) in the two H2-MBfRs. The difference in the two key parameters' values between the two types of reactors may result from the different carbon sources fed to the reactors. From the simulation results of six different operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2), the applicability of the model was confirmed, and the variation of the removal tendency under different operating conditions could be well simulated. In addition, the suitability of operating parameters (H2 partial pressure, etc.) could be judged, especially under high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining the operating parameters under specific conditions in practical applications.

  20. Differential Evolution algorithm applied to FSW model calibration

    NASA Astrophysics Data System (ADS)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
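
    A minimal sketch of this calibration loop using SciPy's differential evolution, exposing the algorithm settings the study varied (evolution strategy, mutation scaling factor, crossover rate). The two calibration parameters, their bounds and the objective surface are placeholders; in practice the objective would run the FSW CFD model and return its misfit against measured thermal data.

    ```python
    # Minimal sketch: calibrate adjustable model parameters with differential evolution.
    from scipy.optimize import differential_evolution

    def misfit(params):
        """Placeholder objective: run the CFD model with `params` and return the
        squared error against measurements (dummy quadratic surface here)."""
        heat_input_eff, contact_conductance = params
        return (heat_input_eff - 0.7) ** 2 + (contact_conductance - 0.5) ** 2

    bounds = [(0.1, 1.0), (0.0, 1.0)]   # assumed ranges for the two calibration parameters
    result = differential_evolution(
        misfit, bounds,
        strategy="best1bin",      # evolution strategy
        mutation=(0.5, 1.0),      # mutation scaling factor F (dithered)
        recombination=0.7,        # crossover rate CR
        tol=1e-6, seed=1,
    )
    print(result.x, result.fun)
    ```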

  1. Determining Spacecraft Reaction Wheel Friction Parameters

    NASA Technical Reports Server (NTRS)

    Sarani, Siamak

    2009-01-01

    Software was developed to characterize the drag in each of the Cassini spacecraft's Reaction Wheel Assemblies (RWAs) to determine the RWA friction parameters. This tool measures the drag torque of RWAs for not only the high spin rates (greater than 250 RPM), but also the low spin rates (less than 250 RPM) where there is a lack of an elastohydrodynamic boundary layer in the bearings. RWA rate and drag torque profiles as functions of time are collected via telemetry once every 4 seconds and once every 8 seconds, respectively. Intermediate processing steps single-out the coast-down regions. A nonlinear model for the drag torque as a function of RWA spin rate is incorporated in order to characterize the low spin rate regime. The tool then uses a nonlinear parameter optimization algorithm based on the Nelder-Mead simplex method to determine the viscous coefficient, the Dahl friction, and the two parameters that account for the low spin-rate behavior.
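
    A minimal sketch of the fitting step: a drag-torque model with viscous and Dahl terms plus a two-parameter low-spin-rate term, fitted to coast-down data with the Nelder-Mead simplex method. The exponential form of the low-rate term and the initial guesses are assumptions for illustration; the abstract does not give the exact expression used for the Cassini RWAs.

    ```python
    # Minimal sketch: nonlinear drag-torque model fitted by Nelder-Mead least squares.
    import numpy as np
    from scipy.optimize import minimize

    def drag_torque(omega, c_v, t_dahl, a_low, b_low):
        """Viscous + Dahl friction + assumed low-spin-rate term; omega in rad/s."""
        return (c_v * omega
                + t_dahl * np.sign(omega)
                + a_low * np.exp(-b_low * np.abs(omega)) * np.sign(omega))

    def fit_friction(omega_meas, torque_meas, x0=(1e-6, 1e-3, 1e-3, 0.1)):
        def sse(p):
            return np.sum((drag_torque(omega_meas, *p) - torque_meas) ** 2)
        return minimize(sse, x0, method="Nelder-Mead")
    ```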

  2. Validating a two-high-threshold measurement model for confidence rating data in recognition.

    PubMed

    Bröder, Arndt; Kellen, David; Schütz, Julia; Rohrmeier, Constanze

    2013-01-01

    Signal Detection models as well as the Two-High-Threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than response options are available in confidence ratings, the 2HTM has to be extended by a mapping function which models individual rating scale usage. Unpublished data from 2 experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and 3 new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, and this is independent of overall old/new bias. Comparisons with SDT show that both models behave similarly, a case that highlights the notion that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.

  3. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (k_c) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that reducing the uncertainty of the input parameters, rather than adding phases or input parameters, will make the model more accurate.
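
    A minimal sketch of the single-phase first-order decay calculation with weighted-average inputs, as discussed above: each year's disposed mass M_i contributes k * L0 * M_i * exp(-k * (t - t_i)) to annual generation, with k and L0 taken as tonnage-weighted averages over waste components. Component fractions and parameter values are caller-supplied, not taken from the study.

    ```python
    # Minimal sketch: single-phase FOD methane generation with weighted-average k and L0.
    import numpy as np

    def weighted_fod(tonnage_by_year, components, horizon_years):
        """tonnage_by_year: Mg of waste placed each year (index 0 = first year).
        components: list of (mass_fraction, k_i [1/yr], L0_i [m3 CH4/Mg])."""
        fracs = np.array([c[0] for c in components])
        k = float(np.sum(fracs * [c[1] for c in components]) / fracs.sum())
        L0 = float(np.sum(fracs * [c[2] for c in components]) / fracs.sum())
        years = np.arange(horizon_years)
        q = np.zeros(horizon_years)
        for t_i, m_i in enumerate(tonnage_by_year):
            age = years - t_i
            q += np.where(age >= 0, k * L0 * m_i * np.exp(-k * age), 0.0)
        return q   # m3 CH4 per year
    ```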

  4. Computation of restoration of ligand response in the random kinetics of a prostate cancer cell signaling pathway.

    PubMed

    Dana, Saswati; Nakakuki, Takashi; Hatakeyama, Mariko; Kimura, Shuhei; Raha, Soumyendu

    2011-01-01

    Mutation and/or dysfunction of signaling proteins in the mitogen-activated protein kinase (MAPK) signal transduction pathway are frequently observed in various kinds of human cancer. Consistent with this fact, in the present study, we experimentally observe that the epidermal growth factor (EGF) induced activation profile of MAP kinase signaling is not straightforwardly dose-dependent in PC3 prostate cancer cells. To find out which parameters and reactions in the pathway are involved in this departure from the normal dose-dependency, a model-based pathway analysis is performed. The pathway is mathematically modeled with 28 rate equations, yielding an equal number of ordinary differential equations (ODEs) with kinetic rate constants that have been reported to take random values in the existing literature. This has led us to treat the ODE model of the pathway's kinetics as a random differential equation (RDE) system in which the parameters are random variables. We show that our RDE model captures the uncertainty in the kinetic rate constants as seen in the behavior of the experimental data and, more importantly, upon simulation, exhibits the abnormal EGF dose-dependency of the activation profile of MAP kinase signaling in PC3 prostate cancer cells. The most likely set of values of the kinetic rate constants obtained from fitting the RDE model to the experimental data is then used in a direct transcription based dynamic optimization method for computing the changes needed in these kinetic rate constant values for the restoration of the normal EGF dose response. The last computation identifies the parameters, i.e., the kinetic rate constants in the RDE model, that are the most sensitive to the change in the EGF dose response behavior in the PC3 prostate cancer cells. The reactions in which these most sensitive parameters participate emerge as candidate drug targets on the signaling pathway. 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Maximum urine concentrating capability in a mathematical model of the inner medulla of the rat kidney.

    PubMed

    Marcano, Mariano; Layton, Anita T; Layton, Harold E

    2010-02-01

    In a mathematical model of the urine concentrating mechanism of the inner medulla of the rat kidney, a nonlinear optimization technique was used to estimate parameter sets that maximize the urine-to-plasma osmolality ratio (U/P) while maintaining the urine flow rate within a plausible physiologic range. The model, which used a central core formulation, represented loops of Henle turning at all levels of the inner medulla and a composite collecting duct (CD). The parameters varied were: water flow and urea concentration in tubular fluid entering the descending thin limbs and the composite CD at the outer-inner medullary boundary; scaling factors for the number of loops of Henle and CDs as a function of medullary depth; location and increase rate of the urea permeability profile along the CD; and a scaling factor for the maximum rate of NaCl transport from the CD. The optimization algorithm sought to maximize a quantity E that equaled U/P minus a penalty function for insufficient urine flow. Maxima of E were sought by changing parameter values in the direction in parameter space in which E increased. The algorithm attained a maximum E that increased urine osmolality and inner medullary concentrating capability by 37.5% and 80.2%, respectively, above base-case values; the corresponding urine flow rate and the concentrations of NaCl and urea were all within or near reported experimental ranges. Our results predict that urine osmolality is particularly sensitive to three parameters: the urea concentration in tubular fluid entering the CD at the outer-inner medullary boundary, the location and increase rate of the urea permeability profile along the CD, and the rate of decrease of the CD population (and thus of CD surface area) along the cortico-medullary axis.

  6. Structural kinetic modeling of metabolic networks.

    PubMed

    Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd

    2006-08-08

    To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.

  7. Uncertainty analysis of multi-rate kinetics of uranium desorption from sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    2014-01-01

    A multi-rate expression for uranyl [U(VI)] surface complexation reactions has been proposed to describe diffusion-limited U(VI) sorption/desorption in heterogeneous subsurface sediments. An important assumption in the rate expression is that its rate constants follow a certain type of probability distribution. In this paper, a Bayes-based, Differential Evolution Markov Chain method was used to assess the distribution assumption and to analyze parameter and model structure uncertainties. U(VI) desorption from a contaminated sediment at the US Hanford 300 Area, Washington was used as an example for detailed analysis. The results indicated that: 1) the rate constants in the multi-rate expression contain uneven uncertainties, with slower rate constants having relatively larger uncertainties; 2) the lognormal distribution is an effective assumption for the rate constants in the multi-rate model for simulating U(VI) desorption; 3) however, long-term prediction and its uncertainty may be significantly biased by the lognormal assumption for the smaller rate constants; and 4) both parameter and model structure uncertainties can affect the extrapolation of the multi-rate model, with a larger uncertainty arising from the model structure. The results provide important insights into the factors contributing to the uncertainties of the multi-rate expression commonly used to describe the diffusion- or mixing-limited sorption/desorption of both organic and inorganic contaminants in subsurface sediments.

  8. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  9. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  10. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10??C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.

  11. Buy now, saved later? The critical impact of time-to-pandemic uncertainty on pandemic cost-effectiveness analyses.

    PubMed

    Drake, Tom; Chalabi, Zaid; Coker, Richard

    2015-02-01

    Investment in pandemic preparedness is a long-term gamble, with the return on investment coming at an unknown point in the future. Many countries have chosen to stockpile key resources, and the number of pandemic economic evaluations has risen sharply since 2009. We assess the importance of uncertainty in time-to-pandemic (and associated discounting) in pandemic economic evaluation, a factor frequently neglected in the literature to date. We use a probability tree model and Monte Carlo parameter sampling to consider the cost effectiveness of antiviral stockpiling in Cambodia under parameter uncertainty. Mean elasticity and mutual information (MI) are used to assess the importance of time-to-pandemic compared with other parameters. We also consider the sensitivity to the choice of sampling distribution used to model time-to-pandemic uncertainty. Time-to-pandemic and discount rate are the primary drivers of sensitivity and uncertainty in pandemic cost-effectiveness models. Base-case cost effectiveness of antiviral stockpiling ranged between US$112 and US$3599 per DALY averted using historical pandemic intervals for time-to-pandemic. The mean elasticities for time-to-pandemic and discount rate were greater than for all other parameters. Similarly, the MI scores for time-to-pandemic and discount rate were greater than for other parameters. Time-to-pandemic and discount rate were key drivers of uncertainty in cost-effectiveness results regardless of the time-to-pandemic sampling distribution choice. Time-to-pandemic assumptions can substantially affect cost-effectiveness results and, in our model, time-to-pandemic is a greater contributor to uncertainty in cost-effectiveness results than any other parameter. We strongly recommend that cost-effectiveness models include probabilistic analysis of time-to-pandemic uncertainty. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
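
    The dominance of time-to-pandemic and the discount rate can be seen from the discounting arithmetic alone: benefits realized T years ahead are scaled by (1 + r)^(-T), so the cost-effectiveness ratio grows like (1 + r)^T and its elasticity with respect to T is roughly T*ln(1 + r). The sketch below illustrates this with an assumed time-to-pandemic distribution; the numbers are illustrative, not those of the Cambodia model.

    ```python
    # Minimal sketch: sensitivity of a discounted cost-effectiveness ratio to time-to-pandemic.
    import numpy as np

    def discounted_icer(cost_now, benefit_at_pandemic, r, T):
        """Cost per unit benefit when costs are paid now and benefits accrue at time T."""
        return cost_now / (benefit_at_pandemic * (1.0 + r) ** (-T))

    rng = np.random.default_rng(42)
    T_samples = rng.exponential(scale=30.0, size=10_000)   # assumed time-to-pandemic prior
    icers = discounted_icer(1.0, 1.0, r=0.03, T=T_samples)
    print(np.percentile(icers, [2.5, 50, 97.5]))
    ```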

  12. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  13. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    NASA Astrophysics Data System (ADS)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve. Its large number of parameters might be a limitation, but we show that the model is transferable in time and space to a second glacier with little loss of performance. We thus suggest that the new DETI model can be included in continuous mass balance models of debris-covered glaciers, because of its limited data requirements. As such, we expect its application to lead to an improvement in simulations of the debris-covered glacier response to climate in comparison with models that simply recalibrate empirical parameters to prescribe a constant across glacier reduction in melt.

  14. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect.

    PubMed

    Carenzo, M; Pellicciotti, F; Mabillard, J; Reid, T; Brock, B W

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve. Its large number of parameters might be a limitation, but we show that the model is transferable in time and space to a second glacier with little loss of performance. We thus suggest that the new DETI model can be included in continuous mass balance models of debris-covered glaciers, because of its limited data requirements. As such, we expect its application to lead to an improvement in simulations of the debris-covered glacier response to climate in comparison with models that simply recalibrate empirical parameters to prescribe a constant across glacier reduction in melt.

  15. Analytic model of aurorally coupled magnetospheric and ionospheric electrostatic potentials

    NASA Technical Reports Server (NTRS)

    Cornwall, J. M.

    1994-01-01

    This paper describes modest but significant improvements on earlier studies of electrostatic potential structure in the auroral region using the adiabatic auroral arc model. This model has crucial nonlinearities (connected, for example, with aurorally produced ionization) which have hampered analysis; earlier work has either been linear, which I will show is a poor approximation, or, if nonlinear, either numerical or too specialized to study parametric dependencies. With certain simplifying assumptions I find new analytic nonlinear solutions fully exhibiting the parametric dependence of potentials on magnetospheric (e.g., cross-tail potential) and ionospheric (e.g., recombination rate) parameters. No purely phenomenological parameters are introduced. The results are in reasonable agreement with observed average auroral potential drops, inverted-V scale sizes, and dissipation rates. The dissipation rate is quite comparable to tail energization and transport rates and should have a major effect on tail and magnetospheric dynamics. This paper gives various relations between the cross-tail potential and auroral parameters (e.g., total parallel currents and potential drops) which can be studied with existing data sets.

  16. Diamond Tool Specific Wear Rate Assessment in Granite Machining by Means of Knoop Micro-Hardness and Process Parameters

    NASA Astrophysics Data System (ADS)

    Goktan, R. M.; Gunes Yılmaz, N.

    2017-09-01

    The present study was undertaken to investigate the potential usability of Knoop micro-hardness, both as a single parameter and in combination with operational parameters, for sawblade specific wear rate (SWR) assessment in the machining of ornamental granites. The sawing tests were performed on different commercially available granite varieties by using a fully instrumented side-cutting machine. During the sawing tests, two fundamental productivity parameters, namely the workpiece feed rate and cutting depth, were varied at different levels. The good correspondence observed between the measured Knoop hardness and SWR values for different operational conditions indicates that it has the potential to be used as a rock material property that can be employed in preliminary wear estimations of diamond sawblades. Also, a multiple regression model directed to SWR prediction was developed which takes into account the Knoop hardness, cutting depth and workpiece feed rate. The relative contribution of each independent variable in the prediction of SWR was determined by using test statistics. The prediction accuracy of the established model was checked against new observations. The strong prediction performance of the model suggests that its framework may be applied to other granites and operational conditions for quantifying or differentiating the relative wear performance of diamond sawblades.

  17. Classical nucleation theory of homogeneous freezing of water: thermodynamic and kinetic parameters.

    PubMed

    Ickes, Luisa; Welti, André; Hoose, Corinna; Lohmann, Ulrike

    2015-02-28

    The probability of homogeneous ice nucleation under a set of ambient conditions can be described by nucleation rates using the theoretical framework of Classical Nucleation Theory (CNT). This framework consists of kinetic and thermodynamic parameters, of which three are not well-defined (namely the interfacial tension between ice and water, the activation energy and the prefactor), so that any CNT-based parameterization of homogeneous ice formation is less well-constrained than desired for modeling applications. Different approaches to estimate the thermodynamic and kinetic parameters of CNT are reviewed in this paper and the sensitivity of the calculated nucleation rate to the choice of parameters is investigated. We show that nucleation rates are very sensitive to this choice. The sensitivity is governed by one parameter - the interfacial tension between ice and water, which determines the energetic barrier of the nucleation process. The calculated nucleation rate can differ by more than 25 orders of magnitude depending on the choice of parameterization for this parameter. The second most important parameter is the activation energy of the nucleation process. It can lead to a variation of 16 orders of magnitude. By estimating the nucleation rate from a collection of droplet freezing experiments from the literature, the dependence of these two parameters on temperature is narrowed down. It can be seen that the temperature behavior of these two parameters assumed in the literature does not match with the predicted nucleation rates from the fit in most cases. Moreover a comparison of all possible combinations of theoretical parameterizations of the dominant two free parameters shows that one combination fits the fitted nucleation rates best, which is a description of the interfacial tension coming from a molecular model [Reinhardt and Doye, J. Chem. Phys., 2013, 139, 096102] in combination with the activation energy derived from self-diffusion measurements [Zobrist et al., J. Phys. Chem. C, 2007, 111, 2149]. However, some fundamental understanding of the processes is still missing. Further research in future might help to tackle this problem. The most important questions, which need to be answered to constrain CNT, are raised in this study.
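
    The CNT rate expression discussed above combines a kinetic prefactor with a Boltzmann factor containing the diffusion activation energy and the critical-germ barrier set by the ice-water interfacial tension. The sketch below is a generic rendering of that structure; the parameterizations of the interfacial tension, the volumetric Gibbs energy difference, the activation energy and the prefactor are deliberately left to the caller, since the abstract's point is that their choice moves the rate by many orders of magnitude.

    ```python
    # Minimal sketch of the generic CNT nucleation rate:
    # J = A * exp(-(dG_act + dG_crit) / (k_B * T)), with the critical-germ barrier
    # dG_crit = 16*pi*sigma^3 / (3*dg_v^2).
    import numpy as np

    K_B = 1.380649e-23  # J/K

    def nucleation_rate(T, prefactor, delta_g_act, sigma_iw, dg_v):
        """J (in the prefactor's units) at temperature T [K].

        prefactor   : kinetic prefactor A
        delta_g_act : activation energy for diffusion across the interface [J]
        sigma_iw    : ice-water interfacial tension [J/m^2]
        dg_v        : Gibbs energy difference per unit volume of ice [J/m^3]
        """
        dG_crit = 16.0 * np.pi * sigma_iw**3 / (3.0 * dg_v**2)   # critical germ barrier
        return prefactor * np.exp(-(delta_g_act + dG_crit) / (K_B * T))
    ```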

  18. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the ON-N and NH₃-N model performance was sensitive, whereas the ON-N and NO₃-N simulations were sensitive to the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C, as measured by global sensitivity analysis.

  19. FITPOP, a heuristic simulation model of population dynamics and genetics with special reference to fisheries

    USGS Publications Warehouse

    McKenna, James E.

    2000-01-01

    Although, perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of effects of relative differences in fitness and mortality, as well as finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It also was shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
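
    In the spirit of the partitioning described above, the sketch below assembles each stock's intrinsic rate of increase from three components (relative fitness, mortality and finite rate of increase) inside a two-stock logistic-competition update. The particular partition and the update form are assumptions for illustration, not FITPOP's actual formulation.

    ```python
    # Minimal sketch: two-stock logistic-competition step with a partitioned intrinsic rate.
    import numpy as np

    def step(N, fitness, mortality, finite_rate, K, alpha):
        """One time step for two interacting stocks N = [N1, N2].

        r_i is assembled as fitness_i * finite_rate_i - mortality_i, and competition
        enters through a Lotka-Volterra term with per-stock coefficients alpha_i on the
        other stock's abundance.
        """
        N = np.asarray(N, dtype=float)
        r = np.asarray(fitness) * np.asarray(finite_rate) - np.asarray(mortality)
        other = N[::-1]                                   # competitor abundance for each stock
        crowding = (N + np.asarray(alpha) * other) / np.asarray(K)
        return np.clip(N + r * N * (1.0 - crowding), 0.0, None)
    ```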

  20. Experimental and computational results on exciton/free-carrier ratio, hot/thermalized carrier diffusion, and linear/nonlinear rate constants affecting scintillator proportionality

    NASA Astrophysics Data System (ADS)

    Williams, R. T.; Grim, Joel Q.; Li, Qi; Ucer, K. B.; Bizarri, G. A.; Kerisit, S.; Gao, Fei; Bhattacharya, P.; Tupitsyn, E.; Rowe, E.; Buliga, V. M.; Burger, A.

    2013-09-01

    Models of nonproportional response in scintillators have highlighted the importance of parameters such as branching ratios, carrier thermalization times, diffusion, kinetic order of quenching, associated rate constants, and radius of the electron track. For example, the fraction ηeh of excitations that are free carriers versus excitons was shown by Payne and coworkers to have strong correlation with the shape of electron energy response curves from Compton-coincidence studies. Rate constants for nonlinear quenching are implicit in almost all models of nonproportionality, and some assumption about track radius must invariably be made if one is to relate linear energy deposition dE/dx to volume-based excitation density n (eh/cm3) in terms of which the rates are defined. Diffusion, affecting time-dependent track radius and thus density of excitations, has been implicated as an important factor in nonlinear light yield. Several groups have recently highlighted diffusion of hot electrons in addition to thermalized carriers and excitons in scintillators. However, experimental determination of many of these parameters in the insulating crystals used as scintillators has seemed difficult. Subpicosecond laser techniques including interband z scan light yield, fluence-dependent decay time, and transient optical absorption are now yielding experimental values for some of the missing rates and ratios needed for modeling scintillator response. First principles calculations and Monte Carlo simulations can fill in additional parameters still unavailable from experiment. As a result, quantitative modeling of scintillator electron energy response from independently determined material parameters is becoming possible on an increasingly firmer data base. This paper describes recent laser experiments, calculations, and numerical modeling of scintillator response.

  1. Experimental and computational results on exciton/free-carrier ratio, hot/thermalized carrier diffusion, and linear/nonlinear rate constants affecting scintillator proportionality

    DOE PAGES

    Williams, R. T.; Grim, Joel Q.; Li, Qi; ...

    2013-09-26

    Models of nonproportional response in scintillators have highlighted the importance of parameters such as branching ratios, carrier thermalization times, diffusion, kinetic order of quenching, associated rate constants, and radius of the electron track. For example, the fraction ηeh of excitations that are free carriers versus excitons was shown by Payne and coworkers to have strong correlation with the shape of electron energy response curves from Compton-coincidence studies. Rate constants for nonlinear quenching are implicit in almost all models of nonproportionality, and some assumption about track radius must invariably be made if one is to relate linear energy deposition dE/dx to volume-based excitation density n (eh/cm3) in terms of which the rates are defined. Diffusion, affecting time-dependent track radius and thus density of excitations, has been implicated as an important factor in nonlinear light yield. Several groups have recently highlighted diffusion of hot electrons in addition to thermalized carriers and excitons in scintillators. However, experimental determination of many of these parameters in the insulating crystals used as scintillators has seemed difficult. Subpicosecond laser techniques including interband z scan light yield, fluence-dependent decay time, and transient optical absorption are now yielding experimental values for some of the missing rates and ratios needed for modeling scintillator response. First principles calculations and Monte Carlo simulations can fill in additional parameters still unavailable from experiment. As a result, quantitative modeling of scintillator electron energy response from independently determined material parameters is becoming possible on an increasingly firmer data base. This study describes recent laser experiments, calculations, and numerical modeling of scintillator response.

  2. Modelling chemical depletion profiles in regolith

    USGS Publications Warehouse

    Brantley, S.L.; Bandstra, J.; Moore, J.; White, A.F.

    2008-01-01

    Chemical or mineralogical profiles in regolith display reaction fronts that document depletion of leachable elements or minerals. A generalized equation employing lumped parameters was derived to model such ubiquitously observed patterns: C = C0 / [((C0 − Cx=0) / Cx=0) · exp(λini · k̂ · x) + 1]. Here C, Cx=0, and C0 are the concentrations of an element at a given depth x, at the top of the reaction front, or in parent respectively. λini is the roughness of the dissolving mineral in the parent and k̂ is a lumped kinetic parameter. This kinetic parameter is an inverse function of the porefluid advective velocity and a direct function of the dissolution rate constant times mineral surface area per unit volume regolith. This model equation fits profiles of concentration versus depth for albite in seven weathering systems and is consistent with the interpretation that the surface area (m2 mineral m-3 bulk regolith) varies linearly with the concentration of the dissolving mineral across the front. Dissolution rate constants can be calculated from the lumped fit parameters for these profiles using observed values of weathering advance rate, the proton driving force, the geometric surface area per unit volume regolith and parent concentration of albite. These calculated values of the dissolution rate constant compare favorably to literature values. The model equation, useful for reaction fronts in both steady-state erosional and quasi-stationary non-erosional systems, incorporates the variation of reaction affinity using pH as a master variable. Use of this model equation to fit depletion fronts for soils highlights the importance of buffering of pH in the soil system. Furthermore, the equation should allow better understanding of the effects of important environmental variables on weathering rates.
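
    As a quick illustration of how the lumped parameters can be extracted from a measured profile, the sketch below fits the sigmoidal depletion equation to hypothetical albite concentration-depth data with SciPy; the data, the starting guesses and the sign convention for x are assumptions, not values from the seven weathering systems studied.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def depletion_profile(x, C0, Cx0, lam_k):
        """Reaction-front profile C(x) = C0 / (((C0 - Cx0)/Cx0) * exp(lam_k * x) + 1).
        C0    : parent concentration
        Cx0   : concentration at the top of the front
        lam_k : lumped roughness * kinetic parameter (its sign absorbs the x convention)."""
        return C0 / (((C0 - Cx0) / Cx0) * np.exp(lam_k * x) + 1.0)

    # Hypothetical albite concentration (wt%) versus depth below the top of the front (m)
    depth = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
    conc = np.array([1.2, 2.5, 4.8, 7.5, 9.3, 10.6, 10.9])

    popt, _ = curve_fit(depletion_profile, depth, conc, p0=[11.0, 1.2, -1.0])
    print("C0 = %.2f, Cx0 = %.2f, lumped parameter = %.2f 1/m" % tuple(popt))
    ```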

  3. Hybrid Modeling of Cell Signaling and Transcriptional Reprogramming and Its Application in C. elegans Development.

    PubMed

    Fertig, Elana J; Danilova, Ludmila V; Favorov, Alexander V; Ochs, Michael F

    2011-01-01

    Modeling of signal-driven transcriptional reprogramming is critical for understanding organism development, human disease, and cell biology. Many current modeling techniques discount key features of the biological sub-systems when modeling multiscale, organism-level processes. We present a mechanistic hybrid model, GESSA, which integrates a novel pooled probabilistic Boolean network model of cell signaling and a stochastic simulation of transcription and translation responding to a diffusion model of extracellular signals. We apply the model to simulate the well-studied cell fate decision process of the vulval precursor cells (VPCs) in C. elegans, using experimentally derived rate constants wherever possible and shared parameters to avoid overfitting. We demonstrate that GESSA recovers (1) the effects of varying scaffold protein concentration on signal strength, (2) amplification of signals in expression, (3) the relative external ligand concentration in a known geometry, and (4) feedback in biochemical networks. We demonstrate that setting model parameters based on wild-type and LIN-12 loss-of-function mutants in C. elegans leads to correct prediction of a wide variety of mutants, including partial penetrance of phenotypes. Moreover, the model is relatively insensitive to parameters, retaining the wild-type phenotype for a wide range of cell signaling rate parameters.

  4. Development of a Detailed Surface Chemistry Framework in DSMC

    NASA Technical Reports Server (NTRS)

    Swaminathan-Gopalan, K.; Borner, A.; Stephani, K. A.

    2017-01-01

    Many of the current direct simulation Monte Carlo (DSMC) codes still employ only simple surface catalysis models. These include only basic mechanisms such as dissociation, recombination, and exchange reactions, without any provision for adsorption and finite rate kinetics. Incorporating finite rate chemistry at the surface is increasingly becoming a necessity for various applications such as high speed re-entry flows over thermal protection systems (TPS), micro-electro-mechanical systems (MEMS), surface catalysis, etc. In recent years, relatively few works have examined finite-rate surface reaction modeling using the DSMC method. In this work, a generalized finite-rate surface chemistry framework incorporating a comprehensive list of reaction mechanisms is developed and implemented into the DSMC solver SPARTA. The various mechanisms include adsorption, desorption, Langmuir-Hinshelwood (LH), Eley-Rideal (ER), Collision Induced (CI), condensation, sublimation, etc. The approach is to stochastically model the various competing reactions occurring on a set of active sites. Both gas-surface (e.g., ER, CI) and pure-surface (e.g., LH, desorption) reaction mechanisms are incorporated. The reaction mechanisms can also be catalytic or surface altering based on the participation of the bulk-phase species (e.g., bulk carbon atoms). Marschall and MacLean developed a general formulation in which multiple phases and surface sites are used, and we adopt a similar convention in the current work. Microscopic parameters of reaction probabilities (for gas-surface reactions) and frequencies (for pure-surface reactions) that are required for DSMC are computed from the surface properties and macroscopic parameters such as rate constants, sticking coefficients, etc. The energy and angular distributions of the products are decided based on the reaction type and input parameters. Thus, the user has the capability to model various surface reactions via user-specified reaction rate constants, surface properties and parameters.

  5. Air drying modelling of Mastocarpus stellatus seaweed, a source of hybrid carrageenan

    NASA Astrophysics Data System (ADS)

    Arufe, Santiago; Torres, Maria D.; Chenlo, Francisco; Moreira, Ramon

    2018-01-01

    Water sorption isotherms from 5 up to 65 °C and air drying kinetics at 35, 45 and 55 °C of Mastocarpus stellatus seaweed were determined. Experimental sorption data were modelled using BET and Oswin models. A four-parameter model, based on the Oswin model, was proposed to estimate equilibrium moisture content as a function of water activity and temperature simultaneously. Drying experiments showed that water removal rate increased significantly with temperature from 35 to 45 °C, but at higher temperatures drying rate remained constant. Some chemical modifications of the hybrid carrageenans present in the seaweed can be responsible for this unexpected thermal trend. Experimental drying data were modelled using the two-parameter Page model (n, k). The Page parameter n was constant (1.31 ± 0.10) at the tested temperatures, but k varied significantly with drying temperature (from (18.5 ± 0.2) × 10⁻³ min⁻ⁿ at 35 °C up to (28.4 ± 0.8) × 10⁻³ min⁻ⁿ at 45 and 55 °C). Drying experiments allowed the determination of the critical moisture content of the seaweed (0.87 ± 0.06 kg water (kg d.b.)⁻¹). A diffusional model considering slab geometry was employed to determine the effective diffusion coefficient of water during the falling rate period at different temperatures.
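
    For reference, fitting the two-parameter Page model MR(t) = exp(-k·tⁿ) to drying data is a short SciPy exercise; the moisture-ratio data below are made up for illustration and are not the seaweed measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def page_model(t, k, n):
        """Page thin-layer drying model: moisture ratio MR(t) = exp(-k * t**n)."""
        return np.exp(-k * t**n)

    # Hypothetical moisture-ratio data (dimensionless) versus drying time (min)
    t_min = np.array([10.0, 30.0, 60.0, 120.0, 180.0, 240.0])
    mr = np.array([0.90, 0.66, 0.36, 0.08, 0.015, 0.003])

    (k, n), _ = curve_fit(page_model, t_min, mr, p0=[0.01, 1.0])
    print(f"k = {k:.4g} min^-n, n = {n:.3g}")
    ```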

  6. A comparison of selected models for estimating cable icing

    NASA Astrophysics Data System (ADS)

    McComber, Pierre; Druez, Jacques; Laflamme, Jean

    In many cold climate countries, it is becoming increasingly important to monitor transmission line icing. Indeed, by knowing in advance of localized danger for icing overloads, electric utilities can take measures in time to prevent generalized failure of the power transmission network. Recently in Canada, a study was made to compare the estimation of a few icing models working from meteorological data in estimating ice loads for freezing rain events. The models tested were using only standard meteorological parameters, i.e. wind speed and direction, temperature and precipitation rate. This study has shown that standard meteorological parameters can only achieve very limited accuracy, especially for longer icing events. However, with the help of an additional instrument monitoring the icing rate intensity, a significant improvement in model prediction might be achieved. The icing rate meter (IRM) which counts icing and de-icing cycles per unit time on a standard probe can be used to estimate the icing intensity. A cable icing estimation is then made by taking into consideration the accretion size, temperature, wind speed and direction, and precipitation rate. In this paper, a comparison is made between the predictions of two previously tested models (one obtained and the other reconstructed from their description in the public literature) and of a model based on the icing rate meter readings. The models are tested against nineteen events recorded on an icing test line at Mt. Valin, Canada, during the winter season 1991-1992. These events are mostly rime resulting from in-cloud icing. However, freezing rain and wet snow events were also recorded. Results indicate that a significant improvement in the estimation is attained by using the icing rate meter data together with the other standard meteorological parameters.

  7. Syndromes of Self-Reported Psychopathology for Ages 18–59 in 29 Societies

    PubMed Central

    Achenbach, Thomas M.; Rescorla, Leslie A.; Tumer, Lori V.; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J. Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M.; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R. J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Harder, Valerie s; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M.; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C.; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B.; Vazquez, Natalia; Zasepa, Ewa

    2017-01-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults’ self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18–59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½–18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies. PMID:29805197

  8. Modeling and optimization of actively Q-switched Nd-doped quasi-three-level laser

    NASA Astrophysics Data System (ADS)

    Yan, Renpeng; Yu, Xin; Li, Xudong; Chen, Deying; Gao, Jing

    2013-09-01

    The energy transfer upconversion and the ground state absorption are considered in solving the rate equations for an actively Q-switched quasi-three-level laser. The dependence of the output pulse characteristics on the laser parameters is investigated by solving the rate equations. The influence of the energy transfer upconversion on the pulsed laser performance is illustrated and discussed. With this model, optimal parameters can be obtained for arbitrary quasi-three-level Q-switched lasers. An acousto-optical Q-switched Nd:YAG 946 nm laser is constructed and the reliability of the theoretical model is demonstrated.
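
    For orientation, the core of such a calculation is a coupled pair of rate equations for the photon density and the population inversion. The sketch below integrates only the textbook dimensionless Q-switch equations, deliberately omitting the upconversion and ground-state-absorption terms the paper adds; the initial inversion ratio and the time span are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def q_switch_odes(tau, y):
        """Dimensionless actively Q-switched rate equations (time in units of the
        cavity photon lifetime, inversion normalized to its threshold value):
            dphi/dtau = phi * (n - 1)    # photon density grows while gain > loss
            dn/dtau   = -n * phi         # stimulated emission depletes the inversion
        Upconversion and ground-state absorption terms are omitted in this sketch."""
        phi, n = y
        return [phi * (n - 1.0), -n * phi]

    # Illustrative initial conditions: 3x threshold inversion, tiny spontaneous seed
    sol = solve_ivp(q_switch_odes, [0.0, 25.0], [1e-6, 3.0], max_step=0.01, rtol=1e-8)
    i_peak = sol.y[0].argmax()
    print("peak photon density (normalized):", sol.y[0][i_peak], "at tau =", sol.t[i_peak])
    ```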

  9. Microbially enhanced dissolution and reductive dechlorination of PCE by a mixed culture: Model validation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Abriola, Linda M.; Amos, Benjamin K.; Suchomel, Eric J.; Pennell, Kurt D.; Löffler, Frank E.; Christ, John A.

    2013-08-01

    Reductive dechlorination catalyzed by organohalide-respiring bacteria is often considered for remediation of non-aqueous phase liquid (NAPL) source zones due to cost savings, ease of implementation, regulatory acceptance, and sustainability. Despite knowledge of the key dechlorinators, an understanding of the processes and factors that control NAPL dissolution rates and detoxification (i.e., ethene formation) is lacking. A recent column study demonstrated a 5-fold cumulative enhancement in tetrachloroethene (PCE) dissolution and ethene formation (Amos et al., 2009). Spatial and temporal monitoring of key geochemical and microbial (i.e., Geobacter lovleyi and Dehalococcoides mccartyi strains) parameters in the column generated a data set used herein as the basis for refinement and testing of a multiphase, compositional transport model. The refined model is capable of simulating the reactive transport of multiple chemical constituents produced and consumed by organohalide-respiring bacteria and accounts for substrate limitations and competitive inhibition. Parameter estimation techniques were used to optimize the values of sensitive microbial kinetic parameters, including maximum utilization rates, biomass yield coefficients, and endogenous decay rates. Comparison and calibration of model simulations with the experimental data demonstrate that the model is able to accurately reproduce measured effluent concentrations, while delineating trends in dechlorinator growth and reductive dechlorination kinetics along the column. Sensitivity analyses performed on the optimized model parameters indicate that the rates of PCE and cis-1,2-dichloroethene (cis-DCE) transformation and Dehalococcoides growth govern bioenhanced dissolution, as long as electron donor (i.e., hydrogen flux) is not limiting. Dissolution enhancements were shown to be independent of cis-DCE accumulation; however, accumulation of cis-DCE, as well as column length and flow rate (i.e., column residence time), strongly influenced the extent of reductive dechlorination. When cis-DCE inhibition was neglected, the model over-predicted ethene production ten-fold, while reductions in residence time (i.e., a two-fold decrease in column length or two-fold increase in flow rate) resulted in a more than 70% decline in ethene production. These results suggest that spatial and temporal variations in microbial community composition and activity must be understood to model, predict, and manage bioenhanced NAPL dissolution.

  10. Oxygen consumption rate of cells in 3D culture: the use of experiment and simulation to measure kinetic parameters and optimise culture conditions.

    PubMed

    Streeter, Ian; Cheema, Umber

    2011-10-07

    Understanding the basal O2 and nutrient requirements of cells is paramount when culturing cells in 3D tissue models. Any scaffold design will need to take such parameters into consideration, especially as the addition of cells introduces gradients of consumption of such molecules from the surface to the core of scaffolds. We have cultured two cell types in 3D native collagen type I scaffolds, and measured the O2 tension at specific locations within the scaffold. By changing the density of cells, we have established O2 consumption gradients within these scaffolds and using mathematical modeling have derived rates of consumption for O2. For human dermal fibroblasts the average rate constant was 1.19 × 10⁻¹⁷ mol cell⁻¹ s⁻¹, and for human bone marrow derived stromal cells the average rate constant was 7.91 × 10⁻¹⁸ mol cell⁻¹ s⁻¹. These values are lower than previously published rates for similar cells cultured in 2D, but the values established in this current study are more representative of rates of consumption measured in vivo. These values will dictate 3D culture parameters, including maximum cell-seeding density and maximum size of the constructs, for long-term viability of tissue models.

  11. Microphysically derived expressions for rate-and-state friction and fault stability parameters

    NASA Astrophysics Data System (ADS)

    Chen, Jianye; Niemeijer, Andre; Spiers, Christopher

    2017-04-01

    Rate-and-state friction (RSF) laws and associated parameters are extensively applied to fault mechanics, mainly on an empirical basis with a limited understanding of the underlying physical mechanisms. We recently established a general microphysical model [Chen and Spiers, 2016] for describing both steady-state and transient frictional behavior of any granular fault gouge material undergoing deformation by granular flow plus an arbitrary creep mechanism at grain contacts, such as pressure solution. We further showed that the model is able to reproduce typical experimental frictional results, namely "velocity stepping" and "slide-hold-slide" sequences, in satisfactory agreement with the main features and trends observed. Here, we extend our model, which we explored only numerically thus far, to obtain analytical solutions for the classical rate and state friction parameters from a purely microphysical modelling basis. By analytically solving the constitutive equations of the model under various boundary conditions, physically meaningful, theoretical expressions for the RSF parameters, i.e. a, b and Dc, are obtained. We also apply linear stability analysis to a spring-slider system, describing interface friction using our model, to yield analytical expressions of the critical stiffness (Kc) and critical recurrence wavelength (Wc) of the system. The values of a, b and Dc, as well as Kc and Wc, predicted by these expressions agree well with the numerical modeling results and acceptably with values obtained from experiments, on calcite for instance. Inserting the parameters obtained into classical RSF laws (slowness and slip laws) and conducting forward modelling gives simulated friction behavior that is fully consistent with the direct predictions of our numerically implemented model. Numerical tests with friction obeying our model show that the slip stability of fault motion exhibits a transition from stable sliding, via self-sustained oscillations, to stick slips with decreasing elastic stiffness, decreasing loading rate, and increasing normal stress, which is fully consistent with our linear stability analysis and also with previous RSF models that employed constant values of the RSF parameters. Importantly, our analytical expressions for a, b, Dc, Kc and Wc are functions of the internal microstructure of the fault (porosity, grain size and shear zone thickness), the material properties of the fault gouge (e.g. creep law parameters like activation energy, stress sensitivity, grain size sensitivity), and the ambient conditions the fault is subjected to (temperature and normal stress). The expressions obtained thus have clear physical meaning, allowing a more meaningful extrapolation to natural conditions. On the basis of these physics-based expressions, seismological implications for slip on natural faults (e.g. subduction zone interfaces, faults in carbonate terrains) are discussed. Reference: Chen, J., and C. J. Spiers (2016), Rate and state frictional and healing behavior of carbonate fault gouge explained using microphysical model, J. Geophys. Res., 121, doi:10.1002/2016JB013470.
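
    The stability transition described here can be reproduced with the classical empirical RSF spring-slider, which is what the derived a, b, Dc and Kc feed into. The sketch below uses the standard aging law with illustrative parameter values; it is not the authors' microphysical model, and all numbers are placeholders.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Classical empirical rate-and-state friction (aging law) on a spring-slider.
    a, b, Dc = 0.010, 0.015, 1e-5          # RSF parameters (-, -, m)
    sigma_n, V_load = 5e6, 1e-6            # normal stress (Pa), load-point velocity (m/s)
    k_c = sigma_n * (b - a) / Dc           # critical stiffness from linear stability
    k = 2.0 * k_c                          # stiffer than critical -> stable sliding

    def rsf(t, y):
        V, theta = y
        dtheta = 1.0 - V * theta / Dc                  # aging (slowness) law
        dmu = k * (V_load - V) / sigma_n               # quasi-static force balance
        dV = (V / a) * (dmu - b * dtheta / theta)      # from mu = mu0 + a ln(V/V0) + b ln(V0*theta/Dc)
        return [dV, dtheta]

    # Perturb the slip rate away from steady state and watch it relax (or not).
    sol = solve_ivp(rsf, [0.0, 2000.0], [2.0 * V_load, Dc / V_load],
                    max_step=0.5, rtol=1e-8)
    print("final slip rate (m/s):", sol.y[0][-1])  # ~V_load here; k < k_c gives stick-slip
    ```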

  12. Improving Bedload Transport Predictions by Incorporating Hysteresis

    NASA Astrophysics Data System (ADS)

    Crowe Curran, J.; Gaeuman, D.

    2015-12-01

    The importance of unsteady flow on sediment transport rates has long been recognized. However, the majority of sediment transport models were developed under steady flow conditions that did not account for changing bed morphologies and sediment transport during flood events. More recent research has used laboratory data and field data to quantify the influence of hysteresis on bedload transport and adjust transport models. In this research, these new methods are combined to improve further the accuracy of bedload transport rate quantification and prediction. The first approach defined reference shear stresses for hydrograph rising and falling limbs, and used these values to predict total and fractional transport rates during a hydrograph. From this research, a parameter for improving transport predictions during unsteady flows was developed. The second approach applied a maximum likelihood procedure to fit a bedload rating curve to measurements from a number of different coarse bed rivers. Parameters defining the rating curve were optimized for values that maximized the conditional probability of producing the measured bedload transport rate. Bedload sample magnitude was fit to a gamma distribution, and the probability of collecting N particles in a sampler during a given time step was described with a Poisson probability density function. Both approaches improved estimates of total transport during large flow events when compared to existing methods and transport models. Recognizing and accounting for the changes in transport parameters over time frames on the order of a flood or flood sequence influences the choice of method for parameter calculation in sediment transport calculations. Those methods that more tightly link the changing flow rate and bed mobility have the potential to improve bedload transport rates.

  13. Pharmacokinetic modelling of intravenous tobramycin in adolescent and adult patients with cystic fibrosis using the nonparametric expectation maximization (NPEM) algorithm.

    PubMed

    Touw, D J; Vinks, A A; Neef, C

    1997-06-01

    The availability of personal computer programs for individualizing drug dosage regimens has stimulated the interest in modelling population pharmacokinetics. Data from 82 adolescent and adult patients with cystic fibrosis (CF) who were treated with intravenous tobramycin because of an exacerbation of their pulmonary infection were analysed with a non-parametric expectation maximization (NPEM) algorithm. This algorithm estimates the entire discrete joint probability density of the pharmacokinetic parameters. It also provides traditional parametric statistics such as the means, standard deviations, medians, covariances and correlations among the various parameters, as well as graphic 2- and 3-dimensional representations of the marginal densities of the parameters investigated. Several models for intravenous tobramycin in adolescent and adult patients with CF were compared. Covariates were total body weight (for the volume of distribution) and creatinine clearance (for the total body clearance and elimination rate). Because of lack of data on patients with poor renal function, restricted models with non-renal clearance and the non-renal elimination rate constant fixed at literature values of 0.15 L/h and 0.01 h-1 were also included. In this population, intravenous tobramycin could be best described by a median (+/- dispersion factor) volume of distribution per unit of total body weight of 0.28 +/- 0.05 L/kg, an elimination rate constant of 0.25 +/- 0.10 h-1 and an elimination rate constant per unit of creatinine clearance of 0.0008 +/- 0.0009 h-1/(ml/min/1.73 m2). Analysis of populations of increasing size showed that, using a restricted model with a non-renal elimination rate constant fixed at 0.01 h-1, a model based on a population of only 10 to 20 patients contained parameter values similar to those of the entire population, whereas using the full model, a larger population (at least 40 patients) was needed.
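
    The population medians quoted above translate directly into a simple one-compartment intravenous-infusion calculation for an individual patient. The sketch below only illustrates that arithmetic (it is not the NPEM algorithm); the patient weight, creatinine clearance, dose and infusion duration are hypothetical.

    ```python
    import numpy as np

    def tobramycin_levels(dose_mg, weight_kg, crcl, t_hours,
                          vd_per_kg=0.28, ke_nonrenal=0.01, ke_slope=0.0008,
                          infusion_h=0.5):
        """One-compartment IV-infusion model using the population medians quoted above.
        ke = ke_nonrenal + ke_slope * CrCl (mL/min/1.73 m^2); Vd = vd_per_kg * weight."""
        vd = vd_per_kg * weight_kg                      # L
        ke = ke_nonrenal + ke_slope * crcl              # 1/h
        k0 = dose_mg / infusion_h                       # mg/h infusion rate
        t = np.asarray(t_hours, float)
        c_end = k0 / (vd * ke) * (1 - np.exp(-ke * infusion_h))   # conc at end of infusion
        return np.where(t <= infusion_h,
                        k0 / (vd * ke) * (1 - np.exp(-ke * t)),
                        c_end * np.exp(-ke * (t - infusion_h)))   # mg/L

    # Hypothetical adult CF patient: 60 kg, CrCl 100 mL/min/1.73 m^2, 200 mg over 30 min
    print(tobramycin_levels(200, 60, 100, [0.5, 1, 4, 8]))
    ```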

  14. Numerical optimization of Ignition and Growth reactive flow modeling for PAX2A

    NASA Astrophysics Data System (ADS)

    Baker, E. L.; Schimel, B.; Grantham, W. J.

    1996-05-01

    Variable metric nonlinear optimization has been successfully applied to the parameterization of unreacted and reacted products thermodynamic equations of state and reactive flow modeling of the HMX based high explosive PAX2A. The NLQPEB nonlinear optimization program has been recently coupled to the LLNL developed two-dimensional high rate continuum modeling programs DYNA2D and CALE. The resulting program has the ability to optimize initial modeling parameters. This new optimization capability was used to optimally parameterize the Ignition and Growth reactive flow model to experimental manganin gauge records. The optimization varied the Ignition and Growth reaction rate model parameters in order to minimize the difference between the calculated pressure histories and the experimental pressure histories.

  15. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  16. Estimation and identification study for flexible vehicles

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Englar, T. S., Jr.

    1973-01-01

    Techniques are studied for the estimation of rigid body and bending states and the identification of model parameters associated with the single-axis attitude dynamics of a flexible vehicle. This problem is highly nonlinear but completely observable provided sufficient attitude and attitude rate data is available and provided all system bending modes are excited in the observation interval. A sequential estimator tracks the system states in the presence of model parameter errors. A batch estimator identifies all model parameters with high accuracy.

  17. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors

  18. Important Physiological Parameters and Physical Activity Data for Evaluating Exposure Modeling Performance: a Synthesis

    EPA Science Inventory

    The purpose of this report is to develop a database of physiological parameters needed for understanding and evaluating performance of the APEX and SHEDS exposure/intake dose rate model used by the Environmental Protection Agency (EPA) as part of its regulatory activities. The A...

  19. Individual Differences in a Positional Learning Task across the Adult Lifespan

    ERIC Educational Resources Information Center

    Rast, Philippe; Zimprich, Daniel

    2010-01-01

    This study aimed at modeling individual and average non-linear trajectories of positional learning using a structured latent growth curve approach. The model is based on an exponential function which encompasses three parameters: Initial performance, learning rate, and asymptotic performance. These learning parameters were compared in a positional…

  20. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678

  1. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
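
    The competition dynamics referred to here can be sketched with the underlying Lotka-Volterra system; the impact (competition) coefficients and inheritance (growth) rates below are arbitrary illustrative numbers, not the values the authors estimate from census and language-survey data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def language_competition(t, y, r1, r2, K, c12, c21):
        """Lotka-Volterra competition between speaker populations x1, x2.
        r1, r2 : inheritance (growth) rates; c12, c21 : cross-impact coefficients.
        All values below are hypothetical, not estimates from census data."""
        x1, x2 = y
        dx1 = r1 * x1 * (1 - (x1 + c12 * x2) / K)
        dx2 = r2 * x2 * (1 - (x2 + c21 * x1) / K)
        return [dx1, dx2]

    sol = solve_ivp(language_competition, [0, 200], [0.6, 0.4],
                    args=(0.05, 0.04, 1.0, 1.1, 0.9), dense_output=True)
    t = np.linspace(0, 200, 5)
    print(sol.sol(t).round(3))   # shares of the two competing languages over time
    ```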

  2. Non-steady state simulation of BOM removal in drinking water biofilters: model development.

    PubMed

    Hozalski, R M; Bouwer, E J

    2001-01-01

    A numerical model was developed to simulate the non-steady-state behavior of biologically-active filters used for drinking water treatment. The biofilter simulation model called "BIOFILT" simulates the substrate (biodegradable organic matter or BOM) and biomass (both attached and suspended) profiles in a biofilter as a function of time. One of the innovative features of BIOFILT compared to previous biofilm models is the ability to simulate the effects of a sudden loss in attached biomass or biofilm due to filter backwash on substrate removal performance. A sensitivity analysis of the model input parameters indicated that the model simulations were most sensitive to the values of parameters that controlled substrate degradation and biofilm growth and accumulation including the substrate diffusion coefficient, the maximum rate of substrate degradation, the microbial yield coefficient, and a dimensionless shear loss coefficient. Variation of the hydraulic loading rate or other parameters that controlled the deposition of biomass via filtration did not significantly impact the simulation results.

  3. Modeling of the silane FBR system

    NASA Technical Reports Server (NTRS)

    Dudokovic, M. P.; Ramachandran, P. A.; Lai, S.

    1984-01-01

    Development of a mathematical model for fluidized bed pyrolysis of silane that relates production rate and product properties (size, size distribution, presence or absence of fines) to bed size and operating conditions (temperature, feed concentration, flow rate, seed size, etc.), and development of a user-oriented algorithm for the model, are considered. A parameter sensitivity study of the model was also performed.

  4. Deriving a model for influenza epidemics from historical data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lefantzi, Sophia

    In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military societies. We derive the model when the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The model form fitted is chosen for its similarity to published models for HIV and plague, enabling application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral, and predicts the number of people who exhibit symptoms and seek care over a given time-period. The start and end of the time period form the limits of integration. The disease progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation period model is obtained from literature, and the parameters of the infection rate are fitted from historical data including both military and civilian populations. The calibrated infection rate models display a marked difference in how the 1918 Spanish Influenza pandemic differed from the influenza seasons in the US between 2001-2008 and the progression of H1N1 in Catalunya, Spain. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection in all cases was about the same; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much higher. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed. Each outbreak to which the influenza model was fitted yields a separate set of parameter values. We suggest 'consensus' parameter values for military and civilian populations in the form of normal distributions so that they may be further used in other applications. Representing the parameter values as distributions, instead of point values, allows us to capture the uncertainty and scatter in the parameters. Quantifying the uncertainty allows us to use these models further in inverse problems, predictions under uncertainty and various other studies involving risk.
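
    The integral structure described (an incubation-period distribution convolved with a time-dependent infection rate, then integrated over the reporting window) can be sketched numerically as below; the bell-shaped infection-rate curve, the lognormal incubation parameters and the time window are invented for illustration and are not the report's fitted models.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    def expected_cases(t_start, t_end, infection_rate, incubation_pdf, t_grid):
        """N(t_start, t_end) = integral over [t_start, t_end] of
           integral_0^t infection_rate(s) * incubation_pdf(t - s) ds dt,
        i.e. people infected at time s who develop symptoms in the reporting window."""
        dt = t_grid[1] - t_grid[0]
        incidence = np.array([
            np.sum(infection_rate(t_grid[t_grid <= t]) *
                   incubation_pdf(t - t_grid[t_grid <= t])) * dt
            for t in t_grid])
        mask = (t_grid >= t_start) & (t_grid <= t_end)
        return np.sum(incidence[mask]) * dt

    # Hypothetical bell-shaped infection-rate curve peaking around day 20
    infection_rate = lambda t: 50.0 * np.exp(-((t - 20.0) / 8.0) ** 2)
    # Incubation period ~ lognormal with median ~1.6 days (illustrative values)
    incubation_pdf = lognorm(s=0.5, scale=1.6).pdf

    t_grid = np.linspace(0.0, 60.0, 601)
    print("expected symptomatic cases in days 15-30:",
          round(expected_cases(15.0, 30.0, infection_rate, incubation_pdf, t_grid), 1))
    ```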

  5. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) for satisfying throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom of these parameters often make it difficult to find the optimum combination due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation formats, symbol rates, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine the constraints with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.

  6. Effect of mechanical properties on erosion resistance of ductile materials

    NASA Astrophysics Data System (ADS)

    Levin, Boris Feliksovih

    Solid particle erosion (SPE) resistance of ductile Fe, Ni, and Co-based alloys as well as commercially pure Ni and Cu was studied. A model for SPE behavior of ductile materials is presented. The model incorporates the mechanical properties of the materials at the deformation conditions associated with the SPE process, as well as the evolution of these properties during the erosion-induced deformation. An erosion parameter was formulated based on consideration of the energy loss during erosion, and incorporates the material's hardness and toughness at high strain rates. The erosion model predicts that materials combining high hardness and toughness can exhibit good erosion resistance. To measure mechanical properties of materials, high strain rate compression tests using the Hopkinson bar technique were conducted at strain rates similar to those during erosion. From these tests, failure strength and strain during erosion were estimated and used to calculate the toughness of the materials. The proposed erosion parameter shows good correlation with experimentally measured erosion rates for all tested materials. To analyze subsurface deformation during erosion, microhardness and nanoindentation tests were performed on the cross-sections of the eroded materials, and the size of the plastically deformed zone and the increase in material hardness due to erosion were determined. A nanoindentation method was developed to estimate the restitution coefficient within plastically deformed regions of the eroded samples, which provides a measure of the rebounding ability of a material during particle impact. An increase in hardness near the eroded surface led to an increase in restitution coefficient. Also, the strain rates imposed below the eroded surface were comparable to those measured during high strain-rate compression tests (10³-10⁴ s⁻¹). A new parameter, "area under the microhardness curve", was developed that represents the ability of a material to absorb impact energy. By incorporating this parameter into a new erosion model, good correlation was observed with experimentally measured erosion rates. An increase in area under the microhardness curve led to an increase in erosion resistance. It was shown that the increase in hardness below the eroded surface occurs mainly due to the strain-rate hardening effect. Strain-rate sensitivities of the tested materials were estimated from the nanoindentation tests and showed a decrease with an increase in material hardness. Also, materials combining high hardness and strain-rate sensitivity may offer good erosion resistance. A methodology is presented to determine the proper mechanical properties to incorporate into the erosion parameter based on the physical model of the erosion mechanism in ductile materials.

  7. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, S.; Shelef, E.; Hilley, G. E.

    2013-12-01

    The Washington Cascades is currently in topographic and erosional disequilibrium after deglaciation occurred around 11- 17 ka ago. The topography still shows the features inherited from prior alpine glacial processes (e.g., cirques, steep side-valleys, and flat valley bottoms), though postglacial processes are currently denuding this landscape. Our previous study in this area calculated the thousand-year-timescale denudation rates using cosmogenic 10Be concentration (CRN-denudation rates), and showed that they were ~ four times higher than million-year-timescale uplift rates. In addition, the spatial distribution of denudation rates showed a good correlation with a factor-of-ten variation in precipitation. We interpreted this correlation as reflecting the sensitivity of landslide triggering in over-steepened deglaciated topography to precipitation, which produced high denudation rates in wet areas that experienced frequent landsliding. We explored this interpretation using a model of postglacial surface processes that predicts the evolution of the topography and denudation rates within the deglaciated Washington Cascades. Specifically, we used the model to understand the controls on and timescales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically-based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distribution of model parameters required to fit the observed denudation rates shows comparable ranges from previous studies in similar rock types and climatic conditions. The calibrated parameters suggest that the dominant sediment source of river sediments originates from stochastic landslides. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), while their spatial distribution is largely controlled by precipitation and slope angles. Simulation results show that denudation rates decay over time and take approximately 130-180 ka to reach steady-state rates. This response timescale is longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may prevent these types of landscapes from reaching a dynamic equilibrium with postglacial processes.

  8. Inclusion of TCAF model in XSPEC to study accretion flow dynamics around black hole candidates

    NASA Astrophysics Data System (ADS)

    Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu

    Spectral and temporal properties of black hole candidates can be well understood with the Chakrabarti-Titarchuk solution of the two component advective flow (TCAF). This model requires two accretion rates, namely, the Keplerian disk accretion rate and the sub-Keplerian halo accretion rate, the latter being composed of a low angular momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo rate (which creates the Compton cloud region) with respect to the Keplerian disk rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time we are able to create a user-friendly version by implementing an additive table model FITS file into GSFC/NASA's spectral analysis software package XSPEC. This enables any user to extract physical parameters of accretion flows, such as the two accretion rates, the shock location, the shock strength, etc. for any black hole candidate. Most importantly, unlike any other theoretical model, we show that TCAF is capable of predicting timing properties from spectral fits, since in TCAF a shock is responsible for deciding spectral slopes as well as QPO frequencies.

  9. On a sparse pressure-flow rate condensation of rigid circulation models

    PubMed Central

    Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.

    2015-01-01

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlets flow rates are also quantified through a Sobol’ decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219

  10. Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao

    2017-11-01

    Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operational safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variations under different scenarios.
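
    For context, the first-order RC equivalent circuit that the filter operates on can be written and propagated in a few lines; the sketch below only simulates the model forward with hypothetical R0, R1, C1 and OCV values (the cubature Kalman filter that estimates them online is not reproduced here).

    ```python
    import numpy as np

    def simulate_rc_model(current, dt, R0, R1, C1, ocv=3.7):
        """First-order RC equivalent circuit: terminal voltage
           U = OCV - I*R0 - U1,  with  dU1/dt = I/C1 - U1/(R1*C1).
        Discretized with a zero-order hold on the polarization voltage U1."""
        tau = R1 * C1
        a = np.exp(-dt / tau)
        u1 = 0.0
        v = np.empty_like(current, dtype=float)
        for k, i_k in enumerate(current):
            u1 = a * u1 + R1 * (1.0 - a) * i_k          # polarization voltage update
            v[k] = ocv - R0 * i_k - u1                  # terminal voltage
        return v

    # Illustrative discharge pulse with hypothetical parameter values
    current = np.r_[np.zeros(10), 2.0 * np.ones(60), np.zeros(30)]   # A
    voltage = simulate_rc_model(current, dt=1.0, R0=0.05, R1=0.03, C1=1500.0)
    print(voltage[[9, 15, 69, 75]].round(4))
    ```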

  11. Modeling the cooperative and competitive contagions in online social networks

    NASA Astrophysics Data System (ADS)

    Zhuang, Yun-Bei; Chen, J. J.; Li, Zhi-hong

    2017-10-01

    The wide adoption of social media has increased the interaction among different pieces of information, and this interaction includes cooperation and competition for our finite attention. While previous research focuses on full competition, this paper extends the interaction to include both "cooperation" and "competition" by employing an IS1S2R model. To explore how two different pieces of information interact with each other, the IS1S2R model splits the agents into four compartments (Ignorant, Spreader I, Spreader II, Stifler), based on the SIR epidemic spreading model. Using real data from Weibo.com, a social network site similar to Twitter, we find that some parameters, such as the decaying rates, influence both the cooperative and the competitive diffusion processes, while other parameters, such as the infectious rates, only influence the competitive diffusion process. Moreover, the parameters' effects are more significant in the competitive diffusion than in the cooperative diffusion.
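
    The compartment structure can be sketched as a small ODE system; the interaction terms, rates and initial fractions below are illustrative guesses at an IS1S2R-type structure, not the authors' exact equations or the rates fitted to the Weibo.com data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def is1s2r(t, y, beta1, beta2, coop, delta1, delta2):
        """IS1S2R-style compartments: Ignorant, Spreader of info 1, Spreader of info 2, Stifler.
        beta1/beta2 : infectious (spreading) rates; delta1/delta2 : decaying (stifling) rates;
        coop > 1 boosts adoption of info 2 by spreaders of info 1 (cooperation),
        coop < 1 suppresses it (competition)."""
        I, S1, S2, R = y
        new_s1 = beta1 * I * S1
        new_s2 = beta2 * I * S2 + coop * beta2 * S1 * S2   # info 2 also recruits from S1
        dI = -new_s1 - beta2 * I * S2
        dS1 = new_s1 - coop * beta2 * S1 * S2 - delta1 * S1
        dS2 = new_s2 - delta2 * S2
        dR = delta1 * S1 + delta2 * S2
        return [dI, dS1, dS2, dR]

    y0 = [0.98, 0.01, 0.01, 0.0]
    sol = solve_ivp(is1s2r, [0, 60], y0, args=(0.5, 0.4, 1.5, 0.1, 0.1), dense_output=True)
    print(sol.sol(np.linspace(0, 60, 4)).round(3))
    ```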

  12. Integration of Harvest and Time-to-Event Data Used to Estimate Demographic Parameters for White-tailed Deer

    NASA Astrophysics Data System (ADS)

    Norton, Andrew S.

    An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information has accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase flexibility of model parameters. I collected known fates data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e. prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e. integrated AAH model) about survival outside the hunting season from known fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.

  13. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
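
    A minimal sketch of the idea of a circle map whose parameter occasionally jumps, giving stretches of low-dimensional determinism interrupted by stochastic parameter changes; the map form, jump mechanism, and parameter values (omega, K, jump_prob) are generic illustrations, not the authors' fitted model.

```python
# Sketch of a circle map whose frequency parameter occasionally jumps, illustrating
# low-dimensional determinism interrupted by stochastic parameter jumps.
import numpy as np

rng = np.random.default_rng(0)

def iterate_jump_circle_map(n, omega=0.28, K=0.9, jump_prob=0.02, jump_scale=0.05):
    theta = 0.1
    out = np.empty(n)
    for i in range(n):
        if rng.random() < jump_prob:              # occasional jump to a nearby deterministic system
            omega += jump_scale * rng.standard_normal()
        theta = (theta + omega - (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
        out[i] = theta
    return out

rr_like = iterate_jump_circle_map(2000)           # surrogate "RR interval"-like sequence
print(rr_like[:10])
```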

  14. Forecasting the mortality rates of Malaysian population using Heligman-Pollard model

    NASA Astrophysics Data System (ADS)

    Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd

    2017-08-01

    Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software will be used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) will be applied to forecast all the parameters using an Autoregressive Integrated Moving Average (ARIMA) approach. The empirical data sets of the Malaysian population for the period 1981 to 2015 for both genders will be considered, of which 1981 to 2010 will be used as the "training set" and 2011 to 2015 as the "testing set". In order to investigate the accuracy of the estimation, the forecast results will be compared against actual mortality rates. The results show that the Heligman-Pollard model fits well for the male population at all ages, while the model seems to underestimate the mortality rates for the female population at older ages.
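
    A minimal sketch of the eight-parameter Heligman-Pollard law as it is commonly written, q_x/(1 - q_x) = A^((x+B)^C) + D*exp(-E*(ln x - ln F)^2) + G*H^x; the parameter values below are placeholders for illustration, not the Malaysian estimates obtained in the paper.

```python
# Sketch of the eight-parameter Heligman-Pollard mortality law in its common form.
# Parameter values are placeholders, not fitted Malaysian estimates.
import numpy as np

def heligman_pollard_qx(x, A, B, C, D, E, F, G, H):
    x = np.asarray(x, dtype=float)
    child = A ** ((x + B) ** C)                                  # early-childhood mortality term
    accident = D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)     # accident "hump"
    senescent = G * H ** x                                       # old-age (Gompertz-like) term
    ratio = child + accident + senescent                         # q_x / (1 - q_x)
    return ratio / (1.0 + ratio)

ages = np.arange(1, 101)
qx = heligman_pollard_qx(ages, A=5e-4, B=0.01, C=0.1, D=1e-3, E=10.0, F=20.0, G=5e-5, H=1.10)
print(qx[[0, 19, 59, 99]])    # sample ages 1, 20, 60, 100
```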

  15. Collective firm bankruptcies and phase transition in rating dynamics

    NASA Astrophysics Data System (ADS)

    Sieczka, P.; Hołyst, J. A.

    2009-10-01

    We present a simple model of firm rating evolution. We consider two sources of defaults: individual dynamics of economic development and Potts-like interactions between firms. We show that a model defined in this way leads to a phase transition, which results in collective defaults. The existence of the collective phase depends on the mean interaction strength. For small interaction strength parameters, there are many independent bankruptcies of individual companies. For large parameters, there are giant collective defaults of firm clusters. In the case when the individual firm dynamics favors damping of rating changes, there is an optimal strength of the firms' interactions from the systemic risk point of view.

  16. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
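
    A minimal sketch of the peaks-over-threshold step described above: choose a high quantile of non-zero precipitation as the threshold, fit a GPD to the excesses, and record the exceedance rate. The synthetic series, the 95% threshold choice, and the return-level illustration are assumptions, not the Berlin or Bangalore analyses.

```python
# Minimal sketch of fitting a Generalized Pareto Distribution to peaks-over-threshold
# excesses with scipy. The synthetic data and threshold choice are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.5, scale=8.0, size=5000)           # stand-in for non-zero precipitation
threshold = np.quantile(precip, 0.95)                         # threshold from a high quantile
excesses = precip[precip > threshold] - threshold

shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)   # fix location at 0 for excesses
rate = excesses.size / precip.size                            # exceedance rate
print(f"xi={shape:.3f}, sigma={scale:.3f}, exceedance rate={rate:.4f}")

# Level exceeded on average once per 50 exceedances (simple illustration)
return_level = threshold + stats.genpareto.ppf(1 - 1.0 / 50, shape, loc=0.0, scale=scale)
print(f"approx. 50-exceedance return level: {return_level:.2f}")
```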

  17. VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.

    2017-05-01

    The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).

  18. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
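
    A minimal sketch of one of the four measures, composite scaled sensitivities, computed from a Jacobian of simulated equivalents with respect to parameters using the usual scaling by parameter value and observation weight; the Jacobian, parameter values, and weights below are random placeholders rather than DVRFS quantities.

```python
# Sketch of composite scaled sensitivities (CSS) from a model Jacobian, scaled by
# parameter values and observation weights. Inputs below are random placeholders.
import numpy as np

def composite_scaled_sensitivities(jacobian, params, weights):
    """jacobian: (n_obs, n_params) derivatives dy_i/db_j; weights: observation weights w_i."""
    dss = jacobian * params[None, :] * np.sqrt(weights)[:, None]   # dimensionless scaled sensitivities
    return np.sqrt(np.mean(dss ** 2, axis=0))                      # one CSS value per parameter

rng = np.random.default_rng(2)
J = rng.normal(size=(100, 5))                  # stand-in for model sensitivities
b = np.array([1e-4, 3.0, 0.2, 50.0, 1e-2])     # parameter values (e.g. conductivity, recharge, ...)
w = np.ones(100)                               # equal observation weights
print(composite_scaled_sensitivities(J, b, w))
```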

  19. Testing models of thorium and particle cycling in the ocean using data from station GT11-22 of the U.S. GEOTRACES North Atlantic section

    NASA Astrophysics Data System (ADS)

    Lerner, Paul; Marchal, Olivier; Lam, Phoebe J.; Anderson, Robert F.; Buesseler, Ken; Charette, Matthew A.; Edwards, R. Lawrence; Hayes, Christopher T.; Huang, Kuo-Fang; Lu, Yanbin; Robinson, Laura F.; Solow, Andrew

    2016-07-01

    Thorium is a highly particle-reactive element that possesses different measurable radio-isotopes in seawater, with well-constrained production rates and very distinct half-lives. As a result, Th has emerged as a key tracer for the cycling of marine particles and of their chemical constituents, including particulate organic carbon. Here two different versions of a model of Th and particle cycling in the ocean are tested using an unprecedented data set from station GT11-22 of the U.S. GEOTRACES North Atlantic Section: (i) 228,230,234Th activities of dissolved and particulate fractions, (ii) 228Ra activities, (iii) 234,238U activities estimated from salinity data and an assumed 234U/238U ratio, and (iv) particle concentrations, below a depth of 125 m. The two model versions assume a single class of particles but rely on different assumptions about the rate parameters for sorption reactions and particle processes: a first version (V1) assumes vertically uniform parameters (a popular description), whereas the second (V2) does not. Both versions are tested by fitting to the GT11-22 data using generalized nonlinear least squares and by analyzing residuals normalized to the data errors. We find that model V2 displays a significantly better fit to the data than model V1. Thus, the mere allowance of vertical variations in the rate parameters can lead to a significantly better fit to the data, without the need to modify the structure or add any new processes to the model. To understand how the better fit is achieved we consider two parameters, K = k1/(k-1 + β-1) and K/P, where k1 is the adsorption rate constant, k-1 the desorption rate constant, β-1 the remineralization rate constant, and P the particle concentration. We find that the rate constant ratio K is large (⩾ 0.2) in the upper 1000 m and decreases to a nearly uniform value of ca. 0.12 below 2000 m, implying that the specific rate at which Th attaches to particles relative to that at which it is released from particles is higher in the upper ocean than in the deep ocean. In contrast, K/P increases with depth below 500 m. The parameters K and K/P display significant positive and negative monotonic relationships with P, respectively, which is collectively consistent with a particle concentration effect.

  20. Estimating Unsaturated Zone N Fluxes and Travel Times to Groundwater at Watershed Scales

    NASA Astrophysics Data System (ADS)

    Liao, L.; Green, C. T.; Harter, T.; Nolan, B. T.; Juckem, P. F.; Shope, C. L.

    2016-12-01

    Nitrate concentrations in groundwater vary at spatial and temporal scales. Local variability depends on soil properties, unsaturated zone properties, hydrology, reactivity, and other factors. For example, the travel time in the unsaturated zone can cause contaminant responses in aquifers to lag behind changes in N inputs at the land surface, and variable leaching-fractions of applied N fertilizer to groundwater can elevate (or reduce) concentrations in groundwater. In this study, we apply the vertical flux model (VFM) (Liao et al., 2012) to address the importance of travel time of N in the unsaturated zone and its fraction leached from the unsaturated zone to groundwater. The Fox-Wolf-Peshtigo basins, including 34 out of 72 counties in Wisconsin, were selected as the study area. Simulated concentrations of NO3-, N2 from denitrification, O2, and environmental tracers of groundwater age were matched to observations by adjusting parameters for recharge rate, unsaturated zone travel time, fractions of N inputs leached to groundwater, O2 reduction rate, O2 threshold for denitrification, denitrification rate, and dispersivity. Correlations between calibrated parameters and GIS parameters (land use, drainage class, soil properties, etc.) were evaluated. Model results revealed a median recharge rate of 0.11 m/yr, which is comparable with results from three independent estimates of recharge rates in the study area. The unsaturated travel times ranged from 0.2 yr to 25 yr with a median of 6.8 yr. The correlation analysis revealed that relationships between VFM parameters and landscape characteristics (GIS parameters) were consistent with expected relationships. The fraction of N leached was lower in the vicinity of wetlands and greater in the vicinity of crop lands. Faster unsaturated zone transport in forested areas was consistent with results of studies showing rapid vertical transport in forested soils. Reaction rate coefficients correlated with chemical indicators such as Fe and P concentrations. Overall, the results demonstrate applicability of the VFM at a regional scale, as well as potential to generate N transport estimates continuously across regions based on statistical relationships between VFM model parameters and GIS parameters.

  1. The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys

    NASA Astrophysics Data System (ADS)

    Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand

    2018-01-01

    The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution. Yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented to connect the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations and a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys are selected as reference alloys for the kinetics of the rafting process, and the corresponding microstructural parameters are simulated and compared with experimental observations reported in the literature. The results confirm that our physical model, which does not require any fitting parameters, manages to predict (semiquantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.

  2. Survival and recovery rates of American woodcock banded in Michigan

    USGS Publications Warehouse

    Krementz, David G.; Hines, James E.; Luukkonen, David R.

    2003-01-01

    American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and 4 models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 +/- 0.1048 (SE) for locals, 0.2646 +/- 0.0423 (SE) for juveniles, and 0.4898 +/- 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 +/- 0.0053 (SE) for juveniles and 0.0313 +/- 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. Juvenile survival rates were low.

  3. Assessing doses to terrestrial wildlife at a radioactive waste disposal site: inter-comparison of modelling approaches.

    PubMed

    Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C

    2012-06-15

    Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines.

  4. High-speed blanking of copper alloy sheets: Material modeling and simulation

    NASA Astrophysics Data System (ADS)

    Husson, Ch.; Ahzi, S.; Daridon, L.

    2006-08-01

    To optimize the blanking process of thin copper sheets (≈1 mm thickness), it is necessary to study the influence of process parameters such as the punch-die clearance and the wear of the punch and the die. For high stroke rates, the strain rate developed in the work-piece can be very high. Therefore, the material modeling must include dynamic effects. For the modeling part, we propose an elastic-viscoplastic material model combined with a non-linear isotropic damage evolution law based on the theory of continuum damage mechanics. Our proposed model is valid for a wide range of strain rates and temperatures. Finite element simulations of the blanking process, using the commercial code ABAQUS/Explicit, are then conducted and the results are compared to the experimental investigations. The predicted cut edge of the blanked part and the punch force-displacement curves are discussed as a function of the process parameters. The evolution of the shape errors (roll-over depth, fracture depth, shearing depth, and burr formation) as a function of the punch-die clearance, the punch and die wear, and the punch/die/blank-holder contact is presented. A discussion of the different stages of the blanking process as a function of the processing parameters is given. The predicted dependence of the blanking process on strain rate and temperature (for both plasticity and damage) is presented. The comparison of our model results with the experimental ones shows good agreement.

  5. Geometric model for softwood transverse thermal conductivity. Part I

    Treesearch

    Hong-mei Gu; Audrey Zink-Sharp

    2005-01-01

    Thermal conductivity is a very important parameter in determining heat transfer rate and is required for developing drying models and in industrial operations such as adhesive curing. Geometric models for predicting softwood thermal conductivity in the radial and tangential directions were generated in this study based on observation and measurements of wood...

  6. A simplified model for predicting malaria entomologic inoculation rates based on entomologic and parasitologic parameters relevant to control.

    PubMed

    Killeen, G F; McKenzie, F E; Foy, B D; Schieffelin, C; Billingsley, P F; Beier, J C

    2000-05-01

    Malaria transmission intensity is modeled from the starting perspective of individual vector mosquitoes and is expressed directly as the entomologic inoculation rate (EIR). The potential of individual mosquitoes to transmit malaria during their lifetime is presented graphically as a function of their feeding cycle length and survival, human biting preferences, and the parasite sporogonic incubation period. The EIR is then calculated as the product of 1) the potential of individual vectors to transmit malaria during their lifetime, 2) vector emergence rate relative to human population size, and 3) the infectiousness of the human population to vectors. Thus, impacts on more than one of these parameters will amplify each other's effects. The EIRs transmitted by the dominant vector species at four malaria-endemic sites from Papua New Guinea, Tanzania, and Nigeria were predicted using field measurements of these characteristics together with human biting rate and human reservoir infectiousness. This model predicted EIRs (+/- SD) that are 1.13 +/- 0.37 (range = 0.84-1.59) times those measured in the field. For these four sites, mosquito emergence rate and lifetime transmission potential were more important determinants of the EIR than human reservoir infectiousness. This model and the input parameters from the four sites allow the potential impacts of various control measures on malaria transmission intensity to be tested under a range of endemic conditions. The model has potential applications for the development and implementation of transmission control measures and for public health education.
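
    A minimal sketch of the multiplicative structure described above, with the EIR computed as the product of (1) lifetime transmission potential per emerging vector, (2) vector emergence rate relative to human population size, and (3) human infectiousness to vectors; the numerical values and function name are hypothetical.

```python
# Minimal sketch of the multiplicative structure described above: impacts on more than
# one factor multiply through to the EIR. Values are hypothetical placeholders.
def entomologic_inoculation_rate(lifetime_potential, emergence_rate_per_person, human_infectiousness):
    return lifetime_potential * emergence_rate_per_person * human_infectiousness

eir = entomologic_inoculation_rate(lifetime_potential=0.8,
                                   emergence_rate_per_person=20.0,
                                   human_infectiousness=0.05)
print(f"predicted EIR ~ {eir:.2f} infective bites per person per unit time")
```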

  7. A semi-phenomenological model to predict the acoustic behavior of fully and partially reticulated polyurethane foams

    NASA Astrophysics Data System (ADS)

    Doutres, Olivier; Atalla, Noureddine; Dong, Kevin

    2013-02-01

    This paper proposes simple semi-phenomenological models to predict the sound absorption efficiency of highly porous polyurethane foams from microstructure characterization. In a previous paper [J. Appl. Phys. 110, 064901 (2011)], the authors presented a 3-parameter semi-phenomenological model linking the microstructure properties of fully and partially reticulated isotropic polyurethane foams (i.e., strut length l, strut thickness t, and reticulation rate Rw) to the macroscopic non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity ϕ, airflow resistivity σ, tortuosity α∝, viscous Λ, and thermal Λ' characteristic lengths). The model was based on existing scaling laws, validated for fully reticulated polyurethane foams, and improved using both geometrical and empirical approaches to account for the presence of membrane closing the pores. This 3-parameter model is applied to six polyurethane foams in this paper and is found highly sensitive to the microstructure characterization; particularly to strut's dimensions. A simplified micro-/macro model is then presented. It is based on the cell size Cs and reticulation rate Rw only, assuming that the geometric ratio between strut length l and strut thickness t is known. This simplified model, called the 2-parameter model, considerably simplifies the microstructure characterization procedure. A comparison of the two proposed semi-phenomenological models is presented using six polyurethane foams being either fully or partially reticulated, isotropic or anisotropic. It is shown that the 2-parameter model is less sensitive to measurement uncertainties compared to the original model and allows a better estimation of polyurethane foams sound absorption behavior.

  8. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
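
    A minimal sketch of the emulation idea: fit a Gaussian process to a handful of simulator runs and use the cheap emulator to screen inputs with a crude variance-based index. The toy simulator, kernel choice, and index estimator are illustrative assumptions, not the calibration machinery used in the paper.

```python
# Sketch of emulating an expensive simulator with a Gaussian process and computing a
# crude per-input sensitivity measure from the emulator. Everything here is a toy stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

def toy_simulator(x):                        # stand-in for the agent-based model output
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

X = rng.uniform(0, 1, size=(60, 3))          # design points in parameter space
y = toy_simulator(X)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2] * 3),
                              normalize_y=True).fit(X, y)

# Crude main-effect index: variance of the emulator mean when only one input varies.
base = rng.uniform(0, 1, size=(2000, 3))
total_var = gp.predict(base).var()
for j in range(3):
    varied = np.tile(base.mean(axis=0), (2000, 1))
    varied[:, j] = base[:, j]                # vary one parameter, hold the others at their mean
    print(f"parameter {j}: ~{gp.predict(varied).var() / total_var:.2f} of emulator variance")
```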

  9. Predicting key malaria transmission factors, biting and entomological inoculation rates, using modelled soil moisture in Kenya.

    PubMed

    Patz, J A; Strzepek, K; Lele, S; Hedden, M; Greene, S; Noden, B; Hay, S I; Kalkstein, L; Beier, J C

    1998-10-01

    While malaria transmission varies seasonally, large inter-annual heterogeneity of malaria incidence occurs. Variability in entomological parameters, biting rates and entomological inoculation rates (EIR) has been strongly associated with attack rates in children. The goal of this study was to assess the weather's impact on weekly biting and EIR in the endemic area of Kisian, Kenya. Entomological data collected by the U.S. Army from March 1986 through June 1988 at Kisian, Kenya was analysed with concurrent weather data from nearby Kisumu airport. A soil moisture model of surface-water availability was used to combine multiple weather parameters with landcover and soil features to improve disease prediction. Modelling soil moisture substantially improved prediction of biting rates compared to rainfall; soil moisture lagged two weeks explained up to 45% of An. gambiae biting variability, compared to 8% for raw precipitation. For An. funestus, soil moisture explained 32% variability, peaking after a 4-week lag. The interspecies difference in response to soil moisture was significant (P < 0.00001). A satellite normalized difference vegetation index (NDVI) of the study site yielded a similar correlation (r = 0.42 An. gambiae). Modelled soil moisture accounted for up to 56% variability of An. gambiae EIR, peaking at a lag of six weeks. The relationship between temperature and An. gambiae biting rates was less robust; maximum temperature r2 = -0.20, and minimum temperature r2 = 0.12 after lagging one week. Benefits of hydrological modelling are compared to raw weather parameters and to satellite NDVI. These findings can improve both current malaria risk assessments and those based on El Niño forecasts or global climate change model projections.

  10. Leaf photosynthesis and respiration of three bioenergy crops in relation to temperature and leaf nitrogen: how conserved are biochemical model parameters among crop species?

    PubMed Central

    Archontoulis, S. V.; Yin, X.; Vos, J.; Danalatos, N. G.; Struik, P. C.

    2012-01-01

    Given the need for parallel increases in food and energy production from crops in the context of global change, crop simulation models and data sets to feed these models with photosynthesis and respiration parameters are increasingly important. This study provides information on photosynthesis and respiration for three energy crops (sunflower, kenaf, and cynara), reviews relevant information for five other crops (wheat, barley, cotton, tobacco, and grape), and assesses how conserved photosynthesis parameters are among crops. Using large data sets and optimization techniques, the C3 leaf photosynthesis model of Farquhar, von Caemmerer, and Berry (FvCB) and an empirical night respiration model for tested energy crops accounting for effects of temperature and leaf nitrogen were parameterized. Instead of the common approach of using information on net photosynthesis response to CO2 at the stomatal cavity (An–Ci), the model was parameterized by analysing the photosynthesis response to incident light intensity (An–Iinc). Convincing evidence is provided that the maximum Rubisco carboxylation rate or the maximum electron transport rate was very similar whether derived from An–Ci or from An–Iinc data sets. Parameters characterizing Rubisco limitation, electron transport limitation, the degree to which light inhibits leaf respiration, night respiration, and the minimum leaf nitrogen required for photosynthesis were then determined. Model predictions were validated against independent data sets. Only a few FvCB parameters were conserved among crop species; thus, species-specific FvCB model parameters are needed for crop modelling. Therefore, information from readily available but underexplored An–Iinc data should be re-analysed, thereby expanding the potential of combining classical photosynthetic data and the biochemical model. PMID:22021569
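
    A minimal sketch of the FvCB model's two limiting rates, with net assimilation taken as the minimum of the Rubisco-limited and electron-transport-limited rates minus day respiration; the kinetic constants are generic 25 °C placeholders, not the crop-specific estimates reported here.

```python
# Sketch of the C3 (FvCB) leaf photosynthesis model: net assimilation is the minimum of
# the Rubisco-limited and electron-transport-limited rates minus day respiration.
# Parameter values are generic placeholders, not the fitted crop-specific estimates.
import numpy as np

def fvcb_net_assimilation(Ci, Vcmax=80.0, J=160.0, Rd=1.0,
                          gamma_star=42.75, Kc=404.9, Ko=278.4, O=210.0):
    """Ci, gamma_star, Kc in umol mol-1; O, Ko in mmol mol-1; rates in umol m-2 s-1."""
    Ac = Vcmax * (Ci - gamma_star) / (Ci + Kc * (1.0 + O / Ko))  # Rubisco-limited rate
    Aj = J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star)   # electron-transport-limited rate
    return np.minimum(Ac, Aj) - Rd

Ci = np.linspace(100, 800, 8)
print(fvcb_net_assimilation(Ci))
```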

  11. Estimation of the Reactive Flow Model Parameters for an Ammonium Nitrate-Based Emulsion Explosive Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ribeiro, J. B.; Silva, C.; Mendes, R.

    2010-10-01

    A real-coded genetic algorithm methodology that has been developed for the estimation of the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. This methodology allows, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to seek the 15 parameters of the reaction rate equation that fit the numerical results to the experimental ones. Mass averaging and the plate-gap model have been used for the determination of the shock data used in the unreacted explosive JWL equation of state (EOS) assessment, and the thermochemical code THOR retrieved the data used in the detonation products' JWL EOS assessment. The developed methodology was applied to the estimation of these parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
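
    A generic real-coded genetic algorithm sketch (tournament selection, blend crossover, Gaussian mutation, elitism) of the kind used to search a bounded parameter space against a misfit function; the quadratic objective stands in for the expensive reactive-flow simulation and gauge comparison, and all settings are illustrative.

```python
# Generic real-coded GA sketch: tournament selection, blend crossover, sparse Gaussian
# mutation, and elitism over a bounded 15-dimensional parameter space. The objective is
# a placeholder, not the reactive-flow simulation versus manganin gauge data.
import numpy as np

rng = np.random.default_rng(4)

def misfit(p):                                   # placeholder objective (e.g. model-vs-gauge error)
    return np.sum((p - 0.3) ** 2)

def real_coded_ga(n_params=15, pop_size=60, generations=200, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_params))
    for _ in range(generations):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[fit.argmin()].copy()]                     # elitism: keep the best individual
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]            # tournament selection, parent A
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]            # tournament selection, parent B
            alpha = rng.uniform(size=n_params)
            child = alpha * a + (1 - alpha) * b                  # blend (arithmetic) crossover
            child += rng.normal(0, 0.02, n_params) * (rng.random(n_params) < 0.1)  # sparse mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([misfit(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best, best_fit = real_coded_ga()
print(best_fit)
```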

  12. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
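
    A minimal sketch of Gaussian-process-based Bayesian optimization of a single hyper-parameter with an expected-improvement acquisition; the toy objective (cross-validated SVM error over C) is an assumption for illustration, and the progressive-sampling component of the paper's method is omitted.

```python
# Sketch of GP-based Bayesian optimization of one hyper-parameter with expected
# improvement; the objective and data set are toy stand-ins.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, random_state=0)
rng = np.random.default_rng(5)

def validation_error(log10_C):
    clf = SVC(C=10.0 ** log10_C)
    return 1.0 - cross_val_score(clf, X, y, cv=3).mean()

samples = list(rng.uniform(-3, 3, size=4))            # initial design
errors = [validation_error(s) for s in samples]
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(samples)[:, None], errors)
    grid = np.linspace(-3, 3, 200)[:, None]
    mu, sd = gp.predict(grid, return_std=True)
    best = min(errors)
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    nxt = float(grid[np.argmax(ei), 0])
    samples.append(nxt)
    errors.append(validation_error(nxt))

print(f"best log10(C) ~ {samples[int(np.argmin(errors))]:.2f}, error ~ {min(errors):.3f}")
```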

  13. Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim

    2018-03-01

    A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4(LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), reaction-rate constant of the negative electrode (Kn), negative and positive electrode porosity (εn & εp), initial State-Of-Charge of the negative electrode (SOCn,0), initial partial composition of the LFP units (yk,0), minimum and maximum resistance of the LFP units (Rmin & Rmax), and solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also adequately describes the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.

  14. Mathematical modeling of power law and Herschel-Bulkley non-Newtonian fluid of blood flow through a stenosed artery with permeable wall: Effects of slip velocity

    NASA Astrophysics Data System (ADS)

    Chitra, M.; Karthikeyan, D.

    2018-04-01

    A mathematical model of non-Newtonian blood flow through a stenosed artery is considered. The steady non-Newtonian flow is characterized by the generalized power-law and Herschel-Bulkley models, incorporating the effect of slip velocity for a stenosed artery with a permeable wall. The effects of slip velocity and the non-Newtonian nature of blood on the velocity, flow rate and wall shear stress of the stenosed artery with a permeable wall are obtained analytically. The effects of various parameters such as the slip parameter (λ), power index (m) and different thicknesses of the stenosis (δ) on velocity, volumetric flow rate and wall shear stress are discussed through graphs.
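
    A minimal sketch of the two constitutive laws named above, giving shear stress as a function of shear rate for power-law and Herschel-Bulkley fluids; the consistency index, flow index, and yield stress values are placeholders, not blood parameters from the study.

```python
# Sketch of the power-law and Herschel-Bulkley constitutive laws (shear stress versus
# shear rate). Consistency index k, flow index m, and yield stress tau_y are placeholders.
import numpy as np

def power_law_stress(gamma_dot, k=0.35, m=0.6):
    return k * gamma_dot ** m

def herschel_bulkley_stress(gamma_dot, tau_y=0.04, k=0.35, m=0.6):
    # Above the yield stress, stress follows a power law in shear rate.
    return tau_y + k * gamma_dot ** m

gamma_dot = np.array([1.0, 10.0, 100.0])     # shear rates (1/s)
print(power_law_stress(gamma_dot))
print(herschel_bulkley_stress(gamma_dot))
```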

  15. Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model

    PubMed Central

    Bianchi, Daniele; Galbraith, Eric D.

    2017-01-01

    Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change. PMID:28103280

  16. Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model.

    PubMed

    Carozza, David A; Bianchi, Daniele; Galbraith, Eric D

    2017-01-01

    Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change.

  17. An algorithm for the kinetics of tire pyrolysis under different heating rates.

    PubMed

    Quek, Augustine; Balasubramanian, Rajashekhar

    2009-07-15

    Tires exhibit different kinetic behaviors when pyrolyzed under different heating rates. A new algorithm has been developed to investigate pyrolysis behavior of scrap tires. The algorithm includes heat and mass transfer equations to account for the different extents of thermal lag as the tire is heated at different heating rates. The algorithm uses an iterative approach to fit model equations to experimental data to obtain quantitative values of kinetic parameters. These parameters describe the pyrolysis process well, with good agreement (r(2)>0.96) between the model and experimental data when the model is applied to three different brands of automobile tires heated under five different heating rates in a pure nitrogen atmosphere. The model agrees with other researchers' results that frequency factors increased and time constants decreased with increasing heating rates. The model also shows the change in the behavior of individual tire components when the heating rates are increased above 30 K min(-1). This result indicates that heating rates, rather than temperature, can significantly affect pyrolysis reactions. This algorithm is simple in structure and yet accurate in describing tire pyrolysis under a wide range of heating rates (10-50 K min(-1)). It improves our understanding of the tire pyrolysis process by showing the relationship between the heating rate and the many components in a tire that depolymerize as parallel reactions.

  18. Exploring the mechanical behavior of degrading swine neural tissue at low strain rates via the fractional Zener constitutive model.

    PubMed

    Bentil, Sarah A; Dupaix, Rebecca B

    2014-02-01

    The ability of the fractional Zener constitutive model to predict the behavior of postmortem swine brain tissue was examined in this work. Understanding tissue behavior attributed to degradation is invaluable in many fields such as the forensic sciences or cases where only cadaveric tissue is available. To understand how material properties change with postmortem age, the fractional Zener model was considered as it includes parameters to describe brain stiffness and also the parameter α, which quantifies the viscoelasticity of a material. The relationship between the viscoelasticity described by α and tissue degradation was examined by fitting the model to data collected in a previous study (Bentil, 2013). This previous study subjected swine neural tissue to in vitro unconfined compression tests using four postmortem age groups (<6h, 24h, 3 days, and 1 week). All samples were compressed to a strain level of 10% using two compressive rates: 1mm/min and 5mm/min. Statistical analysis was used as a tool to study the influence of the fractional Zener constants on factors such as tissue degradation and compressive rate. Application of the fractional Zener constitutive model to the experimental data showed that swine neural tissue becomes less stiff with increased postmortem age. The fractional Zener model was also able to capture the nonlinear viscoelastic features of the brain tissue at low strain rates. The results showed that the parameter α was better correlated with compressive rate than with postmortem age.

  19. Analytic derivation of bacterial growth laws from a simple model of intracellular chemical dynamics.

    PubMed

    Pandey, Parth Pratim; Jain, Sanjay

    2016-09-01

    Experiments have found that the growth rate and certain other macroscopic properties of bacterial cells in steady-state cultures depend upon the medium in a surprisingly simple manner; these dependencies are referred to as 'growth laws'. Here we construct a dynamical model of interacting intracellular populations to understand some of the growth laws. The model has only three population variables: an amino acid pool, a pool of enzymes that transport an external nutrient and produce the amino acids, and ribosomes that catalyze their own and the enzymes' production from the amino acids. We assume that the cell allocates its resources between the enzyme sector and the ribosomal sector to maximize its growth rate. We show that the empirical growth laws follow from this assumption and derive analytic expressions for the phenomenological parameters in terms of the more basic model parameters. Interestingly, the maximization of the growth rate of the cell as a whole implies that the cell allocates resources to the enzyme and ribosomal sectors in inverse proportion to their respective 'efficiencies'. The work introduces a mathematical scheme in which the cellular growth rate can be explicitly determined and shows that two large parameters, the number of amino acid residues per enzyme and per ribosome, are useful for making approximations.

  20. A continuum mathematical model of endothelial layer maintenance and senescence

    PubMed Central

    Wang, Ying; Aguda, Baltazar D; Friedman, Avner

    2007-01-01

    Background: The monolayer of endothelial cells (ECs) lining the inner wall of blood vessels deteriorates as a person ages due to a complex interplay of a variety of causes including cell death arising from shear stress of blood flow and cellular oxidative stress, cellular senescence, and decreased rate of replacement of dead ECs by progenitor stem cells. Results: A continuum mathematical model is developed to describe the dynamics of large EC populations of the endothelium using a system of differential equations for the number densities of cells of different generations starting from endothelial progenitors to senescent cells, as well as the densities of dead cells and the holes created upon clearing dead cells. Aging of cells is manifested in three ways, namely, losing the ability to divide when the Hayflick limit of 50 generations is reached, decreasing replication rate parameters and increasing death rate parameters as cells divide; due to the dependence of these rate parameters on cell generation, the model predicts a narrow distribution of cell densities peaking at a particular cell generation. As the chronological age of a person advances, the peak of the distribution – corresponding to the age of the endothelium – moves towards senescence correspondingly. However, computer simulations also demonstrate that sustained and enhanced stem cell homing can halt the aging process of the endothelium by maintaining a stationary cell density distribution that peaks well before the Hayflick limit. The healing rates of damaged endothelia for young, middle-aged, and old persons are compared and are found to be particularly sensitive to the stem cell homing parameter. Conclusion: The proposed model describes the aging of the endothelium as being driven by cellular senescence, with a rate that does not necessarily correspond to the chronological aging of a person. It is shown that the age of the endothelium depends sensitively on the homing rates of EC progenitor cells. PMID:17692115

  1. A continuum mathematical model of endothelial layer maintenance and senescence.

    PubMed

    Wang, Ying; Aguda, Baltazar D; Friedman, Avner

    2007-08-10

    The monolayer of endothelial cells (ECs) lining the inner wall of blood vessels deteriorates as a person ages due to a complex interplay of a variety of causes including cell death arising from shear stress of blood flow and cellular oxidative stress, cellular senescence, and decreased rate of replacement of dead ECs by progenitor stem cells. A continuum mathematical model is developed to describe the dynamics of large EC populations of the endothelium using a system of differential equations for the number densities of cells of different generations starting from endothelial progenitors to senescent cells, as well as the densities of dead cells and the holes created upon clearing dead cells. Aging of cells is manifested in three ways, namely, losing the ability to divide when the Hayflick limit of 50 generations is reached, decreasing replication rate parameters and increasing death rate parameters as cells divide; due to the dependence of these rate parameters on cell generation, the model predicts a narrow distribution of cell densities peaking at a particular cell generation. As the chronological age of a person advances, the peak of the distribution - corresponding to the age of the endothelium - moves towards senescence correspondingly. However, computer simulations also demonstrate that sustained and enhanced stem cell homing can halt the aging process of the endothelium by maintaining a stationary cell density distribution that peaks well before the Hayflick limit. The healing rates of damaged endothelia for young, middle-aged, and old persons are compared and are found to be particularly sensitive to the stem cell homing parameter. The proposed model describes the aging of the endothelium as being driven by cellular senescence, with a rate that does not necessarily correspond to the chronological aging of a person. It is shown that the age of the endothelium depends sensitively on the homing rates of EC progenitor cells.

  2. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  3. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
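
    A minimal sketch of the temporal ETAS conditional intensity, a background rate plus aftershock triggering with exponential magnitude productivity and Omori-Utsu temporal decay; the parameter values and the small event list are placeholders, not the fitted Oklahoma values.

```python
# Sketch of the temporal ETAS conditional intensity: background rate mu plus triggering
# from each past event, with exponential magnitude productivity and Omori-Utsu decay.
# Parameter values are generic placeholders, not fitted Oklahoma parameters.
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.5, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=2.5):
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) / (t - event_times[past] + c) ** p
    return mu + trig.sum()

times = np.array([0.5, 2.0, 2.1, 5.0])      # event times (days)
mags = np.array([3.0, 4.2, 2.8, 3.5])       # event magnitudes
print(etas_intensity(6.0, times, mags))     # expected events per day at t = 6 days
```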

  4. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    NASA Astrophysics Data System (ADS)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state-space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
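
    A minimal sketch of the state-space idea: a log-linear matching function log H_t = a_t + alpha_t*log U_t + beta_t*log V_t whose three coefficients follow random walks and are tracked with a Kalman filter; the synthetic data, noise variances, and variable names are assumptions, and the fixed-interval smoothing step is omitted.

```python
# Sketch of a Kalman filter for a log-linear matching function with random-walk
# (time-varying) efficiency and elasticities. Data and noise variances are synthetic.
import numpy as np

rng = np.random.default_rng(6)
T = 120
logU, logV = rng.normal(1.0, 0.2, T), rng.normal(0.8, 0.2, T)
true_state = np.cumsum(rng.normal(0, 0.01, (T, 3)), axis=0) + np.array([0.2, 0.5, 0.4])
logH = np.einsum('tj,tj->t', np.column_stack([np.ones(T), logU, logV]), true_state) \
       + rng.normal(0, 0.05, T)

x, P = np.zeros(3), np.eye(3)                 # state mean and covariance
Q, R = 1e-4 * np.eye(3), 0.05 ** 2            # state noise (smoothness prior) and obs. noise
filtered = np.empty((T, 3))
for t in range(T):
    P = P + Q                                 # predict (random-walk state: mean unchanged)
    H = np.array([1.0, logU[t], logV[t]])     # design row for this period
    S = H @ P @ H + R
    Kg = P @ H / S                            # Kalman gain
    x = x + Kg * (logH[t] - H @ x)            # update state with the new observation
    P = P - np.outer(Kg, H @ P)
    filtered[t] = x

print(filtered[-1])                           # last-period efficiency and elasticities
```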

  5. Constitutive modeling of the human Anterior Cruciate Ligament (ACL) under uniaxial loading using viscoelastic prony series and hyperelastic five parameter Mooney-Rivlin model

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Mondal, Debabrata; Motalab, Mohammad

    2016-07-01

    In the present study, the stress-strain behavior of the human Anterior Cruciate Ligament (ACL) is studied under uniaxial loads applied at various strain rates. Tensile testing of human ACL samples requires state-of-the-art test facilities. Furthermore, the difficulty of finding human ligament for testing purposes results in very limited archival data. Nominal stress vs. deformation gradient plots for different strain rates, as found in the literature, are used to model the material behavior either as a hyperelastic or as a viscoelastic material. The well-known five-parameter Mooney-Rivlin constitutive model for hyperelastic material and the Prony series model for viscoelastic material are used, and the objective of the analyses comprises determining the model constants and their variation trend with strain rate for the ACL material using a non-linear curve-fitting tool. The variation of each Mooney-Rivlin coefficient with strain rate is plotted, and these plots are fitted using the software package MATLAB, yielding a power-law relationship between each model constant and strain rate. The obtained material model for the ACL material can be implemented in any commercial finite element software package for stress analysis.
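
    A minimal sketch of the five-parameter Mooney-Rivlin strain-energy function and its uniaxial incompressible nominal stress; the coefficients are placeholders, not the fitted ACL constants or their strain-rate dependence.

```python
# Sketch of the five-parameter Mooney-Rivlin strain-energy function,
# W = C10(I1-3) + C01(I2-3) + C20(I1-3)^2 + C11(I1-3)(I2-3) + C02(I2-3)^2,
# and its uniaxial incompressible nominal stress. Coefficients are placeholders.
import numpy as np

def mr5_nominal_stress(lam, C10, C01, C20, C11, C02):
    I1 = lam ** 2 + 2.0 / lam                 # first invariant (uniaxial, incompressible)
    I2 = 2.0 * lam + 1.0 / lam ** 2           # second invariant
    dW_dI1 = C10 + 2.0 * C20 * (I1 - 3.0) + C11 * (I2 - 3.0)
    dW_dI2 = C01 + C11 * (I1 - 3.0) + 2.0 * C02 * (I2 - 3.0)
    # Nominal (first Piola-Kirchhoff) stress: P = 2*(lam - lam^-2)*(dW/dI1 + dW/dI2 / lam)
    return 2.0 * (lam - lam ** -2) * (dW_dI1 + dW_dI2 / lam)

stretch = np.linspace(1.0, 1.15, 7)
print(mr5_nominal_stress(stretch, C10=1.5, C01=0.5, C20=10.0, C11=5.0, C02=2.0))
```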

  6. Validation of buoyancy driven spectral tensor model using HATS data

    NASA Astrophysics Data System (ADS)

    Chougule, A.; Mann, J.; Kelly, M.; Larsen, G. C.

    2016-09-01

    We present a homogeneous spectral tensor model for wind velocity and temperature fluctuations, driven by mean vertical shear and mean temperature gradient. Results from the model, including one-dimensional velocity and temperature spectra and the associated co-spectra, are shown in this paper. The model also reproduces two-point statistics, such as coherence and phases, via cross-spectra between two points separated in space. Model results are compared with observations from the Horizontal Array Turbulence Study (HATS) field program (Horst et al. 2004). The spectral velocity tensor in the model is described via five parameters: the dissipation rate (ɛ), length scale of energy-containing eddies (L), a turbulence anisotropy parameter (Γ), gradient Richardson number (Ri) representing the atmospheric stability and the rate of destruction of temperature variance (ηθ).

  7. Developing an Interpretation of Item Parameters for Personality Items: Content Correlates of Parameter Estimates.

    ERIC Educational Resources Information Center

    Zickar, Michael J.; Ury, Karen L.

    2002-01-01

    Attempted to relate content features of personality items to item parameter estimates from the partial credit model of E. Muraki (1990) by administering the Adjective Checklist (L. Goldberg, 1992) to 329 undergraduates. As predicted, the discrimination parameter was related to the item subtlety ratings of personality items but the level of word…

  8. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
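
    The ε-constrained treatment of the two conflicting objectives can be sketched as a scalarized optimization: maximize a Fisher-information proxy subject to a bound on state deformation. Both objective functions below are toy stand-ins, not the quantum-mechanical quantities of the paper; the sketch only illustrates the ε-constraint mechanics.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins; neither function is taken from the paper.
def fisher_information(u):
    return u[0] + 0.5 * u[1]          # illustrative: grows with control effort

def state_deformation(u):
    return float(np.sum(u ** 2))      # illustrative: also grows with control effort

def eps_constrained_design(eps):
    """Maximize the Fisher-information proxy subject to deformation <= eps."""
    res = minimize(lambda u: -fisher_information(u),
                   x0=np.array([0.1, 0.1]),
                   constraints=[{"type": "ineq",
                                 "fun": lambda u: eps - state_deformation(u)}],
                   method="SLSQP")
    return res.x, fisher_information(res.x), state_deformation(res.x)

for eps in (0.5, 1.0, 2.0):
    u, fisher, deform = eps_constrained_design(eps)
    print(f"eps={eps}: u={np.round(u, 3)}, F={fisher:.3f}, deformation={deform:.3f}")
```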

  9. Using GRACE and climate model simulations to predict mass loss of Alaskan glaciers through 2100

    DOE PAGES

    Wahr, John; Burgess, Evan; Swenson, Sean

    2016-05-30

    Glaciers in Alaska are currently losing mass at a rate of ~–50 Gt a–1, one of the largest ice loss rates of any regional collection of mountain glaciers on Earth. Existing projections of Alaska's future sea-level contributions tend to be divergent and are not tied directly to regional observations. Here we develop a simple, regional observation-based projection of Alaska's future sea-level contribution. We compute a time series of recent Alaska glacier mass variability using monthly GRACE gravity fields from August 2002 through December 2014. We also construct a three-parameter model of Alaska glacier mass variability based on monthly ERA-Interim snowfall and temperature fields. When these three model parameters are fitted to the GRACE time series, the model explains 94% of the variance of the GRACE data. Using these parameter values, we then apply the model to simulated fields of monthly temperature and snowfall from the Community Earth System Model to obtain predictions of mass variations through 2100. We conclude that mass loss rates may increase to between –80 and –110 Gt a–1 by 2100, with a total sea-level rise contribution of 19 ± 4 mm during the 21st century.
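
    A minimal sketch of fitting a three-parameter mass-variability model to a GRACE-like time series is given below. The functional form (an accumulation coefficient, a melt factor and a temperature threshold) is an assumption for illustration and is not necessarily the authors' formulation; the snowfall, temperature and mass series are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative three-parameter mass-balance form (NOT necessarily the paper's):
#   dM_t = a * snowfall_t - b * max(T_t - T0, 0),  M_t = cumulative sum of dM_t
def mass_series(params, snowfall, temperature):
    a, b, T0 = params
    dM = a * snowfall - b * np.maximum(temperature - T0, 0.0)
    return np.cumsum(dM)

rng = np.random.default_rng(1)
months = 150
snow = rng.gamma(2.0, 2.0, months)                       # synthetic monthly snowfall
temp = 5.0 * np.sin(2 * np.pi * np.arange(months) / 12)  # synthetic monthly temperature
true = (0.8, 1.5, 1.0)
grace_like = mass_series(true, snow, temp) + rng.normal(0, 2.0, months)

fit = least_squares(lambda p: mass_series(p, snow, temp) - grace_like,
                    x0=(1.0, 1.0, 0.0))
print("recovered parameters:", np.round(fit.x, 2))
```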

  10. Latent transition models with latent class predictors: attention deficit hyperactivity disorder subtypes and high school marijuana use

    PubMed Central

    Reboussin, Beth A.; Ialongo, Nicholas S.

    2011-01-01

    Summary Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM model parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates and simplifying assumptions about the correlation structure at each stage reduces the computational complexity. PMID:21461139

  11. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    PubMed

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.

  12. Welding current and melting rate in GMAW of aluminium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, S.; Rao, U.R.K.; Aghakhani, M.

    1996-12-31

    Studies on GMAW of aluminium and its alloy 5083 revealed that the welding current and melting rate were affected by any change in wire feed rate, arc voltage, nozzle-to-plate distance, welding speed and torch angle. Empirical models have been presented to determine accurately the welding current and melting rate for any set of these parameters. These results can be used to determine accurately the heat input into the workpiece, from which reliable predictions can be made about the mechanical and metallurgical properties of a welded joint. The analysis of the model also provides vital information about the static V-I characteristics of the welding power source. The models were developed using a two-level fractional factorial design. The adequacy of the model was tested by analysis of variance, and the significance of the coefficients was tested by Student's t-test. The estimated and observed values of the welding current and melting rate have been shown on a scatter diagram, and the interaction effects of the different parameters involved have been presented in graphical form.
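
    The empirical model implied by a two-level factorial design is a linear regression on coded (±1) factor settings, which can be fitted directly by least squares. The design matrix and current values below are hypothetical, and interaction terms, ANOVA and t-tests are omitted for brevity.

```python
import numpy as np

# Coded (+1/-1) settings of the five process parameters for eight hypothetical runs
# (columns: wire feed rate, arc voltage, nozzle-to-plate distance, speed, torch angle)
X_coded = np.array([
    [-1, -1, -1, -1, +1],
    [+1, -1, -1, +1, -1],
    [-1, +1, -1, +1, +1],
    [+1, +1, -1, -1, -1],
    [-1, -1, +1, +1, -1],
    [+1, -1, +1, -1, +1],
    [-1, +1, +1, -1, -1],
    [+1, +1, +1, +1, +1],
])
current = np.array([150., 210., 165., 220., 145., 205., 160., 230.])  # illustrative, A

# first-order empirical model: I = b0 + sum(b_i * x_i)
design = np.column_stack([np.ones(len(current)), X_coded])
coef, *_ = np.linalg.lstsq(design, current, rcond=None)
print("b0 =", round(coef[0], 1), "main effects:", np.round(coef[1:], 1))
```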

  13. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC

    PubMed Central

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.

    2017-01-01

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter sensitive to model performances of ON-N and NH3-N simulations. However, Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to ON-N and NO3-N simulation, which was measured using global sensitivity. PMID:28704958

  14. Estimating demographic parameters using a combination of known-fate and open N-mixture models

    USGS Publications Warehouse

    Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.

    2015-01-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.

  15. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    PubMed

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.

  16. Prediction of silicon oxynitride plasma etching using a generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Kim, Byungwhan; Lee, Byung Teak

    2005-08-01

    A prediction model of silicon oxynitride (SiON) etching was constructed using a neural network. Model prediction performance was improved by means of a genetic algorithm. The etching was conducted in a C2F6 inductively coupled plasma. A 2^4 full factorial experiment was employed to systematically characterize parameter effects on SiON etching. The process parameters include radio frequency source power, bias power, pressure, and C2F6 flow rate. To test the appropriateness of the trained model, an additional 16 experiments were conducted. For comparison, four types of statistical regression models were built. Compared to the best regression model, the optimized neural network model demonstrated an improvement of about 52%. The optimized model was used to infer etch mechanisms as a function of parameters. The pressure effect was noticeably large only when relatively large ion bombardment was maintained in the process chamber. Ion-bombardment-activated polymer deposition played the most significant role in interpreting the complex effect of bias power or C2F6 flow rate. Moreover, [CF2] was expected to be the predominant precursor to polymer deposition.

  17. Eruption rate, area, and length relationships for some Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Pieri, David C.; Baloga, Stephen M.

    1986-01-01

    The relationships between the morphological parameters of lava flows and the process parameters of lava composition, eruption rate, and eruption temperature were investigated using literature data on Hawaiian lava flows. Two simple models for lava flow heat loss by Stefan-Boltzmann radiation were employed to derive eruption rate versus planimetric area relationship. For the Hawaiian basaltic flows, the eruption rate is highly correlated with the planimetric area. Moreover, this observed correlation is superior to those from other obvious combinations of eruption rate and flow dimensions. The correlations obtained on the basis of the two theoretical models, suggest that the surface of the Hawaiian flows radiates at an effective temperature much less than the inner parts of the flowing lava, which is in agreement with field observations. The data also indicate that the eruption rate versus planimetric area correlations can be markedly degraded when data from different vents, volcanoes, and epochs are combined.
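
    The eruption rate versus planimetric area relationship discussed above amounts to a power-law fit in log-log space. The sketch below uses invented rate and area values purely to illustrate the regression; it does not reproduce the Hawaiian data.

```python
import numpy as np

# hypothetical eruption rates (m^3/s) and planimetric areas (km^2) for a few flows
rate = np.array([5., 20., 60., 150., 400.])
area = np.array([0.8, 2.5, 6.0, 12.0, 30.0])

# power-law relation A = c * Q^m, fitted as a straight line in log-log space
m, logc = np.polyfit(np.log10(rate), np.log10(area), 1)
pred = 10 ** logc * rate ** m
r = np.corrcoef(np.log10(area), np.log10(pred))[0, 1]
print(f"A ~ {10**logc:.2f} * Q^{m:.2f}, log-log correlation r = {r:.3f}")
```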

  18. Quantifying the degradation of organic matter in marine sediments: A review and synthesis

    NASA Astrophysics Data System (ADS)

    Arndt, Sandra; Jørgensen, B. B.; LaRowe, D. E.; Middelburg, J. J.; Pancost, R. D.; Regnier, P.

    2013-08-01

    Quantifying the rates of biogeochemical processes in marine sediments is essential for understanding global element cycles and climate change. Because organic matter degradation is the engine behind benthic dynamics, deciphering the impact that various forces have on this process is central to determining the evolution of the Earth system. Therefore, recent developments in the quantitative modeling of organic matter degradation in marine sediments are critically reviewed. The first part of the review synthesizes the main chemical, biological and physical factors that control organic matter degradation in sediments while the second part provides a general review of the mathematical formulations used to model these processes and the third part evaluates their application over different spatial and temporal scales. Key transport mechanisms in sedimentary environments are summarized and the mathematical formulation of the organic matter degradation rate law is described in detail. The roles of enzyme kinetics, bioenergetics, temperature and biomass growth in particular are highlighted. Alternative model approaches that quantify the degradation rate constant are also critically compared. In the third part of the review, the capability of different model approaches to extrapolate organic matter degradation rates over a broad range of temporal and spatial scales is assessed. In addition, the structure, functions and parameterization of more than 250 published models of organic matter degradation in marine sediments are analyzed. The large range of published model parameters illustrates the complex nature of organic matter dynamics, and, thus, the limited transferability of these parameters from one site to another. Compiled model parameters do not reveal a statistically significant correlation with single environmental characteristics such as water depth, deposition rate or organic matter flux. The lack of a generic framework that allows for model parameters to be constrained in data-poor areas seriously limits the quantification of organic matter degradation on a global scale. Therefore, we explore regional patterns that emerge from the compiled more than 250 organic matter rate constants and critically discuss them in their environmental context. This review provides an interdisciplinary view on organic matter degradation in marine sediments. It contributes to an improved understanding of global patterns in benthic organic matter degradation, and helps identify outstanding questions and future directions in the modeling of organic matter degradation in marine sediments.

  19. Model development and parameter estimation for a hybrid submerged membrane bioreactor treating Ametryn.

    PubMed

    Navaratna, Dimuth; Shu, Li; Baskaran, Kanagaratnam; Jegatheesan, Veeriah

    2012-06-01

    A lab-scale membrane bioreactor (MBR) was used to remove Ametryn from synthetic wastewater. It was found that the concentrations of MLSS and extra-cellular polymeric substances (EPS) in the MBR mixed liquor fluctuated (production and decay) differently for about 40 days (transition period) after the introduction of Ametryn. During subsequent operation at higher organic loading rates, a low net biomass yield (higher death rate) and a higher rate of membrane fouling (very high during the first 48 h) were also observed, attributed to increased levels of bound EPS (eEPS) in the MBR mixed liquor. A mathematical model was developed to estimate the kinetic parameters before and after the introduction of Ametryn. This model will be useful in simulating the performance of an MBR treating Ametryn in terms of flux, rate of fouling (transmembrane pressure and membrane resistance) and treatment efficiency. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Effects of various boundary conditions on the response of Poisson-Nernst-Planck impedance spectroscopy analysis models and comparison with a continuous-time random-walk model.

    PubMed

    Macdonald, J Ross

    2011-11-24

    Various electrode reaction rate boundary conditions suitable for mean-field Poisson-Nernst-Planck (PNP) mobile charge frequency response continuum models are defined and incorporated in the resulting Chang-Jaffe (CJ) CJPNP model, the ohmic OHPNP one, and a simplified GPNP one in order to generalize from full to partial blocking of mobile charges at the two plane parallel electrodes. Model responses using exact synthetic PNP data involving only mobile negative charges are discussed and compared for a wide range of CJ dimensionless reaction rate values. The CJPNP and OHPNP ones are shown to be fully equivalent, except possibly for the analysis of nanomaterial structures. The dielectric strengths associated with the CJPNP diffuse double layers at the electrodes were found to decrease toward 0 as the reaction rate increased, consistent with fewer blocked charges and more reacting ones. Parameter estimates from GPNP fits of CJPNP data were shown to lead to accurate calculated values of the CJ reaction rate and of some other CJPNP parameters. Best fits of CaCu(3)Ti(4)O(12) (CCTO) single-crystal data, an electronic conductor, at 80 and 140 K, required the anomalous diffusion model, CJPNPA, and led to medium-size rate estimates of about 0.12 and 0.03, respectively, as well as good estimates of the values of other important CJPNPA parameters such as the independently verified concentration of neutral dissociable centers. These continuum-fit results were found to be only somewhat comparable to those obtained from a composite continuous-time random-walk hopping/trapping semiuniversal UN model.

  1. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in an a posteriori comparison. We applied the developed model to the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and geometric and kinematic parameters, together with estimates of slip rate. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study; in this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and the final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree at the mean 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results and to what extent. The results of this comparison show that the deformation model, with its internal variability, together with the choice of the ground motion prediction equations (GMPEs), are the most influential parameters; both have a significant effect on the hazard results. Thus, good knowledge of the existence of active faults and their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.

  2. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping a smooth transition on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique the inclusion and reduction of Base Station from the set of active sets is determined by initiation triggers. One of the initiation triggers is based on the strong reception signal. In this paper we observed the influence of parameters of large-scale radio propagation models to improve the performance of mobile communications. The observation parameters for characterizing the performance of the specified mobile system are Drop Call, Radio Link Degradation Rate and Average Size of Active Set (AS). The simulated results show that the increase in altitude of Base Station (BS) Antenna and Mobile Station (MS) Antenna contributes to the improvement of signal power reception level so as to improve Radio Link quality and increase the average size of Active Set and reduce the average Drop Call rate. It was also found that Hata’s propagation model contributed significantly to improvements in system performance parameters compared to Okumura’s propagation model and Lee’s propagation model.
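
    For reference, the textbook Okumura-Hata formulation (small/medium urban area) shows how base-station antenna height enters the predicted path loss; this is the standard published form of the model, not code taken from the paper, and the example frequency and heights below are arbitrary.

```python
import math

def hata_urban_path_loss(f_mhz, d_km, h_base_m, h_mobile_m):
    """Okumura-Hata median path loss (dB) for a small/medium urban area.

    Valid roughly for f 150-1500 MHz, base antenna 30-200 m,
    mobile antenna 1-10 m, distance 1-20 km (textbook formulation).
    """
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
           - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# raising the base-station antenna from 30 m to 60 m lowers the predicted loss
for hb in (30, 60):
    print(hb, "m:", round(hata_urban_path_loss(900, 5, hb, 1.5), 1), "dB")
```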

  3. Global determination of rating curves in the Amazon basin from satellite altimetry

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; Paiva, Rodrigo C. D.; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stéphane; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frédérique

    2014-05-01

    The Amazon basin is the largest hydrological basin in the world. Over the past few years it has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. One of the major issues in understanding such events is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place; the common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2009. The stage dataset consists of ~900 altimetry series at ENVISAT and Jason-2 virtual stations, sampling the stages over more than a hundred rivers in the basin; the altimetry series span 2002 to 2011. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters that are hydrologically meaningful throughout the entire Amazon basin. The rating curve parameters have been computed using an optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best value for the parameters together with their posterior probability distribution, allowing the determination of a credibility interval for the calculated discharge. The error in the discharge estimates from the MGB-IPH model is also included in the rating curve determination; these MGB-IPH errors come either from errors in the discharge derived from the gage readings or from errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. By using prior credible intervals for the parameters defined by the user, this method provides the best rating curve estimate without any unlikely parameter values. Results were assessed through the Nash-Sutcliffe efficiency coefficient (Ens); Ens greater than 0.7 was found for most of the 920 virtual stations. From these results we were able to determine a fully coherent map of river bed height, mean depth and Manning's roughness coefficient, information that can be reused in hydrological modeling. Poor results found at a few virtual stations are also of interest. For some sub-basins in the Andean piedmont, the poor results confirm that the model failed to estimate discharges there; others occur at tributary mouths experiencing backwater effects from the Amazon. Considering the mean monthly slope at the virtual station in the rating curve equation, we obtain rated discharges much more consistent with the modeled and measured ones, showing that it is now possible to obtain a meaningful rating curve in such critical areas.
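
    The rating-curve estimation described above can be sketched with a basic Metropolis sampler for the usual power-law form Q = a(h − h0)^b. Everything below is illustrative: the stage-discharge data are synthetic, the lognormal error level and flat priors are assumptions, and the paper's treatment of MGB-IPH discharge errors and slope terms is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stage (m) and "modelled" discharge (m^3/s) at one virtual station
h = np.linspace(2.0, 9.0, 40)
true_a, true_b, true_h0 = 120.0, 1.7, 1.0
q_obs = true_a * (h - true_h0) ** true_b * np.exp(rng.normal(0, 0.1, h.size))

def log_post(theta):
    """Log-posterior for Q = a*(h - h0)^b with lognormal errors and flat priors
    on a > 0, b in (1, 3), h0 < min(h)."""
    a, b, h0 = theta
    if a <= 0 or not (1.0 < b < 3.0) or h0 >= h.min():
        return -np.inf
    resid = np.log(q_obs) - np.log(a * (h - h0) ** b)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2

def metropolis(start, n_iter=20000, step=(5.0, 0.05, 0.05)):
    theta = np.array(start, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + rng.normal(0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_iter // 2:]          # discard burn-in

chain = metropolis(start=(100.0, 1.5, 0.5))
print("posterior medians (a, b, h0):", np.round(np.median(chain, axis=0), 2))
print("95% credible interval for b:", np.round(np.percentile(chain[:, 1], [2.5, 97.5]), 2))
```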

  4. Cognitive diagnosis modelling incorporating item response times.

    PubMed

    Zhan, Peida; Jiao, Hong; Liao, Dandan

    2018-05-01

    To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
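
    The two building blocks named above — a DINA response model and a lognormal response-time component — can be written down compactly. This is a generic sketch of those standard components, not the paper's extended joint model, and all parameter values are invented.

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA correct-response probability for one item.

    alpha : examinee attribute mastery vector (0/1)
    q     : item's required-attribute vector (0/1)
    eta = 1 only if all required attributes are mastered.
    """
    eta = int(np.all(alpha[q == 1] == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

def lognormal_rt_density(t, speed, time_intensity, sigma=0.4):
    """Lognormal response-time density (van der Linden-type):
    log T ~ N(time_intensity - speed, sigma^2)."""
    mu = time_intensity - speed
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))

alpha = np.array([1, 1, 0])        # examinee masters attributes 1 and 2
q = np.array([1, 1, 0])            # item requires attributes 1 and 2
print("P(correct) =", dina_prob(alpha, q, slip=0.1, guess=0.2))
print("RT density at 30 s =", round(lognormal_rt_density(30.0, speed=0.1, time_intensity=3.5), 4))
```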

  5. Kinetic parameter estimation model for anaerobic co-digestion of waste activated sludge and microalgae.

    PubMed

    Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong

    2017-03-01

    Anaerobic co-digestion has a potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrate as indicators. The models were applied to the modified first-order kinetics and Monod model to determine the rate of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for the estimation of the first-order kinetic coefficients, while the model using inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters for not only microalgae-WAS co-digestion but also other substrates' co-digestion such as microalgae-swine manure and WAS-aquatic plants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Characterization of material parameters for high speed forming and cutting via experiment and inverse simulation

    NASA Astrophysics Data System (ADS)

    Scheffler, Christian; Psyk, Verena; Linnemann, Maik; Tulke, Marc; Brosius, Alexander; Landgrebe, Dirk

    2018-05-01

    High-speed effects in production technology provide a broad range of technological and economic advantages [1, 2]. However, exploiting them requires knowledge of strain-rate-dependent material behavior in process modelling. In general, high-speed material characterization presents several difficulties and requires sophisticated approaches in order to provide reliable material data. This paper proposes two innovative test concepts, with electromagnetic and pneumatic drives, and an approach for material characterization in terms of strain-rate-dependent flow curves and parameters of failure or damage models. The test setups have been designed for investigating strain rates up to 10^5 s^-1. In principle, knowledge of the temporal course and local distribution of stress and strain in the specimen is essential for identifying material characteristics, but short process times, fast changes of the measured values, small specimen size and the frequently limited accessibility of the specimen during the test hinder direct measurement of these quantities in high-velocity testing. Therefore, auxiliary test parameters that are easier to measure are recorded and used as input data for an inverse numerical simulation that provides the desired material characteristics, e.g. the Johnson-Cook parameters, as a result. These auxiliary parameters are a force-equivalent strain signal on a measurement body and the displacement of the upper specimen edge.
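
    As a pointer to what the inverse identification ultimately delivers, the sketch below writes the isothermal Johnson-Cook flow stress and fits its parameters to synthetic flow curves by least squares. It replaces the paper's inverse finite element simulation (driven by the force-equivalent strain signal and edge displacement) with a direct fit, purely for illustration; all values are assumed.

```python
import numpy as np
from scipy.optimize import least_squares

EPS0 = 1.0  # reference strain rate, 1/s (assumed)

def johnson_cook(strain, rate, A, B, n, C):
    """Isothermal Johnson-Cook flow stress (thermal softening term omitted)."""
    return (A + B * strain ** n) * (1.0 + C * np.log(rate / EPS0))

# synthetic "measured" flow curves at two strain rates (stand-in for the
# signals that the inverse simulation would actually use)
strain = np.tile(np.linspace(0.02, 0.3, 15), 2)
rate = np.repeat([1.0e2, 1.0e4], 15)
true = (300.0, 450.0, 0.35, 0.02)       # MPa, MPa, -, - (illustrative)
rng = np.random.default_rng(3)
stress = johnson_cook(strain, rate, *true) * (1 + rng.normal(0, 0.01, strain.size))

def residual(p):
    return johnson_cook(strain, rate, *p) - stress

fit = least_squares(residual, x0=(200.0, 300.0, 0.5, 0.01))
print("identified A, B, n, C:", np.round(fit.x, 3))
```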

  7. An investigation of the thermoviscoplastic behavior of a metal matrix composite at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Rogacki, John R.; Tuttle, Mark E.

    1992-01-01

    This research investigates the response of a fiberless 13-layer hot isostatically pressed Ti-15-3 laminate to creep, constant strain rate, and cyclic constant strain rate loading at temperatures ranging from 482°C to 649°C. Creep stresses from 48 to 260 MPa and strain rates of 0.0001 to 0.01 m/m/s were used. Material parameters for three unified constitutive models (the Bodner-Partom, Miller, and Walker models) were determined for Ti-15-3 from the experimental data. Each of the three models was subsequently incorporated into a rule of mixtures and evaluated for accuracy and ease of use in predicting the thermoviscoplastic response of unidirectional metal matrix composite laminates (both 0° and 90°). The laminates comprised a Ti-15-3 matrix with 29 volume percent SCS6 fibers. The predicted values were compared to experimentally determined creep and constant strain rate data. It was found that all three models predicted the viscoplastic response of the 0° specimens reasonably well, but seriously underestimated the viscoplastic response of the 90° specimens. It is believed that this discrepancy is due to a compliant and/or weak fiber-matrix interphase. In general, it was found that of the three models studied, the Bodner-Partom model was easiest to implement, primarily because this model does not require the use of cyclic constant strain rate tests to determine the material parameters involved. However, the version of the Bodner-Partom model used in this study does not include back stress as an internal state variable, and hence may not be suitable for use with materials that exhibit a pronounced Bauschinger effect. The back stress is accounted for in both the Walker and Miller models; determination of the material parameters associated with the Walker model was somewhat easier than for the Miller model.

  8. Modeling and Characterization of PMMA for High Strain-Rate and Finite Deformations (Postprint)

    DTIC Science & Technology

    2010-05-01

    [Record excerpt only: report AFRL-RW-EG-TP-2010-073, "Modeling and Characterization of PMMA for High Strain-Rate and Finite Deformations (Postprint)", authors including Eric B. Herbold and Jennifer L. …; the remaining fragments are an illegible table of parameters for the modified Mulliken model for PMMA (von Mises formulation, values in MPa) and standard report-form fields. No abstract text is recoverable from this record.]

  9. Computer modeling and design analysis of a bit rate discrimination circuit based dual-rate burst mode receiver

    NASA Astrophysics Data System (ADS)

    Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight

    2011-01-01

    In this paper, we demonstrate a computer model for simulating a dual-rate burst mode receiver that can readily distinguish bit rates of 1.25Gbit/s and 10.3Gbit/s and demodulate the data bursts with large power variations of above 5dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSimTM, and the electrical simulation engine SPICE. The reaction time of the burst mode receiver model is about 7ns, which corresponds to less than 8 preamble bits for the bit rate of 1.25Gbps. We believe, having an accurate and robust simulation model for high speed burst mode transmission in GE-PON systems, is indispensable and tremendously speeds up the ongoing research in the area, saving a lot of time and effort involved in carrying out the laboratory experiments, while providing flexibility in the optimization of various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications like the length of preamble sequence, and other receiver design parameters on the reaction time of the receiver.

  10. Mixing effects on apparent reaction rates and isotope fractionation during denitrification in a heterogeneous aquifer

    USGS Publications Warehouse

    Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.

    2010-01-01

    Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field‐scale (apparent) estimated reaction rates and isotopic fractionations and local‐scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample‐based estimates of “apparent” parameters with “true“ (intrinsic) values. For this aquifer, non‐Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.

  11. Attitude/attitude-rate estimation from GPS differential phase measurements using integrated-rate parameters

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, Landis

    1998-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.

  12. Relation of Heart Rate and its Variability during Sleep with Age, Physical Activity, and Body Composition in Young Children

    PubMed Central

    Herzig, David; Eser, Prisca; Radtke, Thomas; Wenger, Alina; Rusterholz, Thomas; Wilhelm, Matthias; Achermann, Peter; Arhab, Amar; Jenni, Oskar G.; Kakebeeke, Tanja H.; Leeger-Aschmann, Claudia S.; Messerli-Bürgy, Nadine; Meyer, Andrea H.; Munsch, Simone; Puder, Jardena J.; Schmutz, Einat A.; Stülb, Kerstin; Zysset, Annina E.; Kriemler, Susi

    2017-01-01

    Background: Recent studies have claimed a positive effect of physical activity and body composition on vagal tone. In pediatric populations, there is a pronounced decrease in heart rate with age. While this decrease is often interpreted as an age-related increase in vagal tone, there is some evidence that it may be related to a decrease in intrinsic heart rate. This factor has not been taken into account in most previous studies. The aim of the present study was to assess the association between physical activity and/or body composition and heart rate variability (HRV) independently of the decline in heart rate in young children. Methods: Anthropometric measurements were taken in 309 children aged 2–6 years. Ambulatory electrocardiograms were collected over 14–18 h comprising a full night and accelerometry over 7 days. HRV was determined of three different night segments: (1) over 5 min during deep sleep identified automatically based on HRV characteristics; (2) during a 20 min segment starting 15 min after sleep onset; (3) over a 4-h segment between midnight and 4 a.m. Linear models were computed for HRV parameters with anthropometric and physical activity variables adjusted for heart rate and other confounding variables (e.g., age for physical activity models). Results: We found a decline in heart rate with increasing physical activity and decreasing skinfold thickness. HRV parameters decreased with increasing age, height, and weight in HR-adjusted regression models. These relationships were only found in segments of deep sleep detected automatically based on HRV or manually 15 min after sleep onset, but not in the 4-h segment with random sleep phases. Conclusions: Contrary to most previous studies, we found no increase of standard HRV parameters with age, however, when adjusted for heart rate, there was a significant decrease of HRV parameters with increasing age. Without knowing intrinsic heart rate correct interpretation of HRV in growing children is impossible. PMID:28286485

  13. Relation of Heart Rate and its Variability during Sleep with Age, Physical Activity, and Body Composition in Young Children.

    PubMed

    Herzig, David; Eser, Prisca; Radtke, Thomas; Wenger, Alina; Rusterholz, Thomas; Wilhelm, Matthias; Achermann, Peter; Arhab, Amar; Jenni, Oskar G; Kakebeeke, Tanja H; Leeger-Aschmann, Claudia S; Messerli-Bürgy, Nadine; Meyer, Andrea H; Munsch, Simone; Puder, Jardena J; Schmutz, Einat A; Stülb, Kerstin; Zysset, Annina E; Kriemler, Susi

    2017-01-01

    Background: Recent studies have claimed a positive effect of physical activity and body composition on vagal tone. In pediatric populations, there is a pronounced decrease in heart rate with age. While this decrease is often interpreted as an age-related increase in vagal tone, there is some evidence that it may be related to a decrease in intrinsic heart rate. This factor has not been taken into account in most previous studies. The aim of the present study was to assess the association between physical activity and/or body composition and heart rate variability (HRV) independently of the decline in heart rate in young children. Methods: Anthropometric measurements were taken in 309 children aged 2-6 years. Ambulatory electrocardiograms were collected over 14-18 h comprising a full night and accelerometry over 7 days. HRV was determined of three different night segments: (1) over 5 min during deep sleep identified automatically based on HRV characteristics; (2) during a 20 min segment starting 15 min after sleep onset; (3) over a 4-h segment between midnight and 4 a.m. Linear models were computed for HRV parameters with anthropometric and physical activity variables adjusted for heart rate and other confounding variables (e.g., age for physical activity models). Results: We found a decline in heart rate with increasing physical activity and decreasing skinfold thickness. HRV parameters decreased with increasing age, height, and weight in HR-adjusted regression models. These relationships were only found in segments of deep sleep detected automatically based on HRV or manually 15 min after sleep onset, but not in the 4-h segment with random sleep phases. Conclusions: Contrary to most previous studies, we found no increase of standard HRV parameters with age, however, when adjusted for heart rate, there was a significant decrease of HRV parameters with increasing age. Without knowing intrinsic heart rate correct interpretation of HRV in growing children is impossible.

  14. Evolution of spreading rate and H2 production by serpentinization at mid-ocean ridges from 200 Ma to Present

    NASA Astrophysics Data System (ADS)

    Andreani, M.; García del Real, P.; Daniel, I.; Wright, N.; Coltice, N.

    2017-12-01

    Mid-ocean ridge (MOR) spreading rates vary spatially today from 20 to 200 mm/yr, and geological records attest to important temporal variations, at least during the past 200 Myr. The spreading rate has a direct impact on the mechanisms accommodating extension (magmatic vs tectonic), and hence on the nature of the rocks forming the oceanic lithosphere. The latter is composed of variable amounts of magmatic and mantle rocks, which dominate at fast and (ultra-)slow spreading ridges, respectively. Serpentinization of mantle rocks contributes to global fluxes, notably those of hydrogen and carbon, by providing pathways for dihydrogen (H2) production, carbon storage by mineralization, and carbon reduction to CH4 and possibly complex organic compounds. Quantification of the global chemical impact of serpentinization through geological time requires coupling geochemical parameters with plate-tectonic reconstructions. Here we quantify serpentinization extent and concurrent H2 production at MORs from the Jurassic (200 Ma) to the present day (0 Ma). We coupled mean values of relevant petro-chemical parameters, such as the proportion of mantle rocks, initial iron in olivine, iron redox state in serpentinites, and percentage of serpentinization, to high-resolution models of plate motion within the GPlates infrastructure to estimate, in 1 Myr intervals, the lengths of the global MOR plate boundary (spreading and transform components) and of spreading ridges as a function of their rate. The model sensitivity to selected parameters has been tested. The results show that fragmentation of Pangea resulted in elevated H2 rates (>10^12 to 10^13 mol/yr) starting at 160 Ma compared to Late Mesozoic (<160 Ma) rates (<10^11-10^12 mol/yr). From 160 Ma to the present, the coupled opening of the Atlantic and Indian oceans, as well as the variation in spreading rates, maintained H2 generation at the 10^12 mol/yr level, but with significant excursions mainly related to the length of ultra-slow spreading segments. For the first time, this model offers a framework for flux modeling at MORs through time. The model can be further developed by adding supplementary geochemical parameters or applied to other geochemical questions.

  15. Use of a reactive gas transport model to determine rates of hydrocarbon biodegradation in unsaturated porous media

    USGS Publications Warehouse

    Baehr, Arthur L.; Baker, Ronald J.

    1995-01-01

    A mathematical model is presented that simulates the transport and reaction of any number of gaseous phase constituents (e.g. CO2, O2, N2, and hydrocarbons) in unsaturated porous media. The model was developed as part of a method to determine rates of hydrocarbon biodegradation associated with natural cleansing at petroleum product spill sites. The one-dimensional model can be applied to analyze data from column experiments or from field sites where gas transport in the unsaturated zone is approximately vertical. A coupled, non-Fickian constitutive relation between fluxes and concentration gradients, together with the capability of incorporating heterogeneity with respect to model parameters, results in model applicability over a wide range of experimental and field conditions. When applied in a calibration mode, the model allows for the determination of constituent production/consumption rates as a function of the spatial coordinate. Alternatively, the model can be applied in a predictive mode to obtain the distribution of constituent concentrations and fluxes on the basis of assumed values of model parameters and a biodegradation hypothesis. Data requirements for the model are illustrated by analyzing data from a column experiment designed to determine the aerobic degradation rate of toluene in sediments collected from a gasoline spill site in Galloway Township, New Jersey.

  16. Influences of brain tissue poroelastic constants on intracranial pressure (ICP) during constant-rate infusion.

    PubMed

    Li, Xiaogai; von Holst, Hans; Kleiven, Svein

    2013-01-01

    A 3D finite element (FE) model has been developed to study the mean intracranial pressure (ICP) response during constant-rate infusion using linear poroelasticity. Due to the uncertainties in the poroelastic constants for brain tissue, the influence of each of the main parameters on the transient ICP infusion curve was studied. As a prerequisite for transient analysis, steady-state simulations were performed first. The simulated steady-state pressure distribution in the brain tissue for a normal cerebrospinal fluid (CSF) circulation system showed good correlation with experiments from the literature. Furthermore, steady-state ICP closely followed the infusion experiments at different infusion rates. The verified steady-state models then served as a baseline for the subsequent transient models. For transient analysis, the simulated ICP shows a similar tendency to that found in the experiments, however, different values of the poroelastic constants have a significant effect on the infusion curve. The influence of the main poroelastic parameters including the Biot coefficient α, Skempton coefficient B, drained Young's modulus E, Poisson's ratio ν, permeability κ, CSF absorption conductance C(b) and external venous pressure p(b) was studied to investigate the influence on the pressure response. It was found that the value of the specific storage term S(ε) is the dominant factor that influences the infusion curve, and the drained Young's modulus E was identified as the dominant parameter second to S(ε). Based on the simulated infusion curves from the FE model, artificial neural network (ANN) was used to find an optimised parameter set that best fit the experimental curve. The infusion curves from both the FE simulation and using ANN confirmed the limitation of linear poroelasticity in modelling the transient constant-rate infusion.

  17. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions have to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
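
    The additivity idea itself reduces to a mass-fraction-weighted sum of the property measured on each grain-size fraction, as in the minimal sketch below; the fractions and site concentrations are hypothetical and the multi-rate surface complexation kinetics are not reproduced.

```python
import numpy as np

# hypothetical grain-size fractions of a composite sediment and the
# U(VI) reactive-site concentration measured on each fraction
mass_fraction = np.array([0.15, 0.35, 0.30, 0.20])    # e.g. <0.5, 0.5-2, 2-8, >8 mm
site_conc = np.array([12.0, 6.5, 2.1, 0.4])           # illustrative, umol/g

# grain-size based additivity: composite property = sum_i f_i * property_i
composite = float(mass_fraction @ site_conc)
print("predicted composite reactive site concentration:", round(composite, 2), "umol/g")
```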

  18. Comprehensive Analyses of Ventricular Myocyte Models Identify Targets Exhibiting Favorable Rate Dependence

    PubMed Central

    Bugana, Marco; Severi, Stefano; Sobie, Eric A.

    2014-01-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs. PMID:24675446

  19. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    PubMed

    Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A

    2014-03-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs.

  20. Comparison in Schemes for Simulating Depositional Growth of Ice Crystal between Theoretical and Laboratory Data

    NASA Astrophysics Data System (ADS)

    Zhai, Guoqing; Li, Xiaofan

    2015-04-01

    In past decades, the Bergeron-Findeisen process has been simulated using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. Three schemes parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using these three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case. The analysis of root-mean-squared difference and correlation coefficient between the simulated and observed surface rain rates shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observed rain rate. The calculations of 5-day and model domain mean rain rates reveal that the three schemes with the Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the closest mean rain rate to the mean observed rain rate. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.

  1. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    The cure rate model is a survival analysis model that accounts for a proportion of the censored subjects being cured. In clinical trials, data representing the time to recurrence of an event or to death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model is introduced based on the Topp-Leone generalized Rayleigh distribution. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows a better result than the Weibull and exponential cure rate models.

  2. Comparative assessment of five water infiltration models into the soil

    NASA Astrophysics Data System (ADS)

    Shahsavaramir, M.

    2009-04-01

    Knowledge of soil hydraulic conditions, particularly soil permeability, is an important issue in hydrological and climatic studies. Because of its high spatial and temporal variability, a soil infiltration monitoring scheme was investigated in view of its application in infiltration modelling. Several models of infiltration into the soil have been developed; in this paper we assess the capability of five of them and select the best-performing model. We first wrote a program in Quick Basic implementing the algorithms of five models: Kostiakov, Modified Kostiakov, Philip, S.C.S and Horton. We then obtained measured infiltration data with the double-ring method at 12 soil series of the Saveh plain, located in Markazi province, Iran. After obtaining the model coefficients, the equations were regenerated in Excel, and calculations of model accuracy relative to the observations, together with the related graphs, were produced with this software. Infiltration parameters such as cumulative infiltration and infiltration rate were obtained from the fitted models and compared with the observed values. The results show that the Kostiakov and Modified Kostiakov models could quantify cumulative infiltration and infiltration rate over short, middle and long time periods. In three soil series, the Horton model determined infiltration amounts better than the others across the three time treatments. The Philip model gave a relatively good fit for the infiltration parameters in seven series; however, in five soil series the fitted curve implied an attraction (sorptivity) coefficient (s) less than zero. Overall, the S.C.S model had the least capability for determining the infiltration parameters.
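
    As a sketch of how such empirical infiltration equations can be fitted in practice, the snippet below fits the Kostiakov, Philip, and Horton forms to a made-up double-ring infiltration series; the data, initial guesses, and units are illustrative assumptions, not the Saveh plain measurements.

```python
# Hypothetical example: fitting three of the infiltration models named in the
# abstract (Kostiakov, Philip, Horton) to double-ring infiltrometer data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)   # minutes (illustrative)
i_obs = np.array([2.1, 1.6, 1.2, 0.9, 0.8, 0.7, 0.65, 0.6])    # infiltration rate, cm/h

def kostiakov(t, k, a):        # i(t) = k * a * t**(a-1); cumulative I = k * t**a
    return k * a * t ** (a - 1.0)

def philip(t, s, a):           # i(t) = 0.5 * s * t**-0.5 + a (sorptivity s, transmissivity a)
    return 0.5 * s / np.sqrt(t) + a

def horton(t, fc, f0, k):      # f(t) = fc + (f0 - fc) * exp(-k * t)
    return fc + (f0 - fc) * np.exp(-k * t)

for name, fun, p0 in [("Kostiakov", kostiakov, (3.0, 0.5)),
                      ("Philip", philip, (5.0, 0.5)),
                      ("Horton", horton, (0.6, 2.5, 0.05))]:
    p, _ = curve_fit(fun, t, i_obs, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((fun(t, *p) - i_obs) ** 2))
    print(f"{name}: parameters={np.round(p, 3)}, RMSE={rmse:.3f} cm/h")
```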

  3. Modeling Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1997-01-01

    A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.

  4. A new simple local muscle recovery model and its theoretical and experimental validation.

    PubMed

    Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu

    2015-01-01

    This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated as well after fitting. Good fitting values (r(2) > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after fatiguing operation. The determined recovery rate may be useful to represent individual recovery attribute.

  5. A Primer on the Statistical Modelling of Learning Curves in Health Professions Education

    ERIC Educational Resources Information Center

    Pusic, Martin V.; Boutis, Kathy; Pecaric, Martin R.; Savenkov, Oleksander; Beckstead, Jason W.; Jaber, Mohamad Y.

    2017-01-01

    Learning curves are a useful way of representing the rate of learning over time. Features include an index of baseline performance (y-intercept), the efficiency of learning over time (slope parameter) and the maximal theoretical performance achievable (upper asymptote). Each of these parameters can be statistically modelled on an individual and…
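
    As one hedged illustration of the three features named above (baseline, learning rate, upper asymptote), the sketch below fits a negative-exponential learning curve to synthetic practice data; the functional form and the data are assumptions for demonstration only.

```python
# Illustrative sketch: a negative-exponential learning curve with a baseline,
# an upper asymptote, and a rate parameter, fitted to synthetic scores.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(trial, baseline, asymptote, rate):
    """Performance rises from `baseline` toward `asymptote` at speed `rate`."""
    return asymptote - (asymptote - baseline) * np.exp(-rate * trial)

trials = np.arange(1, 41)
score = learning_curve(trials, 0.55, 0.92, 0.12) \
        + 0.03 * np.random.default_rng(1).standard_normal(trials.size)

popt, _ = curve_fit(learning_curve, trials, score, p0=(0.5, 0.9, 0.1))
print("baseline, asymptote, rate:", np.round(popt, 3))
```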

  6. A mathematical model for predicting fire spread in wildland fuels

    Treesearch

    Richard C. Rothermel

    1972-01-01

    A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environments is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.

  7. Application of a simplified mathematical model to estimate the effect of forced aeration on composting in a closed system.

    PubMed

    Bari, Quazi H; Koenig, Albert

    2012-11-01

    The aeration rate is a key process control parameter in the forced aeration composting process because it greatly affects physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two different ambient conditions, namely hot and cold, and four different airflow rates (1.5, 3.0, 4.5, and 6.0 m(3) m(-2) h(-1)) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model could be efficiently used to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m(3) m(-2) h(-1) was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m(3) m(-2) h(-1) was preferable for hot ambient conditions. The model is flexible in engineering application, allowing changes to any input parameter within a realistic range. It can be widely used for conceptual process design, studies on the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Granular-flow rheology: Role of shear-rate number in transition regime

    USGS Publications Warehouse

    Chen, C.-L.; Ling, C.-H.

    1996-01-01

    This paper examines the rationale behind the semiempirical formulation of a generalized viscoplastic fluid (GVF) model in the light of the Reiner-Rivlin constitutive theory and the viscoplastic theory, thereby identifying the parameters that control the rheology of granular flow. The shear-rate number (N) proves to be among the most significant parameters identified from the GVF model. As N → 0 and N → ∞, the GVF model reduces asymptotically to the theoretical stress versus shear-rate relations in the macroviscous and grain-inertia regimes, respectively, where the grain concentration (C) also plays a major role in the rheology of granular flow. Using available data obtained from the rotating-cylinder experiments of neutrally buoyant solid spheres dispersing in an interstitial fluid, the shear stress for granular flow in transition between the two regimes proves dependent on N and C in addition to some material constants, such as the coefficient of restitution. The insufficiency of data on rotating-cylinder experiments does not presently allow the GVF model to predict how a granular flow may behave in the entire range of N; however, the analyzed data provide an insight into the interrelation among the relevant dimensionless parameters.

  9. Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining

    NASA Astrophysics Data System (ADS)

    Kong, Xiangcheng

    The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique in fabricating nanometer scale 2D/3D structures in sub-100 nm regime. To understand its mechanism, as well as provide the guidelines for process planning and optimization, we have systematically studied this nanomachining technique in this work. To understand the mechanism of this nanomachining technique, we firstly analyzed the interaction between the AFM tip and the workpiece surface during the machining process. A 3D voxel-based numerical algorithm has been developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor to understand the mechanism of this nanomachining process, the cutting force has been analyzed and modeled. A semi-empirical model has been proposed by correlating the cutting force with the material removal rate, which was validated using experimental data from different machining conditions. With the understanding of its mechanism, we have developed guidelines for process planning of this nanomachining technique. To provide the guideline for parameter selection, the effect of machining parameters on the feature dimensions (depth and width) has been analyzed. Based on ANOVA test results, the feature width is only controlled by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under given machining condition. Then, to reduce the computation intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters could be provided using these predictive feature dimension models. As the tip wear is unavoidable during the machining process, the machining precision will gradually decrease. To maintain the machining quality, the guideline for when to change the tip should be provided. In this study, we have developed several metrics to detect tip wear, such as tip radius and the pull-off force. The effect of machining parameters on the tip wear rate has been studied using these metrics, and the machining distance before a tip must be changed has been modeled using these machining parameters. Finally, the optimization functions have been built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.

  10. [Monitoring of occupational activities under the risk of heat stress: use of mathematical models in the prediction of physiological parameters].

    PubMed

    Terzi, R; Catenacci, G; Marcaletti, G

    1985-01-01

    Several authors have proposed mathematical models that, starting from standardized values of the environmental microclimate parameters, the thermal impedance of clothing, and the energy expenditure, allow body temperature and heart rate variations relative to basal values to be forecast for subjects in the same environment. In the present work we verify the usefulness of these models when applied to working tasks characterized by standardized jobs performed under unfavourable thermal conditions. In subjects working in an electric power station, body temperature and heart rate values were recorded and compared with the values obtained by applying the studied models. The results are discussed in view of their practical use.

  11. An empirical model for dissolution profile and its application to floating dosage forms.

    PubMed

    Weiss, Michael; Kriangkrai, Worawut; Sungthongjeen, Srisagul

    2014-06-02

    A sum of two inverse Gaussian functions is proposed as a highly flexible empirical model for fitting of in vitro dissolution profiles. The model was applied to quantitatively describe theophylline release from effervescent multi-layer coated floating tablets containing different amounts of the anti-tacking agents talc or glyceryl monostearate. Model parameters were estimated by nonlinear regression (mixed-effects modeling). The estimated parameters were used to determine the mean dissolution time, as well as to reconstruct the time course of release rate for each formulation, whereby the fractional release rate can serve as a diagnostic tool for classification of dissolution processes. The approach allows quantification of dissolution behavior and could provide additional insights into the underlying processes. Copyright © 2014 Elsevier B.V. All rights reserved.
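
    A minimal sketch of this type of model, assuming a mixture of two inverse Gaussian distribution functions for the cumulative fraction released; the parameter names, sample data, and use of scipy's invgauss parameterization are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: cumulative drug release modelled as a weighted sum of
# two inverse Gaussian distribution functions, fitted by nonlinear regression.
import numpy as np
from scipy.stats import invgauss
from scipy.optimize import curve_fit

def release(t, f1, mu1, s1, mu2, s2):
    """Fraction released at time t: mixture of two inverse Gaussian CDFs."""
    return (f1 * invgauss.cdf(t, mu1, scale=s1)
            + (1.0 - f1) * invgauss.cdf(t, mu2, scale=s2))

t_obs = np.array([0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)          # hours (illustrative)
frac = np.array([0.05, 0.12, 0.30, 0.55, 0.70, 0.80, 0.90, 0.99])    # fraction released

p0 = (0.5, 1.0, 2.0, 1.0, 10.0)
bounds = ([0, 0.01, 0.01, 0.01, 0.01], [1, 50, 50, 50, 50])
popt, _ = curve_fit(release, t_obs, frac, p0=p0, bounds=bounds, maxfev=20000)
print("fitted parameters:", np.round(popt, 3))

# Mean dissolution time approximated numerically from the fitted curve.
tt = np.linspace(0, 48, 2000)
mdt = np.trapz(1.0 - release(tt, *popt), tt)
print("approximate mean dissolution time (h):", round(mdt, 2))
```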

  12. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-time scale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters calculated based on a Bayesian inversion scheme show comparable ranges from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  13. Comparison of Radiation Pressure Perturbations on Rocket Bodies and Debris at Geosynchronous Earth Orbit

    DTIC Science & Technology

    2014-09-01

    has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper...seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in...orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a

  14. A simple reactive-transport model of calcite precipitation in soils and other porous media

    NASA Astrophysics Data System (ADS)

    Kirk, G. J. D.; Versteegen, A.; Ritz, K.; Milodowski, A. E.

    2015-09-01

    Calcite formation in soils and other porous media generally occurs around a localised source of reactants, such as a plant root or soil macro-pore, and the rate depends on the transport of reactants to and from the precipitation zone as well as the kinetics of the precipitation reaction itself. However most studies are made in well mixed systems, in which such transport limitations are largely removed. We developed a mathematical model of calcite precipitation near a source of base in soil, allowing for transport limitations and precipitation kinetics. We tested the model against experimentally-determined rates of calcite precipitation and reactant concentration-distance profiles in columns of soil in contact with a layer of HCO3--saturated exchange resin. The model parameter values were determined independently. The agreement between observed and predicted results was satisfactory given experimental limitations, indicating that the model correctly describes the important processes. A sensitivity analysis showed that all model parameters are important, indicating a simpler treatment would be inadequate. The sensitivity analysis showed that the amount of calcite precipitated and the spread of the precipitation zone were sensitive to parameters controlling rates of reactant transport (soil moisture content, salt content, pH, pH buffer power and CO2 pressure), as well as to the precipitation rate constant. We illustrate practical applications of the model with two examples: pH changes and CaCO3 precipitation in the soil around a plant root, and around a soil macro-pore containing a source of base such as urea.

  15. A SIMPLIFIED MODEL FOR PREDICTING MALARIA ENTOMOLOGIC INOCULATION RATES BASED ON ENTOMOLOGIC AND PARASITOLOGIC PARAMETERS RELEVANT TO CONTROL

    PubMed Central

    KILLEEN, GERRY F.; McKENZIE, F. ELLIS; FOY, BRIAN D.; SCHIEFFELIN, CATHERINE; BILLINGSLEY, PETER F.; BEIER, JOHN C.

    2008-01-01

    Malaria transmission intensity is modeled from the starting perspective of individual vector mosquitoes and is expressed directly as the entomologic inoculation rate (EIR). The potential of individual mosquitoes to transmit malaria during their lifetime is presented graphically as a function of their feeding cycle length and survival, human biting preferences, and the parasite sporogonic incubation period. The EIR is then calculated as the product of 1) the potential of individual vectors to transmit malaria during their lifetime, 2) vector emergence rate relative to human population size, and 3) the infectiousness of the human population to vectors. Thus, impacts on more than one of these parameters will amplify each other’s effects. The EIRs transmitted by the dominant vector species at four malaria-endemic sites from Papua New Guinea, Tanzania, and Nigeria were predicted using field measurements of these characteristics together with human biting rate and human reservoir infectiousness. This model predicted EIRs (± SD) that are 1.13 ± 0.37 (range = 0.84–1.59) times those measured in the field. For these four sites, mosquito emergence rate and lifetime transmission potential were more important determinants of the EIR than human reservoir infectiousness. This model and the input parameters from the four sites allow the potential impacts of various control measures on malaria transmission intensity to be tested under a range of endemic conditions. The model has potential applications for the development and implementation of transmission control measures and for public health education. PMID:11289661
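
    The multiplicative structure described above can be written as a one-line calculation; the sketch below uses placeholder values rather than the field estimates from the four sites.

```python
# Minimal sketch of the factorization stated in the abstract:
# EIR = (lifetime transmission potential per emerging vector)
#       x (vector emergence rate per person) x (human infectiousness to vectors).
# The numbers below are placeholders, not the Papua New Guinea/Tanzania/Nigeria values.
def entomologic_inoculation_rate(lifetime_potential, emergence_rate_per_person,
                                 human_infectiousness):
    """Infectious bites per person per unit time under the stated factorization."""
    return lifetime_potential * emergence_rate_per_person * human_infectiousness

eir = entomologic_inoculation_rate(
    lifetime_potential=0.8,          # infectious bites per emerging vector (assumed)
    emergence_rate_per_person=1.5,   # vectors emerging per person per day (assumed)
    human_infectiousness=0.05)       # probability a blood meal infects the vector (assumed)
print(f"EIR ~ {eir:.3f} infectious bites per person per day")
```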

  16. [Collaborative application of BEPS at different time steps].

    PubMed

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly simulates the ecological and physiological processes of vegetation at hourly time steps and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale, because its more complex model structure makes the solution process time-consuming. Daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes, and is therefore suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposes a method for the collaborative application of BEPS at daily and hourly time steps. First, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum carboxylation rate (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on the flux data could improve the simulation ability of the model. In 2011, the primary productivity of the forest types in descending order was deciduous broad-leaved forest, mixed forest, and coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  17. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system

    NASA Astrophysics Data System (ADS)

    Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina

    2016-05-01

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. Full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 106 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulation to full nonequilibrium flow calculations.

  18. ROC curves predicted by a model of visual search.

    PubMed

    Chakraborty, D P

    2006-07-21

    In imaging tasks where the observer is uncertain whether lesions are present, and where they could be present, the image is searched for lesions. In the free-response paradigm, which closely reflects this task, the observer provides data in the form of a variable number of mark-rating pairs per image. In a companion paper a statistical model of visual search has been proposed that has parameters characterizing the perceived lesion signal-to-noise ratio, the ability of the observer to avoid marking non-lesion locations, and the ability of the observer to find lesions. The aim of this work is to relate the search model parameters to receiver operating characteristic (ROC) curves that would result if the observer reported the rating of the most suspicious finding on an image as the overall rating. Also presented are the probability density functions (pdfs) of the underlying latent decision variables corresponding to the highest rating for normal and abnormal images. The search-model-predicted ROC curves are 'proper' in the sense of never crossing the chance diagonal and the slope is monotonically changing. They also have the interesting property of not allowing the observer to move the operating point continuously from the origin to (1, 1). For certain choices of parameters the operating points are predicted to be clustered near the initial steep region of the curve, as has been observed by other investigators. The pdfs are non-Gaussians, markedly so for the abnormal images and for certain choices of parameter values, and provide an explanation for the well-known observation that experimental ROC data generally imply a wider pdf for abnormal images than for normal images. Some features of search-model-predicted ROC curves and pdfs resemble those predicted by the contaminated binormal model, but there are significant differences. The search model appears to provide physical explanations for several aspects of experimental ROC curves.

  19. Aerobic stabilization of biological sludge characterized by an extremely low decay rate: modeling, identifiability analysis and parameter estimation.

    PubMed

    Martínez-García, C G; Olguín, M T; Fall, C

    2014-08-01

    Aerobic digestion batch tests were run on a sludge model that contained only two fractions, the heterotrophic biomass (XH) and its endogenous residue (XP). The objective was to describe the stabilization of the sludge and estimate the endogenous decay parameters. Modeling was performed with Aquasim, based on long-term data of volatile suspended solids and chemical oxygen demand (VSS, COD). Sensitivity analyses were carried out to determine the conditions for unique identifiability of the parameters. Importantly, it was found that the COD/VSS ratio of the endogenous residues (1.06) was significantly lower than that of the active biomass fraction (1.48). The decay rate constant of the studied sludge (low bH, 0.025 d(-1)) was one-tenth that usually observed (0.2 d(-1)), which has two main practical implications: the required digestion time is much longer, and the oxygen uptake rate might be <1.5 mg O₂/g TSS h (the biosolids standard) without there being a significant decline in the biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.
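
    A hedged sketch of the two-fraction endogenous-decay structure described above (XH decaying at rate bH, with a fraction fP retained as residue XP); bH and the COD/VSS ratios are taken from the abstract, while the initial masses and fP are illustrative assumptions.

```python
# Two-fraction endogenous decay sketch: active biomass XH decays at rate bH and
# leaves a fraction fP behind as endogenous residue XP; VSS is recovered from
# COD using the two COD/VSS ratios quoted in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

bH, fP = 0.025, 0.2                    # decay rate (1/d) from the abstract; fP assumed
cod_per_vss_H, cod_per_vss_P = 1.48, 1.06

def decay(t, y):
    XH, XP = y                          # g COD/L
    return [-bH * XH, fP * bH * XH]

sol = solve_ivp(decay, (0, 200), [2.0, 0.5], t_eval=np.linspace(0, 200, 9))
XH, XP = sol.y
cod = XH + XP
vss = XH / cod_per_vss_H + XP / cod_per_vss_P
for t, c, v in zip(sol.t, cod, vss):
    print(f"day {t:5.1f}:  COD = {c:.2f} g/L   VSS = {v:.2f} g/L")
```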

  20. Interception loss, throughfall and stemflow in a maritime pine stand. II. An application of Gash's analytical model of interception

    NASA Astrophysics Data System (ADS)

    Loustau, D.; Berbigier, P.; Granier, A.

    1992-10-01

    Interception, throughfall and stemflow were determined in an 18-year-old maritime pine stand for a period of 30 months. This involved 71 rainfall events, each corresponding either to a single storm or to several storms. Gash's analytical model of interception was used to estimate the sensitivity of interception to canopy structure and climatic parameters. The seasonal cumulative interception loss corresponded to 12.6-21.0% of the amount of rainfall, whereas throughfall and stemflow accounted for 77-83% and 1-6%, respectively. On a seasonal basis, simulated data fitted the measured data satisfactorily ( r2 = 0.75). The rainfall partitioning between interception, throughfall and stemflow was shown to be sensitive to (1) the rainfall regime, i.e. the relative importance of light storms to total rainfall, (2) the climatic parameters, rainfall rate and average evaporation rate during storms, and (3) the canopy structure parameters of the model. The low interception rate of the canopy was attributed primarily to the low leaf area index of the stand.

  1. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake.

    PubMed

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-09-02

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  2. Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates

    NASA Technical Reports Server (NTRS)

    Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.

    1997-01-01

    Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perrault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g., the applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
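
    For illustration, the sketch below evaluates the epsilon coupling parameter in its commonly quoted SI form, epsilon = (4π/μ0) v B² sin⁴(θ/2) l0² with l0 ≈ 7 Earth radii; the exact formulation used in the study and the input values below are assumptions, not the January 1997 WIND data.

```python
# Hedged sketch of the epsilon solar-wind coupling parameter in its commonly
# quoted SI form; theta is the IMF clock angle and l0 an empirical length scale.
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability (H/m)
RE = 6.371e6             # Earth radius (m)
L0 = 7.0 * RE            # empirical scaling length, ~7 Earth radii (assumed)

def epsilon_coupling(v_km_s, bx_nT, by_nT, bz_nT):
    """Solar-wind power input estimate in watts."""
    v = v_km_s * 1e3
    b = np.sqrt(bx_nT**2 + by_nT**2 + bz_nT**2) * 1e-9
    theta = np.arctan2(by_nT, bz_nT)                     # IMF clock angle
    return (4.0 * np.pi / MU0) * v * b**2 * np.sin(theta / 2.0) ** 4 * L0**2

# Illustrative solar-wind values, not measured data.
print(f"epsilon ~ {epsilon_coupling(450.0, 2.0, -5.0, -8.0):.3e} W")
```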

  3. Black Hole Mergers as Probes of Structure Formation

    NASA Technical Reports Server (NTRS)

    Alicea-Munoz, E.; Miller, M. Coleman

    2008-01-01

    Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1 - 10^4 mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z ~ 5 - 30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1-σ of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.

  4. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  5. Modeling CO{sub 2} and H{sub 2}S solubility in MDEA and DEA: Design implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rochelle, G.T.; Posey, M.

    1996-12-31

    The solubility of H{sub 2}S and CO{sub 2} in aqueous alkanolamines affects solution capacity and the required circulation rate for acid gas absorption. These thermodynamics also determine the relationship of steam rate and the lean loading of the solution which in turn sets the leak of acid gas from the top of the absorber. Finally, the mechanisms of mass transfer and the role of kinetics, especially in stripping, depend on the vapor/liquid equilibria. Published measurements of CO{sub 2} and H{sub 2}S solubility in methyldiethanolamine (MDEA) and diethanolamine (DEA) are not in general agreement, especially at low loading of acid gas. The available sets of solubility data have been regressed with the AspenPlus electrolyte/NRTL model. All of the parameters and constants that make up this model have been carefully evaluated. Independent thermodynamic data such as freezing point and heat of mixing have been included in the regression to strengthen the estimates of model parameters. The parameters for each set of solubility data have been evaluated in an attempt to determine which set is correct. Each evaluated model has been used to calculate the acid gas capacity and minimum stripping steam rate for several industrial cases of acid gas absorption/stripping.

  6. Modeling and Tool Wear in Routing of CFRP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, D.; Fernandez, A.; Gutierrez-Orrantia, M. E.

    2011-01-17

    This paper presents the prediction and evaluation of feed force in routing of carbon composite material. In order to extend tool life and improve quality of the machined surface, a better understanding of uncoated and coated tool behaviors is required. This work describes (1) the optimization of the geometry of multiple teeth tools minimizing the tool wear and the feed force, (2) the optimization of tool coating and (3) the development of a phenomenological model between the feed force, the routing parameters and the tool wear. The experimental results indicate that the feed rate, the cutting speed and the tool wear are the most significant factors affecting the feed force. In the case of multiple teeth tools, a particular geometry with 14 teeth right helix right cut and 11 teeth left helix right cut gives the best results. A thick AlTiN coating or a diamond coating can dramatically improve the tool life while minimizing the axial force, roughness and delamination. A wear model has then been developed based on an abrasive behavior of the tool. The model links the feed rate to the tool geometry parameters (tool diameter), to the process parameters (feed rate, cutting speed and depth of cut) and to the wear. The model presented has been verified by experimental tests.

  7. Dynamics of morphological evolution in experimental Escherichia coli populations.

    PubMed

    Cui, F; Yuan, B

    2016-08-30

    Here, we applied a two-stage clonal expansion model of morphological (cell-size) evolution to a long-term evolution experiment with Escherichia coli. Using this model, we derived the incidence function of the appearance of cell-size stability, the waiting time until this morphological stability, and the conditional and unconditional probabilities of morphological stability. After assessing the parameter values, we verified that the calculated waiting time was consistent with the experimental results, demonstrating the effectiveness of the two-stage model. According to the relative contributions of parameters to the incidence function and the waiting time, cell-size evolution is largely determined by the promotion rate, i.e., the clonal expansion rate of selectively advantageous organisms. This rate plays a prominent role in the evolution of cell size in experimental populations, whereas all other evolutionary forces were found to be less influential.

  8. Modeling Soot Oxidation and Gasification with Bayesian Statistics

    DOE PAGES

    Josephson, Alexander J.; Gaffin, Neal D.; Smith, Sean T.; ...

    2017-08-22

    This paper presents a statistical method for model calibration using data collected from literature. The method is used to calibrate parameters for global models of soot consumption in combustion systems. This consumption is broken into two different submodels: first for oxidation where soot particles are attacked by certain oxidizing agents; second for gasification where soot particles are attacked by H2O or CO2 molecules. Rate data were collected from 19 studies in the literature and evaluated using Bayesian statistics to calibrate the model parameters. Bayesian statistics are valued in their ability to quantify uncertainty in modeling. The calibrated consumption model with quantified uncertainty is presented here along with a discussion of associated implications. The oxidation results are found to be consistent with previous studies. Significant variation is found in the CO2 gasification rates.

  9. Modeling Soot Oxidation and Gasification with Bayesian Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josephson, Alexander J.; Gaffin, Neal D.; Smith, Sean T.

    This paper presents a statistical method for model calibration using data collected from literature. The method is used to calibrate parameters for global models of soot consumption in combustion systems. This consumption is broken into two different submodels: first for oxidation where soot particles are attacked by certain oxidizing agents; second for gasification where soot particles are attacked by H2O or CO2 molecules. Rate data were collected from 19 studies in the literature and evaluated using Bayesian statistics to calibrate the model parameters. Bayesian statistics are valued in their ability to quantify uncertainty in modeling. The calibrated consumption model with quantified uncertainty is presented here along with a discussion of associated implications. The oxidation results are found to be consistent with previous studies. Significant variation is found in the CO2 gasification rates.

  10. Enrichment in a stoichiometric model of two producers and one consumer.

    PubMed

    Lin, Laurence Hao-Ran; Peckham, Bruce B; Stech, Harlan W; Pastor, John

    2012-01-01

    We consider a stoichiometric population model of two producers and one consumer. Stoichiometry can be thought of as the tracking of food quality in addition to food quantity. Our model assumes a reduced rate of conversion of biomass from producer to consumer when food quality is low. The model is open for carbon but closed for nutrient. The introduction of the second producer, which competes with the first, leads to new equilibria, new limit cycles, and new bifurcations. The focus of this paper is on the bifurcations which are the result of enrichment. The primary parameters we vary are the growth rates of both producers. Secondary variable parameters are the total nutrients in the system, and the producer nutrient uptake rates. The possible equilibria are: no-life, one-producer, coexistence of both producers, the consumer coexisting with either producer, and the consumer coexisting with both producers. We observe limit cycles in the latter three coexistence combinations. Bifurcation diagrams along with corresponding representative time series summarize the behaviours observed for this model.

  11. Sensitivity of Turbine-Height Wind Speeds to Parameters in Planetary Boundary-Layer and Surface-Layer Schemes in the Weather Research and Forecasting Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Qian, Yun; Berg, Larry K.

    We evaluate the sensitivity of simulated turbine-height winds to 26 parameters applied in a planetary boundary layer (PBL) scheme and a surface layer scheme of the Weather Research and Forecasting (WRF) model over an area of complex terrain during the Columbia Basin Wind Energy Study. An efficient sampling algorithm and a generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of modeled turbine-height winds. The results indicate that most of the variability in the ensemble simulations is contributed by parameters related to the dissipation of the turbulence kinetic energy (TKE), Prandtl number, turbulence length scales, surface roughness, and the von Kármán constant. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability. The parameter associated with the TKE dissipation rate is found to be the most important one, and a larger dissipation rate can produce larger hub-height winds. A larger Prandtl number results in weaker nighttime winds. Increasing surface roughness reduces the frequencies of both extremely weak and strong winds, implying a reduction in the variability of the wind speed. All of the above parameters can significantly affect the vertical profiles of wind speed, the altitude of the low-level jet and the magnitude of the wind shear strength. The wind direction is found to be modulated by the same subset of influential parameters. Remainder of abstract is in attachment.

  12. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    PubMed

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
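
    A hedged sketch of the time-to-detection idea referred to above: for ten-fold serial dilutions, detection time increases linearly with the logarithm of the initial cell density, and the maximum specific growth rate follows from that slope. The dilution series and detection times below are illustrative, not the study's Bioscreen measurements.

```python
# Growth rate from serial-dilution detection times:
# TTD = (ln(N_detect) - ln(N0)) / mu, so mu_max = -1 / slope of TTD vs ln(N0).
import numpy as np

dilution_steps = np.arange(0, 6)                        # ten-fold serial dilutions
ln_n0 = np.log(1e6) - dilution_steps * np.log(10.0)     # ln(initial CFU/mL), assumed start
ttd_hours = np.array([8.1, 11.4, 14.6, 18.0, 21.2, 24.5])  # detection times (illustrative)

slope, intercept = np.polyfit(ln_n0, ttd_hours, 1)
mu_max = -1.0 / slope
print(f"mu_max ~ {mu_max:.2f} 1/h  (doubling time ~ {np.log(2) / mu_max:.2f} h)")
```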

  13. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
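
    As a generic illustration of the kind of ODE system such studies calibrate, the sketch below integrates a three-compartment target-cell/infected-cell/virus model with a drug-efficacy parameter; the equations and parameter values are stand-in assumptions, not the specific model or estimates reported in the paper.

```python
# Generic target-cell-limited HIV model with drug efficacy eta (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def hiv_model(t, y, lam, d, k, delta, N, c, eta):
    T, I, V = y
    dT = lam - d * T - (1.0 - eta) * k * V * T      # target cells
    dI = (1.0 - eta) * k * V * T - delta * I        # productively infected cells
    dV = N * delta * I - c * V                      # free virus
    return [dT, dI, dV]

# Assumed parameter values: production lam, death d, infectivity k, infected-cell
# death delta, burst size N, clearance c, drug efficacy eta.
args = (1e4, 0.01, 8e-7, 0.7, 1000.0, 13.0, 0.0)
sol = solve_ivp(hiv_model, (0, 60), [1e6, 1e2, 1e4], args=args, dense_output=True)

days = np.linspace(0, 60, 7)
for t, v in zip(days, sol.sol(days)[2]):
    print(f"day {t:4.0f}: viral load ~ {v:9.3g} copies/mL")
```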

  14. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K(OC) or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d(-1) for the fast release (average approximately 5 d(-1)) and 0.0001 to 0.1 d(-1) for the slow release (average approximately 0.03 d(-1)). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
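
    The biphasic description summarized above (two first-order rate constants and one fast fraction) can be fitted directly; the sketch below uses invented release data and is not drawn from the reviewed studies.

```python
# Biphasic (fast/slow) first-order desorption: fraction remaining on sediment.
import numpy as np
from scipy.optimize import curve_fit

def biphasic_remaining(t, f_fast, k_fast, k_slow):
    """S(t)/S0 = f_fast*exp(-k_fast*t) + (1 - f_fast)*exp(-k_slow*t)."""
    return f_fast * np.exp(-k_fast * t) + (1.0 - f_fast) * np.exp(-k_slow * t)

t_days = np.array([0.1, 0.5, 1, 2, 5, 10, 30, 60, 120], dtype=float)   # illustrative
frac_remaining = np.array([0.78, 0.52, 0.41, 0.36, 0.33, 0.31, 0.28, 0.25, 0.20])

popt, _ = curve_fit(biphasic_remaining, t_days, frac_remaining,
                    p0=(0.6, 5.0, 0.01), bounds=([0, 1e-4, 1e-6], [1, 50, 0.1]))
f_fast, k_fast, k_slow = popt
print(f"f_fast = {f_fast:.2f}, k_fast = {k_fast:.2f} 1/d, k_slow = {k_slow:.4f} 1/d")
```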

  15. The Influence of Sediment Isostatic Adjustment on Sea-Level Change and Land Motion along the US Gulf Coast

    NASA Astrophysics Data System (ADS)

    Kuchar, J.; Milne, G. A.; Wolstencroft, M.; Love, R.; Tarasov, L.; Hijma, M.

    2017-12-01

    Sea level rise presents a hazard for coastal populations, and the Mississippi Delta (MD) is a region particularly at risk due to its high rates of land subsidence. We apply a gravitationally self-consistent model of glacial and sediment isostatic adjustment (SIA) along with a realistic sediment load reconstruction in this region for the first time to determine isostatic contributions to relative sea level (RSL) and land motion. We determine optimal model parameters (Earth rheology and ice history) using a new high quality compaction-free sea level indicator database and a parameter space of four ice histories and 400 Earth rheologies. Using the optimal model parameters, we show that SIA is capable of lowering predicted RSL in the MD area by several metres over the Holocene and so should be taken into account when modelling these data. We compare modelled contemporary rates of vertical land motion with those inferred using GPS. This comparison indicates that isostatic processes can explain the majority of the observed vertical land motion north of latitude 30.7°N, where subsidence rates average about 1 mm/yr; however, vertical rates south of this latitude show large data-model discrepancies of greater than 3 mm/yr, indicating the importance of non-isostatic processes controlling the observed subsidence. This discrepancy extends to contemporary RSL change, where we find that the SIA contribution in the Delta is on the order of 10^-1 mm per year. We provide estimates of the isostatic contributions to 20th and 21st century sea level rates at Gulf Coast PSMSL tide gauge locations as well as vertical and horizontal land motion at GPS station locations near the Mississippi Delta.

  16. Biological reduction of chlorinated solvents: Batch-scale geochemical modeling

    NASA Astrophysics Data System (ADS)

    Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.

    2010-09-01

    Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example was presented where the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE and VC dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is able to provide valuable insight into the fundamental processes and their complex interactions during bioremediation of chlorinated ethenes in DNAPL source zones.

  17. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the available data are sparse in both time and space, owing to factors such as the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation throughout the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of probabilistic ones to determine a dataset of rating curve parameters which are consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, as well as their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also accounts for the error in the discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. By using user-defined prior credible intervals for the parameters, this method provides the best rating curve estimate without any unlikely parameter values, and all sites achieved convergence before reaching the maximum number of model evaluations. Results were assessed through the Nash-Sutcliffe efficiency coefficient, applied both to discharge and to the logarithm of discharge. Most of the virtual stations had good or very good results, with Ens values ranging from 0.7 to 0.98. However, worse results were found at a few virtual stations, indicating the need to investigate segmentation of the rating curve, depending on the stage or on the rising or recession limb, as well as possible errors in the altimetry series.
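
    A minimal sketch of the stage-discharge fitting step is given below: a rating curve of the form Q = a(h - h0)^b is calibrated with a random-walk Metropolis sampler, yielding posterior samples from which credibility intervals can be read. The synthetic stage/discharge series, the lognormal error model, the priors, and the step sizes are illustrative assumptions standing in for the altimetry and MGB-IPH data and for the authors' actual MCMC/Bayesian scheme.

      # Hedged sketch: Bayesian fit of a rating curve Q = a*(h - h0)^b with a random-walk
      # Metropolis sampler on synthetic stage/discharge data (all choices illustrative).
      import numpy as np

      rng = np.random.default_rng(0)
      h = np.linspace(2.0, 12.0, 80)                                # stage (m), synthetic
      q_obs = 50.0 * (h - 1.0) ** 1.6 * (1.0 + 0.1 * rng.standard_normal(h.size))

      def log_post(theta):
          a, h0, b, sigma = theta
          if a <= 0 or b <= 0 or sigma <= 0 or h0 >= h.min():
              return -np.inf                                        # flat priors on a plausible box
          resid = np.log(q_obs) - np.log(a * (h - h0) ** b)
          return -0.5 * np.sum(resid**2 / sigma**2) - h.size * np.log(sigma)

      theta = np.array([40.0, 0.5, 1.5, 0.2])
      samples, lp = [], log_post(theta)
      for _ in range(20000):
          prop = theta + rng.normal(0, [2.0, 0.05, 0.05, 0.01])
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      post = np.array(samples[5000:])                               # discard burn-in
      print(np.percentile(post, [2.5, 50, 97.5], axis=0))           # credibility intervals for a, h0, b, sigma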

  18. More grain per drop of water: Screening rice genotype for physiological parameters of drought tolerance

    NASA Astrophysics Data System (ADS)

    Massanelli, J.; Meadows-McDonnell, M.; Konzelman, C.; Moon, J. B.; Kumar, A.; Thomas, J.; Pereira, A.; Naithani, K. J.

    2016-12-01

    Meeting agricultural water demands is becoming progressively difficult due to population growth and changes in climate. Breeding stress-resilient crops is a viable solution, as information about genetic variation and its role in stress tolerance is becoming available through advances in technology. In this study we screened eight diverse rice genotypes for photosynthetic capacity under greenhouse conditions. These include the drought-sensitive Asian rice (Oryza sativa) genotype Nipponbare, a transgenic line overexpressing the HYR gene in Nipponbare, and six further genotypes (Vandana, Bengal, Nagina-22, Glaberrima, Kaybonnet, Ai Chueh Ta Pai Ku), including the African rice O. glaberrima, all selected for varying levels of drought tolerance. We collected CO2 and light response curve data under well-watered and simulated drought conditions in the greenhouse. From these curves we estimated photosynthesis model parameters, such as the maximum carboxylation rate (Vcmax), the maximum electron transport rate (Jmax), the maximum gross photosynthesis rate (Pgmax), daytime respiration (Rd), and quantum yield (f). Our results suggest that O. glaberrima and Nipponbare were the most sensitive to drought, because Vcmax and Pgmax declined under drought conditions; other drought-tolerant genotypes did not show significant changes in these model parameters. Our integrated approach, combining genetic information and photosynthesis modeling, shows promise for quantifying drought response parameters and improving crop yield under drought stress conditions.
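
    As a minimal sketch of the curve-fitting step, the example below estimates Pgmax, the apparent quantum yield, and Rd by fitting a non-rectangular hyperbola to a light response curve. The authors' actual fits (e.g., Farquhar-type analysis of CO2 response curves for Vcmax and Jmax) may differ; the data, the fixed curvature parameter, and all numerical values are synthetic assumptions.

      # Hedged sketch: light response curve fit with a non-rectangular hyperbola.
      import numpy as np
      from scipy.optimize import curve_fit

      def light_response(I, phi, Pgmax, Rd, theta=0.7):
          # Net assimilation A(I) = gross photosynthesis (non-rectangular hyperbola) - Rd
          a = phi * I + Pgmax
          Pg = (a - np.sqrt(a**2 - 4.0 * theta * phi * I * Pgmax)) / (2.0 * theta)
          return Pg - Rd

      I = np.array([0, 50, 100, 200, 400, 600, 900, 1200, 1500, 1800], float)   # PPFD
      A = light_response(I, 0.05, 25.0, 1.5) + 0.4 * np.random.default_rng(1).standard_normal(I.size)

      popt, pcov = curve_fit(light_response, I, A, p0=[0.04, 20.0, 1.0],
                             bounds=([0, 0, 0], [1.0, 100.0, 10.0]))
      phi_hat, Pgmax_hat, Rd_hat = popt
      print(f"phi={phi_hat:.3f}, Pgmax={Pgmax_hat:.1f}, Rd={Rd_hat:.2f}")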

  19. A Combined Precipitation, Yield Stress, and Work Hardening Model for Al-Mg-Si Alloys Incorporating the Effects of Strain Rate and Temperature

    NASA Astrophysics Data System (ADS)

    Myhr, Ole Runar; Hopperstad, Odd Sture; Børvik, Tore

    2018-05-01

    In this study, a combined precipitation, yield strength, and work hardening model for Al-Mg-Si alloys known as NaMo has been further developed to include the effects of strain rate and temperature on the resulting stress-strain behavior. The extension of the model is based on a comprehensive experimental database, where thermomechanical data for three different Al-Mg-Si alloys are available. In the tests, the temperature was varied between 20 °C and 350 °C with strain rates ranging from 10-6 to 750 s-1 using ordinary tension tests for low strain rates and a split-Hopkinson tension bar system for high strain rates, respectively. This large span in temperatures and strain rates covers a broad range of industrially relevant problems from creep to impact loading. Based on the experimental data, a procedure for calibrating the different physical parameters of the model has been developed, starting with the simplest case of a stable precipitate structure and small plastic strains, from which basic kinetic data for obstacle limited dislocation glide were extracted. For larger strains, when work hardening becomes significant, the dynamic recovery was linked to the Zener-Hollomon parameter, again using a stable precipitate structure as a basis for calibration. Finally, the complex situation of concurrent work hardening and dynamic evolution of the precipitate structure was analyzed using a stepwise numerical solution algorithm where parameters representing the instantaneous state of the structure were used to calculate the corresponding instantaneous yield strength and work hardening rate. The model was demonstrated to exhibit a high degree of predictive power as documented by a good agreement between predictions and measurements, and it is deemed well suited for simulations of thermomechanical processing of Al-Mg-Si alloys where plastic deformation is carried out at various strain rates and temperatures.
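
    For readers unfamiliar with the quantity linked here to dynamic recovery, the sketch below evaluates the Zener-Hollomon parameter Z = (strain rate) x exp(Q/RT) over the temperature and strain-rate span of the tests. The activation energy used is a generic placeholder for aluminium alloys, not the value calibrated in the NaMo model.

      # Hedged sketch of the Zener-Hollomon parameter Z = edot * exp(Q / (R*T)).
      import numpy as np

      R = 8.314          # J/(mol K)
      Q = 156e3          # apparent activation energy (J/mol), assumed placeholder

      def zener_hollomon(strain_rate, temperature_c):
          T = temperature_c + 273.15
          return strain_rate * np.exp(Q / (R * T))

      for edot in (1e-6, 1e-2, 750.0):          # span of strain rates in the tests (1/s)
          for Tc in (20.0, 200.0, 350.0):       # span of test temperatures (deg C)
              print(f"edot={edot:8.1e} 1/s, T={Tc:5.1f} C, Z={zener_hollomon(edot, Tc):.3e}")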

  20. Probabilistic models and uncertainty quantification for the ionization reaction rate of atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.

    2012-05-01

    The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 1011 cm3 mol-1 s-1 at 10,000 K.

  1. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.

  2. Spread of a disease and its effect on population dynamics in an eco-epidemiological system

    NASA Astrophysics Data System (ADS)

    Upadhyay, Ranjit Kumar; Roy, Parimita

    2014-12-01

    In this paper, an eco-epidemiological model with a simple law of mass action and a modified Holling type II functional response has been proposed and analyzed to understand how a disease may spread among natural populations. The proposed model is a modification of the model presented by Upadhyay et al. (2008) [1]. Existence of the equilibria and their stability analysis (linear and nonlinear) has been studied. The dynamical transitions in the model have been studied by identifying the existence of backward Hopf bifurcations and demonstrating the period-doubling route to chaos when the death rate of the predator (μ1) and the growth rate of the susceptible prey population (r) are treated as bifurcation parameters. Our studies show that the system exhibits deterministic chaos when some control parameters attain their critical values. Chaotic dynamics is depicted using the 2D parameter scans and bifurcation analysis. Possible implications of the results for disease eradication or its control are discussed.
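
    The sketch below integrates a generic eco-epidemiological system of this type: susceptible prey S, infected prey I with mass-action transmission, and a predator P with a Holling type II response to infected prey. This is an illustrative structure, not the exact Upadhyay et al. (2008) formulation, and all parameter values are placeholders; sweeping mu1 or r in such a model is how the bifurcation behaviour would be explored.

      # Hedged sketch of a generic susceptible-infected-predator system (illustrative only).
      import numpy as np
      from scipy.integrate import solve_ivp

      r, K0 = 2.0, 100.0      # prey growth rate and carrying capacity
      lam = 0.06              # disease transmission rate (mass action)
      a, hh = 1.0, 0.1        # predator attack rate and handling time (Holling II)
      e, mu1 = 0.4, 0.8       # conversion efficiency and predator death rate
      d = 0.2                 # disease-induced death rate of infected prey

      def rhs(t, y):
          S, I, P = y
          holling = a * I * P / (1.0 + a * hh * I)   # predation focused on infected prey
          dS = r * S * (1 - (S + I) / K0) - lam * S * I
          dI = lam * S * I - holling - d * I
          dP = e * holling - mu1 * P
          return [dS, dI, dP]

      sol = solve_ivp(rhs, (0, 500), [50.0, 5.0, 2.0], max_step=0.1)
      print(sol.y[:, -1])   # long-run state for this parameter set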

  3. How Hot Precursors Modify Island Nucleation: A Rate-Equation Model

    NASA Astrophysics Data System (ADS)

    Morales-Cifuentes, Josue; Einstein, T. L.; Pimpinelli, Alberto

    2015-03-01

    We describe the analysis, based on rate equations, of the hot precursor model mentioned in the previous talk. Two key parameters are the competing times of ballistic monomers decaying into thermalized monomers vs. being captured by an island, which naturally define a "thermalization" scale for the system. We interpret the energies and dimensionless parameters used in the model, and provide both an implicit analytic solution and a convenient asymptotic approximation. Further analysis reveals novel scaling regimes and nonmonotonic crossovers between them. To test our model, we applied it to experiments on parahexaphenyl (6P) on sputtered mica. With the resulting parameters, the curves derived from our analytic treatment account very well for the data at the four different temperatures. The fit shows that the high-flux regime corresponds not to ALA (attachment-limited aggregation) or HMA (hot monomer aggregation) but rather to an intermediate scaling regime related to DLA (diffusion-limited aggregation). We hope this work stimulates further experimental investigations. Work at UMD supported by NSF CHE 13-05892.

  4. Impact of increasing treatment rates on cost-effectiveness of subcutaneous immunotherapy (SCIT) in respiratory allergy: a decision analytic modelling approach.

    PubMed

    Richter, Ann-Kathrin; Klimek, Ludger; Merk, Hans F; Mülleneisen, Norbert; Renz, Harald; Wehrmann, Wolfgang; Werfel, Thomas; Hamelmann, Eckard; Siebert, Uwe; Sroczynski, Gaby; Wasem, Jürgen; Biermann-Stallwitz, Janine

    2018-03-24

    Specific immunotherapy is the only causal treatment in respiratory allergy. Due to high treatment costs and possibly severe side effects, subcutaneous immunotherapy (SCIT) is not indicated in all patients. Nevertheless, reported treatment rates seem to be low. This study aims to analyze the effects of increasing treatment rates of SCIT in respiratory allergy in terms of costs and quality-adjusted life years (QALYs). A state-transition Markov model simulates the course of disease of patients with allergic rhinitis, allergic asthma and both diseases over 10 years, including a symptom-free state and death. Treatment comprises symptomatic pharmacotherapy alone or combined with SCIT. The model compares two strategies of increased and status quo treatment rates. Transition probabilities are based on routine data. Costs are calculated from the societal perspective applying German unit costs to literature-derived resource consumption. QALYs are determined by translating the mean change in non-preference-based quality of life scores to a change in utility. Key parameters are subjected to deterministic sensitivity analyses. Increasing treatment rates is a cost-effective strategy with an incremental cost-effectiveness ratio (ICER) of 3484€/QALY compared to the status quo. The most influential parameters are SCIT discontinuation rates, treatment effects on the transition probabilities and the cost of SCIT. Across all parameter variations, the best case leads to dominance of increased treatment rates while the worst case ICER is 34,315€/QALY. Excluding indirect costs leads to a twofold increase in the ICER. Measures to increase SCIT initiation rates should be implemented and should also address improving adherence.
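
    To make the mechanics of such a state-transition analysis concrete, the sketch below runs a small Markov cohort model for two strategies and computes an ICER. The health states follow the abstract, but the transition probabilities, costs, utilities, cycle length, and the way SCIT modifies transitions are invented placeholders, not the published model's inputs, and discounting is omitted.

      # Hedged sketch of a two-strategy Markov cohort model and ICER (all inputs invented).
      import numpy as np

      states = ["rhinitis", "asthma", "both", "symptom_free", "dead"]

      def run_cohort(P, cost, utility, cycles=10):
          dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # cohort starts with allergic rhinitis
          total_cost = total_qaly = 0.0
          for _ in range(cycles):                      # one cycle = one year, no discounting
              dist = dist @ P
              total_cost += dist @ cost
              total_qaly += dist @ utility
          return total_cost, total_qaly

      P_pharm = np.array([[0.80, 0.10, 0.05, 0.04, 0.01],
                          [0.05, 0.80, 0.10, 0.04, 0.01],
                          [0.02, 0.08, 0.85, 0.04, 0.01],
                          [0.10, 0.02, 0.02, 0.85, 0.01],
                          [0.00, 0.00, 0.00, 0.00, 1.00]])
      P_scit = P_pharm.copy()
      P_scit[:3, 3] += 0.05                            # SCIT shifts mass toward symptom-free
      P_scit[:3, :3] -= 0.05 / 3
      cost = np.array([400.0, 900.0, 1200.0, 100.0, 0.0])
      cost_scit = cost + np.array([600.0, 600.0, 600.0, 0.0, 0.0])   # add yearly SCIT cost
      utility = np.array([0.80, 0.70, 0.60, 0.95, 0.0])

      c0, q0 = run_cohort(P_pharm, cost, utility)
      c1, q1 = run_cohort(P_scit, cost_scit, utility)
      print(f"ICER = {(c1 - c0) / (q1 - q0):.0f} EUR per QALY")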

  5. Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions

    PubMed Central

    Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.

    2012-01-01

    Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10⁴ K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046

  6. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
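
    The sketch below illustrates the averaging idea in a much-reduced form: first-order variance-based sensitivity indices are estimated for each parameter within every model/scenario combination (here via binned conditional means) and then combined with model and scenario weights. The toy reduction functions, the weights, and the binning estimator are illustrative assumptions, not the hierarchical decomposition used in the study.

      # Hedged sketch of variance-based sensitivity analysis with model and scenario averaging.
      import numpy as np

      rng = np.random.default_rng(0)
      n, names = 20000, ["k_nit", "k_den", "theta_s"]
      X = rng.uniform(0.0, 1.0, size=(n, 3))                     # normalized parameters

      models = {                                                 # toy alternative reduction functions
          "linear":  lambda x, s: s * (1.0 + x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2]),
          "squared": lambda x, s: s * (1.0 + x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.1 * x[:, 2]),
      }
      scenarios = {"warm_wet": 1.2, "cool_dry": 0.8}             # scenario scaling factors
      w_model = {"linear": 0.5, "squared": 0.5}
      w_scen = {"warm_wet": 0.5, "cool_dry": 0.5}

      def first_order(x, y, bins=40):
          # S_i = Var(E[Y|X_i]) / Var(Y), estimated by binning each parameter
          v, out = y.var(), []
          for i in range(x.shape[1]):
              idx = np.digitize(x[:, i], np.linspace(0, 1, bins + 1)[1:-1])
              means = np.array([y[idx == b].mean() for b in range(bins)])
              weights = np.array([(idx == b).sum() for b in range(bins)]) / len(y)
              out.append(np.sum(weights * (means - y.mean()) ** 2) / v)
          return np.array(out)

      S_avg = np.zeros(3)
      for m, f in models.items():
          for s, scale in scenarios.items():
              S_avg += w_model[m] * w_scen[s] * first_order(X, f(X, scale))
      print(dict(zip(names, np.round(S_avg, 3))))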

  7. Cybernetic modeling based on pathway analysis for Penicillium chrysogenum fed-batch fermentation.

    PubMed

    Geng, Jun; Yuan, Jingqi

    2010-08-01

    A macrokinetic model employing cybernetic methodology is proposed to describe mycelium growth and penicillin production. Based on the primordial and complete metabolic network of Penicillium chrysogenum found in the literature, the modeling procedure is guided by metabolic flux analysis and the cybernetic modeling framework. The abstracted cybernetic model describes the transients of the consumption rates of the substrates, the assimilation rates of intermediates, the biomass growth rate, as well as the penicillin formation rate. Combined with the bioreactor model, these reaction rates are linked with the most important state variables, i.e., mycelium, substrate and product concentrations. The simplex method is used to estimate the sensitive parameters of the model. Finally, validation of the model is carried out with 20 batches of industrial-scale penicillin cultivation.

  8. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    PubMed Central

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

    One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings for teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  9. Spectral optimization and uncertainty quantification in combustion modeling

    NASA Astrophysics Data System (ADS)

    Sheen, David Allan

    Reliable simulations of reacting flow systems require a well-characterized, detailed chemical model as a foundation. Accuracy of such a model can be assured, in principle, by a multi-parameter optimization against a set of experimental data. However, the inherent uncertainties in the rate evaluations and experimental data leave a model still characterized by some finite kinetic rate parameter space. Without a careful analysis of how this uncertainty space propagates into the model's predictions, those predictions can at best be trusted only qualitatively. In this work, the Method of Uncertainty Minimization using Polynomial Chaos Expansions is proposed to quantify these uncertainties. In this method, the uncertainty in the rate parameters of the as-compiled model is quantified. Then, the model is subjected to a rigorous multi-parameter optimization, as well as a consistency-screening process. Lastly, the uncertainty of the optimized model is calculated using an inverse spectral optimization technique, and then propagated into a range of simulation conditions. An as-compiled, detailed H2/CO/C1-C4 kinetic model is combined with a set of ethylene combustion data to serve as an example. The idea that the hydrocarbon oxidation model should be understood and developed in a hierarchical fashion has been a major driving force in kinetics research for decades. How this hierarchical strategy works at a quantitative level, however, has never been addressed. In this work, we use ethylene and propane combustion as examples and explore the question of hierarchical model development quantitatively. The Method of Uncertainty Minimization using Polynomial Chaos Expansions is utilized to quantify the amount of information that a particular combustion experiment, and thereby each data set, contributes to the model. This knowledge is applied to explore the relationships among the combustion chemistry of hydrogen/carbon monoxide, ethylene, and larger alkanes. Frequently, new data will become available, and it will be desirable to know the effect that inclusion of these data has on the optimized model. Two cases are considered here. In the first, a study of H2/CO mass burning rates has recently been published, wherein the experimentally obtained results could not be reconciled with any extant H2/CO oxidation model. It is shown that an optimized H2/CO model can be developed that reproduces the results of the new experimental measurements. In addition, the high precision of the new experiments provides a strong constraint on the reaction rate parameters of the chemistry model, manifested in a significant improvement in the precision of simulations. In the second case, species time histories were measured during n-heptane oxidation behind reflected shock waves. The highly precise nature of these measurements is expected to impose critical constraints on chemical kinetic models of hydrocarbon combustion. The results show that while an as-compiled, prior reaction model of n-alkane combustion can be accurate in its prediction of the detailed species profiles, the kinetic parameter uncertainty in the model remains too large to obtain a precise prediction of the data. Constraining the prior model against the species time histories within the measurement uncertainties led to notable improvements in the precision of model predictions against the species data as well as the global combustion properties considered.
Lastly, we show that while the capability of the multispecies measurement presents a step-change in our precise knowledge of the chemical processes in hydrocarbon combustion, accurate data of global combustion properties are still necessary to predict fuel combustion.

  10. Chapter 8: Demographic characteristics and population modeling

    Treesearch

    Scott H. Stoleson; Mary J. Whitfield; Mark K. Sogge

    2000-01-01

    An understanding of the basic demography of a species is necessary to estimate and evaluate population trends. The relative impact of different demographic parameters on growth rates can be assessed through a sensitivity analysis, in which different parameters are altered singly to assess the effect on population growth. Identification of critical parameters can allow...

  11. Behavioral Implications of Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1996-01-01

    A lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and in particular for microrobotic applications requiring accurate position and/or force control. In addition to describing the input-output dynamic behavior, the proposed model explains aspects of non-intuitive behavioral phenomena evinced by piezoelectric actuators, such as the input-output rate-independent hysteresis and the change in mechanical stiffness that results from altering electrical load. The authors incorporate a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data.

  12. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  13. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
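
    As a minimal illustration of fitting a heart-rate response model by stochastic optimization, the sketch below fits a simple exponential rise to synthetic constant-intensity data using a basic accept-if-better random search. This stands in for ALOPEX IV (a correlation-based stochastic optimizer), which is more elaborate, and the exponential model is a simplification of the authors' dynamical-systems formulation; all values are illustrative.

      # Hedged sketch: accept-if-better stochastic search fitting an exponential HR response.
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0, 600, 301)                                  # constant-intensity exercise (s)
      hr_obs = 70 + 80 * (1 - np.exp(-t / 45.0)) + rng.normal(0, 2, t.size)

      def cost(p):
          hr0, dhr, tau = p
          return np.mean((hr_obs - (hr0 + dhr * (1 - np.exp(-t / tau)))) ** 2)

      p = np.array([60.0, 50.0, 30.0])                              # initial guess
      c = cost(p)
      for k in range(20000):
          cand = p + rng.normal(0, [0.5, 0.5, 0.5])                 # random perturbation
          if cand[2] > 1.0 and cost(cand) < c:                      # accept only improvements
              p, c = cand, cost(cand)
      print(np.round(p, 1))   # rough estimates of (resting HR, HR rise, time constant)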

  14. A rate insensitive linear viscoelastic model for soft tissues

    PubMed Central

    Zhang, Wei; Chen, Henry Y.; Kassab, Ghassan S.

    2012-01-01

    It is well known that many biological soft tissues behave as viscoelastic materials with hysteresis curves being nearly independent of strain rate when loading frequency is varied over a large range. In this work, the rate insensitive feature of biological materials is taken into account by a generalized Maxwell model. To minimize the number of model parameters, it is assumed that the characteristic frequencies of Maxwell elements form a geometric series. As a result, the model is characterized by five material constants: μ0, τ, m, ρ and β, where μ0 is the relaxed elastic modulus, τ the characteristic relaxation time, m the number of Maxwell elements, ρ the gap between characteristic frequencies, and β = μ1/μ0 with μ1 being the elastic modulus of the Maxwell body that has relaxation time τ. The physical basis of the model is motivated by the microstructural architecture of typical soft tissues. The novel model shows excellent fit of relaxation data on the canine aorta and captures the salient features of vascular viscoelasticity with significantly fewer model parameters. PMID:17512585
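
    The sketch below evaluates a generalized Maxwell relaxation modulus with relaxation times spaced as a geometric series, G(t) = mu0 + sum_i mu_i exp(-t/tau_i). For illustration the element moduli are taken equal (mu_i = beta*mu0), one common way to obtain nearly rate-insensitive hysteresis; the published model's exact spectrum may differ, and the parameter values are placeholders.

      # Hedged sketch of a generalized Maxwell relaxation modulus (placeholder parameters).
      import numpy as np

      mu0 = 100.0      # relaxed elastic modulus (kPa), placeholder
      tau = 1.0        # characteristic relaxation time (s)
      m = 9            # number of Maxwell elements
      rho = 3.0        # gap (ratio) between characteristic frequencies
      beta = 0.3       # mu_1 / mu0

      tau_i = tau / rho ** np.arange(m)          # geometric series of relaxation times
      mu_i = beta * mu0 * np.ones(m)             # equal moduli assumed for illustration

      def relaxation_modulus(t):
          t = np.atleast_1d(t)[:, None]
          return mu0 + np.sum(mu_i * np.exp(-t / tau_i), axis=1)

      for t in (0.0, 0.01, 0.1, 1.0, 10.0):
          print(f"G({t:5.2f} s) = {relaxation_modulus(t)[0]:.1f}")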

  15. Figure and caption for LDRD annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suratwala, T.

    2017-10-16

    Material removal rate of various optical material workpieces polished using various colloidal slurries as a function of partial charge difference. Partial charge difference is a parameter calculated from a new chemical model proposed to link the condensation reaction rate with polishing material removal rate. This chemical model can serve as a global platform to predict & design polishing processes for a wide variety of workpiece materials and slurry compositions.

  16. Mathematical optimization of high dose-rate brachytherapy—derivation of a linear penalty model from a dose-volume model

    NASA Astrophysics Data System (ADS)

    Morén, B.; Larsson, T.; Carlsson Tedgren, Å.

    2018-03-01

    High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
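
    The sketch below sets up a toy linear penalty model as a linear program: dose is linear in the dwell times, slack variables measure tumour underdose and organ-at-risk overdose, and the objective is a weighted sum of those deviations. The geometry, dose-rate kernel, dose limits, and penalty weights are random placeholders, not clinical values.

      # Hedged sketch of the linear penalty model for dwell-time optimization as an LP.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      n_dwell, n_tum, n_oar = 20, 200, 100
      D_t = rng.uniform(0.5, 2.0, (n_tum, n_dwell))      # dose-rate kernel, tumour points
      D_o = rng.uniform(0.0, 0.8, (n_oar, n_dwell))      # dose-rate kernel, OAR points
      L, U = 10.0, 6.0                                   # tumour lower / OAR upper dose limits
      w_t, w_o = 1.0, 2.0                                # penalty weights

      # Variables: dwell times t, tumour underdose slacks u, OAR overdose slacks v
      c = np.concatenate([np.zeros(n_dwell), np.full(n_tum, w_t), np.full(n_oar, w_o)])
      A_ub = np.block([
          [-D_t, -np.eye(n_tum), np.zeros((n_tum, n_oar))],   # enforces D_t t + u >= L
          [ D_o, np.zeros((n_oar, n_tum)), -np.eye(n_oar)],   # enforces D_o t - v <= U
      ])
      b_ub = np.concatenate([-np.full(n_tum, L), np.full(n_oar, U)])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
      t_opt = res.x[:n_dwell]
      print(f"objective={res.fun:.2f}, min tumour dose={(D_t @ t_opt).min():.2f}, "
            f"max OAR dose={(D_o @ t_opt).max():.2f}")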

  17. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  18. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator that can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application.
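
    For reference, the sketch below evaluates the Kramers escape rate for the overdamped bistable quartic potential U(x) = -a x^2/2 + b x^4/4 that underlies most SR analyses, r_K = (a / (sqrt(2) pi)) exp(-dU/D) with barrier dU = a^2/(4b) and noise intensity D. The GPASR judgmental function in the paper builds on this rate but is more involved; the parameter values here are illustrative.

      # Hedged sketch of the Kramers rate for the standard SR bistable quartic potential.
      import numpy as np

      def kramers_rate(a, b, D):
          dU = a**2 / (4.0 * b)                    # barrier height of the bistable potential
          return a / (np.sqrt(2.0) * np.pi) * np.exp(-dU / D)

      a, b = 1.0, 1.0
      for D in (0.05, 0.1, 0.25, 0.5):
          # SR is commonly expected when twice the Kramers rate is near the driving frequency
          print(f"D={D:.2f}  r_K={kramers_rate(a, b, D):.4f}")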

  19. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    NASA Astrophysics Data System (ADS)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and an unknown parameter in a Nm model. We suppose that the yearly Nm-induced mortality and the total population are known inputs, which can be obtained from data, and the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.

  20. Analysis and numerical simulation of a laboratory analog of radiatively induced cloud-top entrainment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerstein, Alan R.; Sayler, Bentley J.; Wunsch, Scott Edward

    2010-11-01

    Numerical simulations using the One-Dimensional-Turbulence model are compared to water-tank measurements [B. J. Sayler and R. E. Breidenthal, J. Geophys. Res. 103 (D8), 8827 (1998)] emulating convection and entrainment in stratiform clouds driven by cloud-top cooling. Measured dependences of the entrainment rate on Richardson number, molecular transport coefficients, and other experimental parameters are reproduced. Additional parameter variations suggest more complicated dependences of the entrainment rate than previously anticipated. A simple algebraic model indicates the ways in which laboratory and cloud entrainment behaviors might be similar and different.

  1. Rate heterogeneity across Squamata, misleading ancestral state reconstruction and the importance of proper null model specification.

    PubMed

    Harrington, S; Reeder, T W

    2017-02-01

    The binary-state speciation and extinction (BiSSE) model has been used in many instances to identify state-dependent diversification and reconstruct ancestral states. However, recent studies have shown that the standard procedure of comparing the fit of the BiSSE model to constant-rate birth-death models often inappropriately favours the BiSSE model when diversification rates vary in a state-independent fashion. The newly developed HiSSE model enables researchers to identify state-dependent diversification rates while accounting for state-independent diversification at the same time. The HiSSE model also allows researchers to test state-dependent models against appropriate state-independent null models that have the same number of parameters as the state-dependent models being tested. We reanalyse two data sets that originally used BiSSE to reconstruct ancestral states within squamate reptiles and reached surprising conclusions regarding the evolution of toepads within Gekkota and viviparity across Squamata. We used this new method to demonstrate that there are many shifts in diversification rates across squamates. We then fit various HiSSE submodels and null models to the state and phylogenetic data and reconstructed states under these models. We found that there is no single, consistent signal for state-dependent diversification associated with toepads in gekkotans or viviparity across all squamates. Our reconstructions show limited support for the recently proposed hypotheses that toepads evolved multiple times independently in Gekkota and that transitions from viviparity to oviparity are common in Squamata. Our results highlight the importance of considering an adequate pool of models and null models when estimating diversification rate parameters and reconstructing ancestral states. © 2016 European Society For Evolutionary Biology.

  2. Determination of the diffusion coefficient and phase-transfer rate parameter in LaNi5 and MmNi3.6Co0.8Mn0.4Al0.3 using microelectrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundqvist, A.; Lindbergh, G.

    1998-11-01

    A potential-step method for determining the diffusion coefficient and phase-transfer parameter in metal hydrides using microelectrodes was investigated. It was shown that a large potential step is not enough to ensure completely diffusion-limited mass transfer if a surface phase-transfer reaction takes place at a finite rate. It was also shown, using a kinetic expression for the surface phase-transfer reaction, that the slope of the logarithm of the current vs. time curve will be constant both when mass transfer is limited by diffusion alone and when it is limited by diffusion together with a surface phase transfer. The diffusion coefficient and phase-transfer rate parameter were accurately determined for MmNi3.6Co0.8Mn0.4Al0.3 using a fit to the whole transient. The diffusion coefficient was found to be (1.3 ± 0.3) × 10⁻¹³ m²/s. The fit was good and showed that a pure diffusion model was not enough to explain the observed transient. The diffusion coefficient and phase-transfer rate parameter were also estimated for pure LaNi5. A fit of the whole curve showed that neither a pure diffusion model nor a model including phase transfer could explain the whole transient.

  3. Biokinetic modelling development and analysis of arsenic dissolution into the gastrointestinal tract using SAAM II

    NASA Astrophysics Data System (ADS)

    Perama, Yasmin Mohd Idris; Siong, Khoo Kok

    2018-04-01

    A mathematical model comprising eight compartments was designed to describe the kinetic dissolution of arsenic (As) from water leach purification (WLP) waste samples ingested into the gastrointestinal system. A totally reengineered software system named Simulation, Analysis and Modelling II (SAAM II) was employed to aid in the experimental design and data analysis. As a powerful tool that creates, simulates and analyzes data accurately and rapidly, SAAM II computationally creates a system of ordinary differential equations according to the specified compartmental model structure and simulates the solutions based upon the parameter and model inputs provided. The in vitro DIN experimental approach was applied to create artificial gastric and gastrointestinal fluids. These synthetic fluid assays were used to determine the concentrations of As ingested into the gastrointestinal tract. The model outputs were created based upon the experimental inputs and the recommended fractional transfer rate parameters. As a result, the measured and predicted As concentrations in the gastric fluids were very similar over the time of the study. In contrast, the measured and predicted As concentrations in the gastrointestinal fluids were similar only during the first hour and then diverged, with concentrations decreasing until the fifth hour of the study. This is due to the loss of As through the fractional transfers from compartment q2 to compartments q3 and q5, which are involved with excretion and distribution to the whole body, respectively. The model outputs obtained after the best fit to the data were influenced significantly by the fractional transfer rates between the compartments. Therefore, a compartmental model built with the associated fractional transfer rate parameters in SAAM II provides a better estimation and simulation of the kinetic behaviour of As ingested into the gastrointestinal system.
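
    The sketch below shows the kind of linear compartmental model with first-order fractional transfer rates that SAAM II builds from a compartment diagram. Only four compartments are shown; q2, q3 and q5 follow the naming in the abstract, while q1 for the gastric phase and all rate constants are assumptions, not the study's fitted values.

      # Hedged sketch of a small linear compartmental model (placeholder rate constants).
      import numpy as np
      from scipy.integrate import solve_ivp

      k12 = 1.0     # assumed gastric (q1) -> GI tract (q2) transfer (1/h)
      k23 = 0.3     # q2 -> excretion (q3) transfer (1/h)
      k25 = 0.2     # q2 -> systemic distribution (q5) transfer (1/h)

      def rhs(t, q):
          q1, q2, q3, q5 = q
          return [-k12 * q1,
                  k12 * q1 - (k23 + k25) * q2,
                  k23 * q2,
                  k25 * q2]

      sol = solve_ivp(rhs, (0, 5), [100.0, 0.0, 0.0, 0.0], t_eval=np.linspace(0, 5, 11))
      for t, q2 in zip(sol.t, sol.y[1]):
          print(f"t={t:.1f} h  As in GI tract = {q2:.1f} % of ingested amount")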

  4. Sensitivity analysis of a multilayer, finite-difference model of the Southeastern Coastal Plain regional aquifer system; Mississippi, Alabama, Georgia, and South Carolina

    USGS Publications Warehouse

    Pernik, Meribeth

    1987-01-01

    The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters that changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces, and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers, and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, the riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in storage coefficient than it is to a decrease in storage coefficient. As the storage coefficient decreased, the aquifer drawdown increased and the base flow decreased. The opposite response occurred when the storage coefficient was increased. (Author's abstract)

  5. Sensitivity of turbine-height wind speeds to parameters in planetary boundary-layer and surface-layer schemes in the weather research and forecasting model

    DOE PAGES

    Yang, Ben; Qian, Yun; Berg, Larry K.; ...

    2016-07-21

    We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multidimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.

  6. Sensitivity of turbine-height wind speeds to parameters in planetary boundary-layer and surface-layer schemes in the weather research and forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Qian, Yun; Berg, Larry K.

    We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multidimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.

  7. Some aspects of the analysis of geodetic strain observations in kinematic models

    NASA Astrophysics Data System (ADS)

    Welsch, W. M.

    1986-11-01

    Frequently, deformation processes are analyzed in static models. In many cases, this procedure is justified, in particular if the deformation occurring is a singular event. If, however, the deformation is a continuous process, as is the case, for instance, with recent crustal movements, the analysis in kinematic models is more commensurate with the problem because the factor "time" is considered an essential part of the model. Some particular aspects have to be considered when analyzing geodetic strain observations in kinematic models. They are dealt with in this paper. After a brief derivation of the basic kinematic model and the kinematic strain model, the following subjects are treated: the adjustment of the pointwise velocity field and the derivation of strain-rate parameters; the fixing of the kinematic reference system as part of the geodetic datum; statistical tests of models by testing linear hypotheses; the invariance of kinematic strain-rate parameters with respect to transformations of the coordinate-system and the geodetic datum; the interpolation of strain rates by finite-element methods. After the representation of some advanced models for the description of secular and episodic kinematic processes, the data analysis in dynamic models is regarded as a further generalization of deformation analysis.

  8. A mathematical model for HIV and hepatitis C co-infection and its assessment from a statistical perspective.

    PubMed

    Castro Sanchez, Amparo Yovanna; Aerts, Marc; Shkedy, Ziv; Vickerman, Peter; Faggiano, Fabrizio; Salamina, Guiseppe; Hens, Niel

    2013-03-01

    The hepatitis C virus (HCV) and the human immunodeficiency virus (HIV) are a clear threat for public health, with high prevalences especially in high risk groups such as injecting drug users. People with HIV infection who are also infected by HCV suffer from a more rapid progression to HCV-related liver disease and have an increased risk for cirrhosis and liver cancer. Quantifying the impact of HIV and HCV co-infection is therefore of great importance. We propose a new joint mathematical model accounting for co-infection with the two viruses in the context of injecting drug users (IDUs). Statistical concepts and methods are used to assess the model from a statistical perspective, in order to get further insights into: (i) the comparison and selection of optional model components, (ii) the unknown values of the numerous model parameters, (iii) the parameters to which the model is most 'sensitive', and (iv) the combinations or patterns of values in the high-dimensional parameter space which are most supported by the data. Data from a longitudinal study of heroin users in Italy are used to illustrate the application of the proposed joint model and its statistical assessment. The parameters associated with contact rates (sharing syringes) and the transmission rates per syringe-sharing event are shown to play a major role. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    NASA Astrophysics Data System (ADS)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-06-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri and η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, but the overestimation is less severe in unstable than in stable cases.

  10. Mathematical Model Of Variable-Polarity Plasma Arc Welding

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1996-01-01

    Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serves as guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.

  11. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    ERIC Educational Resources Information Center

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  12. Estimation of death rates in US states with small subpopulations.

    PubMed

    Voulgaraki, Anastasia; Wei, Rong; Kedem, Benjamin

    2015-05-20

    In US states with small subpopulations, the observed mortality rates are often zero, particularly among young ages. Because in life tables, death rates are reported mostly on a log scale, zero mortality rates are problematic. To overcome the observed zero death rates problem, appropriate probability models are used. Using these models, observed zero mortality rates are replaced by the corresponding expected values. This enables logarithmic transformations and, in some cases, the fitting of the eight-parameter Heligman-Pollard model to produce mortality estimates for ages 0-130 years, a procedure illustrated in terms of mortality data from several states. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Modelling of slaughterhouse solid waste anaerobic digestion: determination of parameters and continuous reactor simulation.

    PubMed

    López, Iván; Borzacconi, Liliana

    2010-10-01

    A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered. A first-order kinetic was used for the easily biodegradable fraction, and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used. Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters. The final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane production peaked at 0.5-0.8 Nm³ per day per m³ of reactor at residence times of 10-20 days. The simulations allow the adequate selection of operating conditions for a continuous reactor. (c) 2010 Elsevier Ltd. All rights reserved.

  14. Modeling seasonal measles transmission in China

    NASA Astrophysics Data System (ADS)

    Bai, Zhenguo; Liu, Dan

    2015-08-01

    A discrete-time deterministic measles model with a periodic transmission rate is formulated and studied. The basic reproduction number R0 is defined and used as the threshold parameter in determining the dynamics of the model. It is shown that the disease will die out if R0 < 1, and that the disease will persist in the population if R0 > 1. Parameters in the model are estimated on the basis of demographic and epidemiological data. Numerical simulations are presented to describe the seasonal fluctuation of measles infection in China.
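
    As a hedged illustration of a discrete-time model with a periodic transmission rate of the kind described here, the sketch below steps a seasonally forced SIR system; the functional form of β(t) and all parameter values are assumptions, not the quantities estimated by the authors.

```python
import math

def step_sir(S, I, R, t, beta0=0.9, beta1=0.3, gamma=0.1, period=365):
    """One step of a simple discrete SIR model with seasonally forced transmission
    beta(t) = beta0 * (1 + beta1 * cos(2*pi*t/period)). Illustrative parameters only."""
    N = S + I + R
    beta_t = beta0 * (1.0 + beta1 * math.cos(2.0 * math.pi * t / period))
    new_inf = beta_t * S * I / N   # new infections this step
    new_rec = gamma * I            # new recoveries this step
    return S - new_inf, I + new_inf - new_rec, R + new_rec
```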

  15. Nanoseismicity and picoseismicity rate changes from static stress triggering caused by a Mw 2.2 earthquake in Mponeng gold mine, South Africa

    NASA Astrophysics Data System (ADS)

    Kozłowska, Maria; Orlecka-Sikora, Beata; Kwiatek, Grzegorz; Boettcher, Margaret S.; Dresen, Georg

    2015-01-01

    Static stress changes following large earthquakes are known to affect the rate and distribution of aftershocks, yet this process has not been thoroughly investigated for nanoseismicity and picoseismicity at centimeter length scales. Here we utilize a unique data set of M ≥ -3.4 earthquakes following a Mw 2.2 earthquake in Mponeng gold mine, South Africa, that was recorded during a quiet interval in the mine to investigate if rate- and state-based modeling is valid for shallow, mining-induced seismicity. We use Dieterich's (1994) rate- and state-dependent formulation for earthquake productivity, which requires estimation of four parameters: (1) Coulomb stress changes due to the main shock, (2) the reference seismicity rate, (3) frictional resistance parameter, and (4) the duration of aftershock relaxation time. Comparisons of the modeled spatiotemporal patterns of seismicity based on two different source models with the observed distribution show that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used three metrics of the goodness-of-fit evaluation. The null hypothesis, of no significant difference between modeled and observed seismicity rates, was only rejected in the depth interval containing the main shock. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distributions of very small, mining-induced earthquakes can be successfully determined using rate- and state-based stress modeling.
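
    For context, Dieterich's (1994) seismicity-rate formula combines exactly the four quantities listed above (Coulomb stress step, reference seismicity rate, frictional resistance Aσ, and aftershock relaxation time t_a); the sketch below codes that published expression, not the authors' implementation.

```python
import numpy as np

def dieterich_rate(t, r, dtau, a_sigma, t_a):
    """Seismicity rate R(t) after a Coulomb stress step dtau (Dieterich, 1994):
    R(t) = r / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a)),
    where r is the reference rate and t_a the aftershock relaxation time."""
    t = np.asarray(t, dtype=float)
    return r / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a))
```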

  16. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  17. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This is demonstrated with two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancers of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model gave much smaller confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, which supports a seasonal occurrence of childhood leukaemia in Hungary.
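
    A minimal sketch of the standard normal-approximation confidence interval for an autoregressive parameter, using the statsmodels ARIMA interface on a placeholder series; this illustrates only the conventional method, not the continuous-time estimator favoured in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = rng.normal(size=200).cumsum()      # placeholder series standing in for mortality rates

res = ARIMA(y, order=(1, 1, 0)).fit()  # ARIMA(1,1,0): one AR term on the differenced series
print(res.params)                      # point estimates, including the AR(1) coefficient
print(res.conf_int())                  # 95% CIs from the asymptotic normal approximation
```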

  18. Star formation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Leichenauer, Stefan

    2009-03-15

    We develop a simple semianalytic model of the star formation rate as a function of time. We estimate the star formation rate for a wide range of values of the cosmological constant, spatial curvature, and primordial density contrast. Our model can predict such parameters in the multiverse, if the underlying theory landscape and the cosmological measure are known.

  13. Development of a fluvial detachment rate model to predict the erodibility of cohesive soils under the influence of seepage

    USDA-ARS?s Scientific Manuscript database

    Seepage influences the erodibility of streambanks, streambeds, dams, and embankments. Usually the erosion rate of cohesive soils due to fluvial forces is computed using an excess shear stress model, dependent on two major soil parameters: the critical shear stress (tc) and the erodibility coefficie...
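
    The excess shear stress model named in the abstract is commonly written as ε = k_d (τ - τ_c)^a with a often taken as 1; the sketch below codes that generic form and does not include the seepage correction that the manuscript develops.

```python
def excess_shear_erosion_rate(tau, tau_c, k_d, a=1.0):
    """Fluvial detachment rate from the excess shear stress model,
    epsilon = k_d * (tau - tau_c)**a, zero below the critical shear stress tau_c."""
    return k_d * max(tau - tau_c, 0.0) ** a
```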

  20. Transition regime analytical solution to gas mass flow rate in a rectangular micro channel

    NASA Astrophysics Data System (ADS)

    Dadzie, S. Kokou; Dongari, Nishanth

    2012-11-01

    We present an analytical model predicting the experimentally observed gas mass flow rate in rectangular micro channels over the slip and transition regimes without the use of any fitting parameter. Previously, Sone reported a class of pure continuum regime flows that require terms of Burnett order in the constitutive equations for shear stress to be predicted appropriately. The corrective terms to the conventional Navier-Stokes equations were named the ghost effect. We demonstrate in this paper the similarity between Sone's ghost-effect model and the newly proposed 'volume diffusion hydrodynamic model'. A generic analytical solution for the gas mass flow rate in a rectangular micro channel is then obtained. It is shown that volume diffusion hydrodynamics accurately predicts the gas mass flow rate up to a Knudsen number of 5. This can be achieved without adjustable parameters in the boundary conditions or parametric scaling laws for the constitutive relations. The present model predicts the non-linear variation of the pressure profile along the axial direction and also captures the change in curvature with increasing rarefaction.

  1. Fractional time-dependent apparent viscosity model for semisolid foodstuffs

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Chen, Wen; Sun, HongGuang

    2017-10-01

    The difficulty in describing thixotropic behaviour in semisolid foodstuffs lies in the time-dependent nature of the apparent viscosity under constant shear rate. In this study, we propose a novel theoretical model based on the fractional derivative to address this demand from industry. The model adopts the fractional derivative order α as the critical parameter describing the time-dependent thixotropic behaviour. More interestingly, the parameter α provides a quantitative means of discriminating between foodstuffs. Through re-examination of three groups of experimental data (tehineh, balangu, and natillas), the proposed methodology is shown to be applicable and efficient. The results show that the fractional apparent viscosity model performs well for the tested foodstuffs in the shear rate range of 50-150 s⁻¹. The fractional order α decreases with increasing temperature at low temperatures (below 50 °C) but increases with growing shear rate, while the ideal initial viscosity k decreases with increasing temperature, shear rate, and ingredient content. The magnitude of α is thus capable of characterizing the thixotropy of semisolid foodstuffs.

  2. Computations and estimates of rate coefficients for hydrocarbon reactions of interest to the atmospheres of outer solar system

    NASA Technical Reports Server (NTRS)

    Laufer, A. H.; Gardner, E. P.; Kwok, T. L.; Yung, Y. L.

    1983-01-01

    The rate coefficients, including Arrhenius parameters, have been computed for a number of chemical reactions involving hydrocarbon species for which experimental data are not available and which are important in planetary atmospheric models. The techniques used to calculate the kinetic parameters include the Troe and semiempirical bond energy-bond order (BEBO) or bond strength-bond length (BSBL) methods.

  3. Mathematical Modeling of Dual Layer Shell Type Recuperation System for Biogas Dehumidification

    NASA Astrophysics Data System (ADS)

    Gendelis, S.; Timuhins, A.; Laizans, A.; Bandeniece, L.

    2015-12-01

    The main aim of the current paper is to create a mathematical model for a dual-layer shell-type recuperation system, which reduces heat losses from the biomass digester and the water content of the biogas without any additional mechanical or chemical components. The idea of this system is to reduce the temperature of the outflowing gas by creating a two-layered counter-flow heat exchanger around the walls of the biogas digester, thus increasing the thermal resistance and the gas temperature and causing condensation on the colder surface. A complex mathematical model, including surface condensation, is developed for this type of biogas dehumidifier, and a parameter study is carried out over a wide range of parameters. The model is reduced to the 1D case to make numerical calculations faster. It is shown that the latent heat of condensation is very important for the total heat balance and that the condensation rate depends strongly on the insulation between layers and the outside temperature. The modelling results allow optimal geometrical parameters to be found for a known gas flow and the condensation rate to be predicted for different system setups and seasons.

  4. A parameter estimation technique for stochastic self-assembly systems and its application to human papillomavirus self-assembly.

    PubMed

    Kumar, M Senthil; Schwartz, Russell

    2010-12-09

    Virus capsid assembly has been a key model system for studies of complex self-assembly but it does pose some significant challenges for modeling studies. One important limitation is the difficulty of determining accurate rate parameters. The large size and rapid assembly of typical viruses make it infeasible to directly measure coat protein binding rates or deduce them from the relatively indirect experimental measures available. In this work, we develop a computational strategy to deduce coat-coat binding rate parameters for viral capsid assembly systems by fitting stochastic simulation trajectories to experimental measures of assembly progress. Our method combines quadratic response surface and quasi-gradient descent approximations to deal with the high computational cost of simulations, stochastic noise in simulation trajectories and limitations of the available experimental data. The approach is demonstrated on a light scattering trajectory for a human papillomavirus (HPV) in vitro assembly system, showing that the method can provide rate parameters that produce accurate curve fits and are in good concordance with prior analysis of the data. These fits provide an insight into potential assembly mechanisms of the in vitro system and give a basis for exploring how these mechanisms might vary between in vitro and in vivo assembly conditions.

  5. A parameter estimation technique for stochastic self-assembly systems and its application to human papillomavirus self-assembly

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, M.; Schwartz, Russell

    2010-12-01

    Virus capsid assembly has been a key model system for studies of complex self-assembly but it does pose some significant challenges for modeling studies. One important limitation is the difficulty of determining accurate rate parameters. The large size and rapid assembly of typical viruses make it infeasible to directly measure coat protein binding rates or deduce them from the relatively indirect experimental measures available. In this work, we develop a computational strategy to deduce coat-coat binding rate parameters for viral capsid assembly systems by fitting stochastic simulation trajectories to experimental measures of assembly progress. Our method combines quadratic response surface and quasi-gradient descent approximations to deal with the high computational cost of simulations, stochastic noise in simulation trajectories and limitations of the available experimental data. The approach is demonstrated on a light scattering trajectory for a human papillomavirus (HPV) in vitro assembly system, showing that the method can provide rate parameters that produce accurate curve fits and are in good concordance with prior analysis of the data. These fits provide an insight into potential assembly mechanisms of the in vitro system and give a basis for exploring how these mechanisms might vary between in vitro and in vivo assembly conditions.

  6. Exploring the Cattaneo-Christov heat flux phenomenon on a Maxwell-type nanofluid coexisting with homogeneous/heterogeneous reactions

    NASA Astrophysics Data System (ADS)

    Sarkar, Amit; Kundu, Prabir Kumar

    2017-12-01

    This article examines the effect of the Cattaneo-Christov heat flux on heat and mass transport in Maxwell nanofluid flow over a stretched sheet with variable thickness. Homogeneous/heterogeneous reactions in the fluid are additionally considered. The Cattaneo-Christov heat flux model is introduced in the energy equation. Appropriate similarity transformations are applied to obtain a system of nonlinear ODEs. The impact of the related parameters on the nanoparticle concentration and temperature is inspected through tables and diagrams. It is found that the temperature distribution increases for lower values of the thermal relaxation parameter. The rate of mass transfer is enhanced with increasing heterogeneous reaction parameter, but the reverse tendency occurs for the homogeneous reaction parameter. On the other hand, the rate of heat transfer is higher for the Cattaneo-Christov model than for the classical Fourier model for some flow parameters. The study thus contributes to a generalized version of traditional Fourier's law at the nanoscale.

  7. Analysis on the crime model using dynamical approach

    NASA Astrophysics Data System (ADS)

    Mohammad, Fazliza; Roslan, Ummu'Atiqah Mohd

    2017-08-01

    Research is carried out to analyze a dynamical model of the spread of crime. A simplified two-dimensional model is used. The objectives are to investigate the stability of the crime-spread model, to summarize the stability using a bifurcation analysis, and to study the relationship between the basic reproduction number R0 and a parameter of the model. Our results for the stability of the equilibrium points show two types of behaviour: asymptotically stable and saddle points. The bifurcation analysis shows that the numbers of criminally active and incarcerated individuals increase as the value of the parameter is increased. The analysis of R0 shows that as the parameter increases, R0 increases, and so does the rate of crime.

  8. Spatial and temporal variability in growth of southern flounder (Paralichthys lethostigma)

    USGS Publications Warehouse

    Midway, Stephen R.; Wagner, Tyler; Arnott, Stephen A.; Biondo, Patrick; Martinez-Andrade, Fernando; Wadsworth, Thomas F.

    2015-01-01

    Delineation of stock structure is important for understanding the ecology and management of many fish populations, particularly those with wide-ranging distributions and high levels of harvest. Southern flounder (Paralichthys lethostigma) is a popular commercial and recreational species along the southeast Atlantic coast and Gulf of Mexico, USA. Recent studies have provided genetic and otolith morphology evidence that the Gulf of Mexico and Atlantic Ocean stocks differ. Using age and growth data from four states (Texas, Alabama, South Carolina, and North Carolina) we expanded upon the traditional von Bertalanffy model in order to compare growth rates of putative geographic stocks of southern flounder. We improved the model fitting process by adding a hierarchical Bayesian framework to allow each parameter to vary spatially or temporally as a random effect, as well as log transforming the three model parameters (L∞, K, and t0). Multiple comparisons of parameters showed that growth rates varied (even within states) for females, but less so for males. Growth rates were also consistent through time when long-term data were available. Since within-basin populations are thought to be genetically well-mixed, our results suggest that consistent small-scale environmental conditions (i.e., within estuaries) likely drive growth rates and should be considered when developing broader scale management plans.
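
    For reference, the growth curve being extended here is the standard von Bertalanffy function; the sketch below codes that deterministic core only, leaving out the log-scale parameterisation and the hierarchical Bayesian random effects described in the abstract.

```python
import numpy as np

def von_bertalanffy(age, Linf, K, t0):
    """Length at age from the von Bertalanffy growth function,
    L(t) = Linf * (1 - exp(-K * (t - t0)))."""
    return Linf * (1.0 - np.exp(-K * (np.asarray(age, dtype=float) - t0)))
```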

  9. Discharge flow of a granular media from a silo: effect of the packing fraction and of the hopper angle

    NASA Astrophysics Data System (ADS)

    Benyamine, Mebirika; Aussillous, Pascale; Dalloz-Dubrujeaud, Blanche

    2017-06-01

    Silos are widely used in industry. While empirical predictions of the flow rate, based on scaling laws, have existed for more than a century (Hagen 1852, translated in [1]; Beverloo et al. [2]), recent advances have been made in understanding the parameters that control the flow. In particular, continuous modelling together with a mu(I) granular rheology appears successful in predicting the flow rate for large numbers of beads at the aperture (Staron et al. [3], [4]). Moreover, Janda et al. [5] have shown that the packing fraction at the outlet plays an important role when the number of beads at the aperture decreases. Based on these considerations, we have studied experimentally the discharge flow of a granular medium from a rectangular silo. We varied two main parameters: the angle of the hopper and the bulk packing fraction of the granular material, the latter by using bidisperse mixtures. We propose a simple physical model to describe the effect of these parameters, considering a continuous granular medium with a dilatancy law at the outlet. This model predicts well the dependence of the flow rate on the hopper angle as well as on the fine mass fraction of a bidisperse mixture.
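
    The century-old scaling referred to is usually quoted in the Beverloo form; the sketch below gives the classical circular-orifice version with typical (assumed) constants, and does not include the hopper-angle or bidisperse packing-fraction effects that are the subject of the paper.

```python
import math

def beverloo_flow_rate(rho_b, D, d, C=0.58, k=1.5, g=9.81):
    """Mass discharge rate from the Beverloo correlation for a circular orifice,
    Q = C * rho_b * sqrt(g) * (D - k*d)**2.5, with rho_b the bulk density,
    D the orifice diameter and d the grain diameter. C ~ 0.58 and k ~ 1.5 are
    typical empirical values assumed here, not fitted values from the paper."""
    return C * rho_b * math.sqrt(g) * (D - k * d) ** 2.5
```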

  10. Electro-kinetically driven peristaltic transport of viscoelastic physiological fluids through a finite length capillary: Mathematical modeling.

    PubMed

    Tripathi, Dharmendra; Yadav, Ashu; Bég, O Anwar

    2017-01-01

    Analytical solutions are developed for the electro-kinetic flow of a viscoelastic biological liquid in a finite length cylindrical capillary geometry under peristaltic waves. The Jefferys' non-Newtonian constitutive model is employed to characterize rheological properties of the fluid. The unsteady conservation equations for mass and momentum with electro-kinetic and Darcian porous medium drag force terms are reduced to a system of steady linearized conservation equations in an axisymmetric coordinate system. The long wavelength, creeping (low Reynolds number) and Debye-Hückel linearization approximations are utilized. The resulting boundary value problem is shown to be controlled by a number of parameters including the electro-osmotic parameter, Helmholtz-Smoluchowski velocity (maximum electro-osmotic velocity), and Jefferys' first parameter (ratio of relaxation and retardation time), wave amplitude. The influence of these parameters and also time on axial velocity, pressure difference, maximum volumetric flow rate and streamline distributions (for elucidating trapping phenomena) is visualized graphically and interpreted in detail. Pressure difference magnitudes are enhanced consistently with both increasing electro-osmotic parameter and Helmholtz-Smoluchowski velocity, whereas they are only elevated with increasing Jefferys' first parameter for positive volumetric flow rates. Maximum time averaged flow rate is enhanced with increasing electro-osmotic parameter, Helmholtz-Smoluchowski velocity and Jefferys' first parameter. Axial flow is accelerated in the core (plug) region of the conduit with greater values of electro-osmotic parameter and Helmholtz-Smoluchowski velocity whereas it is significantly decelerated with increasing Jefferys' first parameter. The simulations find applications in electro-osmotic (EO) transport processes in capillary physiology and also bio-inspired EO pump devices in chemical and aerospace engineering. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.

  12. Asymmetrical effects of mesophyll conductance on fundamental photosynthetic parameters and their relationships estimated from leaf gas exchange measurements.

    PubMed

    Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus

    2014-04-01

    Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (g_m) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite g_m results in up to 75% underestimation for the maximum carboxylation rate V_cmax, 60% for the maximum electron transport rate J_max, and 40% for the triose phosphate utilization rate T_u. V_cmax is most sensitive, J_max is less sensitive, and T_u has the least sensitivity to the variation of g_m. Because of this asymmetrical effect of g_m, the ratios of J_max to V_cmax, T_u to V_cmax and T_u to J_max are all overestimated. An infinite g_m assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying g_m for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with g_m as input is developed to convert the parameters estimated under an assumption of infinite g_m to proper values. This function will facilitate the representation of g_m in global carbon cycle models. © 2013 John Wiley & Sons Ltd.

  13. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    NASA Astrophysics Data System (ADS)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.

  14. Growth rate in the dynamical dark energy models.

    PubMed

    Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina

    Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter that describes the steepness of the scalar field potential.

  15. Scaling methane oxidation: From laboratory incubation experiments to landfill cover field conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abichou, Tarek, E-mail: abichou@eng.fsu.edu; Mahieu, Koenraad; Chanton, Jeff

    2011-05-15

    Evaluating field-scale methane oxidation in landfill cover soils using numerical models is gaining interest in the solid waste industry, as research has made it clear that methane oxidation in the field is a complex function of climatic conditions, soil type, cover design, and the incoming flux of landfill gas from the waste mass. Numerical models can account for these parameters as they change with time and space under field conditions. In this study, we developed temperature and water content correction factors for methane oxidation parameters. We also introduced a possible correction to account for the different soil structure under field conditions. These parameters were defined in laboratory incubation experiments performed on homogenized soil specimens and were used to predict the methane oxidation rates to be expected under field conditions. Water content and temperature correction factors were obtained for the methane oxidation rate parameter to be used when modeling methane oxidation in the field. To predict in situ measured rates of methane oxidation with the model, it was necessary to set the half-saturation constant of methane and oxygen, K_m, to 5%, approximately five times larger than laboratory-measured values. We hypothesize that this discrepancy reflects differences in soil structure between homogenized soil conditions in the lab and the actual aggregated soil structure in the field. When all of these correction factors were re-introduced into the oxidation module of our model, it was able to reproduce surface emissions (as measured by static flux chambers) and percent oxidation (as measured by stable isotope techniques) within the range measured in the field.

  16. Host mating system and the prevalence of a disease in a plant population

    USGS Publications Warehouse

    Koslow, Jennifer M.; DeAngelis, Donald L.

    2006-01-01

    A modified susceptible–infected–recovered (SIR) host–pathogen model is used to determine the influence of plant mating system on the outcome of a host–pathogen interaction. Unlike previous models describing how interactions between mating system and pathogen infection affect individual fitness, this model considers the potential consequences of varying mating systems on the prevalence of resistance alleles and disease within the population. If a single allele for disease resistance is sufficient to confer complete resistance in an individual and if both homozygote and heterozygote resistant individuals have the same mean birth and death rates, then, for any parameter set, the selfing rate does not affect the proportions of resistant, susceptible or infected individuals at equilibrium. If homozygote and heterozygote individual birth rates differ, however, the mating system can make a difference in these proportions. In that case, depending on other parameters, increased selfing can either increase or decrease the rate of infection in the population. Results from this model also predict higher frequencies of resistance alleles in predominantly selfing compared to predominantly outcrossing populations for most model conditions. In populations that have higher selfing rates, the resistance alleles are concentrated in homozygotes, whereas in more outcrossing populations, there are more resistant heterozygotes.

  17. FY2017 ILAW Glass Corrosion Testing with the Single-Pass Flow-Through Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Asmussen, Robert M.; Cordova, Elsa

    The inventory of immobilized low-activity waste (ILAW) produced at the Hanford Tank Waste Treatment and Immobilization Plant (WTP) will be disposed of at the near-surface, on-site Integrated Disposal Facility (IDF). When groundwater comes into contact with the waste form, the glass will corrode and radionuclides will be released into the near-field environment. Because the release of the radionuclides depends on the dissolution rate of the glass, it is important that the performance assessment (PA) model accounts for the dissolution rate of the glass as a function of various conditions. To accomplish this, an IDF PA glass dissolution model based on Transition State Theory (TST) can be employed. The model is able to account for changes in temperature, exposed surface area, and pH of the contacting solution, as well as the effect of silicon solution concentrations, specifically the activity of orthosilicic acid (H4SiO4), whose concentration is directly linked to the glass dissolution rate. In addition, the IDF PA model accounts for the ion exchange process. The effects of temperature, pH, H4SiO4 activity, and the rate of ion exchange can be parameterized and implemented directly into the PA rate model. The rate model parameters are derived from laboratory tests with the single-pass flow-through (SPFT) method. The provided data can be used by glass researchers to further the understanding of ILAW glass behavior, by IDF PA modelers to use the rate model parameters in PA modeling efforts, and by Department of Energy (DOE) contractors and decision makers as they assess the IDF PA program.
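
    TST-based glass dissolution models of this kind are usually built around a rate law of the following general form; the sketch is a generic illustration with conventional symbols, not the specific parameter values derived from the SPFT tests.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def glass_dissolution_rate(k0, eta, pH, Ea, T, Q, K_eq, sigma=1.0):
    """Generic TST-style glass dissolution rate per unit surface area:
    r = k0 * 10**(eta*pH) * exp(-Ea/(R*T)) * (1 - (Q/K_eq)**sigma),
    where Q/K_eq is the ion activity product over the equilibrium constant
    (in practice tied to the H4SiO4 activity in solution)."""
    return (k0 * 10.0 ** (eta * pH) * math.exp(-Ea / (R_GAS * T))
            * (1.0 - (Q / K_eq) ** sigma))
```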

  18. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show nice concordance with the true values, indicating that the two-zone model assumptions agree with the reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of uncertainty and variability in these quantities can be used to not only provide better estimates of model outputs but also model parameters. PMID:19403840
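
    For orientation, the deterministic two-zone model underlying the Bayesian analysis has simple steady-state solutions in terms of the generation rate G, the ventilation rate Q, and the interzonal airflow β; the sketch below codes those textbook relations and does not reproduce the Bayesian priors or measurement model.

```python
def two_zone_steady_state(G, Q, beta):
    """Steady-state concentrations of the standard two-zone exposure model:
    far field  C_far  = G / Q
    near field C_near = G / Q + G / beta."""
    c_far = G / Q
    c_near = c_far + G / beta
    return c_near, c_far

# Example: generation 50 mg/min, ventilation 2 m3/min, interzonal airflow 5 m3/min
print(two_zone_steady_state(G=50.0, Q=2.0, beta=5.0))  # -> (35.0, 25.0) mg/m3
```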

  19. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models that should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.

  20. Dynamics of eco-epidemiological model with harvesting

    NASA Astrophysics Data System (ADS)

    Purnomo, Anna Silvia; Darti, Isnani; Suryanto, Agus

    2017-12-01

    In this paper, we study an eco-epidemiological model derived from an SI epidemic model with a bilinear incidence rate and a modified Leslie-Gower predator-prey model with harvesting of the susceptible prey. Existence conditions and the stability of all equilibrium points are discussed for the proposed model. Furthermore, we show that the model exhibits a Hopf bifurcation around the interior equilibrium point which is driven by the rate of infection. Our numerical simulations with different parameter values confirm the analytical results.

  1. Prediction of penetration rate of rotary-percussive drilling using artificial neural networks - a case study / Prognozowanie postępu wiercenia przy użyciu wiertła udarowo-obrotowego przy wykorzystaniu sztucznych sieci neuronowych - studium przypadku

    NASA Astrophysics Data System (ADS)

    Aalizad, Seyed Ali; Rashidinejad, Farshad

    2012-12-01

    The penetration rate in rocks is one of the most important parameters in determining drilling economics. Total drilling costs can be estimated by predicting the penetration rate and used for mine planning. The factors which affect the penetration rate are exceedingly numerous and certainly not completely understood. For the prediction of the penetration rate in rotary-percussive drilling, four types of rocks in the Sangan mine were chosen. Sangan is situated in Khorasan-Razavi province in northeastern Iran. The parameters affecting the penetration rate are divided into three categories: rock properties, drilling conditions, and drilling pattern. The rock properties are density, rock quality designation (RQD), uniaxial compressive strength, Brazilian tensile strength, porosity, Mohs hardness, Young's modulus, and P-wave velocity. The drilling condition parameters are percussion, rotation, feed (thrust load), and flushing pressure; the drilling pattern parameters are blasthole diameter and length. Rock properties were determined in the laboratory, and drilling conditions and drilling pattern were determined in the field. To correlate the penetration rate with the rock properties, drilling conditions, and drilling pattern, artificial neural networks (ANN) were used. For this purpose, 102 blastholes were observed, and the drilling conditions, drilling pattern, and drilling time for each blasthole were recorded. MATLAB software was used to build the correlation and predict the penetration rate. Of the data, 77 records were used to train the ANN and 25 to test it. The performance of the ANN models was assessed through the root mean square error (RMSE) and the correlation coefficient (R2). For the optimized model (14-14-10-1), the RMSE and R2 are 0.1865 and 86%, respectively, and its sensitivity analysis showed a strong correlation between penetration rate and RQD, rotation, and blasthole diameter. The high correlation coefficient and low root mean square error of these models show that the ANN is a suitable tool for penetration rate prediction.

  2. Application of a whole-body pharmacokinetic model for targeted radionuclide therapy to NM404 and FLT

    NASA Astrophysics Data System (ADS)

    Grudzinski, Joseph J.; Floberg, John M.; Mudd, Sarah R.; Jeffery, Justin J.; Peterson, Eric T.; Nomura, Alice; Burnette, Ronald R.; Tomé, Wolfgang A.; Weichert, Jamey P.; Jeraj, Robert

    2012-03-01

    We have previously developed a model that provides relative dosimetry estimates for targeted radionuclide therapy (TRT) agents. The whole-body and tumor pharmacokinetic (PK) parameters of this model can be noninvasively measured with molecular imaging, providing a means of comparing potential TRT agents. Parameter sensitivities and noise will affect the accuracy and precision of the estimated PK values and hence dosimetry estimates. The aim of this work is to apply a PK model for TRT to two agents with different magnitudes of clearance rates, NM404 and FLT, explore parameter sensitivity with respect to time and investigate the effect of noise on parameter precision and accuracy. Twenty-three tumor bearing mice were injected with a ‘slow-clearing’ agent, 124I-NM404 (n = 10), or a ‘fast-clearing’ agent, 18F-FLT (3‧-deoxy-3‧-fluorothymidine) (n = 13) and imaged via micro-PET/CT pseudo-dynamically or dynamically, respectively. Regions of interest were drawn within the heart and tumor to create time-concentration curves for blood pool and tumor. PK analysis was performed to estimate the mean and standard error of the central compartment efflux-to-influx ratio (k12/k21), central elimination rate constant (kel), and tumor influx-to-efflux ratio (k34/k43), as well as the mean and standard deviation of the dosimetry estimates. NM404 and FLT parameter estimation results were used to analyze model accuracy and parameter sensitivity. The accuracy of the experimental sampling schedule was compared to that of an optimal sampling schedule found using Cramer-Rao lower bounds theory. Accuracy was assessed using correlation coefficient, bias and standard error of the estimate normalized to the mean (SEE/mean). The PK parameter estimation of NM404 yielded a central clearance, kel (0.009 ± 0.003 h-1), normal body retention, k12/k21 (0.69 ± 0.16), tumor retention, k34/k43 (1.44 ± 0.46) and predicted dosimetry, Dtumor (3.47 ± 1.24 Gy). The PK parameter estimation of FLT yielded a central elimination rate constant, kel (0.050 ± 0.025 min-1), normal body retention, k12/k21 (2.21 ± 0.62) and tumor retention, k34/k43 (0.65 ± 0.17), and predicted dosimetry, Dtumor (0.61 ± 0.20 Gy). Compared to experimental sampling, optimal sampling decreases the dosimetry bias and SEE/mean for NM404; however, it increases bias and decreases SEE/mean for FLT. For both NM404 and FLT, central compartment efflux rate constant, k12, and central compartment influx rate constant, k21, possess mirroring sensitivities at relatively early time points. The instantaneous concentration in the blood, C0, was most sensitive at early time points; central elimination, kel, and tumor efflux, k43, are most sensitive at later time points. A PK model for TRT was applied to both a slow-clearing, NM404, and a fast-clearing, FLT, agents in a xenograft murine model. NM404 possesses more favorable PK values according to the PK TRT model. The precise and accurate measurement of k12, k21, kel, k34 and k43 will translate into improved and precise dosimetry estimations. This work will guide the future use of this PK model for assessing the relative effectiveness of potential TRT agents.

  3. Recent developments in rotary-balance testing of fighter aircraft configurations at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Malcolm, G. N.; Schiff, L. B.

    1985-01-01

    Two rotary balance apparatuses were developed for testing airplane models in a coning motion. A large scale apparatus, developed for use in the 12-Foot Pressure Wind tunnel primarily to permit testing at high Reynolds numbers, was recently used to investigate the aerodynamics of 0.05-scale model of the F-15 fighter aircraft. Effects of Reynolds number, spin rate parameter, model attitude, presence of a nose boom, and model/sting mounting angle were investigated. A smaller apparatus, which investigates the aerodynamics of bodies of revolution in a coning motion, was used in the 6-by-6 foot Supersonic Wind Tunnel to investigate the aerodynamic behavior of a simple representation of a modern fighter, the Standard Dynamic Model (SDM). Effects of spin rate parameter and model attitude were investigated. A description of the two rigs and a discussion of some of the results obtained in the respective test are presented.

  4. Mathematical modeling of fluid flow in aluminum ladles for degasification with impeller - injector

    NASA Astrophysics Data System (ADS)

    Ramos-Gómez, E.; González-Rivera, C.; Ramírez-Argáez, M. A.

    2012-09-01

    In this work, a fundamental Eulerian mathematical model was developed to simulate fluid flow in a water physical model of an aluminum ladle equipped with an impeller for degassing treatment. The effects of critical process parameters such as rotor speed and gas flow rate on the fluid flow and vortex formation were analyzed with this model. The commercial CFD code PHOENICS 3.4 was used to solve all conservation equations governing this two-phase fluid flow system. The mathematical model was successfully validated against experimentally measured liquid velocity and turbulence profiles in a physical model. From the results it was concluded that the angular speed of the impeller is the most important parameter for promoting well-stirred baths. The pumping effect of the impeller increases as the impeller rotation speed increases, whereas the gas flow rate is detrimental to bath stirring and diminishes the pumping effect of the impeller.

  5. The frequency response of dynamic friction: Enhanced rate-and-state models

    NASA Astrophysics Data System (ADS)

    Cabboi, A.; Putelat, T.; Woodhouse, J.

    2016-07-01

    The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
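
    The enhanced model builds on the classical rate-and-state friction law; the sketch below codes that baseline law with the Dieterich aging-law state evolution and illustrative parameter values, and does not include the contact-stiffness term that is the paper's key addition.

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-3, Dc=1e-5):
    """Classical rate-and-state friction coefficient
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc), returned together with the
    Dieterich aging-law state evolution d(theta)/dt = 1 - v*theta/Dc.
    Parameter values are illustrative defaults, not fitted values from the paper."""
    mu = mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / Dc)
    dtheta_dt = 1.0 - v * theta / Dc
    return mu, dtheta_dt
```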

  6. Analysis of factors that influence rates of carbon monoxide uptake, distribution, and washout from blood and extravascular tissues using a multicompartment model.

    PubMed

    Bruce, Margaret C; Bruce, Eugene N

    2006-04-01

    To better understand factors that influence carbon monoxide (CO) washout rates, we utilized a multicompartment mathematical model to predict rates of CO uptake, distribution in vascular and extravascular (muscle vs. other soft tissue) compartments, and washout over a range of exposure and washout conditions with varied subject-specific parameters. We fitted this model to experimental data from 15 human subjects, for whom subject-specific parameters were known, multiple washout carboxyhemoglobin (COHb) levels were available, and CO exposure conditions were identical, to investigate the contributions of exposure conditions and individual variability to CO washout from blood. We found that CO washout from venous blood was biphasic and that postexposure times at which COHb samples were obtained significantly influenced the calculated CO half times (P < 0.0001). The first, more rapid, phase of CO washout from the blood reflected the loss of CO to the expired air and to a slow uptake by the muscle compartment, whereas the second, slower washout phase was attributable to CO flow from the muscle compartment back to the blood and removal from blood via the expired air. When the model was used to predict the effects of varying exposure conditions for these subjects, the CO exposure duration, concentration, peak COHb levels, and subject-specific parameters each influenced washout half times. Blood volume divided by ventilation correlated better with half-time predictions than did cardiac output, muscle mass, or ventilation, but it explained only approximately 50% of half-time variability. Thus exposure conditions, COHb sampling times, and individual parameters should be considered when estimating CO washout rates for poisoning victims.

  7. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost and resource efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations related decision making and planning, two fields that have shown to be most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy and maintenance driven production planning. Accordingly, production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost per part model considering maintenance, energy, and production. The proposed methodology will involve a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost per part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) to determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted and the corresponding performance of the production schedule under variable parameter conditions, is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical Experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27 % in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost per part model, the baseline scenarios can obtain around 20 to 35 % in savings for the cost per part. These savings further increase by 42 to 55 % when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm adopted allows greater diversity and exploration compared to Genetic Algorithm for the proposed joint model which results in it being more computationally efficient in determining the optimal scheduling. While Genetic Algorithm could achieve a solution quality of 2,279.63 at an expense of 2,300 seconds in computational effort. 
In comparison, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half the computational effort required by Genetic Algorithm.
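
    As a generic illustration of the particle swarm update at the core of the adopted solver, the sketch below implements a plain PSO loop on a placeholder objective; the modifications described above and the actual joint scheduling cost function are not reproduced.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: velocity/position updates with inertia w
    and cognitive/social weights c1, c2. Returns the best position and value found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Placeholder objective standing in for the joint scheduling cost
best_x, best_val = pso_minimize(lambda z: np.sum(z ** 2), dim=5)
```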

  8. Estimation of the dynamics and rate of transmission of classical swine fever (hog cholera) in wild pigs.

    PubMed Central

    Hone, J.; Pech, R.; Yip, P.

    1992-01-01

    Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. Estimating whether establishment will occur requires extensive experience or a mathematical model of disease dynamics together with estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for the estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476

  9. Prediction of corridor effect from the launching of the satellite power system. [air pollutant concentration into narrow band of latitude

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Whitten, R. C.; Woodward, H. T.; Capone, L. A.; Riegel, C. A.

    1982-01-01

    A diagnostic model is developed to define the parameters which control the corridor effect of contaminants deposited in a narrow latitudinal band of the Earth's atmosphere by numerous launches of the STS and heavy-lift launch vehicles for the construction of satellite solar power systems. Identified factors included the pollutant injection rate, the ambient background levels of the pollutant species, and the transport properties related to the dilution rate of the chemicals. If the chemical lifetime of the pollutant was shorter than or comparable to the transport time, the chemical production and loss rates were found to be parameters that also had to be added to the model. A comparison with NASA Ames Research Center two-dimensional model results indicates that the corridor effect was possible for operations above 60 km in the case of H2O, H2, and NO production.

  10. Spatiotemporal Bayesian analysis of Lyme disease in New York state, 1990-2000.

    PubMed

    Chen, Haiyan; Stratton, Howard H; Caraco, Thomas B; White, Dennis J

    2006-07-01

    Mapping ordinarily increases our understanding of nontrivial spatial and temporal heterogeneities in disease rates. However, the large number of parameters required by the corresponding statistical models often complicates detailed analysis. This study investigates the feasibility of a fully Bayesian hierarchical regression approach to the problem and identifies how it outperforms two more popular methods: crude rate estimates (CRE) and empirical Bayes standardization (EBS). In particular, we apply a fully Bayesian approach to the spatiotemporal analysis of Lyme disease incidence in New York state for the period 1990-2000. These results are compared with those obtained by CRE and EBS in Chen et al. (2005). We show that the fully Bayesian regression model not only gives more reliable estimates of disease rates than the other two approaches but also allows for tractable models that can accommodate more numerous sources of variation and unknown parameters.

  11. Implementation of two-component advective flow solution in XSPEC

    NASA Astrophysics Data System (ADS)

    Debnath, Dipak; Chakrabarti, Sandip K.; Mondal, Santanu

    2014-05-01

    Spectral and temporal properties of black hole candidates can be explained reasonably well using Chakrabarti-Titarchuk solution of two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disc accretion rate and the halo accretion rate, the latter being composed of a sub-Keplerian, low-angular-momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo (which creates the Compton cloud region) rate with respect to the Keplerian disc rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time, we made it user friendly by implementing it into XSPEC software of Goddard Space Flight Center (GSFC)/NASA. This enables any user to extract physical parameters of the accretion flows, such as two accretion rates, the shock location, the shock strength, etc., for any black hole candidate. We provide some examples of fitting a few cases using this model. Most importantly, unlike any other model, we show that TCAF is capable of predicting timing properties from the spectral fits, since in TCAF, a shock is responsible for deciding spectral slopes as well as quasi-periodic oscillation frequencies.

  12. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A parsimonious characterization of change in global age-specific and total fertility rates

    PubMed Central

    2018-01-01

    This study aims to understand trends in global fertility from 1950 to 2010 through the analysis of age-specific fertility rates. This approach incorporates both the overall level, as when the total fertility rate is modeled, and different patterns of age-specific fertility to examine the relationship between changes in age-specific fertility and fertility decline. Singular value decomposition is used to capture the variation in age-specific fertility curves while reducing the number of dimensions, allowing curves to be described nearly fully with three parameters. Regional patterns and trends over time are evident in parameter values, suggesting this method provides a useful tool for considering fertility decline globally. The second and third parameters were analyzed using model-based clustering to examine patterns of age-specific fertility over time and place; four clusters were obtained. A country’s demographic transition can be traced through time by membership in the different clusters, and regional patterns in the trajectories through time and with fertility decline are identified. PMID:29377899
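
    A sketch of the dimension-reduction step described above: stack age-specific fertility schedules into a matrix, take a singular value decomposition, and summarise each schedule by its scores on the leading three components. The randomly generated schedules below stand in for the real country-period data.

        import numpy as np

        rng = np.random.default_rng(1)
        # Placeholder data: 200 country-period fertility schedules over 7 age groups
        # (15-19, ..., 45-49), standing in for the observed ASFR curves.
        asfr = np.abs(rng.normal(0.1, 0.05, size=(200, 7)))

        # Singular value decomposition of the centred schedules.
        mean_curve = asfr.mean(axis=0)
        U, s, Vt = np.linalg.svd(asfr - mean_curve, full_matrices=False)

        # Each schedule is summarised by its scores on the first three components.
        scores = U[:, :3] * s[:3]                 # three parameters per curve
        reconstructed = mean_curve + scores @ Vt[:3]
        explained = (s[:3] ** 2).sum() / (s ** 2).sum()
        print(f"variance captured by 3 components: {explained:.1%}")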

  14. Fast pyrolysis kinetics of alkali lignin: Evaluation of apparent rate parameters and product time evolution.

    PubMed

    Ojha, Deepak Kumar; Viju, Daniel; Vinu, R

    2017-10-01

    In this study, the apparent kinetics of fast pyrolysis of alkali lignin was evaluated by obtaining isothermal mass loss data in the timescale of 2-30 s at 400-700 °C in an analytical pyrolyzer. The data were analyzed using different reaction models to determine the rate constants and apparent rate parameters. First-order and one-dimensional diffusion models resulted in good fits to the experimental data, with an apparent activation energy of 23 kJ mol-1. A kinetic compensation effect was established using a large number of kinetic parameters reported in the literature for pyrolysis of different lignins. The time evolution of the major functional groups in the pyrolysate was analyzed using in situ Fourier transform infrared spectroscopy. Maximum production of the volatiles occurred around 10-12 s. A clear transformation of guaiacols to phenol, catechol and their derivatives, and aromatic hydrocarbons was observed with increasing temperature. The plausible reaction steps involved in various transformations are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
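
    A sketch of how an apparent activation energy can be extracted from isothermal first-order rate constants via an Arrhenius plot, in the spirit of the analysis above; the rate-constant values are invented placeholders rather than the paper's mass-loss data.

        import numpy as np

        R = 8.314  # gas constant, J mol^-1 K^-1

        # Hypothetical first-order rate constants k (s^-1) at several temperatures (K),
        # standing in for values extracted from isothermal mass-loss curves.
        T = np.array([673.0, 773.0, 873.0, 973.0])
        k = np.array([0.08, 0.13, 0.19, 0.26])

        # Arrhenius: ln k = ln A - Ea / (R T); fit a straight line in 1/T.
        slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
        Ea = -slope * R            # apparent activation energy, J mol^-1
        A = np.exp(intercept)      # apparent pre-exponential factor, s^-1
        print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g} 1/s")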

  15. How Many Parameters Actually Affect the Mobility of Conjugated Polymers?

    NASA Astrophysics Data System (ADS)

    Fornari, Rocco P.; Blom, Paul W. M.; Troisi, Alessandro

    2017-02-01

    We describe charge transport along a polymer chain with a generic theoretical model depending in principle on tens of parameters, reflecting the chemistry of the material. The charge carrier states are obtained from a model Hamiltonian that incorporates different types of disorder and electronic structure (e.g., the difference between homo- and copolymer). The hopping rate between these states is described with a general rate expression, which contains the rates most used in the literature as special cases. We demonstrate that the steady state charge mobility in the limit of low charge density and low field ultimately depends on only two parameters: an effective structural disorder and an effective electron-phonon coupling, weighted by the size of the monomer. The results support the experimental observation [N. I. Craciun, J. Wildeman, and P. W. M. Blom, Phys. Rev. Lett. 100, 056601 (2008), 10.1103/PhysRevLett.100.056601] that the mobility in a broad range of (polymeric) semiconductors follows a universal behavior, insensitive to the chemical detail.

  16. A Novel Analytical Solution for Estimating Aquifer Properties and Predicting Stream Depletion Rates by Pumping from a Horizontally Anisotropic Aquifer

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Zhan, H.; Knappett, P.

    2017-12-01

    Past studies modeling stream-aquifer interactions commonly account for vertical anisotropy, but rarely address horizontal anisotropy, which does exist in certain geological settings. Horizontal anisotropy is impacted by sediment deposition rates, orientation of sediment particles, orientations of fractures, etc. We hypothesize that horizontal anisotropy controls the volume of recharge a pumped aquifer captures from the river. To test this hypothesis, a new mathematical model was developed to describe the distribution of drawdown from stream-bank pumping with a well screened across a horizontally anisotropic, confined aquifer, laterally bounded by a river. This new model was used to determine four aquifer parameters, including the magnitude and directions of the major and minor principal transmissivities and storativity, based on observed drawdown-time curves from a minimum of three non-collinear observation wells. By comparing the aquifer parameter values estimated from drawdown data generated with known parameter values, the discrepancies in the major and minor transmissivities, horizontal anisotropy ratio, storativity and direction of major transmissivity were 13.1, 8.8, 4, 0 and <1 percent, respectively. These discrepancies are well within acceptable ranges of uncertainty for aquifer parameter estimation when compared with other pumping test interpretation methods, which typically carry parameter uncertainties of 20 or 30 percent. Finally, the stream depletion rate was calculated as a function of stream-bank pumping. Unique to horizontally anisotropic aquifers, the stream depletion rate at any given pumping rate depends on the horizontal anisotropy ratio and the direction of the principal transmissivity. For example, when the horizontal anisotropy ratios are 5 and 50, respectively, the corresponding depletion rates under pseudo-steady-state conditions are 86 m3/day and 91 m3/day. The results of this research fill a knowledge gap on predicting the response of horizontally anisotropic aquifers connected to streams. We further provide a method to estimate aquifer properties and predict stream depletion rates from observed drawdown. This new model can be used by water resources managers to exploit groundwater resources reasonably while protecting stream ecosystems.

  17. Normal tissue complication probability (NTCP) parameters for breast fibrosis: pooled results from two randomised trials.

    PubMed

    Mukesh, Mukesh B; Harris, Emma; Collette, Sandra; Coles, Charlotte E; Bartelink, Harry; Wilkinson, Jenny; Evans, Philip M; Graham, Peter; Haviland, Jo; Poortmans, Philip; Yarnold, John; Jena, Raj

    2013-08-01

    The dose-volume effect of radiation therapy on breast tissue is poorly understood. We estimate NTCP parameters for breast fibrosis after external beam radiotherapy. We pooled individual patient data from 5856 patients from 2 trials of whole breast irradiation with or without a boost. A two-compartment dose volume histogram model was used, with the boost volume as the first compartment and the remaining breast volume as the second compartment. Results from the START-pilot trial (n=1410) were used to test the predicted models. 26.8% of patients in the Cambridge trial (5 years) and 20.7% of patients in the EORTC trial (10 years) developed moderate-severe breast fibrosis. The best-fit NTCP parameters were BEUD3(50)=136.4 Gy, γ50=0.9 and n=0.011 for the Niemierko model and BEUD3(50)=132 Gy, m=0.35 and n=0.012 for the Lyman-Kutcher-Burman model. The observed rates of fibrosis in the START-pilot trial agreed well with the predicted rates. This large multi-centre pooled study suggests that the effect of the volume parameter is small and that the maximum RT dose is the most important parameter influencing breast fibrosis. A small value of the volume parameter 'n' does not fit with the hypothesis that breast tissue is a parallel organ. However, this may reflect limitations in our current scoring system of fibrosis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
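
    A sketch of the Lyman-Kutcher-Burman NTCP calculation named above, using the generalised equivalent uniform dose of a two-compartment dose-volume histogram. The example DVH is invented, and physical doses are used in place of the BEUD3 quantities fitted in the study, purely for illustration.

        import numpy as np
        from math import erf, sqrt

        def gEUD(doses, volumes, n):
            """Generalised equivalent uniform dose of a DVH (volumes sum to 1)."""
            return float((volumes * doses ** (1.0 / n)).sum() ** n)

        def ntcp_lkb(doses, volumes, TD50, m, n):
            """Lyman-Kutcher-Burman NTCP: probit response to the gEUD."""
            t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))

        # Toy two-compartment DVH: boost volume at 66 Gy, remaining breast at 50 Gy.
        doses = np.array([66.0, 50.0])
        volumes = np.array([0.15, 0.85])
        # Parameter values in the spirit of the pooled fit; physical dose is used
        # here for simplicity, whereas the study works with BEUD3.
        print(f"NTCP = {ntcp_lkb(doses, volumes, TD50=132.0, m=0.35, n=0.012):.3f}")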

  18. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulakhmetov, Marat, E-mail: mkulakhm@purdue.edu; Alexeenko, Alina, E-mail: alexeenk@purdue.edu; Gallis, Michael, E-mail: magalli@sandia.gov

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500–20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000–15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations.

  19. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models

    PubMed Central

    Whittington, Jesse; Sawaya, Michael A.

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262

  20. Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa

    NASA Astrophysics Data System (ADS)

    Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.

    2014-12-01

    Evaluating performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are amongst the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis, namely stomatal response to vapor pressure deficit, maximum carboxylation rate by Rubisco (Ac), rate of photosynthetic electron transport (Aj) and triose phosphate use (At), vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate limiting factors for plant growth. A Bayesian framework integrated in TREES here used net A as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of model estimation eliminated. Next, parameter estimation facilitated the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show genotypes vary in how these limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 umol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497 umol m-2 s-1. The added complexity of the TPU limitation did not improve model performance in the genotypes assessed, based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.
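
    A sketch of how Farquhar-type models such as the one embedded in TREES combine the biochemical limitations discussed above: the realised net assimilation rate is the minimum of the Rubisco-, electron-transport- and TPU-limited rates minus respiration. The kinetic constants are generic 25 °C textbook values and the parameter set is a placeholder, not a fitted genotype.

        def net_assimilation(Ci, Vcmax, J, TPU, Rd=1.0,
                             Kc=404.9, Ko=278.4, O=210.0, gamma_star=42.75):
            """Farquhar-type net photosynthesis (umol m-2 s-1) at intercellular CO2
            Ci (ubar). Kinetic constants are generic 25 C values."""
            Ac = Vcmax * (Ci - gamma_star) / (Ci + Kc * (1.0 + O / Ko))   # Rubisco-limited
            Aj = J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star)    # RuBP-regeneration-limited
            Ap = 3.0 * TPU                                                # TPU-limited
            return min(Ac, Aj, Ap) - Rd

        # Illustrative genotype-like parameter set (placeholders, not fitted values).
        for Ci in (150.0, 250.0, 400.0):
            A = net_assimilation(Ci, Vcmax=210.0, J=180.0, TPU=12.0)
            print(f"Ci = {Ci:5.0f} ubar -> A = {A:5.1f} umol m-2 s-1")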

  1. On the in vivo photochemical rate parameters for PDT reactive oxygen species modeling

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Ghogare, Ashwini A.; Greer, Alexander; Zhu, Timothy C.

    2017-03-01

    Photosensitizer photochemical parameters are crucial data in accurate dosimetry for photodynamic therapy (PDT) based on photochemical modeling. Progress has been made in the last few decades in determining the photochemical properties of commonly used photosensitizers (PS), but mostly in solution or in vitro. Recent developments allow for the estimation of some of these photochemical parameters in vivo. This review will cover the currently available in vivo photochemical properties of photosensitizers as well as the techniques for measuring those parameters. Furthermore, photochemical parameters that are independent of environmental factors or are universal for different photosensitizers will be examined. Most photosensitizers discussed in this review are of the type II (singlet oxygen) photooxidation category, although type I photosensitizers that involve other reactive oxygen species (ROS) will be discussed as well. The compilation of these parameters will be essential for ROS modeling of PDT.

  2. On the in-vivo photochemical rate parameters for PDT reactive oxygen species modeling

    PubMed Central

    Kim, Michele M.; Ghogare, Ashwini A.; Greer, Alexander; Zhu, Timothy C.

    2017-01-01

    Photosensitizer photochemical parameters are crucial data in accurate dosimetry for photodynamic therapy (PDT) based on photochemical modeling. Progress has been made in the last few decades in determining the photochemical properties of commonly used photosensitizers (PS), but mostly in solution or in-vitro. Recent developments allow for the estimation of some of these photochemical parameters in-vivo. This review will cover the currently available in-vivo photochemical properties of photosensitizers as well as the techniques for measuring those parameters. Furthermore, photochemical parameters that are independent of environmental factors or are universal for different photosensitizers will be examined. Most photosensitizers discussed in this review are of the type II (singlet oxygen) photooxidation category, although type I photosensitizers that involve other reactive oxygen species (ROS) will be discussed as well. The compilation of these parameters will be essential for ROS modeling of PDT. PMID:28166056

  3. Collisional excitation of CO by H2O - An astrophysicist's guide to obtaining rate constants from coherent anti-Stokes Raman line shape data

    NASA Technical Reports Server (NTRS)

    Green, Sheldon

    1993-01-01

    Rate constants for excitation of CO by collisions with H2O are needed to understand recent observations of comet spectra. These collision rates are closely related to spectral line shape parameters, especially those for Raman Q-branch spectra. Because such spectra have become quite important for thermometry applications, much effort has been invested in understanding this process. Although it is not generally possible to extract state-to-state rate constants directly from the data as there are too many unknowns, if the matrix of state-to-state rates can be expressed in terms of a rate-law model which depends only on rotational quantum numbers plus a few parameters, the parameters can be determined from the data; this has been done with some success for many systems, especially those relevant to combustion processes. Although such an analysis has not yet been done for CO-H2O, this system is expected to behave similarly to N2-H2O which has been well studied; modifications of parameters for the latter system are suggested which should provide a reasonable description of rate constants for the former.

  4. A self-consistency approach to improve microwave rainfall rate estimation from space

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Mack, Robert A.; Hakkarinen, Ida M.

    1989-01-01

    A multichannel statistical approach is used to retrieve rainfall rates from the brightness temperature T(B) observed by passive microwave radiometers flown on a high-altitude NASA aircraft. T(B) statistics are based upon data generated by a cloud radiative model. This model simulates variabilities in the underlying geophysical parameters of interest, and computes their associated T(B) in each of the available channels. By further imposing the requirement that the observed T(B) agree with the T(B) values corresponding to the retrieved parameters through the cloud radiative transfer model, the results can be made to agree quite well with coincident radar-derived rainfall rates. Some information regarding the cloud vertical structure is also obtained by such an added requirement. The applicability of this technique to satellite retrievals is also investigated. Data which might be observed by satellite-borne radiometers, including the effects of nonuniformly filled footprints, are simulated by the cloud radiative model for this purpose.

  5. An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.

    PubMed

    Bradley, Stuart

    2015-11-20

    Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on the optimization of, new robotic systems. The present paper develops a simple model for estimation of the success rate for hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter; and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. A "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
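
    A Monte Carlo sketch of the geometry implied by the two dimensionless parameters above, treating the firing points as a rectangular grid (actuator spacing by platform step, both measured in target diameters) and the target as a disc of unit diameter. This is an illustration of the parameter dependence, not the paper's analytic model.

        import numpy as np

        def hit_probability(spacing_ratio, step_ratio, n=200_000, seed=0):
            """Monte Carlo estimate of the chance that a randomly placed circular
            target (unit diameter) is hit by at least one firing point, when the
            firing points form a rectangular grid with pitches spacing_ratio by
            step_ratio (both expressed in target diameters)."""
            rng = np.random.default_rng(seed)
            # Random target centres inside one grid cell (periodic geometry).
            x = rng.uniform(0.0, spacing_ratio, n)
            y = rng.uniform(0.0, step_ratio, n)
            # Distance from the centre to the nearest grid point (cell corner).
            dx = np.minimum(x, spacing_ratio - x)
            dy = np.minimum(y, step_ratio - y)
            return np.mean(dx ** 2 + dy ** 2 <= 0.25)   # hit if within radius 1/2

        for a, b in [(0.5, 0.5), (1.0, 1.0), (2.0, 1.0)]:
            print(f"spacing/D = {a}, step/D = {b}: P(hit) ~ {hit_probability(a, b):.2f}")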

  6. Kinetic model for dependence of thin film stress on growth rate, temperature, and microstructure

    NASA Astrophysics Data System (ADS)

    Chason, E.; Shin, J. W.; Hearne, S. J.; Freund, L. B.

    2012-04-01

    During deposition, many thin films go through a range of stress states, changing from compressive to tensile and back again. In addition, the stress depends strongly on the processing and material parameters. We have developed a simple analytical model to describe the stress evolution in terms of a kinetic competition between different mechanisms of stress generation and relaxation at the triple junction where the surface and grain boundary intersect. The model describes how the steady state stress scales with the dimensionless parameter D/LR where D is the diffusivity, R is the growth rate, and L is the grain size. It also explains the transition from tensile to compressive stress as the microstructure evolves from isolated islands to a continuous film. We compare calculations from the model with measurements of the stress dependence on grain size and growth rate in the steady state regime and of the evolution of stress with thickness for different temperatures.

  7. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
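
    The component-by-component idea above can be illustrated on the Rössler system, whose right-hand sides are linear in the parameters: the sketch below recovers a from the y-equation and then b and c from the z-equation, using finite-difference derivatives and linear least squares. This is a simplified stand-in for the evolutionary-algorithm scheme, intended only to show the staged estimation, with synthetic data generated from assumed true parameter values.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rossler(t, s, a, b, c):
            x, y, z = s
            return [-y - z, x + a * y, b + z * (x - c)]

        # Synthetic "observed" time series with known parameters.
        a_true, b_true, c_true = 0.2, 0.2, 5.7
        t = np.linspace(0, 50, 5001)
        sol = solve_ivp(rossler, (t[0], t[-1]), [1.0, 1.0, 1.0],
                        t_eval=t, args=(a_true, b_true, c_true), rtol=1e-9, atol=1e-9)
        x, y, z = sol.y
        dy = np.gradient(y, t)        # finite-difference derivatives
        dz = np.gradient(z, t)

        # Stage 1: from dy/dt = x + a*y, estimate a alone by linear least squares.
        a_hat = np.linalg.lstsq(y[:, None], dy - x, rcond=None)[0][0]
        # Stage 2: from dz/dt = b + x*z - c*z, estimate b and c together.
        B = np.column_stack([np.ones_like(z), -z])
        b_hat, c_hat = np.linalg.lstsq(B, dz - x * z, rcond=None)[0]
        print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, c = {c_hat:.3f}")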

  8. Quantitative pharmacokinetic-pharmacodynamic modelling of baclofen-mediated cardiovascular effects using BP and heart rate in rats.

    PubMed

    Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie-Eve; Snow, Debra; Mettetal, Jerome T; Bialecki, Russell A

    2016-10-01

    While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen-mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular-relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Han-Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg(-1) , p.o.) at 4 h intervals and baclofen-mediated changes in parameters recorded. A pharmacokinetic-pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. The systems pharmacology model developed fits baclofen-mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. © 2016 The British Pharmacological Society.

  9. Quantitative pharmacokinetic–pharmacodynamic modelling of baclofen‐mediated cardiovascular effects using BP and heart rate in rats

    PubMed Central

    Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie‐Eve; Snow, Debra

    2016-01-01

    Background and Purpose While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen‐mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular‐relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Experimental Approach Han‐Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg−1, p.o.) at 4 h intervals and baclofen‐mediated changes in parameters recorded. A pharmacokinetic–pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Key Results Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. Conclusions and Implications The systems pharmacology model developed fits baclofen‐mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. PMID:27448216

  10. A strain-, cow-, and herd-specific bio-economic simulation model of intramammary infections in dairy cattle herds.

    PubMed

    Gussmann, Maya; Kirkeby, Carsten; Græsbøll, Kaare; Farre, Michael; Halasa, Tariq

    2018-07-14

    Intramammary infections (IMI) in dairy cattle lead to economic losses for farmers, both through reduced milk production and disease control measures. We present the first strain-, cow- and herd-specific bio-economic simulation model of intramammary infections in a dairy cattle herd. The model can be used to investigate the cost-effectiveness of different prevention and control strategies against IMI. The objective of this study was to describe a transmission framework, which simulates spread of IMI-causing pathogens through different transmission modes. These include the traditional contagious and environmental spread and a new opportunistic transmission mode. In addition, the within-herd transmission dynamics of IMI-causing pathogens were studied. Sensitivity analysis was conducted to investigate the influence of input parameters on model predictions. The results show that the model is able to represent various within-herd levels of IMI prevalence, depending on the simulated pathogens and their parameter settings. The parameters can be adjusted to include different combinations of IMI-causing pathogens at different prevalence levels, representing herd-specific situations. The model is most sensitive to varying the transmission rate parameters and the strain-specific recovery rates from IMI. It can be used for investigating both short-term operational and long-term strategic decisions for the prevention and control of IMI in dairy cattle herds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Impact of biology knowledge on the conservation and management of large pelagic sharks.

    PubMed

    Yokoi, Hiroki; Ijima, Hirotaka; Ohshimo, Seiji; Yokawa, Kotaro

    2017-09-06

    Population growth rate, which depends on several biological parameters, is valuable information for the conservation and management of pelagic sharks, such as blue and shortfin mako sharks. However, reported biological parameters for estimating the population growth rates of these sharks differ by sex and display large variability. To estimate the appropriate population growth rate and clarify relationships between growth rate and relevant biological parameters, we developed a two-sex age-structured matrix population model and estimated the population growth rate using combinations of biological parameters. We performed an elasticity analysis and clarified the sensitivity of the population growth rate. For the blue shark, the estimated median population growth rate was 0.384 with a range of minimum and maximum values of 0.195-0.533, whereas the corresponding values for the shortfin mako shark were 0.102 and 0.007-0.318, respectively. The maturity age of male sharks had the largest impact for blue sharks, whereas that of female sharks had the largest impact for shortfin mako sharks. Hypotheses for the survival process of sharks also had a large impact on the population growth rate estimation. Both shark maturity age and survival rate were based on ageing validation data, indicating the importance of validating the quality of these data for the conservation and management of large pelagic sharks.
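
    A sketch of the core of an age-structured matrix population model: build a projection matrix from survival and fecundity schedules and take its dominant eigenvalue as the finite population growth rate. The example is female-only with invented vital rates, unlike the two-sex model and the shark life-history data used in the study.

        import numpy as np

        # Placeholder female-only vital rates for a long-lived, late-maturing life
        # history: age-specific annual survival and female-offspring production.
        survival  = np.array([0.70, 0.75, 0.80, 0.85, 0.85, 0.85, 0.85])
        fecundity = np.array([0.0,  0.0,  0.0,  0.0,  2.0,  2.0,  2.0])

        n = survival.size
        L = np.zeros((n, n))
        L[0, :] = fecundity                                      # top row: reproduction
        L[np.arange(1, n), np.arange(n - 1)] = survival[:-1]     # sub-diagonal: ageing
        L[-1, -1] = survival[-1]                                 # terminal plus-group

        eigvals = np.linalg.eigvals(L)
        lam = eigvals[np.argmax(eigvals.real)].real              # dominant eigenvalue
        print(f"finite growth rate lambda = {lam:.3f}, intrinsic rate r = {np.log(lam):.3f}")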

  12. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

    A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2x10^-9 to 6.0x10^-8 feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.

  13. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    PubMed

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.

  14. Effects of phenotypic plasticity on pathogen transmission in the field in a Lepidoptera-NPV system.

    PubMed

    Reeson, A F; Wilson, K; Cory, J S; Hankard, P; Weeks, J M; Goulson, D; Hails, R S

    2000-08-01

    In models of insect-pathogen interactions, the transmission parameter (ν) is the term that describes the efficiency with which pathogens are transmitted between hosts. There are two components to the transmission parameter, namely the rate at which the host encounters pathogens (contact rate) and the rate at which contact between host and pathogen results in infection (host susceptibility). Here it is shown that in larvae of Spodoptera exempta (Lepidoptera: Noctuidae), in which rearing density triggers the expression of one of two alternative phenotypes, the high-density morph is associated with an increase in larval activity. This response is likely to result in an increase in the contact rate between hosts and pathogens. Rearing density is also known to affect susceptibility of S. exempta to pathogens, with the high-density morph showing increased resistance to a baculovirus. In order to determine whether density-dependent differences observed in the laboratory might affect transmission in the wild, a field trial was carried out to estimate the transmission parameter for S. exempta and its nuclear polyhedrosis virus (NPV). The transmission parameter was found to be significantly higher among larvae reared in isolation than among those reared in crowds. Models of insect-pathogen interactions, in which the transmission parameter is assumed to be constant, will therefore not fully describe the S. exempta-NPV system. The finding that crowding can influence transmission in this way has major implications for both the long-term population dynamics and the invasion dynamics of insect-pathogen systems.

  15. Assessment of wear dependence parameters in complex model of cutting tool wear

    NASA Astrophysics Data System (ADS)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessment of wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.

  16. Local order parameters for use in driving homogeneous ice nucleation with all-atom models of water

    NASA Astrophysics Data System (ADS)

    Reinhardt, Aleks; Doye, Jonathan P. K.; Noya, Eva G.; Vega, Carlos

    2012-11-01

    We present a local order parameter based on the standard Steinhardt-Ten Wolde approach that is capable both of tracking and of driving homogeneous ice nucleation in simulations of all-atom models of water. We demonstrate that it is capable of forcing the growth of ice nuclei in supercooled liquid water simulated using the TIP4P/2005 model using over-biassed umbrella sampling Monte Carlo simulations. However, even with such an order parameter, the dynamics of ice growth in deeply supercooled liquid water in all-atom models of water are shown to be very slow, and so the computation of free energy landscapes and nucleation rates remains extremely challenging.

  17. Evaluation of incremental reactivity and its uncertainty in Southern California.

    PubMed

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.

  18. Impact of particle emissions of new laser printers on modeled office room

    NASA Astrophysics Data System (ADS)

    Koivisto, Antti J.; Hussein, Tareq; Niemelä, Raimo; Tuomi, Timo; Hämeri, Kaarle

    2010-06-01

    In this study, we present how an indoor aerosol model can be used to characterize a particle emitter and predict the influence of the source on indoor air quality. Particle size-resolved emission rates were quantified and the source's influence on indoor air quality was estimated by using office model simulations. We measured particle emissions from three modern laser printers in a flow-through chamber. Measured parameters were used as input parameters for an indoor aerosol model, which we then used to quantify the particle emission rates. The same indoor aerosol model was used to simulate the effect of the particle emission source inside an office model. The office model consists of a mechanically ventilated empty room and the particle source. The aerosol from the ventilation air was a filtered urban background aerosol. The effect of the ventilation rate was studied using three different ventilation rates: 1, 2 and 3 h^-1. According to the model, peak emission rates of the printers exceeded 7.0 × 10^8 s^-1 (2.5 × 10^12 h^-1), and the printers emitted mainly ultrafine particles (diameter less than 100 nm). The office model simulation results indicate that a print job increases the ultrafine particle concentration to a maximum of 2.6 × 10^5 cm^-3. Printer-emitted particles increased the 6-h averaged particle concentration to more than eleven times the background particle concentration.
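
    A sketch of the single-zone mass balance underlying indoor aerosol models of this kind: the indoor number concentration is driven by ventilation with filtered outdoor air, deposition, and an episodic printer source. The room volume, filter penetration, deposition rate and source timing below are illustrative assumptions, not the office-model settings of the study.

        from scipy.integrate import solve_ivp

        V = 30.0            # room volume, m3 (assumed)
        lam = 2.0           # air exchange rate, 1/h (assumed)
        P = 0.3             # penetration of outdoor particles through the supply filter
        C_out = 5.0e3       # outdoor number concentration, cm-3
        k_dep = 0.5         # deposition loss rate, 1/h
        S = 7.0e8 * 3600.0  # printer emission rate during a print job, particles/h

        def dCdt(t, C):
            printing = 1.0 if 1.0 <= t <= 1.25 else 0.0       # 15-minute print job
            source = printing * S / (V * 1.0e6)               # convert to cm-3 h-1
            return lam * P * C_out - (lam + k_dep) * C + source

        # Start from the steady-state background concentration and run 6 hours.
        sol = solve_ivp(dCdt, (0.0, 6.0), [lam * P * C_out / (lam + k_dep)],
                        max_step=0.01)
        print(f"peak indoor concentration ~ {sol.y[0].max():.2e} cm-3")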

  19. The operating diagram of a model of two competitors in a chemostat with an external inhibitor.

    PubMed

    Dellal, Mohamed; Lakrib, Mustapha; Sari, Tewfik

    2018-05-24

    Understanding and exploiting the inhibition phenomenon, which promotes the stable coexistence of species, is a major challenge in the mathematical theory of the chemostat. Here, we study a model of two microbial species in a chemostat competing for a single resource in the presence of an external inhibitor. The model is a four-dimensional system of ordinary differential equations. Using general monotonic growth rate functions of the species and absorption rate of the inhibitor, we give a complete analysis for the existence and local stability of all steady states. We focus on the behavior of the system with respect to the three operating parameters represented by the dilution rate and the input concentrations of the substrate and the inhibitor. The operating diagram has the operating parameters as its coordinates, and the various regions defined in it correspond to qualitatively different asymptotic behavior: washout, competitive exclusion of one species, coexistence of the species around a stable steady state and coexistence around a stable cycle. This bifurcation diagram, which determines the effect of the operating parameters, is very useful for understanding the model from both the mathematical and biological points of view, and is often constructed in the mathematical and biological literature. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf to air vapour pressure deficit and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220

  1. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and, given the necessity of parameter adjustments, present a generalized parameter-adjusted SR (GPASR) model of this oscillator. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671
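
    For reference, the sketch below evaluates the textbook Kramers escape rate for the standard overdamped bistable quartic potential, the quantity on which such judgmental functions are built; the expression and parameter values are the generic form, not the paper's full GPASR criterion.

        import numpy as np

        def kramers_rate(a, b, D):
            """Kramers escape rate for the overdamped bistable potential
            U(x) = -a x^2 / 2 + b x^4 / 4 with noise intensity D."""
            dU = a ** 2 / (4.0 * b)                 # barrier height
            return (a / (np.sqrt(2.0) * np.pi)) * np.exp(-dU / D)

        # The escape rate rises steeply with noise intensity; SR tuning balances
        # this rate against the driving frequency of the weak signal.
        a, b = 1.0, 1.0
        for D in (0.05, 0.1, 0.2, 0.4):
            print(f"D = {D:4.2f}: r_K = {kramers_rate(a, b, D):.4f}")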

  2. Dynamic properties in the four-state haploid coupled discrete-time mutation-selection model with an infinite population limit

    NASA Astrophysics Data System (ADS)

    Lee, Kyu Sang; Gill, Wonpyong

    2017-11-01

    The dynamic properties, such as the crossing time and time-dependence of the relative density of the four-state haploid coupled discrete-time mutation-selection model, were calculated with the assumption that μ ij = μ ji , where μ ij denotes the mutation rate between the sequence elements, i and j. The crossing time for s = 0 and r 23 = r 42 = 1 in the four-state model became saturated at a large fitness parameter when r 12 > 1, was scaled as a power law in the fitness parameter when r 12 = 1, and diverged when the fitness parameter approached the critical fitness parameter when r 12 < 1, where r ij = μ ij / μ 14.

  3. Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.

    PubMed

    Min, Kyung Duk; Cho, Sung Il

    2018-03-19

    The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent and mite groups. Two models, a simple and a complex model, were developed, and all parameters employed in the models were adopted from previous articles that represent epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that in the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were the rodent and mite populations and the contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, the contact rate between humans and mites is more influential than the mortality rates of the rodent and mite groups. The results indicate that the effect of controlling either rodents or mites could be limited, and that reducing the contact rate between humans and mites is a more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population. © 2018 The Korean Academy of Medical Sciences.

  4. Modeling the Afferent Dynamics of the Baroreflex Control System

    PubMed Central

    Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.

    2013-01-01

    In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the baroreceptor (BR) nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input, predicting the BR firing rate as an output. Our results show that nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231

  5. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    PubMed Central

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-01-01

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642

  6. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters.

    PubMed

    Khamassi, Mehdi; Enel, Pierre; Dominey, Peter Ford; Procyk, Emmanuel

    2013-01-01

    Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that would affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activities encoding error likelihood, uncertainty, reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one of the RL parameters, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters such as the exploration rate and default action values in case of task shifts. Here, we analyze the sensitivity to RL parameters of behavioral performance in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks, that need to be found for optimal performance and that are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using some heuristics can help produce good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters. They also show how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms that were challenged in the last robotic scenario provide testable predictions on the way monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments. Copyright © 2013 Elsevier B.V. All rights reserved.
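
    The idea of letting an outcome-driven volatility signal regulate the learning rate and exploration rate can be shown with a minimal bandit sketch. This is a generic illustration of meta-regulation under stated assumptions (softmax choice, a running unsigned prediction error as the volatility proxy), not the authors' MPFC-LPFC model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    z = beta * (q - q.max())
    p = np.exp(z)
    return p / p.sum()

n_actions, n_trials = 3, 500
p_reward = np.array([0.2, 0.5, 0.8])     # task contingencies; reversed mid-session
q = np.zeros(n_actions)
alpha, beta = 0.1, 3.0                   # learning rate, inverse temperature
volatility = 0.0                         # running estimate of unsigned surprise

for t in range(n_trials):
    if t == n_trials // 2:               # unsignalled task shift
        p_reward = p_reward[::-1].copy()
    a = rng.choice(n_actions, p=softmax(q, beta))
    r = float(rng.random() < p_reward[a])
    delta = r - q[a]                     # reward prediction error
    volatility += 0.05 * (abs(delta) - volatility)
    alpha = float(np.clip(0.1 + 0.5 * volatility, 0.05, 0.9))  # volatile -> learn faster
    beta = float(np.clip(5.0 * (1.0 - volatility), 0.5, 5.0))  # volatile -> explore more
    q[a] += alpha * delta

print("final action values:", np.round(q, 2))
```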

  7. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the required parameters in this way, and important information, such as diffusion coefficients and rate constants of ions, has sometimes not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, the paper demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena in the RFB system.
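
    As a generic illustration of fitting kinetic parameters to static-cell measurements, the sketch below fits a first-order relaxation to synthetic concentration data with nonlinear least squares. The functional form, data and parameter names are placeholders, not the physics-based RFB model used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order relaxation c(t) = c_inf + (c0 - c_inf) * exp(-k * t) as a
# stand-in for a kinetic sub-model; the data are synthetic.
def relaxation(t, c_inf, c0, k):
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

t = np.linspace(0.0, 3600.0, 60)             # seconds
true_params = (0.2, 1.0, 1.5e-3)             # "unknown" values used to make the data
rng = np.random.default_rng(1)
c_obs = relaxation(t, *true_params) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(relaxation, t, c_obs, p0=(0.1, 0.9, 1e-3))
perr = np.sqrt(np.diag(pcov))                # 1-sigma parameter uncertainties
print("estimates:", popt)
print("uncertainties:", perr)
```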

  8. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE PAGES

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.; ...

    2018-03-27

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the required parameters in this way, and important information, such as diffusion coefficients and rate constants of ions, has sometimes not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, the paper demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena in the RFB system.

  9. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
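
    The quantities being compared here (λ, sensitivities and elasticities of a projection matrix) are quick to compute numerically. The sketch below uses a made-up 3-stage matrix, not the killer whale data, and the standard eigenvector formulas; it illustrates the machinery rather than the paper's scaling criterion.

```python
import numpy as np

# Illustrative 3-stage projection matrix (made-up values, not Orcinus orca data).
A = np.array([[0.0, 0.3, 0.6],
              [0.7, 0.0, 0.0],
              [0.0, 0.8, 0.9]])

evals, right = np.linalg.eig(A)
i = np.argmax(evals.real)
lam = evals.real[i]                           # finite rate of increase, lambda
w = np.abs(right[:, i].real)                  # stable stage distribution

evalsT, left = np.linalg.eig(A.T)
j = np.argmax(evalsT.real)
v = np.abs(left[:, j].real)                   # reproductive values (left eigenvector)

# Sensitivities s_ij = v_i * w_j / <v, w>; elasticities e_ij = (a_ij / lambda) * s_ij.
S = np.outer(v, w) / np.dot(v, w)
E = (A / lam) * S
print("lambda = %.3f" % lam)
print("elasticities sum to %.3f" % E.sum())   # elasticities of lambda sum to 1
```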

  10. Development of a second order closure model for computation of turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Varma, A. K.; Donaldson, C. D.

    1974-01-01

    A typical eddy box model for the second-order closure of turbulent, multispecies, reacting flows was developed. The model structure was quite general and was valid for an arbitrary number of species. For the case of a reaction involving three species, the nine model parameters were determined from equations for nine independent first- and second-order correlations. The model enabled calculation of any higher-order correlation involving mass fractions, temperatures, and reaction rates in terms of first- and second-order correlations. Model predictions for the reaction rate were in very good agreement with exact solutions of the reaction rate equations for a number of assumed flow distributions.

  11. Modeling Shear Induced Von Willebrand Factor Binding to Collagen

    NASA Astrophysics Data System (ADS)

    Dong, Chuqiao; Wei, Wei; Morabito, Michael; Webb, Edmund; Oztekin, Alparslan; Zhang, Xiaohui; Cheng, Xuanhong

    2017-11-01

    Von Willebrand factor (vWF) is a blood glycoprotein that binds with platelets and collagen on injured vessel surfaces to form clots. VWF bioactivity is shear flow induced: at low shear, binding between VWF and other biological entities is suppressed; for high shear rate conditions - as are found near arterial injury sites - VWF elongates, activating its binding with platelets and collagen. Based on parameters derived from single molecule force spectroscopy experiments, we developed a coarse-grain molecular model to simulate bond formation probability as a function of shear rate. By introducing a binding criterion that depends on the conformation of a sub-monomer molecular feature of our model, the model predicts shear-induced binding, even for conditions where binding is highly energetically favorable. We further investigate the influence of various model parameters on the ability to predict shear-induced binding (vWF length, collagen site density and distribution, binding energy landscape, and slip/catch bond length) and demonstrate parameter ranges where the model provides good agreement with existing experimental data. Our results may be important for understanding vWF activity and also for achieving targeted drug therapy via biomimetic synthetic molecules. National Science Foundation (NSF), Division of Mathematical Sciences (DMS).

  12. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    NASA Astrophysics Data System (ADS)

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.

  13. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    USGS Publications Warehouse

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models, otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
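
    The zero-dimensional control-volume balance described above can be sketched with standard Partheniades erosion and Krone deposition terms driven by a sinusoidal semidiurnal tide. All coefficients below are generic placeholder values, not the San Francisco Bay calibration.

```python
import numpy as np

# Zero-dimensional cohesive sediment balance under a semidiurnal tide.
# Partheniades erosion / Krone deposition; all values are placeholders.
rho_w, Cd = 1000.0, 0.0025      # water density (kg/m3), drag coefficient
u_max = 0.6                     # maximum tidal current (m/s)
tau_ce, tau_cd = 0.2, 0.1       # critical shear stress for erosion / deposition (Pa)
M, ws = 5e-5, 5e-4              # erosion rate constant (kg/m2/s), settling velocity (m/s)
h, C = 5.0, 0.05                # water depth (m), suspended sediment concentration (kg/m3)
bed = 0.0                       # cumulative bed mass change (kg/m2)

dt, T = 60.0, 12.42 * 3600.0    # time step (s), semidiurnal period (s)
for t in np.arange(0.0, 10 * T, dt):
    u = u_max * np.sin(2.0 * np.pi * t / T)
    tau = rho_w * Cd * u * u                              # bed shear stress
    erosion = M * max(tau / tau_ce - 1.0, 0.0)
    deposition = ws * C * max(1.0 - tau / tau_cd, 0.0)
    C = max(C + (erosion - deposition) / h * dt, 0.0)
    bed += (deposition - erosion) * dt

print("net bed mass change after 10 tidal cycles: %.4f kg/m2" % bed)
```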

  14. Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings

    NASA Astrophysics Data System (ADS)

    Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.

    2014-12-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).
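
    The stage-discharge relation at the core of this approach is typically a power law per hydraulic control, Q = a(h - b)^c. The sketch below fits that form to made-up gaugings by weighted nonlinear least squares, using gauging uncertainties as weights; it is a simplified non-Bayesian stand-in for the full Bayesian inference described in the presentation, and all data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law rating curve Q = a * (h - b)**c fitted to gaugings with
# individual uncertainties. Data and initial guesses are made up.
def rating(h, a, b, c):
    return a * np.clip(h - b, 1e-6, None) ** c

h_gauge = np.array([0.6, 0.9, 1.3, 1.8, 2.5, 3.1])      # stage (m)
q_gauge = np.array([2.1, 6.0, 14.5, 31.0, 62.0, 95.0])  # discharge (m3/s)
sigma_q = 0.07 * q_gauge                                # ~7% gauging uncertainty

popt, pcov = curve_fit(rating, h_gauge, q_gauge, p0=(10.0, 0.3, 1.7),
                       sigma=sigma_q, absolute_sigma=True)
a, b, c = popt
print("a = %.2f, b = %.2f m, c = %.2f" % (a, b, c))
print("Q at 2.0 m stage: %.1f m3/s" % rating(2.0, *popt))
```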

  15. Fire Detection Tradeoffs as a Function of Vehicle Parameters

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Dietrich, Daniel L.; Brooker, John E.; Meyer, Marit E.; Ruff, Gary A.

    2016-01-01

    Fire survivability depends on the detection of and response to a fire before it has produced an unacceptable environment in the vehicle. This detection time is the result of interplay between the fire burning and growth rates; the vehicle size; the detection system design; the transport time to the detector (controlled by the level of mixing in the vehicle); and the rate at which the life support system filters the atmosphere, potentially removing the detected species or particles. Given the large differences in critical vehicle parameters (volume, mixing rate and filtration rate), the detection approach that works for a large vehicle (e.g. the ISS) may not be the best choice for a smaller crew capsule. This paper examines the impact of vehicle size and environmental control and life support system parameters on the detectability of fires in comparison to the hazard they present. A lumped element model was developed that considers smoke, heat, and toxic product release rates in comparison to mixing and filtration rates in the vehicle. Recent work has quantified the production rate of smoke and several hazardous species from overheated spacecraft polymers. These results are used as the input data set in the lumped element model in combination with the transport behavior of major toxic products released by overheating spacecraft materials to evaluate the necessary alarm thresholds to enable appropriate response to the fire hazard.

  16. Assessment of parameter regionalization methods for modeling flash floods in China

    NASA Astrophysics Data System (ADS)

    Ragettli, Silvan; Zhou, Jian; Wang, Haijing

    2017-04-01

    Rainstorm flash floods are a common and serious phenomenon during the summer months in many hilly and mountainous regions of China. For this study, we develop a modeling strategy for simulating flood events in small river basins of four Chinese provinces (Shanxi, Henan, Beijing, Fujian). The presented research is part of preliminary investigations for the development of a national operational model for predicting and forecasting hydrological extremes in basins of size 10 - 2000 km2, whereas most of these basins are ungauged or poorly gauged. The project is supported by the China Institute of Water Resources and Hydropower Research within the framework of the national initiative for flood prediction and early warning system for mountainous regions in China (research project SHZH-IWHR-73). We use the USGS Precipitation-Runoff Modeling System (PRMS) as implemented in the Java modeling framework Object Modeling System (OMS). PRMS can operate at both daily and storm timescales, switching between the two using a precipitation threshold. This functionality allows the model to perform continuous simulations over several years and to switch to the storm mode to simulate storm response in greater detail. The model was set up for fifteen watersheds for which hourly precipitation and runoff data were available. First, automatic calibration based on the Shuffled Complex Evolution method was applied to different hydrological response unit (HRU) configurations. The Nash-Sutcliffe efficiency (NSE) was used as the assessment criterion, whereas only runoff data from storm events were considered. HRU configurations reflect the drainage-basin characteristics and depend on assumptions regarding drainage density and minimum HRU size. We then assessed the sensitivity of optimal parameters to different HRU configurations. Finally, the transferability to other watersheds of optimal model parameters that were not sensitive to HRU configurations was evaluated. Model calibration for the 15 catchments resulted in good model performance (NSE > 0.5) in 10 and medium performance (NSE > 0.2) in 3 catchments. Optimal model parameters proved to be relatively insensitive to different HRU configurations. This suggests that dominant controls on hydrologic parameter transfer can potentially be identified based on catchment attributes describing meteorological, geological or landscape characteristics. Parameter regionalization based on a principal component analysis (PCA) nearest neighbor search (using all available catchment attributes) resulted in a 54% success rate in transferring optimal parameter sets while still yielding acceptable model performance. Data from more catchments are required to further increase the parameter transferability success rate or to develop regionalization strategies for individual parameters.

  17. Effect of rheological parameters on curing rate during NBR injection molding

    NASA Astrophysics Data System (ADS)

    Kyas, Kamil; Stanek, Michal; Manas, David; Skrobak, Adam

    2013-04-01

    In this work, the non-isothermal injection molding process for an NBR rubber mixture, considering the Isayev-Deng curing kinetic model and a generalized Newtonian model with Carreau-WLF viscosity, was modeled using the finite element method in order to understand the effect of volume flow rate, the index of non-Newtonian behavior and relaxation time on the temperature profile and curing rate. It was found that, for a specific geometry and processing conditions, an increase in relaxation time or in the index of non-Newtonian behavior increases the curing rate due to viscous dissipation taking place at the flow domain walls.

  18. Application of neural network in the study of combustion rate of natural gas/diesel dual fuel engine.

    PubMed

    Yan, Zhao-Da; Zhou, Chong-Guang; Su, Shi-Chuan; Liu, Zhen-Tao; Wang, Xi-Zhen

    2003-01-01

    In order to predict and improve the performance of a natural gas/diesel dual fuel engine (DFE), a combustion rate model based on a feedforward neural network was built to study the combustion process of the DFE. The effect of the operating parameters on combustion rate was also studied by means of this model. The study showed that the predicted results were in good agreement with the experimental data. It was proved that the developed combustion rate model could be used to successfully predict and optimize the combustion process of the dual fuel engine.

  19. A framework for scalable parameter estimation of gene circuit models using structural information.

    PubMed

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time-series microarray dataset. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches that exploit domain-specific information may be key to the reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.

  20. Spatiotemporal variation in reproductive parameters of yellow-bellied marmots.

    PubMed

    Ozgul, Arpat; Oli, Madan K; Olson, Lucretia E; Blumstein, Daniel T; Armitage, Kenneth B

    2007-11-01

    Spatiotemporal variation in reproductive rates is a common phenomenon in many wildlife populations, but the population dynamic consequences of spatial and temporal variability in different components of reproduction remain poorly understood. We used 43 years (1962-2004) of data from 17 locations and a capture-mark-recapture (CMR) modeling framework to investigate the spatiotemporal variation in reproductive parameters of yellow-bellied marmots (Marmota flaviventris), and its influence on the realized population growth rate. Specifically, we estimated and modeled breeding probabilities of two-year-old females (earliest age of first reproduction), >2-year-old females that have not reproduced before (subadults), and >2-year-old females that have reproduced before (adults), as well as the litter sizes of two-year old and >2-year-old females. Most reproductive parameters exhibited spatial and/or temporal variation. However, reproductive parameters differed with respect to their relative influence on the realized population growth rate (lambda). Litter size had a stronger influence than did breeding probabilities on both spatial and temporal variations in lambda. Our analysis indicated that lambda was proportionately more sensitive to survival than recruitment. However, the annual fluctuation in litter size, abetted by the breeding probabilities, accounted for most of the temporal variation in lambda.

  1. Effects of Variant Rates and Noise on Epidemic Spreading

    NASA Astrophysics Data System (ADS)

    Li, Wei; Gao, Zong-Mao; Gu, Jiao

    2011-05-01

    We introduce variant rates, for both infection and recovery, and noise into the susceptible-infected-removed (SIR) model for epidemic spreading. The changing rates mainly reflect the changing profile of an epidemic during its evolution, while the noise parameter, drawn from a given distribution (i.e., Gaussian), describes the fluctuations of the infection and recovery rates. The numerical simulations show that the SIR model with variant rates and noise can improve the fit to real SARS data in the near-stationary stage.
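
    A minimal Euler-stepped sketch of an SIR system with time-varying, noisy rates is given below. The specific rate profiles, noise levels and initial conditions are illustrative placeholders, not the values fitted to the SARS data in the paper.

```python
import numpy as np

# SIR with time-varying infection/recovery rates plus Gaussian noise,
# integrated with a simple explicit Euler scheme. All values are illustrative.
rng = np.random.default_rng(42)
S, I, R = 1.0 - 1e-4, 1e-4, 0.0            # fractions of the population

dt, days = 0.1, 120
for step in range(int(days / dt)):
    t = step * dt
    beta = 0.4 * np.exp(-t / 60.0) + rng.normal(0.0, 0.02)    # declining infection rate + noise
    gamma = 0.1 * (1.0 + t / 200.0) + rng.normal(0.0, 0.01)   # rising recovery rate + noise
    beta, gamma = max(beta, 0.0), max(gamma, 0.0)
    new_inf = beta * S * I * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("final susceptible fraction: %.3f" % S)
```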

  2. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.

  3. Crystal Growth Simulations To Establish Physically Relevant Kinetic Parameters from the Empirical Kolmogorov-Johnson-Mehl-Avrami Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dill, Eric D.; Folmer, Jacob C.W.; Martin, James D.

    A series of simulations was performed to enable interpretation of the material and physical significance of the parameters defined in the Kolmogorov, Johnson and Mehl, and Avrami (KJMA) rate expression commonly used to describe phase-boundary-controlled reactions of condensed matter. The parameters k, n, and t_0 are shown to be highly correlated, which, if unaccounted for, seriously challenges mechanistic interpretation. It is demonstrated that rate measurements exhibit an intrinsic uncertainty without precise knowledge of the location and orientation of nucleation with respect to the free volume into which it grows. More significantly, it is demonstrated that the KJMA rate constant k is highly dependent on sample size. However, under the simulated conditions of slow nucleation relative to crystal growth, correcting for sample volume and sample anisotropy affords a means to eliminate the experimental-condition dependence of the KJMA rate constant k, producing the material-specific parameter, the velocity of the phase boundary, v_pb.
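
    The KJMA form and the k-n-t_0 correlation the simulations highlight can be reproduced on synthetic data with a nonlinear fit. The sketch below uses X(t) = 1 - exp(-(k(t - t_0))^n) and made-up data purely to show how the parameter covariance exposes that correlation; it does not reproduce the paper's sample-size analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# KJMA (Avrami) transformed fraction fitted to synthetic data to expose the
# strong correlation among k, n and t0 noted in the abstract.
def kjma(t, k, n, t0):
    tau = np.clip(t - t0, 0.0, None)
    return 1.0 - np.exp(-(k * tau) ** n)

t = np.linspace(0.0, 200.0, 120)
rng = np.random.default_rng(3)
x_obs = kjma(t, 0.02, 2.5, 20.0) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(kjma, t, x_obs, p0=(0.03, 2.0, 10.0))
sd = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(sd, sd)                 # parameter correlation matrix
print("k, n, t0 =", np.round(popt, 3))
print("corr(k, n) = %.2f, corr(k, t0) = %.2f" % (corr[0, 1], corr[0, 2]))
```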

  4. Metabolic enzyme cost explains variable trade-offs between microbial growth rate and yield

    PubMed Central

    Ferris, Michael; Bruggeman, Frank J.

    2018-01-01

    Microbes may maximize the number of daughter cells per time or per amount of nutrients consumed. These two strategies correspond, respectively, to the use of enzyme-efficient or substrate-efficient metabolic pathways. In reality, fast growth is often associated with wasteful, yield-inefficient metabolism, and a general thermodynamic trade-off between growth rate and biomass yield has been proposed to explain this. We studied growth rate/yield trade-offs by using a novel modeling framework, Enzyme-Flux Cost Minimization (EFCM), and by assuming that the growth rate depends directly on the enzyme investment per rate of biomass production. In a comprehensive mathematical model of core metabolism in E. coli, we screened all elementary flux modes leading to cell synthesis, characterized them by the growth rates and yields they provide, and studied the shape of the resulting rate/yield Pareto front. By varying the model parameters, we found that the rate/yield trade-off is not universal, but depends on metabolic kinetics and environmental conditions. A prominent trade-off emerges under oxygen-limited growth, where yield-inefficient pathways support a 2-to-3 times higher growth rate than yield-efficient pathways. EFCM can be widely used to predict optimal metabolic states and growth rates under varying nutrient levels, perturbations of enzyme parameters, and single or multiple gene knockouts. PMID:29451895

  5. Vectorlike fermions and Higgs effective field theory revisited

    DOE PAGES

    Chen, Chien-Yi; Dawson, S.; Furlan, Elisabetta

    2017-07-10

    Heavy vectorlike quarks (VLQs) appear in many models of beyond the Standard Model physics. Direct experimental searches require these new quarks to be heavy, ≳ 800–1000 GeV. Here, we perform a global fit of the parameters of simple VLQ models in minimal representations of SU(2)_L to precision data and Higgs rates. One interesting connection between anomalous Zbb̄ interactions and Higgs physics in VLQ models is discussed. Finally, we present our analysis in an effective field theory (EFT) framework and show that the parameters of VLQ models are already highly constrained. Exact and approximate analytical formulas for the S and T parameters in the VLQ models we consider are available in the Supplemental Material as Mathematica files.

  6. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λ_det(t) + λ_rnd(t), where λ_det(t) is a deterministic function of time and λ_rnd(t) is a random function. The parameters of λ_det(t) and λ_rnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and average statistical dependences are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium as a function of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value of each dependence; and the time at which that maximum is reached. The dependence of these parameters on the energy ratio of the deterministic and random components of the input rate is investigated.
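
    A minimal discrete-time simulation of such a system is sketched below: visitors arrive at entry gates with rate λ(t) = λ_det(t) + λ_rnd(t) and are served by a fixed number of gates. The Gaussian-bump shape of λ_det(t), the noise level and the gate capacities are illustrative placeholders, not the parameters identified from the stadium data.

```python
import numpy as np

# Non-stationary queue at stadium entry gates with arrival rate
# lambda(t) = lambda_det(t) + lambda_rnd(t). All values are placeholders.
rng = np.random.default_rng(7)
dt, horizon = 1.0, 180.0             # minutes
gates, service_rate = 20, 3.0        # gates, visitors per gate per minute

queue, served, max_queue = 0, 0, 0
t = 0.0
while t < horizon:
    lam_det = 400.0 * np.exp(-((t - 60.0) / 30.0) ** 2)   # peak about an hour before kick-off
    lam_rnd = rng.normal(0.0, 40.0)                       # random component of the input rate
    lam = max(lam_det + lam_rnd, 0.0)                     # arrivals per minute
    arrivals = rng.poisson(lam * dt)
    capacity = rng.poisson(gates * service_rate * dt)
    departures = min(queue + arrivals, capacity)
    queue += arrivals - departures
    served += departures
    max_queue = max(max_queue, queue)
    t += dt

print("visitors served: %d, maximum queue length: %d" % (served, max_queue))
```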

  7. Io - A volcanic flow model for the hot spot emission spectrum and a thermostatic mechanism

    NASA Technical Reports Server (NTRS)

    Sinton, V. M.

    1982-01-01

    The hot spots of Io are modeled as a steady state of active areas at 600 K, continuing creation of new lava flows and calderas, cooling off of recent flows and calderas, and the cessation of radiation of old flows and calderas from the accumulation of insulation added by resurfacing. There are three adjustable parameters in this model: the area of active sources at 600 K, the rate of production of new area that is cooling, and the temperature of cessation of emission as the result of resurfacing. The resurfacing rate sets constraints on this last parameter. The emission spectrum computed with reasonable values for these parameters is an excellent match to the spectrum from recent observations. A thermostatic mechanism is described whereby the volcanic activity is turned on for a long period of time and is then turned off for a nearly equal period. As a result the presently observed internal heat flow of approximately 1.5 W/sq m may be as much as twice the rate of production of internal heat. Thus the restrictions placed on theories of tidal dissipation by the observed heat flow may be partially relieved.

  8. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
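
    The estimation machinery referred to here (sequential importance sampling/resampling for a state-space model) can be illustrated with a minimal bootstrap filter. The scalar log-abundance random walk with Poisson counts below, and all of its parameters, are placeholders; the kernel smoothing the authors use to combat particle depletion when states and parameters are estimated jointly is omitted.

```python
import numpy as np

# Bootstrap SISR filter for a scalar state-space model: log-abundance random
# walk observed through Poisson counts. Model and values are illustrative.
rng = np.random.default_rng(11)
T, n_particles = 25, 2000
drift, sigma_proc = 0.05, 0.1

# Simulate "true" states and observed counts.
x_true = np.log(50.0) + np.cumsum(rng.normal(drift, sigma_proc, T))
y_obs = rng.poisson(np.exp(x_true))

particles = rng.normal(np.log(50.0), 0.5, n_particles)
estimates = []
for t in range(T):
    particles = particles + rng.normal(drift, sigma_proc, n_particles)  # propagate
    log_w = y_obs[t] * particles - np.exp(particles)                    # Poisson log-likelihood (up to a constant)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)                # resample
    particles = particles[idx]
    estimates.append(np.exp(particles).mean())

print("final abundance estimate: %.1f (true %.1f)" % (estimates[-1], np.exp(x_true[-1])))
```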

  9. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
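
    The estimation step described here is a simple sum over intensity bins once the lognormal fatality-rate curve is specified. The sketch below uses made-up θ and β values and a made-up exposure breakdown purely to show the arithmetic; the published PAGER parameters are country-specific and are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Fatality estimate: population exposed at each shaking intensity level times
# a two-parameter lognormal fatality rate, nu(S) = Phi(ln(S/theta)/beta).
# theta, beta and the exposure numbers are illustrative placeholders.
theta, beta = 14.0, 0.2
intensities = np.array([6.0, 7.0, 8.0, 9.0])          # shaking intensity levels (MMI)
exposed = np.array([2.0e6, 5.0e5, 8.0e4, 5.0e3])      # population exposed at each level

fatality_rate = norm.cdf(np.log(intensities / theta) / beta)
estimated_deaths = float(np.sum(exposed * fatality_rate))
print("estimated fatalities: %.0f" % estimated_deaths)
```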

  10. Evaluating growth of the Porcupine Caribou Herd using a stochastic model

    USGS Publications Warehouse

    Walsh, Noreen E.; Griffith, Brad; McCabe, Thomas R.

    1995-01-01

    Estimates of the relative effects of demographic parameters on population rates of change, and of the level of natural variation in these parameters, are necessary to address potential effects of perturbations on populations. We used a stochastic model, based on survival and reproduction estimates of the Porcupine Caribou (Rangifer tarandus granti) Herd (PCH), during 1983-89 and 1989-92 to obtain distributions of potential population rates of change (r). The distribution of r produced by 1,000 trajectories of our simulation model (1983-89, r̄ = 0.013; 1989-92, r̄ = 0.003) encompassed the rate of increase calculated from an independent series of photo-survey data over the same years (1983-89, r = 0.048; 1989-92, r = -0.035). Changes in adult female survival had the largest effect on r, followed by changes in calf survival. We hypothesized that petroleum development on calving grounds, or changes in calving and post-calving habitats due to global climate change, would affect model input parameters. A decline in annual adult female survival from 0.871 to 0.847, or a decline in annual calf survival from 0.518 to 0.472, would be sufficient to cause a declining population, if all other input estimates remained the same. We then used these lower survival rates, in conjunction with our estimated amount of among-year variation, to determine a range of resulting population trajectories. Stochastic models can be used to better understand dynamics of populations, optimize sampling investment, and evaluate potential effects of various factors on population growth.

  11. Modeling of laser-induced ionization of solid dielectrics for ablation simulations: role of effective mass

    NASA Astrophysics Data System (ADS)

    Gruzdev, Vitaly

    2010-11-01

    Modeling of laser-induced ionization and heating of conduction-band electrons by laser radiation frequently serves as a basis for simulations supporting experimental studies of laser-induced ablation and damage of solid dielectrics. Together with the band gap and the electron-particle collision rate, the effective electron mass is one of the material parameters employed for ionization modeling. The exact value of the effective mass is not known for many materials frequently utilized in experiments, e.g., fused silica and glasses. For that reason, the value of the effective mass is arbitrarily varied around "reasonable values" for the ionization modeling; in fact, it is utilized as a fitting parameter to fit experimental data on the dependence of the ablation or damage threshold on laser parameters. In this connection, we study how strongly variations of the effective mass influence the conduction-band electron density. We consider the influence of the effective mass on the photo-ionization rate and the impact-ionization rate. In particular, it is shown that the photo-ionization rate can vary by 2-4 orders of magnitude when the effective mass is varied by 50%. Impact ionization shows a much weaker dependence on effective mass, but it significantly enhances the variations in seed-electron density produced by photo-ionization. Utilizing these results, we demonstrate that varying the effective mass by 50% produces variations in conduction-band electron density of 6 orders of magnitude. In this connection, we discuss general issues with current models of laser-induced ionization.

  12. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    PubMed Central

    Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
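
    The hybrid idea (a linear model for the linear component, SVR for the nonlinear residual) can be sketched as below on synthetic data. The PSO step the authors use to tune the SVR and ARIMA parameters is omitted, and the series, orders and hyperparameters are illustrative placeholders rather than the crime-rate setup of the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

# ARIMA captures the linear component; SVR models the ARIMA residuals from
# lagged residuals. Data and settings are illustrative only.
rng = np.random.default_rng(5)
t = np.arange(120)
y = 50 + 0.3 * t + 5 * np.sin(t / 6.0) + rng.normal(0.0, 1.0, t.size)

arima_res = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima_res.resid

lags = 3                                           # lagged residuals as SVR features
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
svr = SVR(C=10.0, epsilon=0.1).fit(X, target)

linear_forecast = arima_res.forecast(steps=1)[0]
nonlinear_correction = svr.predict(resid[-lags:].reshape(1, -1))[0]
print("hybrid one-step forecast: %.2f" % (linear_forecast + nonlinear_correction))
```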

  13. Ensemble engineering and statistical modeling for parameter calibration towards optimal design of microbial fuel cells

    NASA Astrophysics Data System (ADS)

    Sun, Hongyue; Luo, Shuai; Jin, Ran; He, Zhen

    2017-07-01

    Mathematical modeling is an important tool to investigate the performance of a microbial fuel cell (MFC) towards its optimized design. To overcome the shortcomings of traditional MFC models, an ensemble model is developed in this study by integrating both an engineering model and statistical analytics for the extrapolation scenarios. Such an ensemble model can reduce the laboring effort in parameter calibration and requires fewer measurement data to achieve accuracy comparable to a traditional statistical model under both normal and extreme operation regions. Based on different weightings of current generation and organic removal efficiency, the ensemble model can give recommended input factor settings to achieve the best current generation and organic removal efficiency. The model predicts a set of optimal design factors for the present tubular MFCs, including an anode flow rate of 3.47 mL min⁻¹, an organic concentration of 0.71 g L⁻¹, and a catholyte pumping flow rate of 14.74 mL min⁻¹, to achieve the peak current of 39.2 mA. To maintain 100% organic removal efficiency, the anode flow rate and organic concentration should be kept below 1.04 mL min⁻¹ and 0.22 g L⁻¹, respectively. The developed ensemble model can potentially be modified to model other types of MFCs or bioelectrochemical systems.

  14. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    PubMed

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  15. Bimodal star formation - Constraints from the solar neighborhood

    NASA Technical Reports Server (NTRS)

    Wyse, Rosemary F. G.; Silk, J.

    1987-01-01

    The chemical evolution resulting from a simple model of bimodal star formation is investigated, using constraints from the solar neighborhood to set the parameters of the initial mass function and star formation rate. The two modes are an exclusively massive star mode, which forms stars at an exponentially declining rate, and a mode which contains stars of all masses and has a constant star formation rate. Satisfactory agreement with the age-metallicity relation for the thin disk and with the metallicity structure of the thin-disk and spheroid stars is possible only for a small range of parameter values. The preferred model offers a resolution to several of the long-standing problems of galactic chemical evolution, including explanations of the age-metallicity relation, the gas consumption time scale, and the stellar cumulative metallicity distributions.

  16. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts: the global 3-D rigid motion of the head, non-rigid translational motion in the jaw area, and local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the eye and mouth blocks, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  17. The Effects of Population Density on Juvenile Growth Rate in White-Tailed Deer

    NASA Astrophysics Data System (ADS)

    Barr, Brannon; Wolverton, Steve

    2014-10-01

    Animal body size is driven by habitat quality, food availability, and nutrition. Adult size can relate to birth weight, to length of the ontogenetic growth period, and/or to the rate of growth. Data requirements are high for studying these growth mechanisms, but large datasets exist for some game species. In North America, large harvest datasets exist for white-tailed deer (Odocoileus virginianus), but such data are collected under a variety of conditions and are generally dismissed for ecological research beyond local population and habitat management. We contend that such data are useful for studying the ecology of white-tailed deer growth and body size when analyzed at ordinal scale. In this paper, we test the response of growth rate to food availability by fitting a logarithmic equation that estimates growth rate only to harvest data from Fort Hood, Texas, and track changes in growth rate over time. Results of this ordinal scale model are compared to previously published models that include additional parameters, such as birth weight and adult weight. It is shown that body size responds to food availability by variation in growth rate. Models that estimate multiple parameters may not work with harvest data because they are prone to error, which renders estimates from complex models too variable to detect interannual changes in growth rate that this ordinal scale model captures. This model can be applied to harvest data, from which inferences about factors that influence animal growth and body size (e.g., habitat quality and nutritional availability) can be drawn.

  18. The effects of population density on juvenile growth rate in white-tailed deer.

    PubMed

    Barr, Brannon; Wolverton, Steve

    2014-10-01

    Animal body size is driven by habitat quality, food availability, and nutrition. Adult size can relate to birth weight, to length of the ontogenetic growth period, and/or to the rate of growth. Data requirements are high for studying these growth mechanisms, but large datasets exist for some game species. In North America, large harvest datasets exist for white-tailed deer (Odocoileus virginianus), but such data are collected under a variety of conditions and are generally dismissed for ecological research beyond local population and habitat management. We contend that such data are useful for studying the ecology of white-tailed deer growth and body size when analyzed at ordinal scale. In this paper, we test the response of growth rate to food availability by fitting a logarithmic equation that estimates growth rate only to harvest data from Fort Hood, Texas, and track changes in growth rate over time. Results of this ordinal scale model are compared to previously published models that include additional parameters, such as birth weight and adult weight. It is shown that body size responds to food availability by variation in growth rate. Models that estimate multiple parameters may not work with harvest data because they are prone to error, which renders estimates from complex models too variable to detect interannual changes in growth rate that this ordinal scale model captures. This model can be applied to harvest data, from which inferences about factors that influence animal growth and body size (e.g., habitat quality and nutritional availability) can be drawn.

  19. Low survival rates of Swan Geese (Anser cygnoides) estimated from neck-collar resighting and telemetry

    USGS Publications Warehouse

    Choi, Chang-Yong; Lee, Ki-Sup; Poyarkov, Nikolay D.; Park, Jin-Young; Lee, Hansoo; Takekawa, John Y.; Smith, Lacy M.; Ely, Craig R.; Wang, Xin; Cao, Lei; Fox, Anthony D.; Goroshko, Oleg; Batbayar, Nyambayar; Prosser, Diann J.; Xiao, Xiangming

    2016-01-01

    Waterbird survival rates are a key component of demographic modeling used for effective conservation of long-lived threatened species. The Swan Goose (Anser cygnoides) is globally threatened and the most vulnerable goose species endemic to East Asia due to its small and rapidly declining population. To address a current knowledge gap in demographic parameters of the Swan Goose, available datasets were compiled from neck-collar resighting and telemetry studies, and two different models were used to estimate their survival rates. Results of a mark-resighting model using 15 years of neck-collar data (2001–2015) provided age-dependent survival rates and season-dependent encounter rates with a constant neck-collar retention rate. Annual survival rate was 0.638 (95% CI: 0.378–0.803) for adults and 0.122 (95% CI: 0.028–0.286) for first-year juveniles. Known-fate models were applied to the single season of telemetry data (autumn 2014) and estimated a mean annual survival rate of 0.408 (95% CI: 0.152–0.670) with higher but non-significant differences for adults (0.477) vs. juveniles (0.306). Our findings indicate that Swan Goose survival rates are comparable to the lowest rates reported for European or North American goose species. Poor survival may be a key demographic parameter contributing to their declining trend. Quantitative threat assessments and associated conservation measures, such as restricting hunting, may be a key step to mitigate for their low survival rates and maintain or enhance their population.

  20. Hyper- and viscoelastic modeling of needle and brain tissue interaction.

    PubMed

    Lehocky, Craig A; Yixing Shi; Riviere, Cameron N

    2014-01-01

    Deep needle insertion into the brain is important for both diagnostic and therapeutic clinical interventions. We have developed an automated system for robotically steering flexible needles within the brain to improve targeting accuracy. In this work, we have developed a finite element needle-tissue interaction model that allows for the investigation of safe parameters for needle steering. The tissue model implemented contains both hyperelastic and viscoelastic properties to simulate the instantaneous and time-dependent responses of brain tissue. Several needle models were developed with varying parameters to study the effects of the parameters on tissue stress, strain and strain rate during needle insertion and rotation. The parameters varied include needle radius, bevel angle, bevel tip fillet radius, insertion speed, and rotation speed. The results will guide the design of safe needle tips and control systems for intracerebral needle steering.

  1. Modeling and parameters identification of 2-keto-L-gulonic acid fed-batch fermentation.

    PubMed

    Wang, Tao; Sun, Jibin; Yuan, Jingqi

    2015-04-01

    This article presents a modeling approach for industrial 2-keto-L-gulonic acid (2-KGA) fed-batch fermentation by the mixed culture of Ketogulonicigenium vulgare (K. vulgare) and Bacillus megaterium (B. megaterium). A macrokinetic model of K. vulgare is constructed based on the simplified metabolic pathways. The reaction rates obtained from the macrokinetic model are then coupled into a bioreactor model such that the relationship between substrate feeding rates and the main state variables, e.g., the concentrations of the biomass, substrate and product, is constructed. A differential evolution algorithm using the Lozi map as the random number generator is utilized to perform the model parameter identification, with the industrial data of 2-KGA fed-batch fermentation. Validation results demonstrate that the model simulations of substrate and product concentrations are in good agreement with the measurements. Furthermore, the model simulations of biomass concentrations principally reflect the growth kinetics of the two microbes in the mixed culture.
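
    The identification step can be illustrated with a simplified batch growth/production model (feeding omitted) and SciPy's differential evolution optimizer, which uses its own random number generator rather than the Lozi map described in the article. The Monod structure, bounds and data below are placeholders, not the K. vulgare/B. megaterium macrokinetic model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Toy model: biomass X grows on substrate S (Monod kinetics) and forms product P.
# Used only to illustrate differential-evolution parameter identification.
def simulate(params, t_eval):
    mu_max, Ks, Yxs, qp = params
    def rhs(t, y):
        X, S, P = y
        S_pos = max(S, 0.0)
        mu = mu_max * S_pos / (Ks + S_pos)
        return [mu * X, -mu * X / Yxs, qp * X]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.1, 20.0, 0.0], t_eval=t_eval)
    return sol.y

t_obs = np.linspace(0.0, 40.0, 21)
true_params = (0.25, 0.5, 0.5, 0.05)
rng = np.random.default_rng(9)
y_obs = simulate(true_params, t_obs) + rng.normal(0.0, 0.1, (3, t_obs.size))

def objective(params):
    return float(np.sum((simulate(params, t_obs) - y_obs) ** 2))

bounds = [(0.05, 1.0), (0.01, 5.0), (0.1, 1.0), (0.0, 0.2)]
result = differential_evolution(objective, bounds, seed=9, maxiter=100, tol=1e-6)
print("estimated parameters:", np.round(result.x, 3))
```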

  2. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
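
    For reference, the sketch below (Python) evaluates the standard ETAS conditional intensity (Ogata, 1988) for a small hypothetical catalog and integrates it to obtain an expected event count. The background rate and triggering parameters are placeholders, not the zone-specific values estimated in this work.

        import numpy as np

        # ETAS conditional intensity: lambda(t) = mu + sum over past events of
        # K * exp(alpha * (M_i - M0)) / (t - t_i + c)**p   (Ogata, 1988).
        # Parameter values below are illustrative placeholders.
        mu, K, alpha, c, p, M0 = 0.2, 0.05, 1.0, 0.01, 1.2, 2.5

        # Hypothetical catalog: occurrence times (days) and magnitudes.
        t_events = np.array([1.3, 4.7, 5.0, 5.1, 9.8])
        m_events = np.array([3.1, 4.2, 2.8, 3.0, 3.6])

        def etas_rate(t):
            """Conditional intensity at time t given all events before t."""
            past = t_events < t
            dt = t - t_events[past]
            trig = K * np.exp(alpha * (m_events[past] - M0)) / (dt + c) ** p
            return mu + trig.sum()

        # Expected number of events in the next 30 days, by integrating the rate.
        grid = np.linspace(10.0, 40.0, 3001)
        expected = np.trapz([etas_rate(t) for t in grid], grid)
        print(f"rate today: {etas_rate(10.0):.3f} events/day; expected next 30 d: {expected:.1f}")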

  3. Complexity reduction of biochemical rate expressions.

    PubMed

    Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar

    2008-03-15

    The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models discussed in this article.

  4. Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.

  5. Calibration parameters used to simulate streamflow from application of the Hydrologic Simulation Program-FORTRAN Model (HSPF) to mountainous basins containing coal mines in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2005-01-01

    This report presents the Hydrologic Simulation Program-FORTRAN Model (HSPF) parameters for eight basins in the coal-mining region of West Virginia. The magnitude and characteristics of model parameters from this study will assist users of HSPF in simulating streamflow at other basins in the coal-mining region of West Virginia. The parameter for nominal capacity of the upper-zone storage, UZSN, increased from south to north. The increase in UZSN with the increase in basin latitude could be due to decreasing slopes, decreasing rockiness of the soils, and increasing soil depths from south to north. A special action was given to the parameter for fraction of ground-water inflow that flows to inactive ground water, DEEPFR. The basis for this special action was related to the seasonal movement of the water table and transpiration from trees. The models were most sensitive to DEEPFR and the parameter for interception storage capacity, CEPSC. The models were also fairly sensitive to the parameter for an index representing the infiltration capacity of the soil, INFILT; the parameter for indicating the behavior of the ground-water recession flow, KVARY; the parameter for the basic ground-water recession rate, AGWRC; the parameter for nominal capacity of the upper zone storage, UZSN; the parameter for the interflow inflow, INTFW; the parameter for the interflow recession constant, IRC; and the parameter for lower zone evapotranspiration, LZETP.

  6. The ROC Toolbox: A toolbox for analyzing receiver-operating characteristics derived from confidence ratings.

    PubMed

    Koen, Joshua D; Barrett, Frederick S; Harlow, Iain M; Yonelinas, Andrew P

    2017-08-01

    Signal-detection theory, and the analysis of receiver-operating characteristics (ROCs), has played a critical role in the development of theories of episodic memory and perception. The purpose of the current paper is to present the ROC Toolbox. This toolbox is a set of functions written in the Matlab programming language that can be used to fit various common signal detection models to ROC data obtained from confidence rating experiments. The goals for developing the ROC Toolbox were to create a tool (1) that is easy to use and easy for researchers to implement with their own data, (2) that can flexibly define models based on varying study parameters, such as the number of response options (e.g., confidence ratings) and experimental conditions, and (3) that provides optimization routines (e.g., maximum likelihood estimation) to obtain parameter estimates and numerous goodness-of-fit measures. The ROC Toolbox allows for various different confidence scales and currently includes the models commonly used in recognition memory and perception: (1) the unequal variance signal detection (UVSD) model, (2) the dual process signal detection (DPSD) model, and (3) the mixture signal detection (MSD) model. For each model fit to a given data set, the ROC Toolbox plots summary information about the best fitting model parameters and various goodness-of-fit measures. Here, we present an overview of the ROC Toolbox, illustrate how it can be used to input and analyze real data, and finish with a brief discussion on features that can be added to the toolbox.
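
    As an illustration of the kind of fit the toolbox automates, the sketch below (Python, not the toolbox's own MATLAB code) fits the UVSD model to hypothetical 6-point confidence-rating counts by maximum likelihood with SciPy; the counts, the starting values, and the log-gap parameterization of the criteria are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Hypothetical 6-point confidence-rating counts ("sure new" ... "sure old").
        target_counts = np.array([ 20,  30,  40,  60, 120, 230])
        lure_counts   = np.array([180, 120,  90,  60,  35,  15])

        def category_probs(mean, sd, criteria):
            """Multinomial category probabilities for a normal with given mean/sd
            split by ordered criteria."""
            cdf = norm.cdf(np.concatenate(([-np.inf], criteria, [np.inf])), mean, sd)
            return np.diff(cdf)

        def neg_log_lik(params):
            d, log_sigma, c1, *log_gaps = params
            criteria = c1 + np.concatenate(([0.0], np.cumsum(np.exp(log_gaps))))
            p_t = category_probs(d, np.exp(log_sigma), criteria)
            p_l = category_probs(0.0, 1.0, criteria)
            eps = 1e-12
            return -(target_counts * np.log(p_t + eps) + lure_counts * np.log(p_l + eps)).sum()

        # Parameters: d', log(sigma_target), first criterion, log-gaps between the rest.
        x0 = np.array([1.0, 0.2, -1.0, 0.0, 0.0, 0.0, 0.0])
        fit = minimize(neg_log_lik, x0, method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
        d_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print(f"UVSD fit: d' = {d_hat:.2f}, sigma_target = {sigma_hat:.2f}, -lnL = {fit.fun:.1f}")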

  7. Basal glycogenolysis in mouse skeletal muscle: in vitro model predicts in vivo fluxes

    NASA Technical Reports Server (NTRS)

    Lambeth, Melissa J.; Kushmerick, Martin J.; Marcinek, David J.; Conley, Kevin E.

    2002-01-01

    A previously published mammalian kinetic model of skeletal muscle glycogenolysis, consisting of literature in vitro parameters, was modified by substituting mouse-specific Vmax values. The model demonstrates that glycogen breakdown to lactate is under ATPase control. Our criterion for testing whether in vitro parameters could reproduce in vivo dynamics was the ability of the model to fit phosphocreatine (PCr) and inorganic phosphate (Pi) dynamic NMR data from ischemic basal mouse hindlimbs and predict biochemically-assayed lactate concentrations. Fitting was accomplished by optimizing four parameters--the ATPase rate coefficient, fraction of activated glycogen phosphorylase, and the equilibrium constants of creatine kinase and adenylate kinase (due to the absence of pH in the model). The optimized parameter values were physiologically reasonable, the resultant model fit the [PCr] and [Pi] timecourses well, and the model predicted the final measured lactate concentration. This result demonstrates that additional features of in vivo enzyme binding are not necessary for quantitative description of glycogenolytic dynamics.

  8. A model for hormonal control of the menstrual cycle: structural consistency but sensitivity with regard to data.

    PubMed

    Selgrade, J F; Harris, L A; Pasteur, R D

    2009-10-21

    This study presents a 13-dimensional system of delayed differential equations which predicts serum concentrations of five hormones important for regulation of the menstrual cycle. Parameters for the system are fit to two different data sets for normally cycling women. For these best fit parameter sets, model simulations agree well with the two different data sets but one model also has an abnormal stable periodic solution, which may represent polycystic ovarian syndrome. This abnormal cycle occurs for the model in which the normal cycle has estradiol levels at the high end of the normal range. Differences in model behavior are explained by studying hysteresis curves in bifurcation diagrams with respect to sensitive model parameters. For instance, one sensitive parameter is indicative of the estradiol concentration that promotes pituitary synthesis of a large amount of luteinizing hormone, which is required for ovulation. Also, it is observed that models with greater early follicular growth rates may have a greater risk of cycling abnormally.

  9. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its own uncertainty bounds. Therefore, this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
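
    The principal-component step of the reduction strategy can be sketched in isolation, as below (Python): a normalized local sensitivity matrix (rows = targets over several conditions, columns = reaction rate parameters) is decomposed by SVD and reactions are ranked by their loadings. The sensitivity matrix here is synthetic, and the ranking heuristic is an assumption for illustration rather than the exact criterion used in this work.

        import numpy as np

        # Hypothetical normalized local sensitivity matrix S[i, j] = d ln(target_i) / d ln(k_j):
        # rows = targets (e.g., ignition delay, flame speed, extinction strain rate at
        # several conditions), columns = reaction rate parameters.
        rng = np.random.default_rng(1)
        n_targets, n_reactions = 12, 40
        S = rng.normal(scale=0.05, size=(n_targets, n_reactions))
        S[:, [3, 7, 19]] += rng.normal(scale=1.0, size=(n_targets, 3))  # a few dominant reactions

        # Principal Component Analysis of S^T S via the SVD of S: the leading right
        # singular vectors point at the parameter combinations that matter most.
        U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
        explained = sigma ** 2 / np.sum(sigma ** 2)

        # Rank reactions by their loading on the components needed to reach 99% variance.
        n_keep = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
        importance = np.sqrt((Vt[:n_keep] ** 2 * explained[:n_keep, None]).sum(axis=0))
        ranked = np.argsort(importance)[::-1]
        print(f"{n_keep} principal components capture 99% of the sensitivity variance")
        print("most influential reaction indices:", ranked[:5])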

  10. Development and validation of chemistry agnostic flow battery cost performance model and application to nonaqueous electrolyte systems: Chemistry agnostic flow battery cost performance model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Alasdair; Thomsen, Edwin; Reed, David

    2016-04-20

    A chemistry agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using data from a 4 kW stack at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh⁻¹ for the storage system is identified.

  11. Kinetics of hydrophobic organic contaminant extraction from sediment by granular activated carbon.

    PubMed

    Rakowska, M I; Kupryianchyk, D; Smit, M P J; Koelmans, A A; Grotenhuis, J T C; Rijnaarts, H H M

    2014-03-15

    Ex situ solid phase extraction with granular activated carbon (GAC) is a promising technique to remediate contaminated sediments. The method's efficiency depends on the rate at which contaminants are transferred from the sediment to the surface of GAC. Here, we derive kinetic parameters for extraction of polycyclic aromatic hydrocarbons (PAH) from sediment by GAC, using a first-order multi-compartment kinetic model. The parameters were obtained by modeling sediment-GAC exchange kinetic data following a tiered model calibration approach. First, parameters for PAH desorption from sediment were calibrated using data from systems with 50% (by weight) GAC acting as an infinite sink. Second, the estimated parameters were used as fixed input to obtain GAC uptake kinetic parameters in sediment slurries with 4% GAC, representing the ex situ remediation scenario. PAH uptake rate constants (k_GAC) by GAC ranged from 0.44 to 0.0005 d⁻¹, whereas GAC sorption coefficients (K_GAC) ranged from 10^5.57 to 10^8.57 L kg⁻¹. These values are the first provided for GAC in the presence of sediment and show that ex situ extraction with GAC is sufficiently fast and effective to reduce the risks of the most available PAHs among those studied, such as fluorene, phenanthrene and anthracene. Copyright © 2013 Elsevier Ltd. All rights reserved.
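
    A minimal sketch of the first-order multi-compartment picture is given below (Python): fast- and slow-desorbing sediment fractions release PAH to water, which the GAC then takes up. The rate constants and fraction sizes are placeholders chosen within the reported ranges, not the calibrated values from the tiered fit.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical first-order multi-compartment exchange: fast and slow desorbing
        # sediment fractions release PAH to water, which the GAC then takes up.
        # Rate constants are placeholders within the ranges quoted above (d^-1).
        k_fast, k_slow, k_up = 0.4, 0.005, 0.44
        f_fast = 0.3                      # fast-desorbing fraction of the sediment load
        S0 = 100.0                        # total sorbed PAH, arbitrary mass units

        def rhs(t, y):
            s_fast, s_slow, c_water, gac = y
            desorb = k_fast * s_fast + k_slow * s_slow
            return [-k_fast * s_fast,
                    -k_slow * s_slow,
                    desorb - k_up * c_water,
                    k_up * c_water]

        sol = solve_ivp(rhs, (0.0, 365.0), [f_fast * S0, (1 - f_fast) * S0, 0.0, 0.0],
                        t_eval=np.linspace(0.0, 365.0, 366), rtol=1e-8)
        removed = sol.y[3][-1] / S0
        print(f"fraction of PAH transferred to GAC after one year: {removed:.2f}")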

  12. The Role of Heart-Rate Variability Parameters in Activity Recognition and Energy-Expenditure Estimation Using Wearable Sensors.

    PubMed

    Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan

    2017-07-24

    Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.
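
    The general pipeline can be sketched as below (Python): time-domain HRV features (mean RR, SDNN, RMSSD, pNN50) are computed per window, concatenated with accelerometer features, and passed to a classifier. The synthetic data, the specific feature set, and the random-forest classifier are assumptions for illustration, not the models compared in the study.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def hrv_features(rr_ms):
            """Basic time-domain HRV features from a window of RR intervals (ms)."""
            rr = np.asarray(rr_ms, dtype=float)
            diffs = np.diff(rr)
            return np.array([rr.mean(),                       # mean RR
                             rr.std(ddof=1),                  # SDNN
                             np.sqrt(np.mean(diffs ** 2)),    # RMSSD
                             np.mean(np.abs(diffs) > 50.0)])  # pNN50

        # Hypothetical windows: accelerometer summary features + HRV features + label.
        rng = np.random.default_rng(0)
        n_windows = 200
        acc_feats = rng.normal(size=(n_windows, 6))           # e.g., per-axis mean/variance
        rr_windows = [rng.normal(800, 60, size=60) for _ in range(n_windows)]
        hrv_feats = np.array([hrv_features(rr) for rr in rr_windows])
        X = np.hstack([acc_feats, hrv_feats])
        y = rng.integers(0, 6, size=n_windows)                # six activity classes (synthetic)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"cross-validated accuracy with HRV features: {scores.mean():.2f}")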

  13. Experimental characterization of post rigor mortis human muscle subjected to small tensile strains and application of a simple hyper-viscoelastic model.

    PubMed

    Gras, Laure-Lise; Laporte, Sébastien; Viot, Philippe; Mitton, David

    2014-10-01

    In models developed for impact biomechanics, muscles are usually represented with one-dimensional elements having active and passive properties. The passive properties of muscles are most often obtained from experiments performed on animal muscles, because limited data on human muscle are available. The aim of this study is thus to characterize the passive response of a human muscle in tension. Tensile tests at different strain rates (0.0045, 0.045, and 0.45 s⁻¹) were performed on 10 extensor carpi ulnaris muscles. A model composed of a nonlinear element defined with an exponential law in parallel with one or two Maxwell elements and considering basic geometrical features was proposed. The experimental results were used to identify the parameters of the model. The results for the first- and second-order model were similar. For the first-order model, the mean parameters of the exponential law are as follows: Young's modulus E (6.8 MPa) and curvature parameter α (31.6). The Maxwell element mean values are as follows: viscosity parameter η (1.2 MPa s) and relaxation time τ (0.25 s). Our results provide new data on a human muscle tested in vitro and a simple model with basic geometrical features that represent its behavior in tension under three different strain rates. This approach could be used to assess the behavior of other human muscles. © IMechE 2014.
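
    The sketch below (Python) evaluates a first-order model of this general kind, an exponential elastic branch in parallel with one Maxwell branch, under the three tested strain rates and with the reported mean parameter values. The exponential form sigma_e = (E/alpha)*(exp(alpha*eps) - 1) and the spring stiffness E_m = eta/tau are assumptions for illustration; the paper's exact formulation and geometry scaling are not reproduced.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Mean parameters reported above; the exponential-law form and eta = E_m * tau
        # are modeling assumptions for this sketch, not the authors' exact formulation.
        E, alpha = 6.8, 31.6          # MPa, curvature parameter
        eta, tau = 1.2, 0.25          # MPa*s, s
        E_m = eta / tau               # stiffness of the Maxwell spring, MPa

        def total_stress(strain_rate, t_end, n=400):
            """Stress under a constant strain-rate ramp: exponential branch in
            parallel with one Maxwell branch (sigma_v' = E_m*eps' - sigma_v/tau)."""
            def rhs(t, y):
                return [E_m * strain_rate - y[0] / tau]
            sol = solve_ivp(rhs, (0.0, t_end), [0.0], t_eval=np.linspace(0.0, t_end, n))
            eps = strain_rate * sol.t
            sigma_e = (E / alpha) * (np.exp(alpha * eps) - 1.0)   # assumed hyperelastic branch
            return sol.t, sigma_e + sol.y[0]

        for rate in (0.0045, 0.045, 0.45):            # the three tested strain rates, 1/s
            t, sigma = total_stress(rate, 0.10 / rate)   # pull to 10% strain
            print(f"strain rate {rate:>6} 1/s -> stress at 10% strain: {sigma[-1]:.3f} MPa")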

  14. Mathematical modeling of HIV-like particle assembly in vitro.

    PubMed

    Liu, Yuewu; Zou, Xiufen

    2017-06-01

    In vitro, the recombinant HIV-1 Gag protein can generate spherical particles with a diameter of 25-30 nm in a fully defined system. It has approximately 80 building blocks, and its assembly intermediates are geometrically diverse. Accordingly, there are a large number of nonlinear equations in the classical model. Therefore, it is difficult to compute values of geometry parameters for intermediates and to carry out mathematical analysis using the model. In this work, we develop a new model of HIV-like particle assembly in vitro by using six-fold symmetry of HIV-like particle assembly to decrease the number of geometry parameters. This method will greatly reduce computational costs and facilitate the application of the model. Then, we prove the existence and uniqueness of the positive equilibrium solution for this model with 79 nonlinear equations. Based on this model, we derive the interesting result that concentrations of all intermediates at equilibrium are independent of three important parameters, including two microscopic on-rate constants and the size of the nucleating structure. Before equilibrium, these three parameters influence the concentration variation rates of all intermediates. We also analyze the relationship between the initial concentration of building blocks and concentrations of all intermediates. Furthermore, the bounds of concentrations of free building blocks and HIV-like particles are estimated. These results will be helpful to guide HIV-like particle assembly experiments and improve our understanding of the assembly dynamics of HIV-like particles in vitro. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Correction to the Dynamic Tensile Strength of Ice and Ice-Silicate Mixtures (Lange & Ahrens 1983)

    NASA Astrophysics Data System (ADS)

    Stewart, S. T.; Ahrens, T. J.

    1999-03-01

    We present a correction to the Weibull parameters for ice and ice-silicate mixtures (Lange & Ahrens 1983). These parameters relate the dynamic tensile strength to the strain rate. These data are useful for continuum fracture models of ice.

  16. The effect of loudness on the reverberance of music: reverberance prediction using loudness models.

    PubMed

    Lee, Doheon; Cabrera, Densil; Martens, William L

    2012-02-01

    This study examines the auditory attribute that describes the perceived amount of reverberation, known as "reverberance." Listening experiments were performed using two signals commonly heard in auditoria: excerpts of orchestral music and western classical singing. Listeners adjusted the decay rate of room impulse responses prior to convolution with these signals, so as to match the reverberance of each stimulus to that of a reference stimulus. The analysis examines the hypothesis that reverberance is related to the loudness decay rate of the underlying room impulse response. This hypothesis is tested using computational models of time varying or dynamic loudness, from which parameters analogous to conventional reverberation parameters (early decay time and reverberation time) are derived. The results show that listening level significantly affects reverberance, and that the loudness-based parameters outperform related conventional parameters. Results support the proposed relationship between reverberance and the computationally predicted loudness decay function of sound in rooms. © 2012 Acoustical Society of America
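
    For reference, the conventional decay parameters mentioned above are typically obtained from a room impulse response by backward (Schroeder) integration, as sketched below (Python) for a synthetic exponentially decaying response. The impulse response is a placeholder, and the sketch covers only the conventional parameters, not the loudness-model-based ones proposed here.

        import numpy as np

        fs = 48000
        t = np.arange(0, int(2.0 * fs)) / fs
        rng = np.random.default_rng(3)
        # Synthetic room impulse response: exponentially decaying noise with a
        # 60 dB decay time of roughly 1.2 s (placeholder, not a measured hall).
        rt_true = 1.2
        rir = rng.normal(size=t.size) * 10 ** (-3.0 * t / rt_true)

        # Backward (Schroeder) integration gives the energy decay curve in dB.
        edc = np.cumsum(rir[::-1] ** 2)[::-1]
        edc_db = 10 * np.log10(edc / edc[0])

        def decay_time(db_start, db_end, extrapolate_to=60.0):
            """Fit a line to the EDC between two levels and extrapolate to -60 dB."""
            i0 = np.argmax(edc_db <= db_start)
            i1 = np.argmax(edc_db <= db_end)
            slope, intercept = np.polyfit(t[i0:i1], edc_db[i0:i1], 1)   # dB per second
            return -extrapolate_to / slope

        print(f"EDT (0 to -10 dB):  {decay_time(0.0, -10.0):.2f} s")
        print(f"T30 (-5 to -35 dB): {decay_time(-5.0, -35.0):.2f} s")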

  17. Variation of yield loci in finite element analysis by considering texture evolution for AA5042 aluminum sheets

    NASA Astrophysics Data System (ADS)

    Yoon, Jonghun; Kim, Kyungjin; Yoon, Jeong Whan

    2013-12-01

    A yield function has various material parameters that describe how a material responds plastically under given conditions. However, a significant number of mechanical tests are required to identify the many material parameters of a yield function. In this study, an effective method using crystal plasticity through a virtual experiment is introduced to develop the anisotropic yield function for AA5042. The crystal plasticity approach was used to predict the anisotropic response of the material in order to consider a number of stress or strain modes that would not otherwise be evident through mechanical testing. A rate-independent crystal plasticity model based on a smooth single-crystal yield surface, which removes the innate ambiguity of the rate-independent formulation, and a Taylor model for polycrystalline deformation behavior were employed to predict the material's response in the balanced biaxial stress, pure shear, and plane strain states and thereby identify the parameters of the anisotropic yield function of AA5042.

  18. Model development for naphthenic acids ozonation process.

    PubMed

    Al Jibouri, Ali Kamel H; Wu, Jiangning

    2015-02-01

    Naphthenic acids (NAs) are toxic constituents of oil sands process-affected water (OSPW), which is generated during the extraction of bitumen from oil sands. NAs consist mainly of carboxylic acids, which are generally biorefractory. For the treatment of OSPW, ozonation is a very beneficial method. It can significantly reduce the concentration of NAs and it can also convert NAs from biorefractory to biodegradable. In this study, a 2⁴ factorial design was used for the ozonation of OSPW to study the influences of the operating parameters (ozone concentration, oxygen/ozone flow rate, pH, and mixing) on the removal of model NAs in a semi-batch reactor. It was found that ozone concentration had the most significant effect on the NAs concentration compared to other parameters. An empirical model was developed to correlate the concentration of NAs with ozone concentration, oxygen/ozone flow rate, and pH. In addition, a theoretical analysis was conducted to gain insight into the relationship between the removal of NAs and the operating parameters.
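
    The sketch below (Python) shows how main effects and a first-order empirical model are typically extracted from a two-level 2⁴ design by least squares. The coded design matrix and responses are synthetic, with the ozone-concentration effect made dominant to mirror the finding above; they are not the OSPW measurements.

        import numpy as np
        from itertools import product

        # Coded levels (-1/+1) for the four factors: ozone concentration, O2/O3 flow
        # rate, pH, and mixing. Responses are hypothetical residual NA concentrations.
        factors = ["O3 conc", "flow rate", "pH", "mixing"]
        X_coded = np.array(list(product([-1, 1], repeat=4)), dtype=float)
        rng = np.random.default_rng(7)
        # Synthetic response dominated by ozone concentration, as in the study's finding.
        y = (40 - 12 * X_coded[:, 0] - 3 * X_coded[:, 1] - 2 * X_coded[:, 2]
             - 1 * X_coded[:, 3] + rng.normal(0.0, 1.5, size=16))

        # Least-squares fit of the empirical first-order model y = b0 + sum_i b_i * x_i.
        A = np.column_stack([np.ones(16), X_coded])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        for name, b in zip(factors, coef[1:]):
            print(f"main effect of {name:>10}: {2 * b:+.1f} mg/L per low-to-high step")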

  19. Ultimate dynamics of the Kirschner-Panetta model: Tumor eradication and related problems

    NASA Astrophysics Data System (ADS)

    Starkov, Konstantin E.; Krishchenko, Alexander P.

    2017-10-01

    In this paper we consider the ultimate dynamics of the Kirschner-Panetta model which was created for studying the immune response to tumors under special types of immunotherapy. New ultimate upper bounds for compact invariant sets of this model are given, as well as sufficient conditions for the existence of a positively invariant polytope. We establish three types of conditions for the nonexistence of compact invariant sets in the domain of the tumor-cell population. Our main results are two types of conditions for global tumor elimination depending on the ratio between the proliferation rate of the immune cells and their mortality rate. These conditions are described in terms of simple algebraic inequalities imposed on model parameters and treatment parameters. Our theoretical studies of ultimate dynamics are complemented by numerical simulation results.
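
    To make the setting concrete, the sketch below (Python) integrates the Kirschner-Panetta system (effector cells E, tumor cells T, IL-2 concentration I) for one parameter set and reports the final tumor burden. The equations follow the commonly cited form of the model, but the parameter values and initial conditions are illustrative and should be checked against the original formulation before drawing conclusions about the elimination conditions derived here.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Kirschner-Panetta model: effector cells E, tumor cells T, IL-2 concentration I.
        # Parameter values are representative ones often quoted for this model and
        # should be checked against the original paper before any quantitative use.
        c = 0.035                                  # tumor antigenicity
        mu2, p1, g1 = 0.03, 0.1245, 2.0e7
        r2, b, a, g2 = 0.18, 1.0e-9, 1.0, 1.0e5
        p2, g3, mu3 = 5.0, 1.0e3, 10.0
        s1, s2 = 0.0, 0.0                          # external effector-cell / IL-2 therapy terms

        def kp_rhs(t, y):
            E, T, I = y
            dE = c * T - mu2 * E + p1 * E * I / (g1 + I) + s1
            dT = r2 * T * (1.0 - b * T) - a * E * T / (g2 + T)
            dI = p2 * E * T / (g3 + T) - mu3 * I + s2
            return [dE, dT, dI]

        sol = solve_ivp(kp_rhs, (0.0, 400.0), [1.0e4, 1.0e5, 1.0e3],
                        t_eval=np.linspace(0.0, 400.0, 2001), method="LSODA")
        print(f"tumor burden after 400 days: {sol.y[1][-1]:.3e} cells")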

  20. Research on human physiological parameters intelligent clothing based on distributed Fiber Bragg Grating

    NASA Astrophysics Data System (ADS)

    Miao, Changyun; Shi, Boya; Li, Hongqiang

    2008-12-01

    An intelligent clothing system for measuring human physiological parameters is developed using fiber Bragg grating (FBG) sensor technology. This paper studies the principles and methods of measuring human physiological parameters, including body temperature and heart rate, with distributed FBGs embedded in the clothing, and builds the corresponding mathematical measurement models. A processing method for the body-temperature and heart-rate detection signals is presented; the physiological parameter detection module is designed so that interference signals are filtered out and measurement accuracy is improved; and the integration of the intelligent clothing is described. The intelligent clothing performs real-time measurement, processing, storage, and output of body temperature and heart rate, and offers accurate measurement, portability, low cost, and real-time monitoring. It enables non-contact monitoring between doctors and patients, supports timely detection of diseases such as cancer and infectious diseases, and helps patients receive timely treatment. It is of particular value for safeguarding the health of the elderly and of children with language dysfunction.
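
    A minimal sketch of the temperature demodulation such a system relies on is given below (Python): the Bragg wavelength shifts approximately linearly with temperature, so body temperature can be recovered from the measured wavelength shift. The thermal-expansion and thermo-optic coefficients are typical silica-fiber values and the reference calibration point is assumed; they are not the calibrated values of this design.

        # Bragg condition: lambda_B = 2 * n_eff * Lambda; a temperature change shifts
        # lambda_B approximately linearly, delta_lambda / lambda_B = (alpha + xi) * delta_T.
        # Coefficients below are typical silica-fiber values, not this paper's calibration.
        lambda_B = 1550.0e-9            # Bragg wavelength at the reference temperature, m
        alpha_expansion = 0.55e-6       # thermal expansion coefficient, 1/degC (typical)
        xi_thermo_optic = 6.7e-6        # thermo-optic coefficient, 1/degC (typical)
        T_ref = 25.0                    # assumed calibration temperature, degC

        def temperature_from_shift(delta_lambda_m):
            """Invert the linear wavelength-temperature relation of a bare FBG."""
            sensitivity = lambda_B * (alpha_expansion + xi_thermo_optic)   # m per degC
            return T_ref + delta_lambda_m / sensitivity

        # Example: a shift of about +0.13 nm corresponds to roughly body temperature.
        print(f"{temperature_from_shift(0.13e-9):.1f} degC")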
