Sample records for remaining model parameters

  1. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…

  2. A square-force cohesion model and its extraction from bulk measurements

    NASA Astrophysics Data System (ADS)

    Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine

    2017-11-01

    Cohesive particles remain poorly understood, with order-of-magnitude differences exhibited by prior physical predictions of agglomerate size. A major obstacle lies in the absence of robust models of particle-particle cohesion, thereby precluding accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity. However, both roughness measurement and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model, where the cohesive force remains constant until a cut-off separation. Via DEM simulations, we demonstrate the validity of the square-force model as a surrogate for more rigorous models when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely the maximum cohesive force and the critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters of the square-force model via defluidization, owing to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible. Dow Corning Corporation.
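A minimal sketch of such a square-force law (parameter names and values are illustrative, not from the paper): the two model parameters map directly onto the two governing quantities, since the maximum cohesive force is the plateau value itself and the critical cohesive energy is the area under the square pulse.

```python
def square_force(separation, f0, s_cut):
    """Square-force cohesion law: constant attractive force f0 up to the
    cut-off separation s_cut, zero beyond (hypothetical parameter names)."""
    return f0 if 0.0 <= separation < s_cut else 0.0

# The two parameters map onto the two governing quantities: the maximum
# cohesive force is f0 itself, and the critical cohesive energy is the
# area under the square pulse, f0 * s_cut.
f0, s_cut = 1.0e-7, 5.0e-9              # N and m, illustrative values
max_force = square_force(0.0, f0, s_cut)
energy = f0 * s_cut
```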

  3. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
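The annealing ingredient of such a parameter search can be sketched as a generic minimiser on a toy fitting problem (this is only one component; the paper's algorithm couples annealing with sequential hypothesis testing and statistical model checking, which are not shown):

```python
import math
import random

def anneal(loss, x0, steps=2000, t0=1.0, seed=0):
    """Generic simulated-annealing minimiser over a parameter vector."""
    rng = random.Random(seed)
    x, fx = list(x0), loss(x0)
    best, fbest = list(x0), fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9                  # cooling schedule
        cand = [xi + rng.gauss(0, 0.1) for xi in x]      # random perturbation
        fc = loss(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Toy "parameter discovery": recover slope and intercept from observations.
obs = [(t, 2.0 * t - 1.0) for t in range(10)]
loss = lambda p: sum((p[0] * t + p[1] - y) ** 2 for t, y in obs)
params, err = anneal(loss, [0.0, 0.0])
```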

  4. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model

    PubMed Central

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo

    2017-01-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554

  5. NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)

    1994-01-01

    Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.

  6. Quantification of the impact of precipitation spatial distribution uncertainty on predictive uncertainty of a snowmelt runoff model

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.

    2012-04-01

    This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed. In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This is because the actual value taken by FP5 does not have much influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi were known, even with no information on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remains unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
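Sobol first-order and total effects of the kind described here can be estimated with a Saltelli-type Monte Carlo scheme. A sketch on a toy additive model (where the analytic indices are known and first-order and total effects coincide; the toy function and sample sizes are illustrative only):

```python
import random

def sobol_effects(f, d, n=5000, seed=1):
    """Saltelli-type Monte Carlo estimates of Sobol first-order and total
    effects for f on the unit hypercube [0,1]^d."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    first, total = [], []
    for i in range(d):
        # A with column i taken from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # first-order: fraction of variance removed if parameter i were known
        first.append(sum(yb * (yab - ya)
                         for yb, yab, ya in zip(fB, fABi, fA)) / n / var)
        # total: fraction remaining if all parameters but i were fixed
        total.append(sum((ya - yab) ** 2
                         for ya, yab in zip(fA, fABi)) / (2 * n) / var)
    return first, total

# Additive toy model: analytic indices are 16/21, 4/21, 1/21.
first, total = sobol_effects(lambda x: 4 * x[0] + 2 * x[1] + x[2], d=3)
```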

  7. Constraints on texture zero and cofactor zero models for neutrino mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whisnant, K.; Liao, Jiajun; Marfatia, D.

    2014-06-24

    Imposing a texture or cofactor zero on the neutrino mass matrix reduces the number of independent parameters from nine to seven. Since five parameters have been measured, only two independent parameters would remain in such models. We find the allowed regions for single texture zero and single cofactor zero models. We also find strong similarities between single texture zero models with one mass hierarchy and single cofactor zero models with the opposite mass hierarchy. We show that this correspondence can be generalized to texture-zero and cofactor-zero models with the same homogeneous constraints on the elements and cofactors.

  8. Evaluation and linking of effective parameters in particle-based models and continuum models for mixing-limited bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo

    2013-08-01

    Particle-based models and continuum models have been developed over decades to quantify mixing-limited bimolecular reactions. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t^-1/2 (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factor in R and Kf is of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
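A minimal agent-based sketch of the interaction-radius idea (parameters are illustrative; the study's simulations are far more detailed): A and B particles diffuse in a periodic 2-D box and an A-B pair reacts irreversibly when closer than R, so an effective Kf can be read off the decay of particle counts.

```python
import random

def react_step(A, B, R, L, rng):
    """One step of a toy particle scheme: diffuse all particles, then remove
    any A-B pair closer than the interaction radius R (periodic 2-D box of
    side L)."""
    move = lambda p: [(c + rng.gauss(0, 0.02)) % L for c in p]
    A = [move(p) for p in A]
    B = [move(p) for p in B]
    goneA, goneB = set(), set()
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if i in goneA or j in goneB:
                continue
            dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))   # periodic distance
            dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
            if dx * dx + dy * dy < R * R:
                goneA.add(i)
                goneB.add(j)
    return ([p for i, p in enumerate(A) if i not in goneA],
            [p for j, p in enumerate(B) if j not in goneB])

rng = random.Random(0)
L, R = 1.0, 0.02
A = [[rng.random(), rng.random()] for _ in range(200)]
B = [[rng.random(), rng.random()] for _ in range(200)]
n0 = len(A)
for _ in range(50):
    A, B = react_step(A, B, R, L, rng)
# For equal initial concentrations, second-order kinetics gives
# 1/c - 1/c0 = Kf * t, so an effective Kf can be fitted to the decay of len(A).
```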

  9. The Singularity Mystery Associated with a Radially Continuous Maxwell Viscoelastic Structure

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Hager, Bradford H.

    1995-01-01

    The singularity problem associated with a radially continuous Maxwell viscoelastic structure is investigated. A special tool called the isolation function is developed. Results calculated using the isolation function show that the discrete model assumption is no longer valid when the viscoelastic parameter becomes a continuous function of radius. Continuous variations in the upper mantle viscoelastic parameter are especially powerful in destroying the mode-like structures. The contribution of the singularities to the load Love numbers is sensitive to the convexity of the viscoelastic parameter models. The difference between the vertical response and the horizontal response found in layered viscoelastic parameter models remains with continuous models.

  10. The Sensitivity of Parameter Estimates to the Latent Ability Distribution. Research Report. ETS RR-11-40

    ERIC Educational Resources Information Center

    Xu, Xueli; Jia, Yue

    2011-01-01

    Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…

  11. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.

    PubMed

    Zi, Zhike; Klipp, Edda

    2006-11-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. The Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs, and the classic ODE solver ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. http://sysbio.molgen.mpg.de/SBML-PET/. The website also contains detailed documentation for SBML-PET.

  12. SURF Model Calibration Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is that there is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
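A Pop plot is linear in log-log coordinates (run distance to detonation versus input shock pressure), so fitting initiation parameters to Pop plot data reduces, in its simplest form, to a log-log regression. A sketch with hypothetical data points lying exactly on a power law:

```python
import math

def fit_pop_plot(pressures, run_distances):
    """Least-squares fit of the Pop plot relation
    log10(x_run) = a + b * log10(P)."""
    X = [math.log10(p) for p in pressures]
    Y = [math.log10(x) for x in run_distances]
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
         / sum((x - xbar) ** 2 for x in X))
    a = ybar - b * xbar
    return a, b

# Hypothetical points (P in GPa, run distance in mm) on a power law:
P = [3.0, 5.0, 8.0, 12.0]
xrun = [20.0 * p ** -1.5 for p in P]
a, b = fit_pop_plot(P, xrun)    # recovers slope b = -1.5
```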

  13. Entropy corrected holographic dark energy models in modified gravity

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila

    We consider the power law and the entropy corrected holographic dark energy (HDE) models with the Hubble horizon in the dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to the quintessence and cold dark matter (ΛCDM) limits. The ωΛ-ωΛ′ plane approaches the ΛCDM limit and the freezing region in both entropy corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid in all cases of the interacting parameter. It is interesting to mention here that our results for the Hubble parameter, the EoS parameter and the ωΛ-ωΛ′ plane show consistency with present observations such as Planck, WP, BAO, H0, SNLS and nine-year WMAP.
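For orientation, the sign convention for the deceleration parameter can be illustrated with the standard flat ΛCDM expression (used here purely for illustration; the paper works in dynamical Chern-Simons modified gravity, not ΛCDM):

```python
def deceleration(z, om=0.3, ol=0.7):
    """Deceleration parameter q(z) in flat LCDM:
    q = (Om(1+z)^3 / 2 - OL) / (Om(1+z)^3 + OL).
    Negative q means accelerated expansion."""
    m = om * (1 + z) ** 3
    return (0.5 * m - ol) / (m + ol)

q_today = deceleration(0.0)    # about -0.55: accelerating today
q_early = deceleration(10.0)   # near +0.5: matter-dominated deceleration
```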

  14. Bayesian Framework Approach for Prognostic Studies in Electrolytic Capacitor under Thermal Overstress Conditions

    DTIC Science & Technology

    2012-09-01

    make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [abstract garbled in extraction; recoverable index terms: degradation modeling, parameter estimation, prediction, thermal/electrical stress, experimental data, state-space model, RUL, EOL] ...distribution at a given single time point, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma

  15. Predictors of outcome for severe IgA Nephropathy in a multi-ethnic U.S. cohort.

    PubMed

    Arroyo, Ana Huerta; Bomback, Andrew S; Butler, Blake; Radhakrishnan, Jai; Herlitz, Leal; Stokes, M Barry; D'Agati, Vivette; Markowitz, Glen S; Appel, Gerald B; Canetta, Pietro A

    2015-09-01

    Although IgA nephropathy (IgAN) is the leading cause of glomerulonephritis worldwide, there are few large cohorts representative of the U.S. population. Prognosis remains challenging, particularly as more patients are treated with RAAS blockade and immunosuppression. We analyzed a retrospective cohort of IgAN patients followed at Columbia University Medical Center from 1980 to 2010. We evaluated two outcomes, halving of eGFR and ESRD, using three proportional hazards models: 1) a model with only clinical parameters, 2) a model with only histopathologic parameters, and 3) a model combining clinical and histopathologic parameters. Of 154 patients with biopsy-proven IgAN, 126 had follow-up data available and 93 had biopsy slides re-read. Median follow-up was 47 months. The cohort was 64% male and 60% white, and the average age was 34 years at diagnosis. Median (IQR) eGFR and proteinuria at diagnosis were 64.1 (38.0-88.7) mL/min/1.73 m² and 2.7 (1.3-4.5) g/day. Over 90% of subjects were treated with RAAS blockade, and over 66% received immunosuppression. In the clinical parameters-only model, baseline eGFR and African-American race predicted both halving of eGFR and ESRD. In the histopathologic parameters-only model, no parameter significantly predicted outcome. In the combined model, baseline eGFR remained the strongest predictor of both halving of eGFR (p = 0.03) and ESRD (p = 0.001), while the presence of IgG by immunofluorescence microscopy also predicted progression to ESRD. In this diverse U.S. IgAN cohort, in which the majority of patients received RAAS blockade and immunosuppression, baseline eGFR, African-American race, and co-staining of IgG predicted poor outcome.

  16. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model building exercise, the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, from our point of view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: "Does a specific form of process parameterization emerge for a specific model given its system architecture and data, while no or little assumption has been made about the process parameterization itself?" In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.

  17. Practical identifiability analysis of a minimal cardiovascular system model.

    PubMed

    Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas

    2017-01-17

    Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
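The profile-likelihood construction described here can be sketched on a toy model in which two parameters enter only through their product (a hypothetical stand-in for the cardiovascular model): the resulting flat profile is exactly the signature of practical non-identifiability reported for the inversely correlated parameter pair.

```python
def model(t, a, b):
    """Toy model: parameters a and b enter only through their product."""
    return a * b * t

data = [(t, 6.0 * t) for t in range(1, 6)]      # generated with a*b = 6

def sse(a, b):
    return sum((model(t, a, b) - y) ** 2 for t, y in data)

def profile_a(grid):
    """Profile likelihood of a: fix a at each grid value and minimise the
    sum of squares over b (closed form here; a numerical optimiser in
    general)."""
    prof = []
    for a in grid:
        num = sum(a * t * y for t, y in data)
        den = sum((a * t) ** 2 for t, _ in data)
        prof.append(sse(a, num / den))          # optimal b = num/den
    return prof

prof = profile_a([1.0 + 0.5 * k for k in range(9)])
# The profile is flat at ~0: any fixed a is compensated by b, so a is
# practically non-identifiable from these data.
```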

  18. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    NASA Astrophysics Data System (ADS)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modification of the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (respectively, changes in the column and row numbers of the CIPs) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels respectively, which confirms the high accuracy and applicability of the proposed method.

  19. Design and validation of diffusion MRI models of white matter

    NASA Astrophysics Data System (ADS)

    Jelescu, Ileana O.; Budde, Matthew D.

    2017-11-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. 
This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus.

  20. Design and validation of diffusion MRI models of white matter

    PubMed Central

    Jelescu, Ileana O.; Budde, Matthew D.

    2018-01-01

    Diffusion MRI is arguably the method of choice for characterizing white matter microstructure in vivo. Over the typical duration of diffusion encoding, the displacement of water molecules is conveniently on a length scale similar to that of the underlying cellular structures. Moreover, water molecules in white matter are largely compartmentalized which enables biologically-inspired compartmental diffusion models to characterize and quantify the true biological microstructure. A plethora of white matter models have been proposed. However, overparameterization and mathematical fitting complications encourage the introduction of simplifying assumptions that vary between different approaches. These choices impact the quantitative estimation of model parameters with potential detriments to their biological accuracy and promised specificity. First, we review biophysical white matter models in use and recapitulate their underlying assumptions and realms of applicability. Second, we present up-to-date efforts to validate parameters estimated from biophysical models. Simulations and dedicated phantoms are useful in assessing the performance of models when the ground truth is known. However, the biggest challenge remains the validation of the “biological accuracy” of estimated parameters. Complementary techniques such as microscopy of fixed tissue specimens have facilitated direct comparisons of estimates of white matter fiber orientation and densities. However, validation of compartmental diffusivities remains challenging, and complementary MRI-based techniques such as alternative diffusion encodings, compartment-specific contrast agents and metabolites have been used to validate diffusion models. Finally, white matter injury and disease pose additional challenges to modeling, which are also discussed. 
This review aims to provide an overview of the current state of models and their validation and to stimulate further research in the field to solve the remaining open questions and converge towards consensus. PMID:29755979

  1. Local order parameters for use in driving homogeneous ice nucleation with all-atom models of water

    NASA Astrophysics Data System (ADS)

    Reinhardt, Aleks; Doye, Jonathan P. K.; Noya, Eva G.; Vega, Carlos

    2012-11-01

    We present a local order parameter based on the standard Steinhardt-Ten Wolde approach that is capable both of tracking and of driving homogeneous ice nucleation in simulations of all-atom models of water. We demonstrate that it is capable of forcing the growth of ice nuclei in supercooled liquid water simulated using the TIP4P/2005 model using over-biassed umbrella sampling Monte Carlo simulations. However, even with such an order parameter, the dynamics of ice growth in deeply supercooled liquid water in all-atom models of water are shown to be very slow, and so the computation of free energy landscapes and nucleation rates remains extremely challenging.
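The classic global Steinhardt bond-order parameter q6, of which the paper's order parameter is a local Steinhardt-Ten Wolde variant, can be computed without explicit spherical harmonics by using the Legendre addition theorem. A sketch for a single particle and its neighbour bond vectors:

```python
import math

def p6(x):
    """Legendre polynomial P6(x)."""
    return (231 * x**6 - 315 * x**4 + 105 * x**2 - 5) / 16.0

def q6(bonds):
    """Steinhardt q6 of one particle from its neighbour bond vectors.
    Via the addition theorem, q6^2 = (1/N^2) * sum_jk P6(cos(theta_jk)),
    which avoids computing spherical harmonics explicitly."""
    units = []
    for b in bonds:
        r = math.sqrt(sum(c * c for c in b))
        units.append([c / r for c in b])
    n = len(units)
    s = sum(p6(sum(u * v for u, v in zip(a, c)))
            for a in units for c in units)
    return math.sqrt(s / (n * n))

# Six octahedral neighbours (simple-cubic order) give q6 ~ 0.354, well
# above typical values for a disordered arrangement.
octa = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
q6_cubic = q6(octa)
```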

  2. Effects of total pressure on non-grey gas radiation transfer in oxy-fuel combustion using the LBL, SNB, SNBCK, WSGG, and FSCK methods

    NASA Astrophysics Data System (ADS)

    Chu, Huaqiang; Gu, Mingyan; Consalvi, Jean-Louis; Liu, Fengshan; Zhou, Huaichun

    2016-03-01

    The effects of total pressure on gas radiation heat transfer are investigated in a 1D parallel-plate geometry containing isothermal and homogeneous media and an inhomogeneous and non-isothermal CO2-H2O mixture, under conditions relevant to oxy-fuel combustion, using the line-by-line (LBL), statistical narrow-band (SNB), statistical narrow-band correlated-k (SNBCK), weighted-sum-of-grey-gases (WSGG), and full-spectrum correlated-k (FSCK) models. The LBL calculations were conducted using the HITEMP2010 and CDSD-1000 databases, and the LBL results serve as the benchmark solution to evaluate the accuracy of the other models. Calculations of the SNB, SNBCK, and FSCK were conducted using both the 1997 EM2C SNB parameters and their recently updated 2012 parameters to investigate how the SNB model parameters affect the results under oxy-fuel combustion conditions at high pressures. The WSGG model considered is the one recently developed by Bordbar et al. [19] for oxy-fuel combustion based on LBL calculations using HITEMP2010. The total pressure considered ranges from 1 up to 30 atm. The total pressure significantly affects gas radiation transfer, primarily through the increase in molecule number density and only slightly through spectral line broadening. Using the 1997 EM2C SNB model parameters, the accuracy of SNB and SNBCK is very good and remains essentially independent of the total pressure. When using the 2012 EM2C SNB model parameters, the SNB and SNBCK results are less accurate and their error increases with increasing total pressure. The WSGG model has the lowest accuracy and the best computational efficiency among the models investigated. The errors of both WSGG and FSCK using the 2012 EM2C SNB model parameters increase when the total pressure is increased from 1 to 10 atm, but remain nearly independent of the total pressure beyond 10 atm. When using the 1997 EM2C SNB model parameters, the accuracy of FSCK only slightly decreases with increasing total pressure.
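The WSGG model's basic total-emissivity expression can be sketched as follows. The weights and absorption coefficients below are illustrative placeholders, not the Bordbar et al. coefficients, which depend on temperature and the H2O/CO2 ratio:

```python
import math

def wsgg_emissivity(pL, weights, kappas):
    """Weighted-sum-of-grey-gases total emissivity:
    eps = sum_i a_i * (1 - exp(-k_i * pL)),
    where pL is the pressure path length; the remainder 1 - sum(a_i)
    acts as a transparent (clear) gas."""
    return sum(a * (1 - math.exp(-k * pL)) for a, k in zip(weights, kappas))

weights = [0.3, 0.2, 0.1]        # grey-gas weights (illustrative)
kappas  = [0.5, 5.0, 50.0]       # absorption coefficients per (atm*m)
eps_thin  = wsgg_emissivity(0.01, weights, kappas)
eps_thick = wsgg_emissivity(10.0, weights, kappas)  # saturates at sum(weights)
```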

  3. Extended Kalman Filter for Estimation of Parameters in Nonlinear State-Space Models of Biochemical Networks

    PubMed Central

    Sun, Xiaodian; Jin, Li; Xiong, Momiao

    2008-01-01

    It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters are essential for studying the dynamic behavior of biological systems, including metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are only partially observed, so a natural way to model them is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply it to a simulation dataset and to two real datasets from the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathways. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
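
    A minimal sketch of the joint state/parameter EKF idea described above, on a toy scalar decay model: the unknown parameter is appended to the state vector and estimated alongside it. The model, noise levels and step sizes are assumptions for illustration, not the biochemical networks from the report.

```python
import numpy as np

# Joint state/parameter estimation with an EKF on the augmented state
# z = [x, k] for the toy dynamics x[t+1] = x[t] - k*x[t]*dt.

dt, k_true, n_steps = 0.05, 0.8, 200

def f(z):                        # augmented transition: k follows a random walk
    x, k = z
    return np.array([x - k * x * dt, k])

def F(z):                        # Jacobian of f with respect to [x, k]
    x, k = z
    return np.array([[1.0 - k * dt, -x * dt],
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])       # only the state x is observed (partial observation)
Q = np.diag([1e-9, 1e-9])        # small process noise keeps the filter adaptive
R = np.array([[1e-8]])           # near-noiseless measurements, for clarity

z = np.array([1.0, 0.2])         # start from a wrong parameter guess k = 0.2
P = np.diag([0.1, 1.0])

x_sim = 1.0
for _ in range(n_steps):
    x_sim = x_sim - k_true * x_sim * dt          # "truth" being measured
    # EKF predict
    Fz = F(z)
    z = f(z)
    P = Fz @ P @ Fz.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (np.array([x_sim]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

k_est = z[1]                     # converges towards k_true = 0.8
```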

  4. Optimal control problems of epidemic systems with parameter uncertainties: application to a malaria two-age-classes transmission model with asymptomatic carriers.

    PubMed

    Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo

    2015-03-01

    The main scope of this paper is to study optimal control practices for malaria, by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which are typical of infectious disease data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting insecticide-treated mosquito nets, indoor residual spraying, screening, and treatment of symptomatic and asymptomatic individuals. The numerical results show that, with optimal control, the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate its robustness: the main conclusions remain unchanged, even though inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
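
    The ICER computation referred to above can be sketched as follows; the strategy names, costs and effects are hypothetical, and extended dominance is omitted for brevity.

```python
def icers(strategies):
    """Incremental cost-effectiveness ratios (ICERs) for control strategies
    given as (name, cost, effect) tuples, where 'effect' is e.g. infections
    averted. Strategies are ordered by increasing effectiveness, strongly
    dominated ones (costlier yet less effective) are dropped, and each ICER
    is the extra cost per extra unit of effect relative to the previous
    non-dominated strategy. Extended dominance is omitted for brevity."""
    ordered = sorted(strategies, key=lambda s: s[2])
    pruned = []
    for name, cost, eff in ordered:
        while pruned and pruned[-1][1] >= cost:   # predecessor is dominated
            pruned.pop()
        pruned.append((name, cost, eff))
    result, prev_cost, prev_eff = {}, 0.0, 0.0
    for name, cost, eff in pruned:
        result[name] = (cost - prev_cost) / (eff - prev_eff)
        prev_cost, prev_eff = cost, eff
    return result

# Hypothetical costs and effects, not values from the paper:
ratios = icers([("bed nets", 100.0, 50.0),
                ("indoor spraying", 250.0, 60.0),
                ("all four controls", 400.0, 90.0)])
```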

  5. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic-Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five submitted 1-day models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, because the model parameters were kept fixed during the test. Moreover, the absence from the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters kept fixed during the test period.
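
    For reference, a sketch of the standard temporal ETAS conditional intensity such forecasts are built on; the parameter values below are illustrative, and in the daily-update scheme described above (mu, K, alpha, c, p) would be re-estimated each day by maximum likelihood on the catalog up to that day.

```python
import math

def etas_rate(t, events, mu, K, alpha, c, p, m0):
    """Conditional intensity of the temporal ETAS model:
    lambda(t) = mu + sum over past events (t_i, m_i) of
                K * exp(alpha * (m_i - m0)) * (t - t_i + c)**(-p),
    i.e. a constant background rate mu plus Omori-law aftershock
    contributions whose productivity grows exponentially with magnitude."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate

events = [(0.0, 6.0), (0.5, 4.5)]   # (time in days, magnitude), illustrative
r = etas_rate(1.0, events, mu=0.2, K=0.01, alpha=1.5, c=0.01, p=1.1, m0=3.0)
```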

  6. A compendium of chameleon constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  7. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll, defined by φ̈/(Hφ̇), remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this growth always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable in light of the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases, even-order slow-roll parameters approach non-negligible constants while the odd ones vanish asymptotically in the quasi-de Sitter regime.
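
    The constant-roll condition can be made explicit in the Hamilton-Jacobi formalism; a brief sketch, assuming reduced Planck units and the standard sign conventions:

```latex
% Constant-roll condition (beta = const):
\beta \equiv \frac{\ddot\phi}{H\dot\phi}
% Hamilton-Jacobi formulation with H = H(\phi), M_{\mathrm{Pl}} = 1:
\dot\phi = -2 H'(\phi), \qquad \ddot\phi = -2 H''(\phi)\,\dot\phi
% Substituting into the constant-roll condition gives a linear ODE for H:
H''(\phi) = -\frac{\beta}{2}\, H(\phi)
% whose solutions are trigonometric (beta > 0) or hyperbolic (beta < 0)
% in phi; the exact potential then follows from V(\phi) = 3H^2 - 2H'^2.
```

    The linearity of this equation in H is what makes an exact solution for the potential possible, in contrast with the approximate treatment of generic slow-roll models.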

  8. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems from limited data may achieve a good fit but may also inflate the 95% confidence intervals of the parameter estimates, resulting in poor identifiability. Therefore, we propose a novel method to select the sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters at values obtained from a preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive, and is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by requiring only five out of twelve parameters to be estimated, (2) significantly reduced the parameters' 95% confidence intervals, by up to 89% of their original width, (3) maintained goodness of fit, measured by variance accounted for (VAF), at 82%, (4) reduced computation time, with our FIM method being 164 times faster than the LASSO method, and (5) selected sensitive parameters similar to those of the LASSO method, with three of the five selected parameters shared by the FIM and LASSO methods.
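
    A generic sketch of FIM-based parameter subset selection of the kind described above, here using greedy D-optimal selection on a toy sensitivity matrix; this is an illustrative variant, not necessarily the authors' exact algorithm.

```python
import numpy as np

def select_parameters(S, k):
    """Greedy D-optimal parameter subset selection from a sensitivity matrix.
    S has one column per parameter: S[i, j] = d(output_i)/d(theta_j).
    The Fisher information matrix (unit measurement noise) is F = S.T @ S;
    we greedily add the parameter that maximizes the determinant of the
    corresponding FIM submatrix, i.e. the most informative subset."""
    chosen, remaining = [], list(range(S.shape[1]))
    F = S.T @ S
    while len(chosen) < k and remaining:
        best, best_det = None, -np.inf
        for j in remaining:
            idx = chosen + [j]
            d = np.linalg.det(F[np.ix_(idx, idx)])
            if d > best_det:
                best, best_det = j, d
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy sensitivity matrix: parameter 2 barely affects the outputs, and
# parameter 3 is nearly redundant with parameter 0.
S = np.array([[1.0, 0.2, 1e-3, 0.99],
              [0.1, 1.0, 1e-3, 0.12],
              [0.3, 0.5, 1e-3, 0.31]])
subset = select_parameters(S, 2)   # picks the two most informative parameters
```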

  9. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As these models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, where runoff data are only sparsely available. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available data sources have to be screened for their information content on processes, e.g. whether they contain information on mean values or on spatial or temporal variability, and whether for the entire catchment or only for sub-catchments. In a second step, this information content has to be mapped to the model components that represent these processes. Each data source is then used to calibrate the respective subset of model parameters, while the remaining model parameters are left unchanged. This mapping is repeated for the other available data sources. In this study, the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Model results obtained with a priori parameter values from the literature serve as a benchmark for comparison. The recharge rates estimated by the calibrated model deviate by less than ±10% from the estimates derived from the WTF method. Larger differences are visible in years with high uncertainties in the rainfall input data.
The calibrated model also performs better in validation than the model run with only a priori parameter values, which tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.

  10. Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models.

    PubMed

    Erguler, Kamil; Stumpf, Michael P H

    2011-05-01

    The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle, dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that contribute differentially to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and to associate the extent of data required for parameter inference with the model structure and with the global dynamical state of the system. This comprehensive analysis of so many systems biology models reaffirms that the inability to precisely estimate most model or kinetic parameters is a generic feature of dynamical systems, and it provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.

  11. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

    A new scale-up concept for bioreactors equipped with Rushton turbines, based upon mixing models using the tanks-in-series concept, is presented. The physical mixing model includes four adjustable parameters: radial and axial circulation times, the number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The values of the model parameters were adjusted with a modified Monte-Carlo optimization method, which fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The model parameter for radial circulation time is in good agreement with the value obtained from the pumping capacity. For the remaining parameters, first- or second-order formal equations were developed involving four operational parameters (stirring and aeration intensity, scale, and viscosity). This concept can be extended to several other types of bioreactors as well, and it seems to be a suitable tool for comparing the bioprocess performance of different types of bioreactors. (c) 1994 John Wiley & Sons, Inc.
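
    The tanks-in-series concept underlying the mixing model can be sketched via its classical residence-time distribution; the cascade size and residence time below are illustrative.

```python
import math

def tanks_in_series_rtd(t, N, tau):
    """Residence-time density E(t) for a cascade of N ideally mixed tanks
    with total mean residence time tau (each tank holds tau/N):
        E(t) = t**(N-1) * exp(-t / (tau/N)) / ((N-1)! * (tau/N)**N).
    N = 1 is a single stirred tank; N -> infinity approaches plug flow.
    A four-element cascade (N = 4), as found in the model fits above, lies
    between these extremes."""
    ti = tau / N
    return t ** (N - 1) * math.exp(-t / ti) / (math.factorial(N - 1) * ti ** N)

# Sanity check: the mean of the distribution equals tau (numerical quadrature).
tau, N, dt = 10.0, 4, 0.01
mean = sum(t * tanks_in_series_rtd(t, N, tau) * dt
           for t in (i * dt for i in range(1, 20001)))
```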

  12. The overconstraint of response time models: rethinking the scaling problem.

    PubMed

    Donkin, Chris; Brown, Scott D; Heathcote, Andrew

    2009-12-01

    Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all share the same "scaling" property: a subset of their parameters can be multiplied by the same amount without changing the models' predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit, and therefore unexamined, psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than their overconstrained counterparts, even when the increased model complexity is taken into account.
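
    The scaling property can be demonstrated directly with the classical first-passage formula for a Wiener diffusion, the simplest evidence accumulation model; the parameter values are illustrative.

```python
import math

def p_upper(v, a, z, s):
    """Probability that a Wiener diffusion with drift v and infinitesimal
    standard deviation s, started at z in (0, a), is absorbed at the upper
    boundary a (standard first-passage result for Brownian motion)."""
    return (1.0 - math.exp(-2.0 * v * z / s**2)) / \
           (1.0 - math.exp(-2.0 * v * a / s**2))

# The scaling property: multiplying v, a, z and s by the same constant c
# rescales the accumulator in space only, so choice probabilities (and RT
# distributions) are unchanged. This is why one parameter, traditionally
# s = 0.1, must be fixed before the others can be estimated.
p1 = p_upper(v=0.3, a=0.12, z=0.06, s=0.1)
p2 = p_upper(v=3.0, a=1.2, z=0.6, s=1.0)   # same model with c = 10
```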

  13. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.

    PubMed

    Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto

    2011-07-15

    Time-series measurements of metabolite concentration have become increasingly common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from them is challenging. Practical limitations on the data and computational aspects, such as solving stiff ODEs and finding the global optimum of the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines, and iterates between, two estimation phases. The first phase involves a decoupling method, in which the subset of model parameters associated with measured metabolites is estimated by minimizing slope errors. In the second phase, the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and on the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in obtaining accurate parameter estimates, even when some information is missing.
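
    A minimal sketch of the two-phase idea on a single-equation toy model; the paper's method applies this to coupled multi-equation systems, and the decay model below is an assumption for illustration only.

```python
import numpy as np

# Two-phase sketch for dX/dt = -k*X with noiseless synthetic data.
# Phase 1 (decoupling): estimate k from slope errors, i.e. fit finite-
# difference derivatives dX/dt against the rate law evaluated at the data.
# Phase 2: refine k by minimizing the concentration error of the
# integrated model.

t = np.linspace(0.0, 2.0, 21)
k_true = 1.3
X = np.exp(-k_true * t)                       # "measured" concentrations

# Phase 1: least squares of  dX/dt ~ -k*X  =>  k = -sum(X*dXdt)/sum(X*X)
dXdt = np.gradient(X, t)
k1 = -np.sum(X * dXdt) / np.sum(X * X)        # biased by discretization error

# Phase 2: refine around k1 by minimizing the sum of squared
# concentration errors of the solved ODE (analytic solution here).
def sse(k):
    return np.sum((np.exp(-k * t) - X) ** 2)

ks = np.linspace(0.8 * k1, 1.2 * k1, 2001)
k2 = ks[np.argmin([sse(k) for k in ks])]      # close to k_true = 1.3
```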

  14. Inversion of parameters for semiarid regions by a neural network

    NASA Technical Reports Server (NTRS)

    Zurk, Lisa M.; Davis, Daniel; Njoku, Eni G.; Tsang, Leung; Hwang, Jenq-Neng

    1992-01-01

    Microwave brightness temperatures obtained from a passive radiative transfer model are inverted through use of a neural network. The model is applicable to semiarid regions and produces dual-polarized brightness temperatures for 6.6-, 10.7-, and 37-GHz frequencies. A range of temperatures is generated by varying three geophysical parameters over acceptable ranges: soil moisture, vegetation moisture, and soil temperature. A multilayered perceptron (MLP) neural network is trained with a subset of the generated temperatures, and the remaining temperatures are inverted using a backpropagation method. Several synthetic terrains are devised and inverted by the network under local constraints. All the inversions show good agreement with the original geophysical parameters, falling within 5 percent of the actual value of the parameter range.
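
    A minimal sketch of the approach: a small MLP trained by backpropagation to invert a toy two-channel forward model. The forward model, network size and training settings are illustrative stand-ins for the radiative transfer model and MLP described above.

```python
import numpy as np

# One-hidden-layer MLP trained by plain full-batch backpropagation to map
# simulated brightness temperatures back to the soil moisture that
# generated them (the inversion direction).

rng = np.random.default_rng(1)
m = rng.uniform(0.0, 1.0, size=(64, 1))                  # soil moisture (target)
Tb = np.hstack([280.0 - 60.0 * m, 270.0 - 40.0 * m**2])  # toy forward model
X = (Tb - Tb.mean(0)) / Tb.std(0)                        # normalized inputs

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)        # hidden layer (tanh)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)        # linear output layer

lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    pred = H @ W2 + b2
    err = pred - m                           # gradient of 0.5*MSE w.r.t. pred
    gW2 = H.T @ err / len(m); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)           # backpropagate through tanh
    gW1 = X.T @ dH / len(m); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mae = np.abs((np.tanh(X @ W1 + b1) @ W2 + b2) - m).mean()  # inversion error
```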

  15. A comparison of random draw and locally neutral models for the avifauna of an English woodland.

    PubMed

    Dolman, Andrew M; Blackburn, Tim M

    2004-06-03

    Explanations for patterns observed in the structure of local assemblages are frequently sought with reference to interactions between species, and between species and their local environment. However, analyses of null models, in which non-interactive local communities are assembled from regional species pools, have demonstrated that much of the structure of local assemblages remains in simulated assemblages from which local interactions have been excluded. Here we compare the ability of two null models to reproduce the breeding bird community of Eastern Wood, a 16-hectare woodland in England, UK. A random draw model, in which there is complete annual replacement of the community by immigrants from the regional pool, is compared to a locally neutral community model with two additional parameters describing the proportion of the community replaced annually (per capita death rate) and the proportion of individuals recruited locally rather than as immigrants from the regional pool. Both the random draw and the locally neutral model are capable of reproducing with significant accuracy several features of the observed structure of the annual Eastern Wood breeding bird community, including species relative abundances, species richness and species composition. The two additional parameters of the neutral model result in a qualitatively more realistic representation of the Eastern Wood breeding bird community, particularly of its dynamics through time. Because these parameters can be varied, a close quantitative fit between the model and observed communities can be achieved, particularly with respect to annual species richness and species accumulation through time. The presence of additional free parameters does not detract from the qualitative improvement in the model, and the neutral model remains a model of local community structure that is null with respect to species differences at the local scale.
The ability of this locally neutral model to describe a larger number of woodland bird communities, with either little variation in its parameters or with variation explained by features local to the woods themselves (such as a wood's area and isolation), will be a key subsequent test of its relevance.

  16. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were only mildly sensitive to the sediment output and insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and moderately sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were mildly sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive to the nitrogen and phosphorus nutrient outputs. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were mildly sensitive to the corresponding outputs. The simulation and verification of runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results have direct reference value for AnnAGNPS parameter selection and calibration. The runoff simulation results for the study area also proved that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
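
    The perturbation method reduces to a dimensionless sensitivity index; a sketch using the standard relative-sensitivity form on a hypothetical toy model.

```python
def sensitivity_index(model, p0, delta=0.1):
    """Dimensionless perturbation sensitivity: the relative change in model
    output per relative change in a parameter,
        S = (dY / Y) / (dP / P),
    evaluated with a central difference at +/- delta (10% by default).
    `model` maps one parameter value to one output; parameters with larger
    |S| dominate the simulation results."""
    y0 = model(p0)
    y_hi, y_lo = model(p0 * (1 + delta)), model(p0 * (1 - delta))
    return ((y_hi - y_lo) / y0) / (2 * delta)

# Toy "runoff" model: output proportional to the parameter squared => S = 2.
S = sensitivity_index(lambda p: 3.0 * p**2, p0=5.0)
```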

  17. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  18. Item Response Theory for Peer Assessment

    ERIC Educational Resources Information Center

    Uto, Masaki; Ueno, Maomi

    2016-01-01

    As an assessment method based on a constructivist approach, peer assessment has become popular in recent years. However, in peer assessment, a problem remains that reliability depends on the rater characteristics. For this reason, some item response models that incorporate rater parameters have been proposed. Those models are expected to improve…

  19. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and especially differentiating which model better describes the evolutionary process behind an observed network, remains a challenge. The traditional way is to use a model with presumed parameters to generate a network and then evaluate the fitness by summary statistics, which, however, cannot capture complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method's power to differentiate models and estimate parameters on simulated data and found significant improvement in performance benchmarks compared with a previous method. We further applied our method to real protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for the human PPI network and the Scale-Free model as the predominant mechanism for the yeast PPI network. PMID:26357273
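
    The ABC core of such a method can be sketched with plain rejection sampling; ABC-DEP replaces this rejection step with differential-evolution-driven population updates, and the one-parameter model below is an illustrative stand-in for a full network simulator.

```python
import numpy as np

# Minimal ABC rejection sampler: draw parameters from the prior, simulate,
# and keep draws whose simulated summary statistic falls within a tolerance
# of the observed one. The surviving draws approximate the posterior.

rng = np.random.default_rng(2)

def simulate(theta):
    """Toy generative model: a network summary (e.g. mean degree) growing
    with attachment rate theta, plus simulation noise. Hypothetical."""
    return theta * 3.0 + rng.normal(0.0, 0.1)

observed = 6.0                        # observed summary statistic
prior = rng.uniform(0.0, 5.0, 20000)  # draws from a U(0, 5) prior on theta
accepted = np.array([t for t in prior if abs(simulate(t) - observed) < 0.15])
posterior_mean = accepted.mean()      # concentrates near theta = 2.0
```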

  20. The nearby triple star HIP 101955

    NASA Astrophysics Data System (ADS)

    Fang, Xia

    2018-04-01

    The dynamical state of the nearby triple star HIP 101955, whose orbits are strongly mutually inclined, still remains to be settled. Its long-term dynamical stability therefore deserves to be discussed on the basis of new dynamical state parameters (component masses and kinematic parameters) derived by fitting an accurate three-body model to the radial velocities, the Hipparcos Intermediate Astrometric Data (HIAD), and the accumulated speckle and visual data. It is found that the three-body system remains intact and most likely undergoes Kozai cycles. With the high-precision data already accumulated, three-body effects cannot always be neglected in the determination of the dynamical state, and this is expected to be the general case with the available Gaia data.

  1. The heuristic value of redundancy models of aging.

    PubMed

    Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon

    2015-11-01

    Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal-level processes remains a challenge. We propose that complementary top-down, data-directed modelling of organismal-level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in the parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify the mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from it, (iv) we fit a redundancy model to the mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced the mortality rate without affecting aging, while temperature manipulations had more transient effects on the mortality rate and did affect aging. We show that, after adjusting model parameters, the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight into the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
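
    The redundancy idea can be sketched with a serial system of parallel redundant blocks, a Gavrilov-Gavrilova-style construction; the rates and block counts below are illustrative assumptions.

```python
import math

def block_hazard(t, n, k):
    """Failure hazard of one block of n redundant elements, each failing
    independently at constant rate k. The block fails only when its last
    element fails, so its hazard rises with age as redundancy is lost."""
    F = 1.0 - math.exp(-k * t)                  # element failure probability
    surv = 1.0 - F**n                           # block survival probability
    dens = n * k * F**(n - 1) * math.exp(-k * t)
    return dens / surv

def mortality(t, m, n, k):
    """Mortality rate of an organism built from m independent blocks in
    series: it dies when the first block fails, so block hazards add."""
    return m * block_hazard(t, n, k)

# More redundancy (larger n) gives lower initial mortality but a steeper,
# Gompertz-like rise with age, i.e. "more aging"; with n = 1 the elements
# are non-redundant and mortality is age-independent.
mu_young = mortality(1.0, m=100, n=5, k=0.01)
mu_old = mortality(50.0, m=100, n=5, k=0.01)
flat1 = mortality(1.0, m=100, n=1, k=0.01)
flat2 = mortality(50.0, m=100, n=1, k=0.01)
```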

  2. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables for 252Cf(sf) are investigated by exploring several models of the excitation energy sharing and spin-parity assignment between primary fission fragments. First, the parameters used in the FIFRELIN Monte Carlo code's "reference route" are presented: two parameters for the mass-dependent temperature ratio law, and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the fragments' initial entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities; once these target observables are reproduced, all other fission observables can be predicted. We show here the influence of the input parameters on the saw-tooth curve, and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients and photon strength functions remains unchanged.

  3. Threshold Dynamics of a Temperature-Dependent Stage-Structured Mosquito Population Model with Nested Delays.

    PubMed

    Wang, Xiunan; Zou, Xingfu

    2018-05-21

    Mosquito-borne diseases remain a significant threat to public health and economies. Since mosquitoes are quite sensitive to temperature, global warming may not only worsen disease transmission in currently endemic areas but also enable mosquito populations, together with pathogens, to become established in new regions. Understanding mosquito population dynamics under the impact of temperature is therefore critically important for making disease control policies. In this paper, we develop a stage-structured mosquito population model in the setting of a temperature-controlled experiment. The model turns out to be a system of periodic delay differential equations with periodic delays. We show that the basic reproduction number is a threshold parameter that determines whether the mosquito population goes extinct or remains persistent. We then estimate the parameter values for Aedes aegypti, the mosquito that transmits dengue virus, and verify the analytic result by numerical simulations with temperature data for Colombo, Sri Lanka, where a dengue outbreak occurred in 2017.
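
    The threshold role of the basic reproduction number can be illustrated on a much simpler, constant-rate two-stage matrix model; the paper's model uses periodic, temperature-dependent delays, so everything below is an illustrative simplification.

```python
import numpy as np

# Toy two-stage (aquatic, adult) mosquito model N[t+1] = (T + F) @ N[t],
# where T holds survival/maturation and F holds reproduction. The basic
# reproduction number R0 is the spectral radius of the next-generation
# matrix F @ inv(I - T); the population persists iff R0 > 1.

def r0_and_growth(eggs_per_adult):
    T = np.array([[0.4, 0.0],      # aquatic survival (not yet matured)
                  [0.3, 0.8]])     # maturation to adult; adult survival
    F = np.array([[0.0, eggs_per_adult],
                  [0.0, 0.0]])     # adults produce new aquatic individuals
    NG = F @ np.linalg.inv(np.eye(2) - T)
    R0 = np.max(np.abs(np.linalg.eigvals(NG)))
    growth = np.max(np.abs(np.linalg.eigvals(T + F)))  # dominant eigenvalue
    return R0, growth

R0_low, g_low = r0_and_growth(0.3)    # R0 < 1: extinction (growth < 1)
R0_high, g_high = r0_and_growth(1.0)  # R0 > 1: persistence (growth > 1)
```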

  4. Targeted versus statistical approaches to selecting parameters for modelling sediment provenance

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick

    2017-04-01

    One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps for this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties, where the properties of sediment sources remain constant, or at the very least, any variation in these properties should occur in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to select conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of a discrimination (e.g. Kruskal Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principle Component Analysis). The challenge is that modelling sediment provenance is often not straightforward and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. 
Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support for their initial modelling results. Indeed, element selection can greatly impact modelling results, and multiple lines of evidence help provide confidence when modelling sediment provenance.
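The discrimination step of the statistical approach described above is straightforward to prototype. Below is a minimal sketch (not any specific published workflow) that screens hypothetical tracer measurements from three sources with the Kruskal-Wallis H-test and keeps only tracers that discriminate between sources; the tracer names and concentration values are invented for illustration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Hypothetical tracer concentrations measured in three sediment sources
# (e.g. surface soil, channel bank, subsoil); all values are invented.
tracers = {
    "Cs137":  [rng.normal(m, 1.0, 20) for m in (10.0, 2.0, 6.0)],  # contrasts sources
    "traceB": [rng.normal(5.0, 1.0, 20) for _ in range(3)],        # no contrast
}

selected = []
for name, groups in tracers.items():
    h_stat, p_value = kruskal(*groups)  # H-test: do source medians differ?
    if p_value < 0.05:                  # keep tracers that discriminate
        selected.append(name)

print(selected)  # "Cs137" passes the discrimination screen
```

A real workflow would follow this screen with a parameter selection step (e.g. stepwise Discriminant Function Analysis) and a conservativeness check against the sediment samples.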

  5. Monthly land cover-specific evapotranspiration models derived from global eddy flux measurements and remote sensing data

    Treesearch

    Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen

    2015-01-01

    Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...

  6. First evidence of non-locality in real band-gap metamaterials: determining parameters in the relaxed micromorphic model

    PubMed Central

    Barbagallo, Gabriele; d’Agostino, Marco Valerio; Placidi, Luca; Neff, Patrizio

    2016-01-01

In this paper, we propose the first estimate of some elastic parameters of the relaxed micromorphic model on the basis of real experiments of transmission of longitudinal plane waves across an interface separating a classical Cauchy material (steel plate) and a phononic crystal (steel plate with fluid-filled holes). A procedure is set up in order to identify the parameters of the relaxed micromorphic model by superimposing the experimentally based profile of the reflection coefficient (plotted as function of the wave-frequency) with the analogous profile obtained via numerical simulations. We determine five out of six constitutive parameters which are featured by the relaxed micromorphic model in the isotropic case, plus the determination of the micro-inertia parameter. The sixth elastic parameter, namely the Cosserat couple modulus μc, still remains undetermined, since experiments on transverse incident waves are not yet available. A fundamental result of this paper is the estimate of the non-locality intrinsically associated with the underlying microstructure of the metamaterial. We show that the characteristic length Lc measuring the non-locality of the phononic crystal is of the order of 1/3 of the diameter of its fluid-filled holes. PMID:27436984

  7. First evidence of non-locality in real band-gap metamaterials: determining parameters in the relaxed micromorphic model.

    PubMed

    Madeo, Angela; Barbagallo, Gabriele; d'Agostino, Marco Valerio; Placidi, Luca; Neff, Patrizio

    2016-06-01

In this paper, we propose the first estimate of some elastic parameters of the relaxed micromorphic model on the basis of real experiments of transmission of longitudinal plane waves across an interface separating a classical Cauchy material (steel plate) and a phononic crystal (steel plate with fluid-filled holes). A procedure is set up in order to identify the parameters of the relaxed micromorphic model by superimposing the experimentally based profile of the reflection coefficient (plotted as function of the wave-frequency) with the analogous profile obtained via numerical simulations. We determine five out of six constitutive parameters which are featured by the relaxed micromorphic model in the isotropic case, plus the determination of the micro-inertia parameter. The sixth elastic parameter, namely the Cosserat couple modulus μc, still remains undetermined, since experiments on transverse incident waves are not yet available. A fundamental result of this paper is the estimate of the non-locality intrinsically associated with the underlying microstructure of the metamaterial. We show that the characteristic length Lc measuring the non-locality of the phononic crystal is of the order of 1/3 of the diameter of its fluid-filled holes.

  8. Bayesian inference of physiologically meaningful parameters from body sway measurements.

    PubMed

    Tietäväinen, A; Gutmann, M U; Keski-Vakkuri, E; Corander, J; Hæggström, E

    2017-06-19

The control of the human body sway by the central nervous system, muscles, and conscious brain is of interest since body sway carries information about the physiological status of a person. Several models have been proposed to describe body sway in an upright standing position; however, due to the statistical intractability of the more realistic models, no formal parameter inference has previously been conducted, and the expressive power of such models for real human subjects remains unknown. Using the latest advances in Bayesian statistical inference for intractable models, we fitted a nonlinear control model to posturographic measurements and showed that it can accurately predict the sway characteristics of both simulated and real subjects. Our method provides a full statistical characterization of the uncertainty related to all model parameters as quantified by posterior probability density functions, which is useful for comparisons across subjects and test settings. The ability to infer intractable control models from sensor data opens new possibilities for monitoring and predicting body status in health applications.
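The "Bayesian statistical inference for intractable models" mentioned here is a likelihood-free scheme; the paper's actual method is not reproduced below. As a generic illustration of the idea, this sketch runs rejection approximate Bayesian computation (ABC) on a toy AR(1) surrogate of a sway time series; the summary statistics, prior, and tolerance are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sway(theta, n=500):
    """Toy surrogate of a sway model: AR(1), x_t = theta * x_{t-1} + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def summary(x):
    """Summary statistics: overall sway magnitude and lag-1 correlation."""
    return np.array([x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])

theta_true = 0.8                       # "observed" subject
s_obs = summary(simulate_sway(theta_true))

# Rejection ABC: draw from the prior, simulate, keep the closest 2% of draws.
draws = rng.uniform(0.0, 0.99, 2000)   # uniform prior on theta
dists = np.array([np.linalg.norm(summary(simulate_sway(th)) - s_obs)
                  for th in draws])
posterior = draws[dists < np.quantile(dists, 0.02)]

print(posterior.mean())                # concentrates near theta_true = 0.8
```

The accepted draws approximate the posterior, so posterior spread directly quantifies parameter uncertainty, which is the comparison-across-subjects use case the abstract highlights.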

  9. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  10. Inflation with a constant rate of roll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi, E-mail: motohashi@kicp.uchicago.edu, E-mail: alstar@landau.ac.ru, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp

    2015-09-01

We consider an inflationary scenario where the rate of inflaton roll defined by φ̈/(Hφ̇) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.
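The quoted exact solution can be sketched in the Hamilton-Jacobi formalism. With the convention φ̈ = βHφ̇ for the constant rate of roll (sign and normalization conventions may differ from the paper), the background equations reduce to a linear ODE for H(φ):

```latex
\ddot{\varphi} = \beta H \dot{\varphi}, \qquad
\dot{\varphi} = -2 M_{\mathrm{Pl}}^{2} H'(\varphi)
\;\Longrightarrow\;
H''(\varphi) + \frac{\beta}{2 M_{\mathrm{Pl}}^{2}}\, H(\varphi) = 0 ,
```

whose trigonometric (β > 0) or hyperbolic (β < 0) solutions, inserted into V(φ) = 3M_Pl²H² − 2M_Pl⁴(H′)², generate the exact family of constant-roll potentials.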

  11. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    NASA Astrophysics Data System (ADS)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.

  12. Mixed geographically weighted regression (MGWR) model with weighted adaptive bi-square for case of dengue hemorrhagic fever (DHF) in Surakarta

    NASA Astrophysics Data System (ADS)

    Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.

    2017-06-01

The MGWR model combines the linear regression model and the geographically weighted regression (GWR) model, so it can estimate both global parameters and local parameters that vary with the observation location. The linkage between observation locations is expressed through a specific weighting, here the adaptive bi-square kernel. In this research, we applied the MGWR model with adaptive bi-square weighting to cases of DHF in Surakarta based on 10 factors (variables) supposed to influence the number of people with DHF. The observation units are 51 urban villages, and the variables are the number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu, area width, population density level, family welfare, and region elevation. Based on this research, we obtained 51 MGWR models. The models were divided into 4 groups, with the house index significant as a global variable, area width as a local variable, and the remaining significant variables varying across groups. Global variables are variables that significantly affect all locations, while local variables significantly affect only specific locations.
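The adaptive bi-square weighting used here has a standard closed form: w_ij = (1 − (d_ij/b_i)²)² when d_ij < b_i, and 0 otherwise, where the bandwidth b_i adapts to the distance of the k-th nearest neighbour of location i. A minimal sketch (coordinates and k invented for illustration):

```python
import numpy as np

def adaptive_bisquare_weights(coords, k):
    """Adaptive bi-square weights for GWR: for each regression location i,
    the bandwidth b_i is the distance to its k-th nearest neighbour, and
    w_ij = (1 - (d_ij / b_i)**2)**2 for d_ij < b_i, else 0."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.zeros_like(d)
    for i in range(len(coords)):
        b = np.sort(d[i])[k]              # k-th nearest neighbour distance
        inside = d[i] < b
        W[i, inside] = (1 - (d[i, inside] / b) ** 2) ** 2
    return W

# Illustrative centroids for a handful of "urban village" locations
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
W = adaptive_bisquare_weights(coords, k=3)
print(W[0])   # weight given to each location when fitting at location 0
```

Each row of W would then be used as the weight matrix of a locally weighted least-squares fit at that location, with the globally significant terms held fixed across rows in the mixed (MGWR) variant.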

  13. Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    He, Wei; Williard, Nicholas; Osterman, Michael; Pecht, Michael

    A new method for state of health (SOH) and remaining useful life (RUL) estimations for lithium-ion batteries using Dempster-Shafer theory (DST) and the Bayesian Monte Carlo (BMC) method is proposed. In this work, an empirical model based on the physical degradation behavior of lithium-ion batteries is developed. Model parameters are initialized by combining sets of training data based on DST. BMC is then used to update the model parameters and predict the RUL based on available data through battery capacity monitoring. As more data become available, the accuracy of the model in predicting RUL improves. Two case studies demonstrating this approach are presented.
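The Bayesian Monte Carlo update described above can be sketched with a simplified capacity-fade model. This is an illustration under invented numbers, not the paper's calibrated model: a single exponential fade term stands in for the two-term empirical model, and plain importance resampling stands in for the full BMC/DST pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical capacity-fade model: capacity after k cycles is a * exp(b * k).
# (The paper's empirical model combines two exponential terms; one is kept
# here for brevity.)
def capacity(a, b, k):
    return a * np.exp(b * k)

a_true, b_true, sigma = 1.0, -0.002, 0.005
cycles = np.arange(0, 300, 10)
obs = capacity(a_true, b_true, cycles) + rng.normal(0, sigma, cycles.size)

# Bayesian Monte Carlo step: weight prior parameter samples by the likelihood
# of the monitored capacities, then resample to obtain posterior particles.
n = 5000
a_s = rng.normal(1.0, 0.05, n)      # priors (the paper initializes these by
b_s = rng.normal(-0.003, 0.001, n)  # fusing training cells with DST)
pred = capacity(a_s[:, None], b_s[:, None], cycles)
loglik = -0.5 * np.sum((pred - obs) ** 2, axis=1) / sigma ** 2
w = np.exp(loglik - loglik.max())
w /= w.sum()
idx = rng.choice(n, size=n, p=w)    # resample
a_post, b_post = a_s[idx], b_s[idx]

# RUL: cycles until the posterior-mean fade curve crosses an 80% threshold.
k_eol = np.log(0.8 / a_post.mean()) / b_post.mean()
print(round(k_eol))                 # near ln(0.8) / b_true ≈ 112
```

As more capacity observations accumulate, the likelihood term sharpens and the end-of-life estimate tightens, which is the "accuracy improves as data become available" behavior the abstract reports.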

  14. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

    PubMed

    Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

    2018-02-05

    To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
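The core linear-algebra idea, reducing linearly interdependent parameters to an independent set, can be sketched generically. Given constraints A·x = c, every admissible parameter vector is x = x_p + N·y, where the columns of N span the null space of A and y are the free parameters handed to the optimizer. The example below is a toy with a single reversibility-style constraint, not the authors' implementation.

```python
import numpy as np

def constrained_basis(A, c):
    """Return (x_p, N) such that every x = x_p + N @ y satisfies A @ x = c:
    x_p is a particular solution, and the columns of N span null(A)."""
    x_p = np.linalg.lstsq(A, c, rcond=None)[0]
    u, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    N = vt[rank:].T                      # null-space basis from the SVD
    return x_p, N

# Example: 3 log-rate parameters with one microscopic-reversibility-style
# constraint x0 + x1 - x2 = 0 (illustrative, not a specific channel model).
A = np.array([[1.0, 1.0, -1.0]])
c = np.array([0.0])
x_p, N = constrained_basis(A, c)

y = np.array([0.3, -1.2])               # free parameters for the search engine
x = x_p + N @ y
print(np.allclose(A @ x, c))            # constraint holds for any choice of y
```

Because the constraint is satisfied by construction for every y, the automated search can optimize over the reduced, unconstrained parameter set directly.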

  15. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
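The paper's central point, that even the simplest linear Gaussian SSM can be weakly identified when measurement error dominates, can be reproduced in a few lines. The sketch below simulates a random-walk-plus-noise model and profiles the exact Kalman-filter likelihood over the process noise; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(T, sigma_p, sigma_m):
    """Random-walk state with noisy observations: the simplest linear Gaussian SSM."""
    x = np.cumsum(rng.normal(0, sigma_p, T))
    y = x + rng.normal(0, sigma_m, T)
    return x, y

def kalman_loglik(y, sigma_p, sigma_m):
    """Exact log-likelihood of the random-walk SSM via the Kalman filter."""
    m, P, ll = 0.0, 1e6, 0.0             # diffuse-ish initial state
    for obs in y:
        P = P + sigma_p ** 2             # predict
        S = P + sigma_m ** 2             # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (obs - m) ** 2 / S)
        K = P / S                        # update
        m = m + K * (obs - m)
        P = (1 - K) * P
    return ll

x, y = simulate(200, sigma_p=0.1, sigma_m=1.0)   # measurement error >> process noise
# Profile the likelihood over sigma_p: it is nearly flat, which is exactly
# the parameter-estimation problem the paper demonstrates.
grid = [0.01, 0.05, 0.1, 0.2, 0.4]
lls = [kalman_loglik(y, sp, 1.0) for sp in grid]
print(max(lls) - min(lls))   # small spread => weakly identified parameter
</n```

A grossly wrong process noise is still rejected (the likelihood drops sharply for, say, sigma_p = 10), but discriminating among plausible small values is hard, so downstream state estimates and ecological conclusions inherit that uncertainty.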

  16. Measurements of the ionization coefficient of simulated iron micrometeoroids

    NASA Astrophysics Data System (ADS)

    Thomas, Evan; Horányi, Mihály; Janches, Diego; Munsat, Tobin; Simolka, Jonas; Sternovsky, Zoltan

    2016-04-01

    The interpretation of meteor radar observations has remained an open problem for decades. One of the most critical parameters to establish the size of an incoming meteoroid from radar echoes is the ionization coefficient, β, which still remains poorly known. Here we report on new experiments to simulate micrometeoroid ablation in laboratory conditions to measure β for iron particles impacting N2, air, CO2, and He gases. This new data set is compared to previous laboratory data where we find agreement except for He and air impacts > 30 km/s. We calibrate the Jones model of β(v) and provide fit parameters to these gases and find agreement with all gases except CO2 and high-speed air impacts where we observe βair > 1 for velocities > 70 km/s. These data therefore demonstrate potential problems with using the Jones model for CO2 atmospheres as well as for high-speed meteors on Earth.

  17. Effect of Surface Tension Anisotropy and Welding Parameters on Initial Instability Dynamics During Solidification: A Phase-Field Study

    NASA Astrophysics Data System (ADS)

    Yu, Fengyi; Wei, Yanhong

    2018-05-01

    The effects of surface tension anisotropy and welding parameters on initial instability dynamics during gas tungsten arc welding of an Al-alloy are investigated by a quantitative phase-field model. The results show that the surface tension anisotropy and welding parameters affect the initial instability dynamics in different ways during welding. The surface tension anisotropy does not influence the solute diffusion process but does affect the stability of the solid/liquid interface during solidification. The welding parameters affect the initial instability dynamics by varying the growth rate and thermal gradient. The incubation time decreases, and the initial wavelength remains stable as the welding speed increases. When welding power increases, the incubation time increases and the initial wavelength slightly increases. Experiments were performed for the same set of welding parameters used in modeling, and the results of the experiments and simulations were in good agreement.

  18. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  19. Directions for computational mechanics in automotive crashworthiness

    NASA Technical Reports Server (NTRS)

    Bennett, James A.; Khalil, T. B.

    1993-01-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  20. Directions for computational mechanics in automotive crashworthiness

    NASA Astrophysics Data System (ADS)

    Bennett, James A.; Khalil, T. B.

    1993-08-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  1. Reconstruction of sub-surface archaeological remains from magnetic data using neural computing.

    NASA Astrophysics Data System (ADS)

    Bescoby, D. J.; Cawley, G. C.; Chroston, P. N.

    2003-04-01

    The remains of a former Roman colonial settlement, once part of the classical city of Butrint in southern Albania have been the subject of a high resolution magnetic survey using a caesium-vapour magnetometer. The survey revealed the surviving remains of an extensive planned settlement and a number of outlying buildings, today buried beneath over 0.5 m of alluvial deposits. The aim of the current research is to derive a sub-surface model from the magnetic survey measurements, allowing an enhanced archaeological interpretation of the data. Neural computing techniques are used to perform the non-linear mapping between magnetic data and corresponding sub-surface model parameters. The adoption of neural computing paradigms potentially holds several advantages over other modelling techniques, allowing fast solutions for complex data, while having a high tolerance to noise. A multi-layer perceptron network with a feed-forward architecture is trained to estimate the shape and burial depth of wall foundations using a series of representative models as training data. Parameters used to forward model the training data sets are derived from a number of trial trench excavations targeted over features identified by the magnetic survey. The training of the network was optimized by first applying it to synthetic test data of known source parameters. Pre-processing of the network input data, including the use of a rotationally invariant transform, enhanced network performance and the efficiency of the training data. The approach provides good results when applied to real magnetic data, accurately predicting the depths and layout of wall foundations within the former settlement, verified by subsequent excavation. The resulting sub-surface model is derived from the averaged outputs of a ‘committee’ of five networks, trained with individualized training sets. 
Fuzzy logic inference has also been used to combine individual network outputs through correlation with data from a second geophysical technique, allowing the integration of data from a separate series of measurements.

  2. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
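The derivation of the actual-flaw-size PDF follows from Bayes' rule once a probability of detection (POD) curve is assumed: given an indication, the prior size distribution is reweighted by POD(a); given no indication, by the miss probability 1 − POD(a). A numeric sketch with an invented log-logistic POD and lognormal prior (the paper's parametric choices are not reproduced here):

```python
import numpy as np

# Grid over true flaw size a (mm); all curve parameters are invented.
a = np.linspace(0.01, 10.0, 2000)
da = a[1] - a[0]

pod = 1.0 / (1.0 + (a / 1.5) ** -4.0)              # log-logistic POD, ~50% at 1.5 mm
prior = np.exp(-0.5 * (np.log(a) / 0.8) ** 2) / a  # lognormal prior, median 1 mm

# Bayes' rule: an indication reweights the prior by POD(a); a clean
# inspection reweights it by the miss probability 1 - POD(a).
post_detect = pod * prior
post_detect /= post_detect.sum() * da
post_miss = (1.0 - pod) * prior
post_miss /= post_miss.sum() * da

mean_detect = (a * post_detect).sum() * da
mean_miss = (a * post_miss).sum() * da
print(mean_detect > mean_miss)   # True: detection shifts the size PDF upward
```

These two posteriors correspond to the abstract's two inspection scenarios, and either one can then be propagated through a fatigue-crack-growth model to get reliability and remaining-useful-life estimates.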

  3. Parameter identification of process simulation models as a means for knowledge acquisition and technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Ifanti, Konstantina

    2012-12-01

    Process simulation models are usually empirical, therefore there is an inherent difficulty in serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning to facilitate verification of the dependence on the production conditions; in such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanismic models may be proved valid, in which case parameter identification is required to find out the independent/explanatory variables and parameters, which each parameter depends on. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework under the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated by quoting two representative case examples on wastewater treatment.

  4. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

Land Surface Models (LSMs) use a plethora of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, especially for models with many parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore the number of model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations, yielding a considerable number of parameters (~100).
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25, for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening also identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
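The sequential screening builds on Morris-style elementary effects, which can be illustrated on a toy function (NOAH-MP itself is far too large to run here). The sketch ranks parameters by their mean absolute one-at-a-time effect; the model, parameter ranges, and step size are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(p):
    """Toy stand-in for an LSM output: depends strongly on p[0], weakly on
    p[1], and not at all on p[2] (an inert 'hidden' parameter)."""
    return 5.0 * p[0] + 0.5 * np.sin(p[1]) + 0.0 * p[2]

def elementary_effects(f, dim, r=50, delta=0.1):
    """Morris-style screening: mean absolute one-at-a-time effect per parameter."""
    mu_star = np.zeros(dim)
    for _ in range(r):
        base = rng.uniform(0, 1 - delta, dim)   # random base point in [0, 1)^dim
        f0 = f(base)
        for j in range(dim):
            step = base.copy()
            step[j] += delta                    # perturb one parameter at a time
            mu_star[j] += abs(f(step) - f0) / delta
    return mu_star / r

mu_star = elementary_effects(model, dim=3)
print(np.argsort(mu_star)[::-1])   # parameters ranked by influence → [0 1 2]
```

Only the parameters with large mu_star would then be passed to the (much more expensive) Sobol analysis or to calibration, which is the cost-saving the study exploits.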

  5. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant pore structures. The major challenge for those models remains the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (wi) corresponding to the representative macro- and micropores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
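The reason two fluids can identify two unknowns per pore class is that Newtonian and power-law (non-Newtonian) discharges scale differently with pore radius: Hagen-Poiseuille gives Q ∝ r⁴, while a power-law fluid in a tube gives Q ∝ r^(3+1/n). The sketch below (all fluid and pore values invented, single cylindrical pore class for brevity) recovers the macropore radius from the two bulk discharges by eliminating the pore count and bisecting on the radius.

```python
import numpy as np

# Laminar flow of a Newtonian fluid (viscosity mu) and of a power-law fluid
# (consistency m_pl, index n_pl) through one cylindrical pore of radius r.
# Both expressions are standard (Hagen-Poiseuille and Rabinowitsch).
dP_L = 1.0e4            # pressure gradient dP/L [Pa/m] (illustrative)
mu = 1.0e-3             # water viscosity [Pa s]
m_pl, n_pl = 0.5, 0.6   # hypothetical shear-thinning power-law parameters

def q_newton(r):
    return np.pi * r ** 4 * dP_L / (8 * mu)

def q_powerlaw(r):
    return (np.pi * n_pl * r ** 3 / (3 * n_pl + 1)) * (dP_L * r / (2 * m_pl)) ** (1 / n_pl)

# "Measured" bulk discharges for a sample with N macropores of radius r_true
r_true, N_true = 2.0e-4, 50
Q_w, Q_p = N_true * q_newton(r_true), N_true * q_powerlaw(r_true)

# Because the two fluids scale differently with r, the pair (r, N) is
# identifiable: eliminate N via the water equation, then bisect on r.
def mismatch(r):
    N = Q_w / q_newton(r)               # pore count implied by the water data
    return N * q_powerlaw(r) - Q_p

lo, hi = 1.0e-5, 1.0e-3
for _ in range(80):                     # simple bisection
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_est = 0.5 * (lo + hi)
print(r_est)   # ≈ 2e-4, the true macropore radius
```

The study's full model adds a second (micropore) domain and the flow fractions w_i, giving more unknowns but also more equations, one per fluid per experiment.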

  6. Control of Systems With Slow Actuators Using Time Scale Separation

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vehram; Nguyen, Nhan

    2009-01-01

This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result from conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is of the order of this small parameter. For the unknown system, the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
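The time-scale-separation idea can be illustrated with a scalar toy example: a controller designed as if the actuator were instantaneous, applied through a first-order actuator with small time constant eps. The plant, reference, and gains below are illustrative assumptions, not the paper's system; the point is only that the tracking error shrinks roughly in proportion to eps.

```python
import numpy as np

def simulate(eps, t_end=20.0, dt=1e-4):
    """Plant x' = -x + xa driven through a lagging actuator eps*xa' = -xa + u.
    The control u is designed on the reduced model (actuator treated as instantaneous)."""
    n = int(t_end / dt)
    x, xa = 0.0, 0.0
    err = 0.0
    for k in range(n):
        t = k * dt
        r, rdot = np.sin(t), np.cos(t)
        u = rdot + r                 # reduced-order design: would give e' = -e exactly
        xa += dt * (-xa + u) / eps   # fast actuator dynamics
        x += dt * (-x + xa)
        if t > t_end / 2:            # record error after transients decay
            err = max(err, abs(x - r))
    return err

e1, e2 = simulate(0.01), simulate(0.05)
print(e1, e2)  # the residual tracking error scales roughly with eps
```

A linearized error analysis gives e' = -e + eps*u', so the steady error amplitude is O(eps), which the two runs confirm numerically.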

  7. Remaining useful life prediction of degrading systems subjected to imperfect maintenance: Application to draught fans

    NASA Astrophysics Data System (ADS)

    Wang, Zhao-Qiang; Hu, Chang-Hua; Si, Xiao-Sheng; Zio, Enrico

    2018-02-01

Current degradation modeling and remaining useful life prediction studies share a common assumption that the degrading systems are not maintained or are maintained perfectly (i.e., restored to an as-good-as-new state). This paper concerns the issues of how to model the degradation process and predict the remaining useful life of degrading systems subjected to imperfect maintenance activities, which can restore the health condition of a degrading system to any degradation level between as-good-as new and as-bad-as old. Toward this end, a nonlinear model driven by a Wiener process is first proposed to characterize the degradation trajectory of the degrading system subjected to imperfect maintenance, where negative jumps are incorporated to quantify the influence of imperfect maintenance activities on the system's degradation. Then, the probability density function of the remaining useful life is derived analytically by a space-scale transformation, i.e., transforming the constructed degradation model with negative jumps crossing a constant threshold level to a Wiener process model crossing a random threshold level. To implement the proposed method, unknown parameters in the degradation model are estimated by the maximum likelihood estimation method. Finally, the proposed degradation modeling and remaining useful life prediction method are applied to a practical case of draught fans, a kind of mechanical system used in steel mills. The results reveal that, for a degrading system subjected to imperfect maintenance, the proposed method obtains more accurate remaining useful life predictions than the benchmark model in the literature.
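The degradation-with-jumps construction can be sketched by Monte Carlo instead of the paper's analytic first-passage derivation; drift, diffusion, threshold, and jump size below are illustrative.

```python
import numpy as np

def simulate_rul(lam=1.0, sigma=0.2, threshold=10.0, x0=5.0,
                 maint_every=100, jump=1.5, dt=0.02, n_paths=300, seed=1):
    """Monte Carlo first-passage times for a Wiener degradation process with
    negative jumps (imperfect maintenance) every `maint_every` steps."""
    rng = np.random.default_rng(seed)
    ruls = np.empty(n_paths)
    for p in range(n_paths):
        x, i = x0, 0
        while x < threshold:
            i += 1
            x += lam * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if i % maint_every == 0:
                x = max(x - jump, 0.0)   # imperfect repair: partial restoration only
        ruls[p] = i * dt
    return ruls

with_maint = simulate_rul()
no_maint = simulate_rul(jump=0.0)
print(with_maint.mean(), no_maint.mean())  # maintenance postpones threshold crossing
```

Without maintenance the mean first-passage time of a drifted Wiener process from 5 to 10 with drift 1 is 5; the negative jumps visibly lengthen the simulated lifetimes.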

  8. Process model comparison and transferability across bioreactor scales and modes of operation for a mammalian cell bioprocess.

    PubMed

    Craven, Stephen; Shirsat, Nishikant; Whelan, Jessica; Glennon, Brian

    2013-01-01

A Monod kinetic model, logistic equation model, and statistical regression model were developed for a Chinese hamster ovary cell bioprocess operated under three different modes of operation (batch, bolus fed-batch, and continuous fed-batch) and grown on two different bioreactor scales (3 L bench-top and 15 L pilot-scale). The Monod kinetic model was developed for all modes of operation under study and predicted cell density and glucose, glutamine, lactate, and ammonia concentrations well for the bioprocess. However, it was computationally demanding due to the large number of parameters necessary to produce a good model fit. The transferability of the Monod kinetic model structure and parameter set across bioreactor scales and modes of operation was investigated and a parameter sensitivity analysis performed. The experimentally determined parameters had the greatest influence on model performance. They changed with scale and mode of operation, but were easily calculated. The remaining parameters, which were fitted using a differential evolutionary algorithm, were not as crucial. Logistic equation and statistical regression models were investigated as alternatives to the Monod kinetic model. They were less computationally intensive to develop due to the absence of a large parameter set. However, modeling of the nutrient and metabolite concentrations proved to be troublesome due to the logistic equation model structure and the inability of both models to incorporate a feed. The complexity, computational load, and effort required for model development has to be balanced with the necessary level of model sophistication when choosing which model type to develop for a particular application. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
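The Monod kinetics at the heart of the first model can be sketched for a single substrate in batch mode; the constants and yield below are illustrative, not the CHO bioprocess values.

```python
import numpy as np

def monod_batch(mu_max=0.5, ks=0.5, yield_xs=0.6, x0=0.1, s0=10.0,
                dt=0.01, t_end=30.0):
    """Euler integration of batch growth with Monod kinetics:
       dX/dt = mu(S)*X,  dS/dt = -mu(S)*X / Y_xs,  mu(S) = mu_max*S/(Ks+S)."""
    steps = int(t_end / dt)
    x, s = x0, s0
    for _ in range(steps):
        mu = mu_max * s / (ks + s)
        dx = mu * x * dt
        x += dx
        s = max(s - dx / yield_xs, 0.0)
    return x, s

x_final, s_final = monod_batch()
print(x_final, s_final)  # substrate exhausted; biomass near x0 + Y_xs*s0 = 6.1
```

Mass balance ties the two pools together: the biomass gained equals the yield times the substrate consumed, which provides a quick correctness check on the integration.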

  9. Perceptual control models of pursuit manual tracking demonstrate individual specificity and parameter consistency.

    PubMed

    Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren

    2017-11-01

Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 1-min, pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
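A single-level perceptual control loop of the kind PCT fits to individuals can be sketched minimally; the gain, reference value, and sinusoidal target here are illustrative stand-ins for the fitted parameters and the study's pseudorandom target.

```python
import numpy as np

def pct_track(target, gain=80.0, reference=0.0, dt=0.01):
    """Single-level perceptual control loop: the output (cursor) is integrated
    so as to keep the perceived quantity (target - cursor) near `reference`."""
    cursor = np.zeros_like(target)
    for k in range(1, len(target)):
        perception = target[k - 1] - cursor[k - 1]   # perceived tracking error
        error = reference - perception               # deviation from internal reference
        cursor[k] = cursor[k - 1] - gain * error * dt
    return cursor

t = np.arange(0, 10, 0.01)
target = np.sin(0.5 * t)
cursor = pct_track(target)
rmse = np.sqrt(np.mean((cursor[200:] - target[200:]) ** 2))
print(rmse)  # close pursuit of the slowly moving target
```

Fitting such a model to an individual amounts to optimizing `gain` and `reference` (and, in richer versions, perceptual delay) against that person's recorded cursor trace.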

  10. The effect of noise-induced variance on parameter recovery from reaction times.

    PubMed

    Vadillo, Miguel A; Garaizar, Pablo

    2016-03-31

Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fits to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
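The parameter-recovery question can be sketched with method-of-moments fitting of the ex-Gaussian (a simplified stand-in for the fitting procedures used in the paper); the parameter values and the uniform "technical noise" are illustrative.

```python
import numpy as np

def ex_gaussian_sample(mu, sigma, tau, n, rng):
    """RT = Gaussian(mu, sigma) + Exponential(tau)."""
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

def moment_fit(rt):
    """Method-of-moments ex-Gaussian fit, using m3 = 2*tau^3 for the exGaussian."""
    m, v = rt.mean(), rt.var()
    m3 = np.mean((rt - m) ** 3)
    tau = (m3 / 2.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(v - tau ** 2, 1e-12))
    return m - tau, sigma, tau

rng = np.random.default_rng(42)
rt = ex_gaussian_sample(400.0, 40.0, 100.0, 50_000, rng)
noisy = rt + rng.uniform(0.0, 10.0, rt.size)   # additive technical noise (e.g. timer lag)
clean_fit, noisy_fit = moment_fit(rt), moment_fit(noisy)
print(clean_fit)
print(noisy_fit)  # the noise mainly shifts mu; sigma and tau are barely affected
```

Because uniform noise is symmetric, it leaves the third central moment (and hence the tau estimate) essentially untouched, which mirrors the paper's finding that noise-induced variance is confined to specific parameters.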

  11. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
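Structural non-identifiability can be shown in miniature: in the toy model y = a·b·x only the product a·b is identifiable, so repeated fits from different initializations recover the product tightly while the individual parameters wander. This sketch is illustrative, not the RVFV model.

```python
import numpy as np

def fit_ab(x, y, a_inits, lr=0.01, steps=2000):
    """Gradient-descent fits of y = a*b*x from several random initializations."""
    results = []
    for a0 in a_inits:
        a, b = a0, 1.0
        for _ in range(steps):
            resid = a * b * x - y
            ga = np.mean(2 * resid * b * x)   # dL/da
            gb = np.mean(2 * resid * a * x)   # dL/db
            a -= lr * ga
            b -= lr * gb
        results.append((a, b))
    return np.array(results)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = 6.0 * x + rng.normal(0, 0.05, x.size)        # true a*b = 6
fits = fit_ab(x, y, a_inits=[0.5, 1.0, 2.0, 4.0])
products = fits[:, 0] * fits[:, 1]
print(products)    # tightly clustered near 6: the product is identifiable
print(fits[:, 0])  # the individual a values depend on the initialization
```

Reparameterizing the model in terms of the identifiable combination (here c = a·b) is exactly the remedy the abstract advocates for nested models.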

  12. Toward seamless hydrologic predictions across spatial scales

    NASA Astrophysics Data System (ADS)

    Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine

    2017-09-01

    Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
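The core ordering prescribed by MPR (apply a transfer function at the fine predictor scale, then upscale the resulting parameter field) can be sketched on a synthetic field; the transfer function below is a hypothetical nonlinear rule, not mHM's, and serves only to show that the order of operations matters.

```python
import numpy as np

def transfer_function(sand):
    """Hypothetical nonlinear pedo-transfer rule (illustrative assumption)."""
    return 0.5 + 1.5 * (1.0 - sand) ** 2

def upscale(field, factor):
    """Arithmetic-mean upscaling operator over factor x factor blocks."""
    n = field.shape[0] // factor
    return field[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

rng = np.random.default_rng(7)
sand_fine = rng.uniform(0.1, 0.9, (64, 64))      # synthetic fine-scale predictor

# MPR order: transfer function at the fine scale, THEN upscale the parameter field
p_mpr = upscale(transfer_function(sand_fine), 8)

# naive order: upscale the predictor first, then apply the transfer function
p_naive = transfer_function(upscale(sand_fine, 8))

print(np.abs(p_mpr - p_naive).max())  # nonzero: the two orders disagree
```

For any nonlinear transfer function the two orders differ (Jensen's inequality), which is why deriving coarse parameters by upscaling fine-scale parameter fields, rather than re-applying rules to coarse predictors, is central to the flux-matching argument.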

  13. On the Induction of the First-Order Phase Magnetic Transitions by Acoustic Vibrations in MnSi

    NASA Astrophysics Data System (ADS)

    Pikin, S. A.

    2017-12-01

The main result of the paper is the conclusion that the magnetic phase transition in MnSi always remains first order at any temperature and magnetic field. To this end, a model of coupling of an order parameter with other degrees of freedom is used. Owing to the coupling of magnetic order parameters with long-wave acoustic phonons, in the presence of the nonsingular parts of the bulk and shear moduli, a first-order transition occurs; in particular, near the transition the heat capacity and the compressibility remain finite, even if the heat capacity becomes infinite in the system disregarding the acoustic phonons. The role of the Frenkel heterophase fluctuations is discussed. The impurity effect shows that, for some phases, the heat capacity of the system remains continuous and finite at the transition point. It is supposed that the transition is progressively smoothed by these fluctuations upon application of the magnetic field.

  14. On the induction of the first-order phase magnetic transitions by acoustic vibrations in MnSi

    NASA Astrophysics Data System (ADS)

    Pikin, S. A.

    2017-12-01

The main result of the paper is the conclusion that the magnetic phase transition in MnSi always remains first order at any temperature and magnetic field. To this end, a model of coupling of an order parameter with other degrees of freedom is used. Owing to the coupling of magnetic order parameters with long-wave acoustic phonons, in the presence of the nonsingular parts of the bulk and shear moduli, a first-order transition occurs; in particular, near the transition the heat capacity and the compressibility remain finite, even if the heat capacity would become infinite in a system that disregards the acoustic phonons. The role of the Frenkel heterophase fluctuations is discussed. The impurity effect shows that, for some phases, the heat capacity of the system remains continuous and finite at the transition point. It is supposed that the transition is progressively smoothed by these fluctuations upon application of the magnetic field.

  15. Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo

    2016-01-01

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
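The Bayesian updating step can be sketched with a cheap Paris-law growth model standing in for the surrogate-accelerated crack model; all constants, the prior box, and the noise level are illustrative assumptions.

```python
import numpy as np

def crack_length(log_c, m, a0=1e-3, dsig=100.0, n_cycles=1500, record_every=150):
    """Euler integration of Paris-law growth: da/dN = C * (dsig*sqrt(pi*a))^m."""
    c = 10.0 ** log_c
    a, out = a0, []
    for n in range(1, n_cycles + 1):
        a += c * (dsig * np.sqrt(np.pi * a)) ** m
        if n % record_every == 0:
            out.append(a)
    return np.array(out)

def log_post(theta, obs, noise=1e-4):
    """Gaussian log-likelihood with a uniform prior box over (log10 C, m)."""
    log_c, m = theta
    if not (-11.0 < log_c < -7.0 and 2.0 < m < 4.0):
        return -np.inf
    resid = crack_length(log_c, m) - obs
    return -0.5 * np.sum((resid / noise) ** 2)

rng = np.random.default_rng(0)
obs = crack_length(-8.85, 3.0) + rng.normal(0.0, 1e-4, 10)  # synthetic diagnosis data
theta = np.array([-9.3, 2.7])                               # deliberately poor start
lp0 = lp = log_post(theta, obs)
best_lp, best_theta = lp, theta.copy()
samples = []
for _ in range(800):                                        # Metropolis random walk
    prop = theta + rng.normal(0.0, [0.05, 0.02])
    lp_prop = log_post(prop, obs)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        if lp > best_lp:
            best_lp, best_theta = lp, theta.copy()
    samples.append(theta.copy())
samples = np.array(samples)
print(best_theta, best_lp - lp0)  # the chain climbs toward the (log C, m) ridge
```

Each posterior evaluation requires a full crack-growth simulation; this is precisely the cost that the surrogate models in the paper reduce by three orders of magnitude.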

  16. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
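The core loop of such a model (oxide growth each hot cycle followed by spallation on cooldown) can be sketched in a few lines; the growth law, spall constant, and stoichiometric fractions below are illustrative, not the program's actual input options.

```python
import numpy as np

def cyclic_oxidation(kp=0.01, q0=0.002, n_cycles=1000, f_oxygen=0.47, f_metal=0.53):
    """Simplified COSP-style loop: parabolic scale growth during each hot cycle,
    then spallation of an amount proportional to the retained scale weight squared."""
    wr = 0.0               # retained oxide weight
    lost_metal = 0.0       # cumulative metal carried away in spalled oxide
    history = []
    for _ in range(n_cycles):
        wr = np.sqrt(wr ** 2 + kp)     # parabolic growth during one cycle
        ws = q0 * wr ** 2              # spall on cooldown
        wr -= ws
        lost_metal += f_metal * ws
        history.append(f_oxygen * wr - lost_metal)   # net specimen weight change
    return np.array(history)

w = cyclic_oxidation()
print(w.max(), w[-1])  # weight gain peaks early, then repeated spalling drives net loss
```

The characteristic shape (initial weight gain, a maximum, then a terminal linear loss rate) is exactly the family of curves the run summary quantifies, and the balance between kp and the spall constant sets where the peak falls.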

  17. Influence of the Fluid on the Parameters and Limits of Bubble Detonation

    NASA Astrophysics Data System (ADS)

    Pinaev, A. V.; Prokhorov, E. S.

    2017-12-01

    The compression and inflammation of reactive gas bubbles in bubble detonation waves have been studied, and the considerable influence of the fluid (liquid or vapor) on the detonation parameters has been found. It has been shown numerically that the final values of the pressure and temperature significantly decrease if the temperature dependence of the adiabatic index is taken into account at the compression stage. The parameters of reactive gas combustion products in the bubble have been calculated in terms of an equilibrium model, and the influence of the fluid that remains in the bubble in the form of microdroplets and vapor on these parameters has been investigated.

  18. Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.

    2018-02-01

We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks is simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural networks-based inversion. Computational examples confirm the feasibility and accuracy of this approach.

  19. The Role of Parvalbumin, Sarcoplasmatic Reticulum Calcium Pump Rate, Rates of Cross-Bridge Dynamics, and Ryanodine Receptor Calcium Current on Peripheral Muscle Fatigue: A Simulation Study

    PubMed Central

    Neumann, Verena

    2016-01-01

A biophysical model of the excitation-contraction pathway, which has previously been validated for slow-twitch and fast-twitch skeletal muscles, is employed to investigate key biophysical processes leading to peripheral muscle fatigue. Special emphasis is placed on investigating how the model's original parameter sets can be interpolated such that realistic behaviour with respect to contraction time and fatigue progression can be obtained for a continuous distribution of the model's parameters across the muscle units, as found for the functional properties of muscles. The parameters are divided into 5 groups describing (i) the sarcoplasmatic reticulum calcium pump rate, (ii) the cross-bridge dynamics rates, (iii) the ryanodine receptor calcium current, (iv) the rates of binding of magnesium and calcium ions to parvalbumin and corresponding dissociations, and (v) the remaining processes. The simulations reveal that the first two parameter groups are sensitive to contraction time but not fatigue, the third parameter group affects both considered properties, and the fourth parameter group is only sensitive to fatigue progression. Hence, within the scope of the underlying model, further experimental studies should investigate parvalbumin dynamics and the ryanodine receptor calcium current to enhance the understanding of peripheral muscle fatigue. PMID:27980606

  20. Multiscale Models of Melting Arctic Sea Ice

    DTIC Science & Technology

    2013-09-30

September 29, 2013. LONG-TERM GOALS: Sea ice reflectance, or albedo, a key parameter in climate modeling, is primarily determined by melt pond...and ice floe configurations. Ice-albedo feedback has played a major role in the recent declines of the summer Arctic sea ice pack. However...understanding the evolution of melt ponds and sea ice albedo remains a significant challenge to improving climate models. Our research is focused on

  1. Reconstruction of interaction rate in holographic dark energy

    NASA Astrophysics Data System (ADS)

    Mukherjee, Ankan

    2016-11-01

The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate has been reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
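The reconstruction step itself is straightforward to sketch: given a parametrization of the deceleration parameter q(z), the Hubble rate follows from H(z) = H0·exp(∫ from 0 to z of (1+q)/(1+z′) dz′). The constant-q check below is illustrative; a chosen parametrization of q(z) would simply replace `q_func`.

```python
import numpy as np

def hubble_from_q(q_func, z_grid, h0=70.0):
    """Reconstruct H(z) from a deceleration-parameter history:
       H(z) = H0 * exp( int_0^z (1 + q(z'))/(1 + z') dz' ),
       with the integral done by the trapezoidal rule."""
    integrand = (1.0 + q_func(z_grid)) / (1.0 + z_grid)
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_grid))))
    return h0 * np.exp(integral)

z = np.linspace(0.0, 2.0, 2001)
h_matter = hubble_from_q(lambda zz: 0.5 * np.ones_like(zz), z)
print(h_matter[-1] / 70.0)  # analytic check: (1+z)^1.5 at z=2 is 3^1.5 ~ 5.196
```

For constant q the integral is analytic, H = H0·(1+z)^(1+q), which provides an exact check on the numerical quadrature before feeding in an evolving q(z).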

  2. ON THE NOTION OF WELL-DEFINED TECTONIC REGIMES FOR TERRESTRIAL PLANETS IN THIS SOLAR SYSTEM AND OTHERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lenardic, A.; Crowley, J. W., E-mail: ajns@rice.edu, E-mail: jwgcrowley@gmail.com

    2012-08-20

A model of coupled mantle convection and planetary tectonics is used to demonstrate that history dependence can outweigh the effects of a planet's energy content and material parameters in determining its tectonic state. The mantle convection-surface tectonics system allows multiple tectonic modes to exist for equivalent planetary parameter values. The tectonic mode of the system is then determined by its specific geologic and climatic history. This implies that models of tectonics and mantle convection will not be able to uniquely determine the tectonic mode of a terrestrial planet without the addition of historical data. Historical data exists, to variable degrees, for all four terrestrial planets within our solar system. For the Earth, the planet with the largest amount of observational data, debate does still remain regarding the geologic and climatic history of Earth's deep past but constraints are available. For planets in other solar systems, no such constraints exist at present. The existence of multiple tectonic modes, for equivalent parameter values, points to a reason why different groups have reached different conclusions regarding the tectonic state of extrasolar terrestrial planets larger than Earth ("super-Earths"). The region of multiple stable solutions is predicted to widen in parameter space for more energetic mantle convection (as would be expected for larger planets). This means that different groups can find different solutions, all potentially viable and stable, using identical models and identical system parameter values. At a more practical level, the results argue that the question of whether extrasolar terrestrial planets will have plate tectonics is unanswerable and will remain so until the temporal evolution of extrasolar planets can be constrained.

  3. Predicting non-isometric fatigue induced by electrical stimulation pulse trains as a function of pulse duration

    PubMed Central

    2013-01-01

Background: Our previous model of the non-isometric muscle fatigue that occurs during repetitive functional electrical stimulation included models of force, motion, and fatigue and accounted for applied load but not stimulation pulse duration. Our objectives were to: 1) further develop, 2) validate, and 3) present outcome measures for a non-isometric fatigue model that can predict the effect of a range of pulse durations on muscle fatigue. Methods: A computer-controlled stimulator sent electrical pulses to electrodes on the thighs of 25 able-bodied human subjects. Isometric and non-isometric non-fatiguing and fatiguing knee torques and/or angles were measured. Pulse duration (170–600 μs) was the independent variable. Measurements were divided into parameter identification and model validation subsets. Results: The fatigue model was simplified by removing two of three non-isometric parameters. The third remained a function of other model parameters. Between 66% and 77% of the variability in the angle measurements was explained by the new model. Conclusion: Muscle fatigue in response to different stimulation pulse durations can be predicted during non-isometric repetitive contractions. PMID:23374142

  4. Chaos and Localization in Dieterich-Ruina Friction

    NASA Astrophysics Data System (ADS)

    Erickson, B. A.; Birnir, B.; Lavallee, D.

    2009-12-01

We consider two models derived from a 1-D Burridge-Knopoff chain of spring connected blocks subject to the Dieterich-Ruina (D-R) friction law. We analyze both the discrete ordinary differential equations, as well as the continuum model. Preliminary investigation into the ODEs shows evidence of the Dieterich-Ruina law exhibiting chaos, dependent on the size of the system. Periodic behavior occurs when considering chains of 3 or 5 blocks, while a chain of 10 blocks with the same parameter values results in chaotic motion. The continuum model (PDE) undergoes a transition to chaos when a specific parameter is increased, and the chaotic regime is reached for smaller critical values than in the case of a single block (see Erickson et al. 2008). This parameter, epsilon, is the ratio of the stress parameters (B-A) and A in the D-R friction law. The parameter A is a measure of the direct velocity dependence (sometimes called the "direct effect") while (A-B) is a measure of the steady-state velocity dependence. When compared to the slip weakening friction law, the parameter (B-A) plays the role of a stress drop while A corresponds to the strength excess. In the case of a single block, transitions to chaos occur when epsilon = 11, a value too high for applications in seismology. For the continuum model, however, the chaotic regime is reached for epsilon = 1. That the transition to chaos ensues for smaller parameter values than in the case of a single block may also be an indication that a careful rescaling of the friction law is necessary, similar to the conclusions made by Schmittbuhl et al. (1996), who studied a "hierarchical array of blocks" and found that velocity weakening friction was scale dependent. We also observe solutions to both the discrete and the continuous model where the slip remains localized in space, suggesting the presence of solitonic behavior.
Initial data in the form of a Gaussian pulse tends to remain localized for certain parameter values, and we explore the space of values for which this occurs. These solitonic or localized solutions can be understood as a proxy for the propagation of rupture across a fault during an earthquake. Under the Dieterich-Ruina law we may have discovered only a small subset of solutions to both the discrete and the continuous model, but there is no question that, even in one spatial dimension, a rich phenomenology of dynamics exists.
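A single spring-block slider with Dieterich-Ruina rate-state friction (aging law), the building block of the chains studied above, can be integrated quasi-statically in a few lines; the velocity-strengthening parameters below are chosen for numerical stability and are illustrative, not the study's chaotic regime.

```python
import numpy as np

def rate_state_block(a=0.015, b=0.010, dc=1e-4, mu0=0.6, v0=1e-6,
                     k=1e7, sigma=1e7, v_load=1e-6, dt=1.0, n=100_000):
    """Quasi-static spring-block slider with Dieterich-Ruina rate-state friction
    (aging law). With a > b (velocity strengthening), sliding is stable."""
    tau = mu0 * sigma
    theta = 0.3 * dc / v0            # perturbed state variable
    v, vmax = v0, 0.0
    for _ in range(n):
        # force balance tau = sigma*(mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)), solved for v
        v = v0 * np.exp((tau / sigma - mu0 - b * np.log(v0 * theta / dc)) / a)
        theta += dt * (1.0 - v * theta / dc)      # aging law for the state variable
        tau += dt * k * (v_load - v)              # elastic loading by the driver plate
        vmax = max(vmax, v)
    return v, vmax

v_final, vmax = rate_state_block()
print(v_final, vmax)  # an initial slip transient decays back to steady sliding
```

Flipping to b > a (velocity weakening) with a compliant spring is what produces the stick-slip and, in chains, the chaotic dynamics discussed in the abstract; that regime needs adaptive time stepping, which this sketch deliberately avoids.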

  5. Global parameter estimation for thermodynamic models of transcriptional regulation.

    PubMed

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluating gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
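The estimation problem can be sketched on a minimal thermodynamic-style model: a two-state fractional-occupancy expression with a binding constant and an activation weight, fitted here by an exhaustive log-grid search (a simplified stand-in for QN/NMS or CMA-ES); all names and values are illustrative.

```python
import numpy as np

def thermo_expression(tf_conc, kd, w):
    """Two-state thermodynamic model: occupancy q/(1+q), q = [TF]/Kd, scaled by w."""
    q = tf_conc / kd
    return w * q / (1.0 + q)

def fit_grid(tf_conc, expr):
    """Exhaustive log-grid search over Kd; the weight w is optimal in closed form."""
    best = (np.inf, None, None)
    for kd in np.logspace(-2, 2, 200):
        occ = (tf_conc / kd) / (1.0 + tf_conc / kd)
        w = np.dot(occ, expr) / np.dot(occ, occ)     # least-squares w given Kd
        sse = np.sum((w * occ - expr) ** 2)
        if sse < best[0]:
            best = (sse, kd, w)
    return best[1], best[2]

rng = np.random.default_rng(5)
tf_conc = np.logspace(-1.5, 1.5, 40)
expr = thermo_expression(tf_conc, kd=1.3, w=2.0) + rng.normal(0, 0.02, 40)
kd_hat, w_hat = fit_grid(tf_conc, expr)
print(kd_hat, w_hat)  # near the true Kd = 1.3 and w = 2.0
```

On this one-binding-site toy the landscape is benign and any method recovers the truth; the paper's point is that realistic multi-site thermodynamic models have rugged landscapes where global strategies such as CMA-ES pull ahead on clean data.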

  6. Microbial models with data-driven parameters predict stronger soil carbon responses to climate change.

    PubMed

    Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi

    2015-06-01

    Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics into C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models with data, evaluate the improvements in performance of those calibrated models in predicting contemporary carbon stocks, and compare the SOC responses to climate change and their uncertainties between microbial and conventional models. Microbial models with calibrated parameters explained 51% of variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses compared with CMIP5 model projections which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.
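A minimal forward-Euler sketch of a generic two-pool microbial model (substrate S and microbial biomass B with Michaelis-Menten uptake) illustrates the kind of oscillatory approach to steady state the abstract describes. All names and parameter values below are illustrative stand-ins, not the paper's calibrated values.

```python
def microbial_2pool(s0=100.0, b0=2.0, inputs=3.0, v_max=0.8,
                    k_half=50.0, eps=0.4, k_b=0.2, dt=0.1, years=200.0):
    """Forward-Euler run of a generic 2-pool microbial model:
         dS/dt = I - v_max * B * S / (K + S)              (substrate)
         dB/dt = eps * v_max * B * S / (K + S) - k_b * B  (biomass)
    Returns the (S, B) trajectory."""
    s, b, traj = s0, b0, []
    for _ in range(int(years / dt)):
        uptake = v_max * b * s / (k_half + s)
        s += (inputs - uptake) * dt
        b += (eps * uptake - k_b * b) * dt
        traj.append((s, b))
    return traj

trajectory = microbial_2pool()
s_final, b_final = trajectory[-1]
```

At steady state the biomass settles at B* = eps * I / k_b and the substrate where microbial growth balances death; whether the approach to that state oscillates depends on the parameter values, mirroring the abstract's remark about the 4-pool variant.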

  7. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  8. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Abstract Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197

  9. Inferring pathological states in cortical neuron microcircuits.

    PubMed

    Rydzewski, Jakub; Nowak, Wieslaw; Nicosia, Giuseppe

    2015-12-07

    Brain activity is to a large extent determined by the states of neural cortex microcircuits. Unfortunately, the accuracy of results from mathematical models of neural circuits is often biased by the presence of uncertainties in the underlying experimental data. Moreover, due to the difficulty of identifying uncertainties in a multidimensional parameter space, it is almost impossible to classify the states of the neural cortex that correspond to a particular set of parameters. Here, we develop a complete methodology for determining uncertainties and a novel protocol for classifying all states in any neuroinformatic model. Further, we test this protocol on the mathematical, nonlinear model of such a microcircuit developed by Giugliano et al. (2008) and applied in the experimental data analysis of Huntington's disease. Up to now, the link between parameter domains in the mathematical model of Huntington's disease and the pathological states in cortical microcircuits has remained unclear. In this paper we precisely identify all the uncertainties, the most crucial input parameters and the domains that drive the system into an unhealthy state. The scheme proposed here is general and can be easily applied to other mathematical models of biological phenomena. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    NASA Astrophysics Data System (ADS)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of the ocean biology. Furthermore, various biological parameters remain poorly known, and wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (Stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step and later a model propagation step is performed. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. 
We use nutrient profile data (down to 2000 m depth) and surface CO2 partial pressure measurements from weather station Mike (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and accuracy of the state and parameter estimates.
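The joint (augmented-state) EnKF idea that the dual and smoothing schemes build on can be sketched with a scalar twin experiment. This is a hedged toy example, not NorESM: each ensemble member carries an (x, theta) pair for the model x_k = theta * x_{k-1} + 1, and theta is corrected only through its sampled covariance with the observed state. All names and values are illustrative.

```python
import random
import statistics

def enkf_joint_step(ens, y_obs, obs_var, rng):
    """One analysis step of a joint state-parameter EnKF for a scalar
    model, observing the state x directly with variance obs_var."""
    xs = [m[0] for m in ens]
    ths = [m[1] for m in ens]
    x_m = statistics.fmean(xs)
    th_m = statistics.fmean(ths)
    var_x = statistics.fmean((x - x_m) ** 2 for x in xs)
    cov_tx = statistics.fmean((t - th_m) * (x - x_m) for t, x in zip(ths, xs))
    k_x = var_x / (var_x + obs_var)    # Kalman gain for the state
    k_t = cov_tx / (var_x + obs_var)   # regression gain for the parameter
    new = []
    for x, th in ens:
        innov = (y_obs + rng.gauss(0.0, obs_var ** 0.5)) - x  # perturbed obs
        new.append((x + k_x * innov, th + k_t * innov))
    return new

# Twin experiment: recover theta_true = 0.9 from noisy observations of x.
rng = random.Random(42)
theta_true, x_true = 0.9, 0.0
ens = [(rng.gauss(0.0, 1.0), rng.gauss(0.5, 0.3)) for _ in range(100)]
for _ in range(60):
    x_true = theta_true * x_true + 1.0 + rng.gauss(0.0, 0.05)
    y = x_true + rng.gauss(0.0, 0.2)
    ens = [(th * x + 1.0 + rng.gauss(0.0, 0.05), th) for x, th in ens]  # forecast
    ens = enkf_joint_step(ens, y, 0.04, rng)
theta_est = statistics.fmean(th for _, th in ens)
```

In this joint scheme state and parameter are updated in one step; the dual and one-step-ahead smoothing variants described above separate those updates to reduce the inconsistency the abstract mentions.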

  11. Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart

    PubMed Central

    Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin

    2015-01-01

    Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of HF patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546

  12. Fractional blood flow in oscillatory arteries with thermal radiation and magnetic field effects

    NASA Astrophysics Data System (ADS)

    Bansi, C. D. K.; Tabi, C. B.; Motsumi, T. G.; Mohamadou, A.

    2018-06-01

    A fractional model is proposed to study the effect of heat transfer and magnetic field on blood flowing inside oscillatory arteries. The flow is due to a periodic pressure gradient, and the fractional model equations include body acceleration. The proposed velocity and temperature distribution equations are solved using the Laplace and Hankel transforms. The effect of fluid parameters such as the Reynolds number (Re), the magnetic parameter (M) and the radiation parameter (N) is studied graphically while varying the fractional-order parameter. It is found that the fractional derivative is a valuable tool to control both the temperature and velocity of blood when flow parameters change, for example under treatment. Besides, this work highlights the fact that in the presence of a strong magnetic field, blood velocity and temperature are reduced. A reversed effect is observed when the applied thermal radiation increases: the velocity and temperature of the blood increase. However, the temperature remains high around the artery centerline, which is appropriate during treatment to avoid tissue damage.

  13. Do detailed simulations with size-resolved microphysics reproduce basic features of observed cirrus ice size distributions?

    NASA Astrophysics Data System (ADS)

    Fridlind, A. M.; Atlas, R.; van Diedenhoven, B.; Ackerman, A. S.; Rind, D. H.; Harrington, J. Y.; McFarquhar, G. M.; Um, J.; Jackson, R.; Lawson, P.

    2017-12-01

    It has recently been suggested that seeding synoptic cirrus could have desirable characteristics as a geoengineering approach, but surprisingly large uncertainties remain in the fundamental parameters that govern cirrus properties, such as mass accommodation coefficient, ice crystal physical properties, aggregation efficiency, and ice nucleation rate from typical upper tropospheric aerosol. Only one synoptic cirrus model intercomparison study has been published to date, and studies that compare the shapes of observed and simulated ice size distributions remain sparse. Here we amend a recent model intercomparison setup using observations during two 2010 SPARTICUS campaign flights. We take a quasi-Lagrangian column approach and introduce an ensemble of gravity wave scenarios derived from collocated Doppler cloud radar retrievals of vertical wind speed. We use ice crystal properties derived from in situ cloud particle images, for the first time allowing smoothly varying and internally consistent treatments of nonspherical ice capacitance, fall speed, gravitational collection, and optical properties over all particle sizes in our model. We test two new parameterizations for mass accommodation coefficient as a function of size, temperature and water vapor supersaturation, and several ice nucleation scenarios. Comparison of results with in situ ice particle size distribution data, corrected using state-of-the-art algorithms to remove shattering artifacts, indicates that poorly constrained uncertainties in the number concentration of crystals smaller than 100 µm in maximum dimension still prohibit distinguishing which parameter combinations are more realistic. When projected area is concentrated at such sizes, the only parameter combination that reproduces observed size distribution properties uses a fixed mass accommodation coefficient of 0.01, on the low end of recently reported values.
No simulations reproduce the observed abundance of such small crystals when the projected area is concentrated at larger sizes. Simulations across the parameter space are also compared with MODIS collection 6 retrievals and forward simulations of cloud radar reflectivity and mean Doppler velocity. Results motivate further in situ and laboratory measurements to narrow parameter uncertainties in models.

  14. Earth-moon system: Dynamics and parameter estimation

    NASA Technical Reports Server (NTRS)

    Breedlove, W. J., Jr.

    1975-01-01

    A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.
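Euler parameters are unit quaternions, and the rotational kinematics they describe can be sketched in a few lines. This is a generic illustration of the quaternion kinematic equation q̇ = ½ q ⊗ (0, ω), not the ANEAMO or RIGEM code; the integrator and step size are illustrative.

```python
import math

def quat_derivative(q, w):
    """Euler-parameter kinematics q_dot = 0.5 * q ⊗ (0, w),
    with q = (q0, q1, q2, q3) and body angular rate w = (wx, wy, wz)."""
    q0, q1, q2, q3 = q
    wx, wy, wz = w
    return (
        -0.5 * (q1 * wx + q2 * wy + q3 * wz),
         0.5 * (q0 * wx + q2 * wz - q3 * wy),
         0.5 * (q0 * wy + q3 * wx - q1 * wz),
         0.5 * (q0 * wz + q1 * wy - q2 * wx),
    )

def integrate_attitude(w, t_end, dt=1e-4):
    """Euler-integrate the attitude under a constant body rate w,
    renormalizing each step to keep the unit-norm constraint."""
    q = (1.0, 0.0, 0.0, 0.0)
    for _ in range(int(round(t_end / dt))):
        dq = quat_derivative(q, w)
        q = tuple(qi + dt * dqi for qi, dqi in zip(q, dq))
        norm = math.sqrt(sum(qi * qi for qi in q))
        q = tuple(qi / norm for qi in q)
    return q

# Rotate about z at pi/2 rad/s for 1 s: expect a 90-degree rotation,
# i.e. q close to (cos(pi/4), 0, 0, sin(pi/4)).
q_final = integrate_attitude((0.0, 0.0, math.pi / 2), 1.0)
```

Unlike three-angle parameterizations, this four-parameter form is singularity-free, which is one reason it suits fully coupled long-arc integrations of the kind described above.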

  15. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
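The global sensitivity analysis step can be illustrated with first-order (Sobol'-style) indices estimated by simple conditional-mean binning. The three-parameter model below is a hypothetical stand-in with one dominant input, not the cellulose-hydrolysis ABM; all names and coefficients are illustrative.

```python
import random

def toy_model(x1, x2, x3):
    # Hypothetical response: x1 dominates, x2 matters a little, x3 is inert.
    return 4.0 * x1 + 1.0 * x2 + 0.1 * x3

def first_order_sensitivity(n=20000, bins=20, seed=0):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning each input
    and comparing the variance of bin means to the total variance."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(3)] for _ in range(n)]
    Y = [toy_model(*x) for x in X]
    y_mean = sum(Y) / n
    var_y = sum((y - y_mean) ** 2 for y in Y) / n
    indices = []
    for i in range(3):
        sums, counts = [0.0] * bins, [0] * bins
        for x, y in zip(X, Y):
            b = min(int(x[i] * bins), bins - 1)
            sums[b] += y
            counts[b] += 1
        bin_means = [s / c for s, c in zip(sums, counts) if c]
        var_cond = sum((m - y_mean) ** 2 for m in bin_means) / len(bin_means)
        indices.append(var_cond / var_y)
    return indices

sens = first_order_sensitivity()
```

Ranking parameters by such indices is what lets a GSA prioritize targets (here x1) for experimental modification, as the abstract describes for cellulase half-life and exoglucanase activity.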

  16. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    PubMed Central

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820

  17. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  18. A Semi-analytical Line Transfer (SALT) Model. II: The Effects of a Bi-conical Geometry

    NASA Astrophysics Data System (ADS)

    Carr, Cody; Scarlata, Claudia; Panagia, Nino; Henry, Alaina

    2018-06-01

    We generalize the semi-analytical line transfer model recently introduced by Scarlata & Panagia for modeling galactic outflows, to account for bi-conical geometries of various opening angles and orientations with respect to the line of sight to the observer, as well as generalized velocity fields. We model the absorption and emission component of the line profile resulting from resonant absorption in the bi-conical outflow. We show how the outflow geometry impacts the resulting line profile. We use simulated spectra with different geometries and velocity fields to study how well the outflow parameters can be recovered. We find that geometrical parameters (including the opening angle and the orientation) are always well recovered. The density and velocity field parameters are reliably recovered when both an absorption and an emission component are visible in the spectra. This condition implies that the velocity and density fields for narrow cones oriented perpendicular to the line of sight will remain unconstrained.

  19. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A.

    Public data from the 79-string IceCube search for dark matter in the Sun are used to test Secluded Dark Matter models. No significant excess over background is observed and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ∼ 1 GeV and a kinetic mixing parameter ε ∼ 10⁻⁹, which remains unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  20. End-of-Discharge and End-of-Life Prediction in Lithium-Ion Batteries with Electrochemistry-Based Aging Models

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Kulkarni, Chetan S.

    2016-01-01

    As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
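The prediction logic can be caricatured in a few lines. This is a deliberately simple, hedged sketch with a linear capacity-fade law; the paper tracks aging through internal parameters of an electrochemistry-based model, and every name and rate below is illustrative.

```python
def predict_eod(capacity_ah, load_a, usable_fraction=0.8):
    """Hours until end-of-discharge at constant current (toy model)."""
    return usable_fraction * capacity_ah / load_a

def capacity_after_cycles(q0_ah, n_cycles, fade_per_cycle=2e-4):
    """Linear capacity-fade sketch: remaining capacity after n cycles."""
    return q0_ah * max(0.0, 1.0 - fade_per_cycle * n_cycles)

def cycles_to_eol(eol_fraction=0.8, fade_per_cycle=2e-4):
    """Cycles until capacity drops to the end-of-life threshold
    (conventionally 80% of rated capacity)."""
    return (1.0 - eol_fraction) / fade_per_cycle

hours_now = predict_eod(2.0, 1.0)                 # fresh 2 Ah cell, 1 A load
q_aged = capacity_after_cycles(2.0, 500)          # capacity after 500 cycles
hours_aged = predict_eod(q_aged, 1.0)             # EOD shrinks with age
```

The coupling shown here, where end-of-discharge predictions must be re-evaluated as the aging model updates capacity, is the simplest form of the age-dependent dynamics the abstract argues must be accounted for.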

  1. Constraining the 2012-2014 growing season Alaskan methane budget using CARVE aircraft measurements

    NASA Astrophysics Data System (ADS)

    Hartery, S.; Chang, R. Y. W.; Commane, R.; Lindaas, J.; Miller, S. M.; Wofsy, S. C.; Karion, A.; Sweeney, C.; Miller, C. E.; Dinardo, S. J.; Steiner, N.; McDonald, K. C.; Watts, J. D.; Zona, D.; Oechel, W. C.; Kimball, J. S.; Henderson, J.; Mountain, M. E.

    2015-12-01

    Soil in northern latitudes contains rich carbon stores which have been historically preserved via permafrost within the soil bed; however, recent surface warming in these regions is allowing deeper soil layers to thaw, influencing the net carbon exchange from these areas. Due to the extreme nature of their climate, these eco-regions remain poorly understood by most global models. In this study we analyze methane fluxes from Alaska using in situ aircraft observations from the 2012-2014 Carbon in Arctic Reservoir Vulnerability Experiment (CARVE). These observations are coupled with an atmospheric particle transport model which quantitatively links surface emissions to atmospheric observations to make regional methane emission estimates. The results of this study are two-fold. First, the inter-annual variability of the methane emissions was found to be <1 Tg over the area of interest and is largely influenced by the length of time the deep soil remains unfrozen. Second, the resulting methane flux estimates and mean soil parameters were used to develop an empirical emissions model to help spatially and temporally constrain the methane exchange at the Alaskan soil surface. The empirical emissions model will provide a basis for exploring the sensitivity of methane emissions to subsurface soil temperature, soil moisture, organic carbon content, and other parameters commonly used in process-based models.

  2. Acoustics of marine sediment under compaction: binary grain-size model and viscoelastic extension of Biot's theory.

    PubMed

    Leurer, Klaus C; Brown, Colin

    2008-04-01

    This paper presents a model of acoustic wave propagation in unconsolidated marine sediment, including compaction, using a concept of a simplified sediment structure, modeled as a binary grain-size sphere pack. Compressional- and shear-wave velocities and attenuation follow from a combination of Biot's model, used as the general framework, and two viscoelastic extensions resulting in complex grain and frame moduli, respectively. An effective-grain model accounts for the viscoelasticity arising from local fluid flow in expandable clay minerals in clay-bearing sediments. A viscoelastic-contact model describes local fluid flow at the grain contacts. Porosity, density, and the structural Biot parameters (permeability, pore size, structure factor) as a function of pressure follow from the binary model, so that the remaining input parameters to the acoustic model consist solely of the mass fractions and the known mechanical properties of each constituent (e.g., carbonates, sand, clay, and expandable clay) of the sediment, effective pressure, or depth, and the environmental parameters (water depth, salinity, temperature). Velocity and attenuation as a function of pressure from the model are in good agreement with data on coarse- and fine-grained unconsolidated marine sediments.

  3. Temperature and pressure dependent thermodynamic behavior of 2H-CuInO2

    NASA Astrophysics Data System (ADS)

    Bhamu, K. C.

    2018-05-01

    Density functional theory and the quasi-harmonic Debye model have been used to study the thermodynamic properties of 2H-CuInO2. At the optimized structural parameters, the pressure-dependent (0 to 80 GPa) variation of various thermodynamic properties, i.e. unit cell volume (V), bulk modulus (B), specific heat (Cv), Debye temperature (θD), Grüneisen parameter (γ) and thermal expansion coefficient (α), is calculated for various temperature values. The results predict that pressure has a significant effect on the unit cell volume and bulk modulus, while temperature shows a negligible effect on both parameters. The thermal expansion coefficient increases with increasing temperature and decreases with increasing pressure. The specific heat remains close to zero for ambient pressure and temperature values and increases with increasing temperature. It is observed that pressure has a greater impact than temperature on the Debye temperature and the Grüneisen parameter. Both remain almost constant over the temperature range 0-300 K, while at constant temperature the Grüneisen parameter decreases and the Debye temperature increases rapidly with increasing pressure. The increase in Debye temperature with pressure shows that the thermal vibration frequency changes rapidly.
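The Debye-model heat capacity behind these trends can be evaluated numerically. The quadrature below is a generic sketch of the standard Debye integral; the Debye temperature of 400 K and the other values used are illustrative, not the computed results for 2H-CuInO2.

```python
import math

R_GAS = 8.314462618  # molar gas constant, J / (mol K)

def debye_cv(temp, theta_d, n_atoms=1, steps=2000):
    """Isochoric heat capacity from the Debye model:
       Cv = 9 n R (T/thetaD)^3 * integral_0^{thetaD/T} x^4 e^x/(e^x-1)^2 dx,
    evaluated with a midpoint rule (which also sidesteps x = 0)."""
    if temp <= 0.0:
        return 0.0
    upper = theta_d / temp
    h = upper / steps
    integral = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        if x > 50.0:
            term = x ** 4 * math.exp(-x)  # e^x/(e^x-1)^2 ~ e^-x; avoids overflow
        else:
            ex = math.exp(x)
            term = x ** 4 * ex / (ex - 1.0) ** 2
        integral += term
    integral *= h
    return 9.0 * n_atoms * R_GAS * (temp / theta_d) ** 3 * integral
```

The sketch reproduces the qualitative behavior in the abstract: Cv rises with temperature toward the Dulong-Petit limit of 3R per mole of atoms, and raising θD (as pressure does) lowers Cv at a fixed temperature.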

  4. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some form of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function is also dependent on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, a growing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user can select which two parameters to visualise. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations and assigns a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points. 
The slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is applied only to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. In this way, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
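The threshold interaction behind the slider reduces to a few lines of plain Python. This is a hedged, GLUE-style sketch; the function names are illustrative, and the notebook itself drives this logic through IPython widgets rather than the code below.

```python
def rmse(simulated, observed):
    """Root-mean-square error between two equal-length series,
    a common goodness-of-fit objective for discharge."""
    n = len(observed)
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

def classify_behavioural(objectives, threshold):
    """Split parameter-set indices into behavioural (objective at or
    below the threshold) and non-behavioural (the rest)."""
    behavioural, rejected = [], []
    for i, value in enumerate(objectives):
        (behavioural if value <= threshold else rejected).append(i)
    return behavioural, rejected

# Hypothetical objective values for three parameter sets, threshold 1.0:
kept, dropped = classify_behavioural([0.5, 2.0, 1.0], 1.0)
```

Re-running the split as the threshold slider moves, and recoloring only the kept sets, is exactly the interactive loop the environment described above provides.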

  5. Quantum Discord Preservation for Two Quantum-Correlated Qubits in Two Independent Reservoirs

    NASA Astrophysics Data System (ADS)

    Xu, Lan

    2018-03-01

    We investigate the dynamics of quantum discord using an exactly solvable model in which two qubits are coupled to independent thermal environments. Quantum discord is employed as a quantifier of non-classical correlations. By studying the quantum discord of a class of initial states, we find that the discord remains preserved for a finite time. The effects of the temperature, the initial-state parameter, the system-reservoir coupling constant, and the temperature difference parameter of the two independent reservoirs are also investigated. We find that the quantum nature is lost faster at high temperature; however, this loss can be delayed by choosing a smaller system-reservoir coupling constant, a larger value of a certain initial-state parameter, or a larger temperature difference parameter.

  6. Competitive Modes for the Detection of Chaotic Parameter Regimes in the General Chaotic Bilinear System of Lorenz Type

    NASA Astrophysics Data System (ADS)

    Mallory, Kristina; van Gorder, Robert A.

    We study chaotic behavior of solutions to the bilinear system of Lorenz type developed by Celikovsky and Vanecek [1994] through an application of competitive modes. This bilinear system of Lorenz type is one possible canonical form containing the Lorenz equation as a special case. Using a competitive modes analysis, a completely analytical method that allows one to identify parameter regimes for which chaos may occur, we demonstrate a number of parameter regimes which admit a variety of distinct chaotic behaviors. Indeed, we are able to draw some interesting conclusions relating the behavior of the mode frequencies, which arise from writing the state variables of the Celikovsky-Vanecek model as coupled oscillators, to the types of emergent chaotic behaviors observed. The competitive modes analysis is particularly useful if all but one of the model parameters are fixed, and the remaining free parameter is used to modify the chaos observed, in a manner analogous to a bifurcation parameter. Through a thorough application of the method, we identify several parameter regimes giving new dynamics (such as specific forms of chaos) which were not observed or studied previously in the Celikovsky-Vanecek model. The results therefore demonstrate the advantage of the competitive modes approach for detecting new parameter regimes leading to chaos in third-order dynamical systems.

  7. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision making. Unfortunately, predictive uncertainty analysis for models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to the functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and with the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available in the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as for optimizing data acquisition to reduce parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  8. An inverse modeling approach to estimate groundwater flow and transport model parameters at a research site at Vandenberg AFB, CA

    NASA Astrophysics Data System (ADS)

    Rasa, E.; Foglia, L.; Mackay, D. M.; Ginn, T. R.; Scow, K. M.

    2009-12-01

    A numerical groundwater fate and transport model was developed for analyses of data from field experiments evaluating the impacts of ethanol on the natural attenuation of benzene, toluene, ethylbenzene, and xylenes (BTEX) and methyl tert-butyl ether (MTBE) at Vandenberg Air Force Base, Site 60. We used the U.S. Geological Survey (USGS) groundwater flow (MODFLOW2000) and transport (MT3DMS) models in conjunction with the USGS universal inverse modeling code (UCODE) to jointly determine flow and transport parameters using bromide tracer data from multiple experiments in the same location. The key flow and transport parameters include the hydraulic conductivity of aquifer and aquitard layers, porosity, and transverse and longitudinal dispersivity. Aquifer and aquitard layers were assumed homogeneous in this study; the calibration parameters were therefore not spatially variable within each layer. A total of 162 monitoring wells in seven transects perpendicular to the mean flow direction were monitored over the course of ten months, resulting in 1,766 bromide concentration data points and 149 head values used as observations for the inverse modeling. The results showed the significance of the concentration observation data in predicting the flow model parameters and indicated the sensitivity of the hydraulic conductivity of different zones in the aquifer, including the excavated former contaminant zone. The model has already been used to evaluate alternative designs for further experiments on in situ bioremediation of the tert-butyl alcohol (TBA) plume remaining at the site. We describe the recent applications of the model and future work, including adding reaction submodels to the calibrated flow model.

  9. A Self-Organizing State-Space-Model Approach for Parameter Estimation in Hodgkin-Huxley-Type Models of Single Neurons

    PubMed Central

    Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng

    2012-01-01

    Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply to compartmental models and multiple data sets. 
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
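The self-organizing idea, in which unknown parameters are appended to the state vector and filtered along with it, can be sketched with a bootstrap particle filter on a toy one-dimensional decay model (a stand-in for the Hodgkin-Huxley dynamics of the paper; all values and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: x_{t+1} = x_t - theta*x_t*dt + process noise,
# observed as y_t = x_t + observation noise. theta is the unknown parameter.
theta_true, dt, T = 0.5, 0.1, 200
x = np.empty(T); x[0] = 5.0
for t in range(1, T):
    x[t] = x[t-1] - theta_true * x[t-1] * dt + rng.normal(0, 0.02)
y = x + rng.normal(0, 0.1, T)

# Bootstrap particle filter with the parameter in the augmented state
# (Kitagawa's self-organizing construction): each particle carries (x, theta).
N = 2000
px = rng.normal(5.0, 0.5, N)
ptheta = rng.uniform(0.0, 2.0, N)       # prior constraint on theta
for t in range(1, T):
    ptheta += rng.normal(0, 0.005, N)   # artificial parameter evolution noise
    px = px - ptheta * px * dt + rng.normal(0, 0.02, N)
    w = np.exp(-0.5 * ((y[t] - px) / 0.1) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)         # multinomial resampling
    px, ptheta = px[idx], ptheta[idx]

theta_hat = ptheta.mean()               # posterior mean of the parameter
```

The artificial evolution noise on `ptheta` is what keeps the parameter cloud diverse between resampling steps; shrinking it trades exploration for estimator variance.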

  10. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear regression method were evaluated against carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed C*CO2 assumptions.
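The estimators compared above can be illustrated on synthetic data from the standard first-order gas-transfer model, C(t) = C* + (C0 - C*)·exp(-KLa·t); the numerical values below are illustrative, not the study's measurements:

```python
import numpy as np

# First-order stripping model (same form as the conventional oxygen model)
KLa_true, C_star, C0 = 1.2, 0.5, 30.0          # 1/h and mg/L, illustrative
t = np.linspace(0.0, 3.0, 25)                  # h
C = C_star + (C0 - C_star) * np.exp(-KLa_true * t)

# 2-point method: pick two sampling times and solve for KLa analytically
t1, t2 = t[2], t[20]
C1, C2 = C[2], C[20]
KLa_2pt = np.log((C1 - C_star) / (C2 - C_star)) / (t2 - t1)

# Regression method: ln(C - C*) is linear in t with slope -KLa
slope, intercept = np.polyfit(t, np.log(C - C_star), 1)
KLa_reg = -slope
```

On noise-free data both estimators recover KLa exactly; their divergence on real data at high KLa (and the gas-phase enrichment correction) is what the abstract flags as unresolved.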

  11. Analysis of Flow Behavior of an Nb-Ti Microalloyed Steel During Hot Deformation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Mohammad Sadegh; Parsa, Mohammad Habibi; Rezayat, Mohammad; Orovčík, L'ubomír

    2018-03-01

    The hot flow behavior of an Nb-Ti microalloyed steel is investigated through hot compression tests at various strain rates and temperatures. By combining dynamic recovery (DRV) and dynamic recrystallization (DRX) models, a phenomenological constitutive model is developed to derive the flow stress. The predefined activation energy of Q = 270 kJ/mol and the exponent of n = 5 are successfully used to derive the critical stress at the onset of DRX and the saturation stress of DRV as functions of the Zener-Hollomon parameter via the classical hyperbolic sine equation. The remaining parameters of the constitutive model are determined by fitting to the experiments. Through substitution of a normalized strain in the DRV model and consideration of the interconnections between dependent parameters, a new model is developed. It is shown that, despite having fewer parameters, this model is in good agreement with the experiments. Accurate analyses of flow data along with microstructural analyses indicate that the dissolution of NbC precipitates, with its consequent solid solution strengthening and retardation of DRX, is responsible for the distinct behaviors in the two temperature ranges T < 1100 °C and T ≥ 1100 °C. Nevertheless, it is shown that a single constitutive equation can still be employed for the present steel over the whole tested temperature range.
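The Zener-Hollomon parameter and the classical hyperbolic sine law referred to above can be sketched as follows; Q = 270 kJ/mol and n = 5 are taken from the abstract, while A and alpha are assumed placeholder values, not the paper's fitted constants:

```python
import numpy as np

# Z = strain_rate * exp(Q / (R*T));  hyperbolic sine law: Z = A * sinh(alpha*sigma)**n
R = 8.314           # J/(mol K), gas constant
Q = 270e3           # J/mol, activation energy (from the abstract)
n = 5.0             # stress exponent (from the abstract)
A = 1.0e10          # 1/s  (assumed, for illustration only)
alpha = 0.012       # 1/MPa (assumed, for illustration only)

def flow_stress(strain_rate, T_kelvin):
    """Stress (MPa) from inverting the hyperbolic sine equation for sigma."""
    Z = strain_rate * np.exp(Q / (R * T_kelvin))
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

sigma = flow_stress(0.1, 1373.0)   # e.g. 0.1 /s at 1100 degrees C
```

With these placeholder constants the inversion reproduces the expected trends: stress rises with strain rate and falls with temperature, since both act only through Z.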

  12. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is designed specifically for electric vehicle operating conditions. As the battery's available energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for online electric vehicle applications.
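The energy-counting bookkeeping at the core of SOE (the open-loop part that the paper's adaptive fractional-order EKF corrects with voltage feedback) can be sketched as below; the cell capacity, voltage profile, and load are illustrative, not the FOM-VSSD model or data of the paper:

```python
import numpy as np

# SOE update: SOE_k = SOE_{k-1} - (V_k * I_k * dt) / E_total
capacity_Ah, v_nom = 2.3, 3.2                  # illustrative small LiFePO4 cell
E_total = capacity_Ah * v_nom * 3600.0         # total energy in joules

dt = 1.0                                       # s, sampling interval
current = np.full(1000, 2.3)                   # A, constant 1C discharge
voltage = np.full(1000, 3.2)                   # V, simplified flat profile

soe = np.empty(1001)
soe[0] = 1.0                                   # start fully charged
for k in range(1000):
    soe[k + 1] = soe[k] - voltage[k] * current[k] * dt / E_total
```

Unlike coulomb counting for state of charge, the voltage enters the integrand, which is why SOE depends on the load profile and motivates the paper's remaining-energy-loss model.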

  13. Simulation of pesticide dissipation in soil at the catchment scale over 23 years

    NASA Astrophysics Data System (ADS)

    Queyrel, Wilfried; Florence, Habets; Hélène, Blanchoud; Céline, Schott; Laurine, Nicola

    2014-05-01

    Pesticide applications lead to contamination risks in environmental compartments, with harmful effects on water resources used for drinking water. Pesticide fate modeling is a relevant approach to study pesticide dissipation at the catchment scale. Simulations of five herbicides (atrazine, simazine, isoproturon, chlortoluron, metolachlor) and one metabolite (DEA) were carried out with the crop model STICS over a 23-year period (1990-2012). The model application was performed using real agricultural practices over a small rural catchment (104 km²) located 60 km east of Paris (France). Model applications were established for two crops: wheat and maize. The objectives of the study were i) to highlight the main processes involved in long-term pesticide fate and transfer; ii) to assess the influence of the dynamics of the remaining mass of pesticide in soil on transfer; iii) to determine the most sensitive parameters related to pesticide losses by leaching over the 23-year period. The simulated data related to crop yield, water transfer, and nitrate and pesticide concentrations were first compared to observations over the 23-year period, where measurements were available at the catchment scale. Then, the evaluation of the main processes related to pesticide fate and transfer was performed using long-term simulations at a yearly time step and monthly average variations. Analyses of the monthly average variations focused on the impact of pesticide application, water transfer, and pesticide transformation on pesticide leaching. The evolution of the remaining mass of pesticide in soil, comprising the mobile (liquid) phase and the non-mobile phase (adsorbed at equilibrium and at non-equilibrium), was studied to evaluate the impact of pesticide stored in soil on the fraction available for leaching. Finally, a sensitivity test was performed to identify the most sensitive parameters with respect to the remaining mass of pesticide in soil and to leaching. 
The findings of the study show that the dynamics of the remaining mass of pesticide in soil are a key issue for understanding long-term pesticide dissipation. Attention must be paid to the parameters influencing sorption and the availability of the pesticide for leaching. To conclude, the significant discrepancies in the simulated pesticide leaching for the two types of crops (maize and wheat) highlight the interest of using a crop model to simulate the fate of pesticides at the catchment scale.

  14. Predicting loop–helix tertiary structural contacts in RNA pseudoknots

    PubMed Central

    Cao, Song; Giedroc, David P.; Chen, Shi-Jie

    2010-01-01

    Tertiary interactions between loops and helical stems play critical roles in the biological function of many RNA pseudoknots. However, quantitative predictions for RNA tertiary interactions remain elusive. Here we report a statistical mechanical model for the prediction of noncanonical loop–stem base-pairing interactions in RNA pseudoknots. Central to the model is the evaluation of the conformational entropy for the pseudoknotted folds with defined loop–stem tertiary structural contacts. We develop an RNA virtual bond-based conformational model (Vfold model), which permits a rigorous computation of the conformational entropy for a given fold that contains loop–stem tertiary contacts. With the entropy parameters predicted from the Vfold model and the energy parameters for the tertiary contacts as inserted parameters, we can then predict the RNA folding thermodynamics, from which we can extract the tertiary contact thermodynamic parameters from theory–experimental comparisons. These comparisons reveal a contact enthalpy (ΔH) of −14 kcal/mol and a contact entropy (ΔS) of −38 cal/mol/K for a protonated C+•(G–C) base triple at pH 7.0, and (ΔH = −7 kcal/mol, ΔS = −19 cal/mol/K) for an unprotonated base triple. Tests of the model for a series of pseudoknots show good theory–experiment agreement. Based on the extracted energy parameters for the tertiary structural contacts, the model enables predictions for the structure, stability, and folding pathways for RNA pseudoknots with known or postulated loop–stem tertiary contacts from the nucleotide sequence alone. PMID:20100813
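The extracted thermodynamic parameters quoted above can be turned into contact stabilities directly via ΔG = ΔH − TΔS; the short sketch below uses only the abstract's values for the C+•(G−C) base triple:

```python
# dG = dH - T*dS for a loop-stem tertiary contact (units: kcal/mol, cal/mol/K)
def delta_g(dH_kcal, dS_cal, T_kelvin):
    """Free energy (kcal/mol) of a contact with enthalpy dH and entropy dS."""
    return dH_kcal - T_kelvin * dS_cal / 1000.0   # convert cal -> kcal

T_body = 310.15                                    # 37 degrees C
dg_protonated = delta_g(-14.0, -38.0, T_body)      # protonated triple, pH 7.0
dg_unprotonated = delta_g(-7.0, -19.0, T_body)

# Temperature at which the contact ceases to stabilize the fold (dG = 0)
T_melt = -14.0 / (-38.0 / 1000.0)                  # = dH / dS, in kelvin
```

At body temperature the protonated triple contributes roughly twice the stabilization of the unprotonated one, consistent with the factor-of-two difference in both ΔH and ΔS.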

  15. A sensitivity analysis of cloud properties to CLUBB parameters in the single-column Community Atmosphere Model (SCAM5)

    DOE PAGES

    Guo, Zhun; Wang, Minghuai; Qian, Yun; ...

    2014-08-13

    In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single-column version of the Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is adopted to study the responses of simulated cloud fields to the tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions. Our results show that most of the variance in the simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to the Newtonian and buoyancy-damping terms of the total water flux are found to be the most influential parameters for stratocumulus. For shallow cumulus, the most influential parameters are those related to the skewness of the vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of vertical resolution, while little sensitivity is found for the shallow convection cases, as the eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on the vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of CLUBB behavior associated with parameter uncertainties.
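The QMC-plus-linear-model workflow can be sketched on a hypothetical response that, like the simulated clouds, is dominated by a couple of parameters. A simple Halton sequence stands in for the QMC design, and ordinary linear regression stands in for the generalized linear model; the "model" itself is synthetic:

```python
import numpy as np

def halton(n, dims):
    """Simple Halton quasi-random sequence in [0,1)^dims (stand-in for QMC)."""
    primes = [2, 3, 5, 7, 11, 13][:dims]
    out = np.empty((n, dims))
    for d, base in enumerate(primes):
        for i in range(n):
            f, r, idx = 1.0, 0.0, i + 1
            while idx > 0:                 # radical-inverse digit expansion
                f /= base
                r += f * (idx % base)
                idx //= base
            out[i, d] = r
    return out

# Hypothetical response dominated by parameters 0 and 1 (of four),
# mimicking the finding that a few tunables explain most of the variance.
X = halton(512, 4)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2] \
    + np.random.default_rng(1).normal(0, 0.05, 512)

# Linear model of the response; coefficient magnitudes rank influence
A = np.column_stack([np.ones(512), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
influence = np.abs(coef[1:])
ranking = np.argsort(influence)[::-1]      # most influential first
```

With unit-interval inputs, the absolute coefficients act as a crude main-effect ranking; the real study's generalized linear model plays the same role for the CLUBB tunables.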

  16. Gestation-Specific Changes in the Anatomy and Physiology of Healthy Pregnant Women: An Extended Repository of Model Parameters for Physiologically Based Pharmacokinetic Modeling in Pregnancy.

    PubMed

    Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg

    2017-11-01

    In recent years, several repositories of the anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters, or they lacked key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and on physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. 
Such models can ultimately provide a valuable tool to investigate the pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.

  17. Integrative neural networks model for prediction of sediment rating curve parameters for ungauged basins

    NASA Astrophysics Data System (ADS)

    Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.

    2015-12-01

    One of the most uncertain modeling tasks in hydrology is the prediction of sediment load and concentration statistics for ungauged streams. This study presents integrated artificial neural network (ANN) models for the prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the inputs include soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on a randomly selected 2/3 of a dataset of 94 gauged streams in Ontario, Canada, and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The predicted rating coefficient α increases with the rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, and decreases with vegetation cover and mean annual snowfall. The predicted rating exponent β increases with mean annual precipitation, the apportionment entropy disorder index, main channel slope, and the standard deviation of daily discharge, and decreases with the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, the concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality in ungauged basins.
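For context, at a gauged site the rating curve parameters that the ANNs predict for ungauged basins come from a log-log fit of Cs = α·Q^β; a minimal sketch on synthetic gauged data (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "gauged" record following a power-law rating curve with scatter
alpha_true, beta_true = 0.8, 1.4
Q = rng.uniform(0.5, 50.0, 200)                       # discharge, m3/s
Cs = alpha_true * Q ** beta_true * np.exp(rng.normal(0, 0.1, 200))

# Ordinary least squares in log space recovers (log alpha, beta):
# ln Cs = ln alpha + beta * ln Q
beta_hat, log_alpha_hat = np.polyfit(np.log(Q), np.log(Cs), 1)
alpha_hat = np.exp(log_alpha_hat)
```

For an ungauged basin no such record exists, which is exactly the gap the paper's ANN models fill by regressing α and β on watershed characteristics instead.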

  18. Modeling the Severity of Drinking Consequences in First-Year College Women: An Item Response Theory Analysis of the Rutgers Alcohol Problem Index*

    PubMed Central

    Cohn, Amy M.; Hagman, Brett T.; Graff, Fiona S.; Noel, Nora E.

    2011-01-01

    Objective: The present study examined the latent continuum of alcohol-related negative consequences among first-year college women using methods from item response theory and classical test theory. Method: Participants (N = 315) were college women in their freshman year who reported consuming any alcohol in the past 90 days and who completed assessments of alcohol consumption and alcohol-related negative consequences using the Rutgers Alcohol Problem Index. Results: Item response theory analyses showed poor model fit for five items identified in the Rutgers Alcohol Problem Index. Two-parameter item response theory logistic models were applied to the remaining 18 items to examine estimates of item difficulty (i.e., severity) and discrimination parameters. The item difficulty parameters ranged from 0.591 to 2.031, and the discrimination parameters ranged from 0.321 to 2.371. Classical test theory analyses indicated that the omission of the five misfit items did not significantly alter the psychometric properties of the construct. Conclusions: Findings suggest that those consequences that had greater severity and discrimination parameters may be used as screening items to identify female problem drinkers at risk for an alcohol use disorder. PMID:22051212
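The two-parameter logistic (2PL) model used in the analysis gives the probability that a respondent with latent severity θ endorses item i as P_i(θ) = 1 / (1 + exp(−a_i(θ − b_i))), with discrimination a_i and difficulty (severity) b_i. A short sketch with illustrative items spanning the reported parameter ranges (not the actual RAPI item estimates):

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL item response function: endorsement probability at severity theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative items within the reported ranges (difficulty 0.591-2.031,
# discrimination 0.321-2.371); values are hypothetical.
a = np.array([0.5, 1.2, 2.3])
b = np.array([0.6, 1.2, 2.0])

p_moderate = p_endorse(1.0, a, b)   # drinker at theta = 1
p_severe = p_endorse(2.5, a, b)     # drinker at theta = 2.5
```

Items with large b are endorsed only at high severity, and items with large a separate respondents near b most sharply, which is why the high-severity, high-discrimination consequences are proposed as screening items.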

  19. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.
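The two-step ML estimate described above can be sketched for a toy linear model y = Ax + n with complex white Gaussian noise; the dimensions are small stand-ins for the stripmap SAR geometry, not the paper's forward model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear forward model y = A x + n for the ground reflectivity x
m, p, sigma = 200, 20, 0.1
A = (rng.normal(size=(m, p)) + 1j * rng.normal(size=(m, p))) / np.sqrt(2 * m)
x_true = rng.normal(size=p) + 1j * rng.normal(size=p)
noise = sigma * (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2)
y = A @ x_true + noise

# Step 1: cross-correlate the data with the acquisition model
# (the back-projection-like step)
bp = A.conj().T @ y

# Step 2: full system inversion, mitigating the sidelobes left by step 1
x_ml = np.linalg.solve(A.conj().T @ A, bp)

# CRLB for white Gaussian noise: estimator covariance sigma^2 * (A^H A)^{-1}
crlb = sigma**2 * np.linalg.inv(A.conj().T @ A)
```

Step 1 alone leaves the estimate smeared by the correlation of the columns of A (the "spatially variant impulse responses"); step 2 deconvolves them, and `crlb` ties the achievable accuracy to the acquisition geometry encoded in A.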

  20. Reconstruction of interaction rate in holographic dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in

    2016-11-01

    The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO), and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work, as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, invoked in the context of model selection, show that these models are in close proximity to one another.

  2. Application of describing function analysis to a model of deep brain stimulation.

    PubMed

    Davidson, Clare Muireann; de Paor, Annraoi M; Lowery, Madeleine M

    2014-03-01

    Deep brain stimulation (DBS) effectively alleviates motor symptoms of medically refractory Parkinson's disease, and also relieves many other treatment-resistant movement and affective disorders. Despite its relative success as a treatment option, the basis of its efficacy remains elusive. In Parkinson's disease, increased functional connectivity and oscillatory activity occur within the basal ganglia as a result of dopamine loss. A correlative relationship between pathological oscillatory activity and the motor symptoms of the disease, in particular bradykinesia, rigidity, and tremor, has been established. Suppression of the oscillations by either dopamine replacement or DBS also correlates with an improvement in motor symptoms. DBS parameters are currently chosen empirically using a "trial and error" approach, which can be time-consuming and costly. The work presented here amalgamates concepts from theories of neural network modeling with nonlinear control engineering to describe and analyze a model of synchronous neural activity and applied stimulation. A theoretical expression for the optimum stimulation parameters necessary to suppress oscillations is derived. The effect of changing stimulation parameters (amplitude and pulse duration) on induced oscillations is studied in the model. Increasing either stimulation pulse duration or amplitude enhanced the level of suppression. The predicted parameters were found to agree well with clinical measurements reported in the literature for individual patients. It is anticipated that the simplified model described may facilitate the development of protocols to aid optimum stimulation parameter choice on a patient-by-patient basis.

  3. 3D MHD Models of Active Region Loops

    NASA Technical Reports Server (NTRS)

    Ofman, Leon

    2004-01-01

    Present imaging and spectroscopic observations of active region loops allow many physical parameters of the coronal loops to be determined, such as the density, temperature, velocity of flows in loops, and the magnetic field. However, due to projection effects many of these parameters remain ambiguous. Three-dimensional imaging in EUV by the STEREO spacecraft will help to resolve the projection ambiguities, and the observations could be used to set up 3D MHD models of active region loops to study the dynamics and stability of active regions. Here, results of 3D MHD models of active region loops are presented, along with progress towards more realistic 3D MHD models of active regions. In particular, the effects of impulsive events on the excitation of active region loop oscillations, and the generation, propagation, and reflection of EIT waves are shown. It is shown how 3D MHD models together with 3D EUV observations can be used as a diagnostic tool for active region loop physical parameters, and to advance the science of the sources of solar coronal activity.

  4. Numerical Investigation of the Residual Stress Distribution of Flat-Faced and Convexly Curved Tablets Using the Finite Element Method.

    PubMed

    Otoguro, Saori; Hayashi, Yoshihiro; Miura, Takahiro; Uehara, Naoto; Utsumi, Shunichi; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2015-01-01

    The stress distribution of tablets after compression was simulated using a finite element method, where the powder was defined by the Drucker-Prager cap model. The effect of tablet shape, identified by the surface curvature, on the residual stress distribution was investigated. In flat-faced tablets, weak positive shear stress remained from the top and bottom die walls toward the center of the tablet. In the case of the convexly curved tablet, strong positive shear stress remained on the upper side and in the intermediate part between the die wall and the center of the tablet. In the case of x-axial stress, negative values were observed for all tablets, suggesting that the x-axial force always acts from the die wall toward the center of the tablet. In the flat tablet, negative x-axial stress remained from the upper edge to the center bottom. The x-axial stress distribution differed between the flat and convexly curved tablets. Weak stress remained in the y-axial direction of the flat tablet, whereas an upward force remained at the center of the convexly curved tablet. By employing multiple linear regression analysis, the mechanical properties of the tablets were predicted accurately as functions of their residual stress distribution. However, the multiple linear regression prediction of the dissolution parameters of acetaminophen, used here as a model drug, was limited, suggesting that the dissolution of active ingredients is not a simple process; further investigation is needed to enable accurate predictions of dissolution parameters.

  5. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
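
    The closing claim, that traditional histogram equalization is a special case of the proposed algorithm, can be made concrete with a minimal sketch of the classic technique (NumPy-based; the paper's seven-parameter Heinemann mapping itself is not reproduced here):

```python
import numpy as np

def equalize(img, levels=256):
    """Classic histogram equalization: map each gray level through the
    image's normalized cumulative histogram. The paper's algorithm reduces
    to this mapping for a particular choice of its parameters."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size            # normalized CDF in [0, 1]
    return (cdf[img] * (levels - 1)).astype(np.uint8)

img = np.array([[0, 0, 1], [1, 2, 255]], dtype=np.uint8)
out = equalize(img)
```

    The generalized algorithm replaces the implicit uniform-brightness target of this mapping with a perceived-brightness target derived from the contrast discrimination model.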

  6. SOFT: a synthetic synchrotron diagnostic for runaway electrons

    NASA Astrophysics Data System (ADS)

    Hoppe, M.; Embréus, O.; Tinguely, R. A.; Granetz, R. S.; Stahl, A.; Fülöp, T.

    2018-02-01

    Improved understanding of the dynamics of runaway electrons can be obtained by measurement and interpretation of their synchrotron radiation emission. Models for synchrotron radiation emitted by relativistic electrons are well established, but the question of how various geometric effects—such as magnetic field inhomogeneity and camera placement—influence the synchrotron measurements and their interpretation remains open. In this paper we address this issue by simulating synchrotron images and spectra using the new synthetic synchrotron diagnostic tool SOFT (Synchrotron-detecting Orbit Following Toolkit). We identify the key parameters influencing the synchrotron radiation spot and present scans in those parameters. Using a runaway electron distribution function obtained by Fokker-Planck simulations for parameters from an Alcator C-Mod discharge, we demonstrate that the corresponding synchrotron image is well-reproduced by SOFT simulations, and we explain how it can be understood in terms of the parameter scans. Geometric effects are shown to significantly influence the synchrotron spectrum, and we show that inherent inconsistencies in a simple emission model (i.e. not modeling detection) can lead to incorrect interpretation of the images.

  7. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
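
    The residual-based principle behind such filter banks can be illustrated in miniature: run one observer per operating-point model and flag a sensor whose innovation grows inconsistent with its noise level. The scalar dynamics and fixed gain below are illustrative, not the engine models or the RKF/LMI design of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def innovations(y, a=0.9, gain=0.1):
    """Fixed-gain observer for x[k+1] = a*x[k] + w, y[k] = x[k] + v;
    returns the innovation sequence used for fault detection."""
    xhat, r = 0.0, []
    for yk in y:
        r.append(yk - xhat)              # innovation: measurement minus prediction
        xhat = a * xhat + gain * r[-1]   # predictor-corrector state update
    return np.array(r)

a = 0.9
x, y = 0.0, []
for k in range(400):
    x = a * x + rng.normal(0.0, 0.1)           # true plant state
    bias = 1.0 if k >= 200 else 0.0            # sensor bias fault injected at k = 200
    y.append(x + bias + rng.normal(0.0, 0.1))  # measurement

r = innovations(np.array(y))
healthy, faulty = np.abs(r[:200]).mean(), np.abs(r[200:]).mean()
```

    After the fault, the mean innovation magnitude rises well above its healthy level, which is the statistic a bank of such filters compares across models to isolate the faulty sensor.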

  8. Slice simulation from a model of the parenchymous vascularization to evaluate texture features: work in progress.

    PubMed

    Rolland, Y; Bézy-Wendling, J; Duvauferrier, R; Coatrieux, J L

    1999-03-01

    To demonstrate the usefulness of a model of the parenchymous vascularization to evaluate texture analysis methods. Slices with thickness varying from 1 to 4 mm were reformatted from a 3D vascular model corresponding to either normal tissue perfusion or local hypervascularization. Parameters of statistical methods were measured on 16 regions of interest of 128x128 pixels, and mean values and standard deviation were calculated. For each parameter, the performances (discrimination power and stability) were evaluated. Among 11 calculated statistical parameters, three (homogeneity, entropy, mean of gradients) were found to have a good discriminating power to differentiate normal perfusion from hypervascularization, but only the gradient mean was found to have a good stability with respect to the slice thickness. Five parameters (run percentage, run length distribution, long run emphasis, contrast, and gray level distribution) were found to have intermediate results. Of the remaining three, kurtosis and correlation were found to have little discriminating power, and skewness none. This 3D vascular model, which allows the generation of various examples of vascular textures, is a powerful tool to assess the performance of texture analysis methods. This improves our knowledge of the methods and should contribute to their a priori choice when designing clinical studies.
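
    Two of the discriminating features named here, homogeneity and entropy, are commonly computed from a gray-level co-occurrence matrix; a generic sketch (not necessarily the authors' exact definitions):

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for horizontally adjacent pixel
    pairs, normalized to a joint probability distribution."""
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1
    return P / P.sum()

def homogeneity(P):
    # Large when co-occurring gray levels are similar (near the diagonal).
    i, j = np.indices(P.shape)
    return (P / (1.0 + np.abs(i - j))).sum()

def entropy(P):
    # Shannon entropy of the co-occurrence distribution, in bits.
    nz = P[P > 0]
    return -(nz * np.log2(nz)).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
```

    On this blocky test image the co-occurrence mass sits mostly on the diagonal, so homogeneity is high and entropy modest; a noisier texture would shift both.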

  9. Assessing composition and structure of soft biphasic media from Kelvin-Voigt fractional derivative model parameters

    NASA Astrophysics Data System (ADS)

    Zhang, Hongmei; Wang, Yue; Fatemi, Mostafa; Insana, Michael F.

    2017-03-01

    Kelvin-Voigt fractional derivative (KVFD) model parameters have been used to describe viscoelastic properties of soft tissues. However, translating model parameters into a concise set of intrinsic mechanical properties related to tissue composition and structure remains challenging. This paper begins by exploring these relationships using biphasic emulsion materials with known composition. Mechanical properties are measured by analyzing data from two indentation techniques—ramp-stress relaxation and load-unload hysteresis tests. Material composition is predictably correlated with viscoelastic model parameters. Model parameters estimated from the tests reveal that elastic modulus E_0 closely approximates the shear modulus for pure gelatin. Fractional-order parameter α and time constant τ vary monotonically with the volume fraction of the material's fluid component. α characterizes medium fluidity and the rate of energy dissipation, and τ is a viscous time constant. Numerical simulations suggest that the viscous coefficient η is proportional to the energy lost during quasi-static force-displacement cycles, E_A. The slope of E_A versus η is determined by α and the applied indentation ramp time T_r. Experimental measurements from phantom and ex vivo liver data show close agreement with theoretical predictions of the η-E_A relation. The relative error is less than 20% for emulsions and 22% for liver. We find that KVFD model parameters form a concise feature space for biphasic medium characterization that describes time-varying mechanical properties. The experimental work was carried out at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. Methodological development, including numerical simulation and all data analysis, were carried out at the school of Life Science and Technology, Xi'an JiaoTong University, 710049, China.

  10. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where the lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face and back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
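
    The first step of the Two-face method rests on a simple invariant: re-sighting a target in the instrument's back face should return predictable angles, so any systematic deviation exposes instrument error. A minimal sketch, using common theodolite-style angle conventions (illustrative, not the paper's full TLS error model):

```python
import numpy as np

def two_face_residuals(h_front, v_front, h_back, v_back):
    """Two-face residuals in degrees: an ideal instrument sighting the same
    target in its back face returns horizontal angle h + 180 and zenith
    angle 360 - v, so nonzero residuals indicate systematic errors."""
    dh = (h_back - 180.0) % 360.0 - h_front
    dv = (360.0 - v_back) - v_front
    return dh, dv

# Ideal instrument: both residuals vanish.
dh, dv = two_face_residuals(30.0, 80.0, 210.0, 280.0)
```

    With real data, residuals like these are collected over many targets and fitted to the geometric error model; parameters the two-face difference cannot see are the ones left to the Length-consistency step.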

  11. VALIDATION OF THE CORONAL THICK TARGET SOURCE MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.

    2016-01-10

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected such as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  12. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies have addressed prognostics for analog circuits, and existing methods lack a connection to circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks a clear rationale, which degrades prognostic performance. To address this problem, this paper proposes a prediction method for single components of analog circuits based on complex-field modeling. Because faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex-field model. Then, using an established parameter-scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From this feature set, it establishes a model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Because the FI feature set is calculated more reasonably, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.
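
    The particle-filter update at the heart of such RUP prediction can be sketched generically: propagate particles through a degradation model, reweight by the measurement likelihood, and resample. The scalar state, Gaussian noise, and all numeric values below are illustrative, not the paper's circuit model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, obs, obs_std=0.05, drift=0.01, proc_std=0.005):
    """One sequential-importance-resampling step for tracking a slowly
    drifting component parameter from noisy measurements."""
    particles = particles + drift + rng.normal(0, proc_std, particles.size)  # propagate
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)    # likelihood
    weights /= weights.sum()
    idx = rng.choice(particles.size, particles.size, p=weights)              # resample
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles = rng.normal(1.0, 0.1, 500)      # initial belief about the parameter
weights = np.full(500, 1.0 / 500)
for obs in [1.01, 1.02, 1.03]:             # simulated degradation measurements
    particles, weights = pf_step(particles, weights, obs)
est = particles.mean()
```

    Extrapolating the propagated particles forward until they cross a failure threshold yields a distribution over remaining useful performance rather than a single point estimate.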

  13. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  14. The ISACA Business Model for Information Security: An Integrative and Innovative Approach

    NASA Astrophysics Data System (ADS)

    von Roessing, Rolf

    In recent years, information security management has matured into a professional discipline that covers both technical and managerial aspects in an organisational environment. Information security is increasingly dependent on business-driven parameters and interfaces to a variety of organisational units and departments. In contrast, common security models and frameworks have remained largely technical. A review of extant models ranging from [LaBe73] to more recent models shows that technical aspects are covered in great detail, while the managerial aspects of security are often neglected. Likewise, the business view on organisational security is frequently at odds with the demands of information security personnel or information technology management. In practice, senior and executive level management remain comparatively distant from technical requirements. As a result, information security is generally regarded as a cost factor rather than a benefit to the organisation.

  15. Predictions from star formation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Leichenauer, Stefan

    2010-03-15

    We compute trivariate probability distributions in the landscape, scanning simultaneously over the cosmological constant, the primordial density contrast, and spatial curvature. We consider two different measures for regulating the divergences of eternal inflation, and three different models for observers. In one model, observers are assumed to arise in proportion to the entropy produced by stars; in the others, they arise at a fixed time (5 or 10 x 10^9 years) after star formation. The star formation rate, which underlies all our observer models, depends sensitively on the three scanning parameters. We employ a recently developed model of star formation in the multiverse, a considerable refinement over previous treatments of the astrophysical and cosmological properties of different pocket universes. For each combination of observer model and measure, we display all single and bivariate probability distributions, both with the remaining parameter(s) held fixed and marginalized. Our results depend only weakly on the observer model but more strongly on the measure. Using the causal diamond measure, the observed parameter values (or bounds) lie within the central 2σ of nearly all probability distributions we compute, and always within 3σ. This success is encouraging and rather nontrivial, considering the large size and dimension of the parameter space. The causal patch measure gives similar results as long as curvature is negligible. If curvature dominates, the causal patch leads to a novel runaway: it prefers a negative value of the cosmological constant, with the smallest magnitude available in the landscape.

  16. Effects of Differing Energy Dependences in Three Level-Density Models on Calculated Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, C.Y.

    2000-07-15

    Three level-density formalisms commonly used for cross-section calculations are examined. Residual nuclides in neutron interaction with ^58Ni are chosen to quantify the well-known differences in the energy dependences of the three formalisms. Level-density parameters for the Gilbert and Cameron model are determined from experimental information. Parameters for the back-shifted Fermi-gas and generalized superfluid models are obtained by fitting their level densities at two selected energies for each nuclide to those of the Gilbert and Cameron model, forcing the level densities of the three models to be as close as physically allowed. The remaining differences are in their energy dependences that, it is shown, can change the calculated cross sections and particle emission spectra significantly, in some cases or energy ranges by a factor of 2.

  17. Bringing metabolic networks to life: convenience rate law and thermodynamic constraints

    PubMed Central

    Liebermeister, Wolfram; Klipp, Edda

    2006-01-01

    Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamic constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network, manually or automatically, into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
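
    For the simplest case, a uni-uni reaction A ⇌ B, the convenience rate law reduces to the familiar reversible Michaelis-Menten form: saturable in both directions, with the Haldane relation fixing the equilibrium point. A sketch with illustrative parameter values (the general law in the paper covers arbitrary stoichiometries):

```python
def convenience_rate(a, b, E=1.0, kcat_f=10.0, kcat_r=2.0, Ka=0.5, Kb=1.0):
    """Convenience rate law for A <-> B: E is enzyme level, kcat_f/kcat_r
    are forward/reverse turnover rates, Ka/Kb are Michaelis-Menten
    constants. All numeric values here are illustrative."""
    num = kcat_f * (a / Ka) - kcat_r * (b / Kb)  # thermodynamic driving term
    den = 1.0 + a / Ka + b / Kb                  # saturation term
    return E * num / den
```

    The rate vanishes exactly when kcat_f·(a/Ka) = kcat_r·(b/Kb), i.e. at the equilibrium ratio b/a implied by the Haldane relationship, and saturates at E·kcat_f for large substrate excess.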

  18. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    PubMed

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
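
    The gist, a prior plus a residual model yielding a posterior predictive distribution rather than a point estimate, can be sketched on a toy one-parameter model with random-walk Metropolis sampling (the model, noise level, and all values are illustrative, not the ruminant nutrition models of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "mechanistic" model y = k * x with unknown rate k; synthetic noisy data.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + rng.normal(0, 0.1, 4)

def log_post(k, sigma=0.1):
    # Flat prior on k; Gaussian residual model.
    return -0.5 * np.sum(((y - k * x) / sigma) ** 2)

# Random-walk Metropolis: draws samples from the posterior of k.
k, samples = 1.0, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    samples.append(k)
post = np.array(samples[1000:])  # drop burn-in

# Posterior predictive at a new input: parameter AND residual uncertainty.
y_new = post * 5.0 + rng.normal(0, 0.1, post.size)
```

    The spread of `y_new` exceeds what parameter uncertainty alone would give, which is exactly the extra information a Bayesian calibration conveys over a deterministic point prediction.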

  19. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models mostly depends on their input rainfall and parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and, therefore, lead to uncertainty on the model output. Sensitivity analysis (SA) makes it possible to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, due to the different weights the two types have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. 
In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
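
    The grouping device can be sketched with a pick-freeze estimator on a toy additive "simulator" with two model parameters and three rainfall multipliers. The coefficients are chosen so the exact group indices are known; this is a generic Sobol' estimator, not the authors' NAM/HyMod setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    # Toy additive simulator: columns 0-1 play the role of "model
    # parameters", columns 2-4 of per-event "rainfall multipliers".
    return 3.0 * X[:, 0] + 3.0 * X[:, 1] + X[:, 2] + X[:, 3] + X[:, 4]

def group_S1(group_cols, n=200_000, d=5):
    """First-order Sobol' index of a GROUP of inputs via pick-freeze:
    the whole group is frozen between two independent sample matrices,
    i.e. the group is treated as a single parameter."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    C = B.copy()
    C[:, group_cols] = A[:, group_cols]          # freeze the group
    yA, yC = model(A), model(C)
    return (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / np.var(yA)

S_params = group_S1([0, 1])   # exact value for this model: 6/7
S_rain = group_S1([2, 3, 4])  # exact value for this model: 1/7
```

    Because each group enters as one factor, the two indices are directly comparable regardless of how many individual multipliers the rainfall group contains, which is the point made in the text.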

  20. Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; Moghaddam, Mahta

    1995-01-01

    In this paper, we report on the use of a semiempirical algorithm derived from a two-layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers, and separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above-ground biomass. Given the structural parameters, the semi-empirical algorithm has four remaining parameters (crown biomass, stem biomass, surface soil moisture, and surface rms height) that can be estimated by at least four independent SAR measurements. The algorithm has been used to generate biomass maps over the entire images acquired by JPL AIRSAR and SIR-C SAR systems. The semi-empirical algorithms are then modified for use with single-frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy of biomass estimation from single-channel radars is compared with the case when the channels are used together in synergism or in a polarimetric system.

  1. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  2. Neurite density from magnetic resonance diffusion measurements at ultrahigh field: Comparison with light microscopy and electron microscopy

    PubMed Central

    Jespersen, Sune N.; Bjarkam, Carsten R.; Nyengaard, Jens R.; Chakravarty, M. Mallar; Hansen, Brian; Vosegaard, Thomas; Østergaard, Leif; Yablonskiy, Dmitriy; Nielsen, Niels Chr.; Vestergaard-Poulsen, Peter

    2010-01-01

    Due to its unique sensitivity to tissue microstructure, diffusion-weighted magnetic resonance imaging (MRI) has found many applications in clinical and fundamental science. With few exceptions, however, a precise correspondence between physiological or biophysical properties and the obtained diffusion parameters remains uncertain due to a lack of specificity. In this work, we address this problem by comparing diffusion parameters of a recently introduced model for water diffusion in brain matter to light microscopy and quantitative electron microscopy. Specifically, we compare diffusion model predictions of neurite density in rats to optical myelin staining intensity and stereological estimation of neurite volume fraction using electron microscopy. We find that the diffusion model describes the data better and that its parameters show stronger correlation with optical and electron microscopy, and thus reflect myelinated neurite density better than the more frequently used diffusion tensor imaging (DTI) and cumulant expansion methods. Furthermore, the estimated neurite orientations capture dendritic architecture more faithfully than DTI diffusion ellipsoids. PMID:19732836

  3. Pressure oscillation delivery to the lung: Computer simulation of neonatal breathing parameters.

    PubMed

    Al-Jumaily, Ahmed M; Reddy, Prasika I; Bold, Geoff T; Pillow, J Jane

    2011-10-13

    Preterm newborn infants may develop respiratory distress syndrome (RDS) due to functional and structural immaturity. A lack of surfactant promotes collapse of alveolar regions and airways such that newborns with RDS are subject to increased inspiratory effort and non-homogeneous ventilation. Pressure oscillation has been incorporated into one form of RDS treatment; however, how far it reaches various parts of the lung is still questionable. Since in-vivo measurement is very difficult if not impossible, mathematical modeling may be used as one way of assessment. Whereas many models of the respiratory system have been developed for adults, the neonatal lung remains essentially ill-described in mathematical models. A mathematical model is developed, which represents the first few generations of the tracheo-bronchial tree and the 5 lobes that make up the premature ovine lung. The elements of the model are derived using the lumped parameter approach and formulated in Simulink™ within the Matlab™ environment. The respiratory parameters at the airway opening compare well with those measured from experiments. The model demonstrates the ability to predict pressures, flows and volumes in the alveolar regions of a premature ovine lung. Copyright © 2011 Elsevier Ltd. All rights reserved.
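
    The lumped-parameter idea, representing an airway-compartment path as a resistance feeding a compliance, can be sketched for a single compartment driven by an oscillatory airway-opening pressure (plain Python rather than Simulink; R, C, and the drive are illustrative values, not the ovine parameters of the paper):

```python
import numpy as np

# One lung compartment: airway resistance R in series with compliance C,
# driven by sinusoidal pressure at the airway opening. Values illustrative.
R, C = 50.0, 0.001          # cmH2O·s/L and L/cmH2O
f = 10.0                    # Hz, pressure-oscillation frequency
dt, T = 1e-4, 1.0           # s, time step and simulated duration

t = np.arange(0.0, T, dt)
p = 5.0 * np.sin(2 * np.pi * f * t)   # cmH2O at the airway opening
v = np.zeros_like(t)                  # compartment volume above FRC, L
for i in range(1, t.size):
    # dV/dt = (P_ao - V/C) / R : flow through R charges the compliance
    v[i] = v[i - 1] + dt * (p[i - 1] - v[i - 1] / C) / R

# Steady-state amplitude should match the first-order RC low-pass prediction.
amp_pred = 5.0 * C / np.sqrt(1.0 + (2 * np.pi * f * R * C) ** 2)
amp_sim = v[t > 0.5].max()
```

    A multi-compartment model like the one in the paper chains many such R-C elements down the branching tree, which is why delivered oscillation amplitude falls off with frequency and depth into the lung.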

  4. Neutron coincidence measurements when nuclear parameters vary during the multiplication process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.

    1995-07-01

    In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating functions to derive explicit formulae for the single and multiple count-rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial spontaneous fission neutrons are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. The present document shows how the previous considerations can be elaborated to embrace these more general requirements.

  5. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, dD/dt = B⟨σ⟩^r (1-D)^(-k) - αD·p/μ, and in the equation relating damage accumulation to volumetric changes, Jρ0 = δ(1-D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.

  6. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    NASA Astrophysics Data System (ADS)

    Gao, Hongkai; Hrachowitz, Markus; Sriwongsitanon, Nutchanart; Fenicia, Fabrizio; Gharari, Shervan; Savenije, Hubert H. G.

    2016-10-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where, due to the combination of seasonality and the occurrence of dry spells, threshold processes are more prominent in rainfall-runoff behavior. An example is the tropical savannah, the second largest climatic zone, characterized by pronounced dry and wet seasons and high evaporative demand. In this study, we investigated the importance of landscape variability for the spatial variability of streamflow in tropical savannah basins. We applied a stepwise modeling approach to 23 subcatchments of the Upper Ping River in Thailand, in which gradually more information on landscape was incorporated. The benchmark is represented by a classical lumped model (FLEXL), which does not account for spatial variability. We then tested the effect of accounting for vegetation information within the lumped model (FLEXLM), and subsequently two semidistributed models: one accounting for the spatial variability of topography-based landscape features alone (FLEXT), and another accounting for both topographic features and vegetation (FLEXTM). In cross validation, each model was calibrated on one catchment and then transferred with its fitted parameters to the remaining catchments. We found that when transferring model parameters in space, the semidistributed models accounting for vegetation and topographic heterogeneity clearly outperformed the lumped model. This suggests that landscape controls a considerable part of the hydrological function and that explicit consideration of its heterogeneity can be highly beneficial for prediction in ungauged basins in tropical savannah.

  7. Simulating settlement during waste placement at a landfill with waste lifts placed under frozen conditions.

    PubMed

    Van Geel, Paul J; Murray, Kathleen E

    2015-12-01

    Twelve instrument bundles were placed within two waste profiles as waste was placed in an operating landfill in Ste. Sophie, Quebec, Canada. The settlement data were simulated using a three-component model to account for primary or instantaneous compression, secondary compression or mechanical creep, and biodegradation-induced settlement. The regressed model parameters from the first waste layer were able to predict the settlement of the remaining four waste layers with good agreement. The model parameters were compared to values published in the literature. An MSW landfill scenario referenced in the literature was used to illustrate how the parameter values from the different studies predicted settlement. The parameters determined in this study and other studies with total waste heights between 15 and 60 m provided similar estimates of total settlement in the long term, while the settlement rates and relative magnitudes of the three components varied. The parameters determined based on studies with total waste heights less than 15 m resulted in larger secondary compression indices and lower biodegradation-induced settlements. When these were applied to an MSW landfill scenario with a total waste height of 30 m, the settlement was overestimated and provided unrealistic values. This study concludes that more field studies are needed to measure waste settlement during the filling stage of landfill operations and more field data are needed to assess different settlement models and their respective parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
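A generic three-component settlement curve of the kind described above can be sketched as an instantaneous strain, a logarithmic mechanical-creep term, and a first-order biodegradation strain. The functional forms and every parameter value below are illustrative assumptions, not the fitted values from this study.

```python
import math

def settlement(t, H=30.0, eps_p=0.05, c_alpha=0.03, t0=1.0,
               eps_b=0.1, k_b=0.002):
    """Three-component waste settlement sketch (generic form, hypothetical
    parameters): instantaneous strain eps_p, mechanical creep
    c_alpha*log10(t/t0), and biodegradation strain eps_b*(1 - exp(-k_b*t)).
    t in days, H = initial waste height in m; returns settlement in m."""
    creep = c_alpha * math.log10(t / t0) if t > t0 else 0.0
    biodeg = eps_b * (1.0 - math.exp(-k_b * t))
    return H * (eps_p + creep + biodeg)
```

With these illustrative values the biodegradation term saturates at eps_b while the creep term keeps growing logarithmically, which mirrors the paper's point that long-term totals can agree even when the relative magnitudes of the components differ.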

  8. Normal tissue complication probability (NTCP) parameters for breast fibrosis: pooled results from two randomised trials.

    PubMed

    Mukesh, Mukesh B; Harris, Emma; Collette, Sandra; Coles, Charlotte E; Bartelink, Harry; Wilkinson, Jenny; Evans, Philip M; Graham, Peter; Haviland, Jo; Poortmans, Philip; Yarnold, John; Jena, Raj

    2013-08-01

    The dose-volume effect of radiation therapy on breast tissue is poorly understood. We estimate NTCP parameters for breast fibrosis after external beam radiotherapy. We pooled individual patient data from 5856 patients in 2 trials of whole breast irradiation with or without a boost. A two-compartment dose-volume histogram model was used, with the boost volume as the first compartment and the remaining breast volume as the second compartment. Results from the START-pilot trial (n=1410) were used to test the predicted models. 26.8% of patients in the Cambridge trial (5 years) and 20.7% of patients in the EORTC trial (10 years) developed moderate-severe breast fibrosis. The best-fit NTCP parameters were BEUD3(50)=136.4 Gy, γ50=0.9 and n=0.011 for the Niemierko model and BEUD3(50)=132 Gy, m=0.35 and n=0.012 for the Lyman-Kutcher-Burman model. The observed rates of fibrosis in the START-pilot trial agreed well with the predicted rates. This large multi-centre pooled study suggests that the effect of the volume parameter is small and that the maximum RT dose is the most important parameter influencing breast fibrosis. A small value of the volume parameter 'n' does not fit the hypothesis that breast tissue is a parallel organ. However, this may reflect limitations in our current scoring system for fibrosis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
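As a sketch of how the Lyman-Kutcher-Burman model turns a dose-volume histogram into a complication probability, the snippet below computes the generalized equivalent uniform dose gEUD = (sum_i v_i * d_i^(1/n))^n and evaluates NTCP = Phi((gEUD - TD50) / (m * TD50)). The defaults reuse the pooled-fit values quoted above, and the doses are assumed to already be expressed as BEUD3; this is an illustration, not the authors' fitting code.

```python
import math

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose for a DVH given as
    (dose, volume) pairs; volumes are normalized internally."""
    total = sum(volumes)
    return sum(v / total * d ** (1.0 / n)
               for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, td50=132.0, m=0.35, n=0.012):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50)).
    Defaults are the pooled-fit values quoted in the abstract;
    doses assumed already converted to BEUD3."""
    t = (gEUD(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

For a uniform dose the gEUD reduces to that dose, so a uniform 132 Gy (BEUD3) yields NTCP = 0.5 by construction; the very small n makes gEUD track the maximum dose, which is the abstract's central finding.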

  9. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended-capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.

  10. Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site

    USGS Publications Warehouse

    Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.

    1995-01-01

    A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.

  11. Biophysical stimulation for in vitro engineering of functional cardiac tissues.

    PubMed

    Korolj, Anastasia; Wang, Erika Yan; Civitarese, Robert A; Radisic, Milica

    2017-07-01

    Engineering functional cardiac tissues remains a significant ongoing challenge due to the complexity of the native environment. However, our growing understanding of key parameters of the in vivo cardiac microenvironment and our ability to replicate those parameters in vitro are resulting in the development of increasingly sophisticated models of engineered cardiac tissues (ECT). This review examines some of the most relevant parameters that may be applied in culture, leading to higher-fidelity cardiac tissue models. These include the biochemical composition of culture media and cardiac lineage specification, co-culture conditions, electrical and mechanical stimulation, and the application of hydrogels, various biomaterials, and scaffolds. The review will also summarize some of the recent functional human tissue models that have been developed for in vivo and in vitro applications. Ultimately, the creation of sophisticated ECT that replicate native structure and function will be instrumental in advancing cell-based therapeutics and in providing advanced models for drug discovery and testing. © 2017 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  12. Heat transfer modelling of pulsed laser-tissue interaction

    NASA Astrophysics Data System (ADS)

    Urzova, J.; Jelinek, M.

    2018-03-01

    The application of medical lasers is on the rise in numerous medical fields owing to their favourable characteristics. From a biomedical point of view, the most interesting applications are thermal interactions and photoablative interactions, which effectively remove tissue without excessive heat damage to the remaining tissue. The objective of this work is to create a theoretical model of heat transfer in tissue following its interaction with the laser beam, in order to predict heat transfer during medical laser surgery procedures. The dimensions of the ablated crater (shape and ablation depth) were determined by computed tomography imaging. COMSOL Multiphysics software was used for temperature modelling. The parameters of tissue and blood, such as density, specific heat capacity, thermal conductivity and diffusivity, were calculated from the chemical composition. The parameters of laser-tissue interaction, such as the absorption and reflection coefficients, were experimentally determined. The parameters of the laser beam were power density, repetition frequency, pulse length and spot dimensions. Heat spreading after laser interaction with tissue was captured using a Fluke thermal camera. The model was verified for adipose tissue, skeletal muscle tissue and heart muscle tissue.
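In its simplest one-dimensional form, the heat-transfer part of such a model reduces to the conduction equation dT/dt = alpha * d2T/dx2. Below is a minimal explicit finite-difference sketch; the tissue diffusivity is a generic soft-tissue value, and a fixed heated-surface temperature stands in for the laser source. A realistic model would add the perfusion (bioheat) term and the measured absorption and reflection coefficients.

```python
def heat_pulse_1d(alpha=1.3e-7, depth=2e-3, nx=41, dt=0.005,
                  t_end=1.0, T0=37.0, T_surface=60.0):
    """Explicit finite-difference solution of 1-D heat conduction
    dT/dt = alpha * d2T/dx2 into tissue at T0 (deg C), with the surface
    node held at T_surface (a crude stand-in for laser heating).
    alpha ~ 1.3e-7 m^2/s is a typical soft-tissue thermal diffusivity."""
    dx = depth / (nx - 1)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    T = [T0] * nx
    T[0] = T_surface                      # heated surface node
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        Tn[0] = T_surface                 # fixed surface temperature
        Tn[-1] = Tn[-2]                   # insulated deep boundary
        T = Tn
    return T
```

After 1 s the thermal penetration depth is roughly sqrt(alpha*t), about 0.4 mm here, so the deep end of the 2 mm domain stays near body temperature.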

  13. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if it is, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. 
    Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. To complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
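The simulation design can be sketched in a few lines: draw Poisson-gamma (negative binomial) counts via a gamma-Poisson mixture parameterized so that Var = mu + alpha*mu^2, then recover the dispersion with the method-of-moments estimator alpha_hat = (s^2 - xbar) / xbar^2. This is an illustrative reimplementation, not the author's code.

```python
import math
import random
import statistics

def simulate_poisson_gamma(mu, alpha, n, seed=1):
    """Draw n Poisson-gamma counts with mean mu and dispersion alpha
    (Var = mu + alpha*mu^2) via a gamma-Poisson mixture."""
    rng = random.Random(seed)
    shape = 1.0 / alpha                     # gamma shape = 1/alpha
    counts = []
    for _ in range(n):
        # E[lam] = shape * scale = mu; clamp keeps the simple sampler safe
        lam = min(rng.gammavariate(shape, mu * alpha), 50.0)
        # Poisson draw by CDF inversion (adequate for small lam)
        p_k = cum = math.exp(-lam)
        k, u = 0, rng.random()
        while u > cum:
            k += 1
            p_k *= lam / k
            cum += p_k
        counts.append(k)
    return counts

def mom_dispersion(counts):
    """Method-of-moments dispersion: alpha_hat = (s^2 - mean) / mean^2."""
    m = statistics.fmean(counts)
    s2 = statistics.variance(counts)
    return (s2 - m) / (m * m)
```

Re-running this with a small mean (e.g. mu = 0.5) and a small sample shows the estimator scattering widely, and sometimes going negative, which is the low-mean problem the paper quantifies.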

  14. A discrete element modelling approach for block impacts on trees

    NASA Astrophysics Data System (ADS)

    Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic

    2015-04-01

    In the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims to use a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities, and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly over the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped to a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was performed to identify the input parameters controlling the block kinematics after the impact on a tree. 20 input parameters were considered in the DEM simulation model: 12 parameters related to the tree and 8 parameters to the block.
The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.

  15. The Early Eocene equable climate problem: can perturbations of climate model parameters identify possible solutions?

    PubMed

    Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J

    2013-10-28

    Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time, an ensemble approach with perturbed climate-sensitive model parameters has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters, and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than required by many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations, and how they differ from the remaining simulations, in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate very different palaeoclimates and the present-day climate well. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not require a strong Charney climate sensitivity for future climate change. It produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.

  16. Air drying modelling of Mastocarpus stellatus seaweed a source of hybrid carrageenan

    NASA Astrophysics Data System (ADS)

    Arufe, Santiago; Torres, Maria D.; Chenlo, Francisco; Moreira, Ramon

    2018-01-01

    Water sorption isotherms from 5 up to 65 °C and air drying kinetics at 35, 45 and 55 °C of Mastocarpus stellatus seaweed were determined. Experimental sorption data were modelled using the BET and Oswin models. A four-parameter model, based on the Oswin model, was proposed to estimate the equilibrium moisture content as a function of water activity and temperature simultaneously. Drying experiments showed that the water removal rate increased significantly with temperature from 35 to 45 °C, but at higher temperatures the drying rate remained constant. Some chemical modifications of the hybrid carrageenans present in the seaweed may be responsible for this unexpected thermal trend. Experimental drying data were modelled using the two-parameter Page model (n, k). The Page parameter n was constant (1.31 ± 0.10) at the tested temperatures, but k varied significantly with drying temperature (from (18.5 ± 0.2) × 10^-3 min^-n at 35 °C up to (28.4 ± 0.8) × 10^-3 min^-n at 45 and 55 °C). Drying experiments allowed the determination of the critical moisture content of the seaweed (0.87 ± 0.06 kg water (kg d.b.)^-1). A diffusional model considering slab geometry was employed to determine the effective diffusion coefficient of water during the falling-rate period at different temperatures.
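The two-parameter Page model MR = exp(-k * t^n) can be fitted by simple log-linearization, since ln(-ln MR) = ln k + n * ln t. The sketch below recovers the parameters from synthetic data generated with the order-of-magnitude values reported above; it is an illustration of the model's form, not the authors' regression procedure.

```python
import math

def fit_page(t, mr):
    """Fit MR = exp(-k * t**n) by linear regression on
    ln(-ln MR) = ln k + n * ln t."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(m)) for m in mr]
    npts = len(xs)
    mx = sum(xs) / npts
    my = sum(ys) / npts
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)      # slope = Page exponent n
    k = math.exp(my - n * mx)               # intercept gives ln k
    return k, n

# synthetic drying curve using the order of magnitude reported above
t_min = [10.0, 30.0, 60.0, 120.0, 240.0]    # drying time, minutes
mr = [math.exp(-2.84e-2 * ti ** 1.31) for ti in t_min]
k_hat, n_hat = fit_page(t_min, mr)
```

With noise-free synthetic data the regression recovers k and n essentially exactly; with real moisture-ratio data the same transform gives quick initial estimates for a nonlinear fit.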

  17. Estimation of the ARNO model baseflow parameters using daily streamflow data

    NASA Astrophysics Data System (ADS)

    Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu

    1999-09-01

    An approach is described for estimation of the baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened residuals with Box-Cox transformation. The effects of changing the seed of the random number generator for both the SA and SCE methods are also explored, as are the effects of the parameter bounds. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was not diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
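The core idea of estimating baseflow parameters from recession sequences alone can be illustrated with the simplest possible case: a linear reservoir Q(t+1) = c * Q(t), whose recession constant follows from a least-squares fit on ln Q against time. The ARNO baseflow formulation is nonlinear with four parameters, so this is only a conceptual stand-in with synthetic data.

```python
import math

def fit_recession_constant(q):
    """Estimate the daily recession ratio c in Q(t+1) = c * Q(t)
    from a baseflow recession sequence, by least squares on ln Q
    (a simplified stand-in for the ARNO baseflow parameterization)."""
    ys = [math.log(v) for v in q]
    n = len(ys)
    xs = list(range(n))
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(slope)      # recession ratio, 0 < c < 1

# synthetic 30-day recession: Q0 = 12 m^3/s decaying with c = 0.95
q = [12.0 * 0.95 ** t for t in range(30)]
c_hat = fit_recession_constant(q)
```

Working on rainfall-free recession limbs is what lets this kind of fit proceed without any precipitation data, which is the partitioning idea exploited above.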

  18. Particle filter based hybrid prognostics for health monitoring of uncertain systems in bond graph framework

    NASA Astrophysics Data System (ADS)

    Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.

    2016-06-01

    The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, exploiting its structural and causal properties. The system, operating in a feedback control loop, is considered globally uncertain. Parametric uncertainty is modeled in interval form. A system parameter is undergoing degradation (the prognostic candidate) and its degradation model is assumed to be known a priori. The detection of degradation commencement is done in a passive manner, using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the state of health (the state of the prognostic candidate) and the associated hidden, time-varying degradation progression parameters are estimated in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties arising from noisy measurements, the parametric degradation process, environmental conditions, etc., are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
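A minimal sampling-importance-resampling particle filter for joint state-parameter estimation, in the spirit described above, can be sketched as follows. The linear degradation model, the noise levels, and the uniform rate prior are all illustrative assumptions, not the paper's BG-derived I-ARR observation model.

```python
import math
import random

def particle_filter_degradation(obs, n_p=2000, seed=7):
    """Minimal SIR particle filter for joint state-parameter estimation:
    hidden damage x_{t+1} = x_t + theta + process noise, observed with
    Gaussian noise. Returns the posterior-mean degradation rate theta."""
    rng = random.Random(seed)
    xs = [0.0] * n_p
    thetas = [rng.uniform(0.0, 0.2) for _ in range(n_p)]   # rate prior
    sig_obs, sig_proc, sig_theta = 0.05, 0.01, 0.002
    for y in obs:
        # propagate; a small jitter on theta keeps the parameter cloud alive
        xs = [x + th + rng.gauss(0.0, sig_proc) for x, th in zip(xs, thetas)]
        thetas = [th + rng.gauss(0.0, sig_theta) for th in thetas]
        # importance weights from the observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / sig_obs) ** 2) for x in xs]
        tot = sum(ws) or 1.0
        ws = [w / tot for w in ws]
        # systematic resampling
        idx, cum, j = [], 0.0, 0
        u0 = rng.random() / n_p
        for i, w in enumerate(ws):
            cum += w
            while j < n_p and u0 + j / n_p <= cum:
                idx.append(i)
                j += 1
        while len(idx) < n_p:          # guard against float round-off
            idx.append(n_p - 1)
        xs = [xs[i] for i in idx]
        thetas = [thetas[i] for i in idx]
    return sum(thetas) / n_p

# synthetic run: true degradation rate 0.15 per step, 40 noisy observations
rng = random.Random(1)
obs = [0.15 * (t + 1) + rng.gauss(0.0, 0.05) for t in range(40)]
theta_hat = particle_filter_degradation(obs)
```

Once the rate posterior has concentrated, remaining-useful-life predictions follow by propagating each particle forward until its damage state crosses a failure threshold, which also yields the confidence bounds mentioned above.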

  19. A physiologically based toxicokinetic model for methylmercury in female American kestrels

    USGS Publications Warehouse

    Nichols, J.W.; Bennett, R.S.; Rossmann, R.; French, J.B.; Sappington, K.G.

    2010-01-01

    A physiologically based toxicokinetic (PBTK) model was developed to describe the uptake, distribution, and elimination of methylmercury (CH3Hg) in female American kestrels. The model consists of six tissue compartments corresponding to the brain, liver, kidney, gut, red blood cells, and remaining carcass. Additional compartments describe the elimination of CH3Hg to eggs and growing feathers. Dietary uptake of CH3Hg was modeled as a diffusion-limited process, and the distribution of CH3Hg among compartments was assumed to be mediated by the flow of blood plasma. To the extent possible, model parameters were developed using information from American kestrels. Additional parameters were based on measured values for closely related species and allometric relationships for birds. The model was calibrated using data from dietary dosing studies with American kestrels. Good agreement between model simulations and measured CH3Hg concentrations in blood and tissues during the loading phase of these studies was obtained by fitting model parameters that control dietary uptake of CH3Hg and possible hepatic demethylation. Modeled results tended to underestimate the observed effect of egg production on circulating levels of CH3Hg. In general, however, simulations were consistent with observed patterns of CH3Hg uptake and elimination in birds, including the dominant role of feather molt. This model could be used to extrapolate CH3Hg kinetics from American kestrels to other bird species by appropriate reassignment of parameter values. Alternatively, when combined with a bioenergetics-based description, the model could be used to simulate CH3Hg kinetics in a long-term environmental exposure. © 2010 SETAC.
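The compartmental bookkeeping behind such a model can be illustrated with a toy three-pool system (gut to blood, blood exchanging with tissue, first-order elimination from blood), integrated by forward Euler. The rate constants are arbitrary placeholders and the structure is far simpler than the six-compartment, plasma-flow-mediated kestrel model.

```python
def simulate_three_pool(dose_rate=1.0, k_abs=0.2, k_bt=0.5, k_tb=0.3,
                        k_elim=0.05, dt=0.01, t_end=400.0):
    """Toy first-order kinetics: gut absorbs ingested CH3Hg at k_abs,
    blood and tissue exchange at k_bt / k_tb, and blood is cleared at
    k_elim (standing in for feather and egg elimination). All rate
    constants are arbitrary illustrative values; units are nominal."""
    gut = blood = tissue = 0.0
    for _ in range(int(t_end / dt)):
        d_gut = dose_rate - k_abs * gut
        d_blood = k_abs * gut - (k_bt + k_elim) * blood + k_tb * tissue
        d_tissue = k_bt * blood - k_tb * tissue
        gut += d_gut * dt
        blood += d_blood * dt
        tissue += d_tissue * dt
    return gut, blood, tissue
```

At steady state the balances give gut = dose_rate/k_abs, blood = dose_rate/k_elim, and tissue = (k_bt/k_tb) * blood, a useful analytic check on the integration.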

  20. Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Fischbach, Sean R.

    2015-01-01

    Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework [1, 2, 3]. Employing this more accurate global energy balance requires a higher-fidelity model of the SRM flow-field and acoustic mode shapes. The current industry-standard analysis tool utilizes a one-dimensional analysis of the time-dependent fluid dynamics along with a quasi-three-dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one-dimensional homogeneous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick [4, 5]. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. The study requires one-way coupling of the CFD High Mach Number Flow (HMNF) and mathematics modules. The HMNF module evaluates the gas flow inside of a SRM using St. Robert's law to model the solid propellant burn rate, no-slip boundary conditions, and the hybrid outflow condition. Results from the HMNF model are verified by comparing the pertinent ballistics parameters with the industry-standard code outputs (e.g., pressure drop, thrust, etc.).
These results are then used by the coefficient form of the mathematics module to determine the complex eigenvalues of the Acoustic Velocity Potential Equation (AVPE). The mathematics model is truncated at the nozzle sonic line, where a zero-flux boundary condition is self-satisfying. The remaining boundaries are modeled with a zero-flux boundary condition, assuming zero acoustic absorption on all surfaces. The results of the steady-state CFD and AVPE analyses are used to calculate the linear acoustic growth rate as defined by Flandro and Jacob [2, 3]. In order to verify the process implemented within COMSOL, we first employ the Culick theory and compare the results with the industry standard. After the process is verified, the Flandro/Jacob energy balance theory is employed and the results are displayed.
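For a one-dimensional homogeneous wave equation with rigid (zero-flux) ends, the acoustic eigenproblem has the closed-form longitudinal mode frequencies f_n = n*c/(2L), a useful sanity check on any numerical eigenvalue solution. The gas sound speed and chamber length below are hypothetical values, not taken from the study.

```python
def longitudinal_modes(c, L, n_modes=3):
    """Closed-closed 1-D chamber acoustic modes: f_n = n * c / (2 * L).
    This is the analytic limit of the 1-D homogeneous wave-equation
    eigenproblem with zero-flux boundaries at both ends."""
    return [n * c / (2.0 * L) for n in range(1, n_modes + 1)]

# hypothetical example: c = 1100 m/s combustion gas, 3 m chamber
modes = longitudinal_modes(1100.0, 3.0)
```

Comparing a finite-element eigensolver against such closed-form modes is a standard first verification step before adding mean-flow and damping effects.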

  1. Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain

    PubMed Central

    Chis Ster, Irina; Ferguson, Neil M.

    2007-01-01

    Despite intensive ongoing research, key aspects of the spatio-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
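
The MCMC estimation idea can be sketched with a much simpler model than the paper's spatial farm-to-farm transmission model: the case counts, Poisson likelihood, and linear force of infection below are toy assumptions, and the sampler is a plain Metropolis random walk over a single transmission parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily case counts during epidemic growth (toy data).
cases = np.array([2, 3, 5, 8, 12, 18, 27])

def log_post(beta):
    """Poisson log-likelihood with a toy force of infection beta*t*5, flat prior."""
    if beta <= 0:
        return -np.inf
    lam = beta * np.arange(1, len(cases) + 1) * 5.0
    return np.sum(cases * np.log(lam) - lam)

# Metropolis random walk over the transmission parameter beta.
samples = []
beta, lp = 1.0, log_post(1.0)
for _ in range(5000):
    prop = beta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        beta, lp = prop, lp_prop
    samples.append(beta)

post = np.array(samples[1000:])               # discard burn-in
print(post.mean(), post.std())
```

The posterior mean settles near the maximum-likelihood value (sum of cases divided by the summed force-of-infection exposure), with the posterior spread quantifying parameter uncertainty; the real analysis does the same over many parameters and farms.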

  2. The Role of Heart-Rate Variability Parameters in Activity Recognition and Energy-Expenditure Estimation Using Wearable Sensors.

    PubMed

    Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan

    2017-07-24

    Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.

  3. Post-LHC7 fine-tuning in the minimal supergravity/CMSSM model with a 125 GeV Higgs boson

    NASA Astrophysics Data System (ADS)

    Baer, Howard; Barger, Vernon; Huang, Peisi; Mickelson, Dan; Mustafayev, Azar; Tata, Xerxes

    2013-02-01

    The recent discovery of a 125 GeV Higgs-like resonance at LHC, coupled with the lack of evidence for weak scale supersymmetry (SUSY), has severely constrained SUSY models such as minimal supergravity (mSUGRA)/CMSSM. As LHC probes deeper into SUSY model parameter space, the little hierarchy problem—how to reconcile the Z and Higgs boson mass scale with the scale of SUSY breaking—will become increasingly exacerbated unless a sparticle signal is found. We evaluate two different measures of fine-tuning in the mSUGRA/CMSSM model. The more stringent of these, ΔHS, includes effects that arise from the high-scale origin of the mSUGRA parameters while the second measure, ΔEW, is determined only by weak scale parameters: hence, it is universal to any model with the same particle spectrum and couplings. Our results incorporate the latest constraints from LHC7 sparticle searches, LHCb limits from Bs→μ+μ- and also require a light Higgs scalar with mh ≈ 123-127 GeV. We present fine-tuning contours in the m0 vs. m1/2 plane for several sets of A0 and tan β values. We also present results for ΔHS and ΔEW from a scan over the entire viable model parameter space. We find ΔHS ≳ 10³, or at best 0.1%, fine-tuning. For the less stringent electroweak fine-tuning, we find ΔEW ≳ 10², or at best 1%, fine-tuning. Two benchmark points are presented that have the lowest values of ΔHS and ΔEW. Our results provide a quantitative measure for ascertaining whether or not the remaining mSUGRA/CMSSM model parameter space is excessively fine-tuned and so could provide impetus for considering alternative SUSY models.

  4. Real-time sensing of fatigue crack damage for information-based decision and control

    NASA Astrophysics Data System (ADS)

    Keller, Eric Evans

    Information-based decision and control for structures that are subject to failure by fatigue cracking is based on the following notion: Maintenance, usage scheduling, and control parameter tuning can be optimized through real-time knowledge of the current state of fatigue crack damage. Additionally, if the material properties of a mechanical structure can be identified within a smaller range, then the remaining life prediction of that structure will be substantially more accurate. Information-based decision systems can rely on physical models, estimation of material properties, exact knowledge of usage history, and sensor data to synthesize an accurate snapshot of the current state of damage and the likely remaining life of a structure under given assumed loading. The work outlined in this thesis is structured to enhance the development of information-based decision and control systems. This is achieved by constructing a test facility for laboratory experiments on real-time damage sensing. This test facility makes use of a methodology that has been formulated for fatigue crack model parameter estimation and significantly improves the quality of predictions of remaining life. Specifically, the thesis focuses on development of an on-line fatigue crack damage sensing and life prediction system that is built upon the disciplines of Systems Sciences and Mechanics of Materials. A major part of the research effort has been expended to design and fabricate a test apparatus which allows: (i) measurement and recording of statistical data for fatigue crack growth in metallic materials via different sensing techniques; and (ii) identification of stochastic model parameters for prediction of fatigue crack damage. 
To this end, this thesis describes the test apparatus and the associated instrumentation based on four different sensing techniques, namely, traveling optical microscopy, ultrasonic flaw detection, Alternating Current Potential Drop (ACPD), and fiber-optic extensometry-based compliance, for crack length measurements.

  5. Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.

    PubMed

    Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai

    2018-02-01

    The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate the part of the otherwise unobserved heterogeneity to the individual model parameters but also was able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations

    NASA Astrophysics Data System (ADS)

    Weng, H.; Yang, H.

    2017-12-01

    Dynamic rupture models can provide detailed insights into rupture physics that are capable of informing assessments of future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-offs between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combining multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. Through numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust average slip-weakening distance of 0.6 m, without a substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first robust determination of the slip-weakening distance on a seismogenic fault from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters from a global perspective that can better reveal the intrinsic physics of earthquakes.

  7. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamics in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  8. Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Bugajski, D.; Sain, M.

    1985-01-01

    The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamical systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
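
The residual-sensitivity pruning described above can be sketched as follows; the toy system, candidate monomial set, and tolerance are assumptions for illustration, and the iterative input-redesign step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: y depends on x and x^2 only; candidate monomials up to degree 3.
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.5 * x**2 + 0.01 * rng.normal(size=x.size)

def design(x, keep):
    """Design matrix containing only the kept monomial degrees."""
    cols = {1: x, 2: x**2, 3: x**3}
    return np.column_stack([cols[d] for d in keep])

keep = [1, 2, 3]
theta, *_ = np.linalg.lstsq(design(x, keep), y, rcond=None)
base = np.sum((design(x, keep) @ theta - y) ** 2)   # full-model residual

# Drop each monomial in turn; if the residual barely grows, it is superfluous.
for d in list(keep):
    trial = [k for k in keep if k != d]
    t, *_ = np.linalg.lstsq(design(x, trial), y, rcond=None)
    extra = np.sum((design(x, trial) @ t - y) ** 2) - base
    if extra < 1e-2:          # residuals insensitive to this monomial
        keep = trial

print(keep)
```

With the assumed data, dropping x or x² inflates the residual sharply while dropping x³ does not, so the reduced model retains degrees 1 and 2 only.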

  9. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for solving the overfitting problem is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing neural computing libraries. The algorithm was tested on the recognition of handwritten digits from the MNIST database, as well as on time-series prediction (series of the average monthly sunspot number and of the Lorenz system were used). It is shown that applying principal component analysis reduces the number of parameters of the neural network model while maintaining good results. The average error rate for recognition of handwritten digits from the MNIST database was 1.12% (comparable to results obtained using deep learning methods), while the number of parameters of the neural network can be reduced by a factor of up to 130.
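
A minimal sketch of the dimension-reduction idea, using synthetic data in place of MNIST: PCA (computed via SVD) identifies the low-dimensional subspace that carries the variance, so the first-layer weight count of a downstream network shrinks proportionally. All sizes and the variance threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image data: 100-dim inputs whose variance lives in 5 dims.
basis = rng.normal(size=(5, 100))
z = rng.normal(size=(500, 5))
X = z @ basis + 0.01 * rng.normal(size=(500, 100))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var), 0.99)) + 1  # components for 99% variance

# Reduced inputs: a first network layer now needs k inputs per unit, not 100.
X_red = Xc @ Vt[:k].T
print(k, X_red.shape)
```

On this synthetic data the procedure recovers the 5-dimensional subspace exactly, a 20-fold reduction in the input dimension fed to the model.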

  10. Uncertainty in predictions of oil spill trajectories in a coastal zone

    NASA Astrophysics Data System (ADS)

    Sebastião, P.; Guedes Soares, C.

    2006-12-01

    A method is introduced to determine the uncertainties in the predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, which are the input parameters, and calculates the standard deviation of the output results which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to a single trajectory that is calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are done taking into account the uncertainties of the input parameters. This band of trajectories defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines in the southwestern coast of Portugal. This model represented well the distinction between a wind driven part that remained offshore, and a tide driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields, and the other using wind and tidal currents. In both cases reasonable approximation to the observed results was obtained. The envelope of likely trajectories that is obtained with the uncertainty modelling proved to give a better interpretation of the trajectories that were simulated by the oil spill model.
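
The uncertainty-propagation method lends itself to a short sketch: treat the input parameters as random variables, run the model repeatedly, and take the standard deviation of the outputs as the uncertainty envelope around the mean trajectory. The drift model and parameter distributions below are invented for illustration, not the paper's oil spill model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D drift model: position advances with 3% of wind speed plus a
# tidal current term (a gross simplification of a classic oil spill model).
def trajectory(wind_speed, current, hours=24):
    drift = 0.03 * wind_speed + current     # km/h along one axis
    return drift * np.arange(hours + 1)

# Input parameters as random variables (assumed means/standard deviations).
winds = rng.normal(10.0, 2.0, size=1000)     # wind speed, km/h
currents = rng.normal(0.5, 0.1, size=1000)   # tidal current, km/h

runs = np.array([trajectory(w, c) for w, c in zip(winds, currents)])
mean_path = runs.mean(axis=0)                # single "best estimate" trajectory
band = runs.std(axis=0)                      # envelope width at each hour
print(mean_path[-1], band[-1])
```

The band grows linearly here because the toy model is linear in time; in a real spill model the envelope shape reflects the interplay of wind and tide fields.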

  11. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the Arabidopsis thaliana plant. The result provides confirmation that the mathematical model constructed satisfactorily agrees with the time-series datasets of seven metabolite concentrations.

  12. Reason, emotion and decision-making: risk and reward computation with feeling.

    PubMed

    Quartz, Steven R

    2009-05-01

    Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.

  13. Influence of lasing parameters on the cleaning efficacy of laser-activated irrigation with pulsed erbium lasers.

    PubMed

    Meire, Maarten A; Havelaerts, Sophie; De Moor, Roeland J

    2016-05-01

    Laser-activated irrigation (LAI) using erbium lasers is an irrigant agitation technique with great potential for improved cleaning of the root canal system, as shown in many in vitro studies. However, lasing parameters for LAI vary considerably and their influence remains unclear. Therefore, this study sought to investigate the influence of pulse energy, pulse frequency, pulse length, irradiation time and fibre tip shape, position and diameter on the cleaning efficacy of LAI. Transparent resin blocks containing standardized root canals (apical diameter of 0.4 mm, 6% taper, 15 mm long, with a coronal reservoir) were used as the test model. A standardized groove in the apical part of each canal wall was packed with stained dentin debris. The canals were filled with irrigant, which was activated by an erbium:yttrium aluminium garnet (Er:YAG) laser (2940 nm, AT Fidelis, Fotona, Ljubljana, Slovenia). In each experiment, one laser parameter was varied, while the others remained constant. In this way, the influence of pulse energy (10-40 mJ), pulse length (50-1000 μs), frequency (5-30 Hz), irradiation time (5-40 s) and fibre tip shape (flat or conical), position (pulp chamber, canal entrance, next to groove) and diameter (300-600 μm) was determined by treating 20 canals per parameter. The amount of debris remaining in the groove after each LAI procedure was scored and compared among the different treatments. The parameters significantly (P < 0.05, Kruskal-Wallis) affecting debris removal from the groove were fibre tip position, pulse length, pulse energy, irradiation time and frequency. Fibre tip shape and diameter had no significant influence on the cleaning efficacy.

  14. Chloramine demand estimation using surrogate chemical and microbiological parameters.

    PubMed

    Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

    2017-07-01

    A model is developed to enable estimation of chloramine demand in full scale drinking water supplies based on chemical and microbiological factors that affect chloramine decay rate, via nonlinear regression analysis. The model is based on organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological decay of chloramine (Fm). The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical tests comparing the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia was discussed. It was found that with the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool in order to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
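
The two-pathway decay structure can be sketched with synthetic data; the rate constants, amplitudes, and the grid-search fitting procedure below are illustrative assumptions rather than the paper's regression method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic chloramine residual (mg/L): fast + slow first-order decay pathways.
t = np.linspace(0, 48, 25)                               # hours
obs = 0.8 * np.exp(-0.30 * t) + 1.2 * np.exp(-0.01 * t)
obs = obs + 0.01 * rng.normal(size=t.size)

# Fit C(t) = a*exp(-kf*t) + b*exp(-ks*t): grid-search the two rate constants,
# solving linearly for the amplitudes a, b at each grid point.
best = (np.inf, None)
for kf in np.linspace(0.05, 0.6, 45):
    for ks in np.linspace(0.001, 0.05, 45):
        A = np.column_stack([np.exp(-kf * t), np.exp(-ks * t)])
        coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
        sse = np.sum((A @ coef - obs) ** 2)
        if sse < best[0]:
            best = (sse, (kf, ks, coef[0], coef[1]))

sse, (kf, ks, a, b) = best
print(round(kf, 2), round(ks, 3), round(a, 2), round(b, 2))
```

Separating the amplitudes (linear) from the rate constants (nonlinear) keeps the search two-dimensional, a common trick for sums-of-exponentials fits.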

  15. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    PubMed Central

    Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed failure prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since single-component faults are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. These conclusions are verified by experiments. PMID:25147853
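
A heavily simplified sketch of the particle-filter update used in this style of prognostics; the scalar degradation model, noise levels, and failure threshold are invented for illustration, not the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical degradation: a component parameter drifts exponentially toward
# a failure threshold of 2.0; the drift rate r is unknown.
r_true, x, obs = 0.05, 1.0, []
for _ in range(10):
    x *= 1 + r_true
    obs.append(x + 0.02 * rng.normal())      # noisy measurements

# Particle filter over the unknown drift rate.
N = 2000
r = rng.uniform(0.0, 0.2, N)                 # particles: candidate drift rates
state = np.ones(N)
for y in obs:
    state *= 1 + r                                        # propagate
    w = np.exp(-0.5 * ((y - state) / 0.02) ** 2) + 1e-300 # Gaussian likelihood
    idx = rng.choice(N, N, p=w / w.sum())                 # resample by weight
    r = r[idx] + 0.002 * rng.normal(size=N)               # roughen vs. collapse
    state = state[idx]

# Remaining life: steps until each particle's trajectory crosses the threshold.
rul = np.log(2.0 / state) / np.log(1 + np.clip(r, 1e-6, None))
print(np.median(r), np.median(rul))
```

The particle cloud over `rul` gives a full predictive distribution of remaining life, not just a point estimate, which is the practical appeal of the PF approach.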

  16. Refinement of Generalized Born Implicit Solvation Parameters for Nucleic Acids and their Complexes with Proteins

    PubMed Central

    Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos

    2016-01-01

    The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, leading to difficulty in application of GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a similar procedure to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and also implemented in the AMBER software. PMID:26574454

  17. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty, (2) process noise, and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
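
The quantity that inverse-FORM targets, probability bounds on RUL, can be cross-checked by plain Monte Carlo sampling, which is how the paper verifies its results. The sketch below assumes a Paris-law-style crack-growth model with an uncertain material parameter and uncertain future loading; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Assumed Paris-law crack growth with exponent m = 2: da/dN = C*(S*sqrt(pi*a))^2,
# which integrates to N_f = ln(ac/a0) / (C * S^2 * pi).
a0, ac = 1.0, 10.0                               # initial / critical crack size
C = rng.lognormal(np.log(1e-3), 0.2, size=N)     # material parameter scatter
S = rng.normal(100.0, 10.0, size=N)              # uncertain future load amplitude

rul = np.log(ac / a0) / (C * S**2 * np.pi)       # cycles to failure, per sample

lo, hi = np.percentile(rul, [5, 95])             # probability bounds on RUL
print(lo, np.median(rul), hi)
```

Inverse-FORM obtains each such percentile from a single optimization rather than the full sample, which is what makes it attractive for fast online prognosis.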

  18. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
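
The fuzzy-sets-plus-regression combination can be sketched in a few lines: triangular fuzzy sets encode the expert's severity bands over the degradation parameter, and a linear fit extrapolated to the failure point yields the RUL. The membership boundaries, vibration data, and failure level below are assumed for illustration.

```python
import numpy as np

# Triangular membership function for an expert-defined band (assumed shape).
def tri(x, a, b, c):
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Hypothetical turbine vibration measurements (mm/s) over operating hours.
hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
vib = np.array([2.1, 2.6, 3.2, 3.5, 4.1, 4.6])

# Expert knowledge: fuzzy sets spanning the full parameter range.
sets = {"normal": (0, 2, 4), "degraded": (2, 4, 6), "critical": (4, 6, 8)}
grades = {name: tri(vib[-1], *p) for name, p in sets.items()}

# Linear regression of the degradation parameter vs. time, extrapolated to the
# component's known failure point.
slope, intercept = np.polyfit(hours, vib, 1)
failure_level = 7.0
rul = (failure_level - vib[-1]) / slope      # hours left at the current trend
print(grades, round(rul, 1))
```

The membership grades tell the operator which severity band dominates now, while the regression supplies the time-to-threshold estimate those grades qualify.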

  19. Application of tire dynamics to aircraft landing gear design analysis

    NASA Technical Reports Server (NTRS)

    Black, R. J.

    1983-01-01

    The tire plays a key part in many analyses used for design of aircraft landing gear. Examples include structural design of wheels, landing gear shimmy, brake whirl, chatter and squeal, complex combination of chatter and shimmy on main landing gear (MLG) systems, anti-skid performance, gear walk, and rough terrain loads and performance. Tire parameters needed in the various analyses are discussed. Two tire models are discussed for shimmy analysis, the modified Moreland approach and the von Schlippe-Dietrich approach. It is shown that the Moreland model can be derived from the von Schlippe-Dietrich model by certain approximations. The remaining analysis areas are discussed in general terms and the tire parameters needed for each are identified. Accurate tire data allows more accurate design analysis and the correct prediction of dynamic performance of aircraft landing gear.

  20. Global fits of GUT-scale SUSY models with GAMBIT

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  1. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olea, Ricardo A., E-mail: olea@usgs.gov

    2011-12-15

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different, and often better, estimates than the triangular distribution.
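    The parameter-specification step above can be sketched in code. A common bridge from the familiar (minimum, mode, maximum) specification to the two beta shape parameters is the PERT-style mapping; this mapping, the concentration value of 4, and the three-point numbers below are illustrative assumptions rather than recommendations from the text.

```python
import random

def pert_beta_params(low, mode, high, concentration=4.0):
    """Map a (low, mode, high) spec to beta shape parameters.

    PERT-style mapping: the beta mode matches the specified mode,
    while `concentration` controls how peaked the density is.
    """
    m = (mode - low) / (high - low)          # mode rescaled to [0, 1]
    return 1.0 + concentration * m, 1.0 + concentration * (1.0 - m)

def sample_scaled_beta(low, mode, high, n, seed=0):
    """Draw n values on [low, high] following the matched beta."""
    rng = random.Random(seed)
    a, b = pert_beta_params(low, mode, high)
    return [low + (high - low) * rng.betavariate(a, b) for _ in range(n)]

# The same three-point spec drives both a triangular and a beta model.
low, mode, high = 10.0, 40.0, 100.0
rng = random.Random(1)
tri_draws = [rng.triangular(low, high, mode) for _ in range(10_000)]
beta_draws = sample_scaled_beta(low, mode, high, 10_000)
```

    As the abstract notes, generating beta values is as straightforward as generating triangular ones; the only extra work is the one-line mapping from the three-point specification to shape parameters.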

  2. Accurate formula for gaseous transmittance in the infrared.

    PubMed

    Gibson, G A; Pierluissi, J H

    1971-07-01

    By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO₂ over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data being 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.

  3. Uncertainty quantification and propagation in nuclear density functional theory

    DOE PAGES

    Schunck, N.; McDonnell, J. D.; Higdon, D.; ...

    2015-12-23

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While ongoing efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  4. Performance dependence of hybrid x-ray computed tomography/fluorescence molecular tomography on the optical forward problem.

    PubMed

    Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis

    2009-04-01

    Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.

  5. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  6. Hydrodynamic and Longitudinal Impedance Analysis of Cerebrospinal Fluid Dynamics at the Craniovertebral Junction in Type I Chiari Malformation

    PubMed Central

    Martin, Bryn A.; Kalata, Wojciech; Shaffer, Nicholas; Fischer, Paul; Luciano, Mark; Loth, Francis

    2013-01-01

    Elevated or reduced velocity of cerebrospinal fluid (CSF) at the craniovertebral junction (CVJ) has been associated with type I Chiari malformation (CMI). Thus, quantification of hydrodynamic parameters that describe the CSF dynamics could help assess disease severity and surgical outcome. In this study, we describe the methodology to quantify CSF hydrodynamic parameters near the CVJ and upper cervical spine utilizing subject-specific computational fluid dynamics (CFD) simulations based on in vivo MRI measurements of flow and geometry. Hydrodynamic parameters were computed for a healthy subject and two CMI patients both pre- and post-decompression surgery to determine the differences between cases. For the first time, we present the methods to quantify longitudinal impedance (LI) to CSF motion, a subject-specific hydrodynamic parameter that may have value to help quantify the CSF flow blockage severity in CMI. In addition, the following hydrodynamic parameters were quantified for each case: maximum velocity in systole and diastole, Reynolds and Womersley number, and peak pressure drop during the CSF cardiac flow cycle. The following geometric parameters were quantified: cross-sectional area and hydraulic diameter of the spinal subarachnoid space (SAS). The mean values of the geometric parameters increased post-surgically for the CMI models, but remained smaller than the healthy volunteer. All hydrodynamic parameters, except pressure drop, decreased post-surgically for the CMI patients, but remained greater than in the healthy case. Peak pressure drop alterations were mixed. To our knowledge this study represents the first subject-specific CFD simulation of CMI decompression surgery and quantification of LI in the CSF space. Further study in a larger patient and control group is needed to determine if the presented geometric and/or hydrodynamic parameters are helpful for surgical planning. PMID:24130704

  7. Changes of peritoneal transport parameters with time on dialysis: assessment with sequential peritoneal equilibration test.

    PubMed

    Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia

    2017-10-27

    Sequential peritoneal equilibration test (sPET) is based on the consecutive performance of the peritoneal equilibration test (PET, 4-hour, glucose 2.27%) and the mini-PET (1-hour, glucose 3.86%), and the estimation of peritoneal transport parameters with the 2-pore model. It enables the assessment of the functional transport barrier for fluid and small solutes. The objective of this study was to check whether the estimated model parameters can serve as better and earlier indicators of the changes in the peritoneal transport characteristics than directly measured transport indices that depend on several transport processes. Seventeen patients were examined using sPET twice, with an interval of about 8 months (230 ± 60 days). There was no difference between the observational parameters measured in the 2 examinations. The indices for solute transport, but not net UF, were well correlated between the examinations. Among the estimated parameters, a significant decrease between the 2 examinations was found only for hydraulic permeability LpS and osmotic conductance for glucose, whereas the other parameters remained unchanged. These fluid transport parameters did not correlate with D/P for creatinine, although the decrease in LpS values between the examinations was observed mostly for patients with low D/P for creatinine. We conclude that changes in fluid transport parameters, hydraulic permeability and osmotic conductance for glucose, as assessed by the pore model, may precede the changes in small solute transport. The systematic assessment of fluid transport status needs specific clinical and mathematical tools besides the standard PET tests.

  8. Analysis of Flexible Car Body of Straddle Monorail Vehicle

    NASA Astrophysics Data System (ADS)

    Zhong, Yuanmu

    2018-03-01

    Based on a finite element model of a straddle monorail vehicle, a rigid-flexible coupling dynamic model that accounts for car-body flexibility is established. The influence of the vertical stiffness and vertical damping of the running wheel on the modal parameters of the car body is analyzed, and the effect of car-body flexibility on the modal parameters and vehicle ride quality is also studied. The results show that when the vertical stiffness of the running wheel is below 1 MN/m, the car-body bounce and pitch frequencies increase with increasing wheel stiffness; at 1 MN/m and above, they remain unchanged. Similarly, when the vertical stiffness is below 1.8 MN/m, the bounce and pitch damping ratios increase with increasing wheel stiffness, and at 1.8 MN/m and above they remain unchanged. The vertical damping of the running wheel has no effect on the bounce and pitch frequencies, whereas the bounce and pitch damping ratios increase with increasing vertical damping. The flexibility of the car body has no effect on its modal parameters and improves the vehicle ride quality index.

  9. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
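    A fully specified stochastic, matrix-based projection of the kind described can be sketched as follows. The two-stage structure, the vital-rate values and their temporal variability, and the Ricker-style density dependence are all illustrative placeholders, not estimates derived from MAPS data.

```python
import math
import random

def project(n0, years, seed=0):
    """Project a two-stage (juvenile, adult) population with
    environmental stochasticity in the vital rates and a Ricker-type
    density dependence acting on recruitment."""
    rng = random.Random(seed)
    juv, adult = n0
    K = 500.0                                             # density-dependence scale (hypothetical)
    traj = [(juv, adult)]
    for _ in range(years):
        total = juv + adult
        f = max(0.0, rng.gauss(1.5, 0.3))                 # fecundity (young per adult)
        sj = min(1.0, max(0.0, rng.gauss(0.3, 0.05)))     # juvenile true survival
        sa = min(1.0, max(0.0, rng.gauss(0.6, 0.05)))     # adult true survival
        dd = math.exp(-total / K)                         # Ricker density dependence
        juv, adult = adult * f * dd, juv * sj + adult * sa
        traj.append((juv, adult))
    return traj

traj = project((50.0, 50.0), 100)
```

    Parameter uncertainty of the kind quantified in the text could then be propagated by re-running this projection with vital-rate sets drawn from their joint (multivariate) distribution, which is what a global sensitivity analysis amounts to.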

  10. The Virtual Brain: Modeling Biological Correlates of Recovery after Chronic Stroke

    PubMed Central

    Falcon, Maria Inez; Riley, Jeffrey D.; Jirsa, Viktor; McIntosh, Anthony R.; Shereen, Ahmed D.; Chen, E. Elinor; Solodkin, Ana

    2015-01-01

    There currently remains considerable variability in stroke survivor recovery. To address this, developing individualized treatment has become an important goal in stroke treatment. As a first step, it is necessary to determine brain dynamics associated with stroke and recovery. While recent methods have made strides in this direction, we still lack physiological biomarkers. The Virtual Brain (TVB) is a novel application for modeling brain dynamics that simulates an individual’s brain activity by integrating their own neuroimaging data with local biophysical models. Here, we give a detailed description of the TVB modeling process and explore model parameters associated with stroke. To draw a parallel between this new type of modeling and approaches currently in use, in this work we establish an association between a specific TVB parameter (long-range coupling) that increases after stroke with metrics derived from graph analysis. We used TVB to simulate the individual BOLD signals for 20 patients with stroke and 10 healthy controls. We performed graph analysis on their structural connectivity matrices calculating degree centrality, betweenness centrality, and global efficiency. Linear regression analysis demonstrated that long-range coupling is negatively correlated with global efficiency (P = 0.038), but is not correlated with degree centrality or betweenness centrality. Our results suggest that the larger influence of local dynamics seen through the long-range coupling parameter is closely associated with a decreased efficiency of the system. We thus propose that the increase in the long-range parameter in TVB (indicating a bias toward local over global dynamics) is deleterious because it reduces communication as suggested by the decrease in efficiency. 
The new model platform TVB hence provides a novel perspective to understanding biophysical parameters responsible for global brain dynamics after stroke, allowing the design of focused therapeutic interventions. PMID:26579071
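    Global efficiency, the graph metric found above to correlate with the long-range coupling parameter, is the average inverse shortest-path length over node pairs. A minimal sketch on a toy unweighted graph (the structural connectomes in the study are weighted and far larger):

```python
from collections import deque

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs of an
    unweighted graph; unreachable pairs contribute 0."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0}                       # BFS shortest paths from src
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:
                    total += 1.0 / dist[dst]
    return total / pairs if pairs else 0.0

# A 4-node ring is less efficient than the fully connected graph.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
full = {i: [j for j in range(4) if j != i] for i in range(4)}
```

    On this toy pair the fully connected graph attains the maximum efficiency of 1 and the ring falls below it, the direction of change the study associates with a stronger bias toward local dynamics.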

  11. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.

    2014-08-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km², calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas. This article was corrected on 15 SEP 2014. See the end of the full text for details.
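    The dual state-parameter idea can be illustrated with a toy ensemble Kalman filter on a one-parameter linear reservoir, where a soil-moisture-like storage is observed and the recession parameter rides along in the augmented state. Everything here (the model, noise levels, forcing, and the "true" parameter value) is a made-up stand-in for LISFLOOD and the satellite products.

```python
import random

def enkf_update(ens, obs, obs_sd, rng):
    """Stochastic (perturbed-observation) EnKF analysis step on an
    augmented ensemble of [storage S, parameter k] members; only S
    is observed."""
    n = len(ens)
    means = [sum(m[i] for m in ens) / n for i in (0, 1)]
    var_s = sum((m[0] - means[0]) ** 2 for m in ens) / (n - 1)
    denom = var_s + obs_sd ** 2
    gains = [sum((m[i] - means[i]) * (m[0] - means[0]) for m in ens)
             / (n - 1) / denom for i in (0, 1)]
    for m in ens:
        innov = obs + rng.gauss(0.0, obs_sd) - m[0]
        m[0] += gains[0] * innov               # update observed storage...
        m[1] += gains[1] * innov               # ...and, via covariance, the parameter

rng = random.Random(42)
true_k, S_true = 0.3, 1.0                      # hypothetical "true" recession parameter
ens = [[1.0, rng.uniform(0.05, 0.6)] for _ in range(200)]
for t in range(40):
    P = 0.5 + 0.3 * (t % 5) / 5.0              # synthetic forcing
    S_true += P - true_k * S_true
    for m in ens:                              # forecast with each member's own k
        m[0] += P - m[1] * m[0] + rng.gauss(0.0, 0.02)
    enkf_update(ens, S_true + rng.gauss(0.0, 0.05), 0.05, rng)
k_est = sum(m[1] for m in ens) / len(ens)
```

    Assimilating the storage observations pulls the parameter ensemble toward the true recession rate through the state-parameter covariance, the same mechanism by which remotely sensed soil moisture constrains land-surface parameters in the study.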

  12. Evaluating Vertical Moisture Structure of the Madden-Julian Oscillation in Contemporary GCMs

    NASA Astrophysics Data System (ADS)

    Guan, B.; Jiang, X.; Waliser, D. E.

    2013-12-01

    The Madden-Julian Oscillation (MJO) remains a major challenge in our understanding and modeling of the tropical convection and circulation. Many models have trouble realistically simulating key characteristics of the MJO, such as the strength, period, and eastward propagation. For models that do simulate aspects of the MJO, it remains to be understood what parameters and processes are the most critical in determining the quality of the simulations. This study focuses on the vertical structure of moisture in MJO simulations, with the aim to identify and understand the relationship between MJO simulation qualities and key parameters related to moisture. A series of 20-year simulations conducted by 26 GCMs are analyzed, including four that are coupled to ocean models and two that have a two-dimensional cloud resolving model embedded (i.e., superparameterized). TRMM precipitation and ERA-Interim reanalysis are used to evaluate the model simulations. MJO simulation qualities are evaluated based on pattern correlations of lead/lag regressions of precipitation, a measure of the model representation of the eastward propagating MJO convection. Models with the strongest and weakest MJOs (top and bottom quartiles) are compared in terms of differences in moisture content, moisture convergence, moistening rate, and moist static energy. It is found that models with the strongest MJOs have better representations of the observed vertical tilt of moisture. The relative importance of convection, advection, boundary layer, and large-scale convection/precipitation is discussed in terms of their contribution to the moistening process. The results highlight the overall importance of vertical moisture structure in MJO simulations. The work contributes to the climatological component of the joint WCRP-WWRP/THORPEX YOTC MJO Task Force and the GEWEX Atmosphere System Study (GASS) global model evaluation project focused on the vertical structure and diabatic processes of the MJO.

  13. Potential application of item-response theory to interpretation of medical codes in electronic patient records

    PubMed Central

    2011-01-01

    Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. 
Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
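    The two-parameter model referred to above gives the probability of a code being recorded as a logistic function of the latent trait. A minimal sketch (the item parameters below are hypothetical, chosen at the extremes of the discrimination range reported):

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a
    record at latent-trait level `theta` carries an item (code) with
    discrimination `a` and calibration (location) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two hypothetical codes at the same calibration, with discriminations
# at the ends of the range reported above (0.67 and 2.78):
p_weak = p_2pl(7.0, 0.67, 6.0)     # shallow response curve
p_strong = p_2pl(7.0, 2.78, 6.0)   # steep response curve
```

    At a trait level equal to the calibration, the recording probability is exactly 0.5 for both codes; one unit above it, the high-discrimination code separates cases far more sharply, which is what makes poorly discriminating codes candidates for revised coding sets.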

  14. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by using various numerical tests on the classic chaos model, the Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether partial parameters or all parameters are unknown in the Lorenz equations. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
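    The flavour of the approach can be sketched with a bare-bones evolutionary search for the Lorenz parameters. The integration scheme, population sizes, mutation scale and search ranges below are illustrative choices, not those of the paper; elitist selection guarantees the best cost never increases across generations.

```python
import random

def simulate(params, n=300, dt=0.005, state=(1.0, 1.0, 1.0)):
    """Forward-Euler integration of the Lorenz system."""
    sigma, rho, beta = params
    x, y, z = state
    traj = []
    for _ in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        traj.append((x, y, z))
    return traj

TRUE = (10.0, 28.0, 8.0 / 3.0)
TARGET = simulate(TRUE)          # stands in for the observed trajectory

def cost(params):
    """Sum of squared deviations from the target trajectory."""
    return sum((u - v) ** 2
               for p, q in zip(simulate(params), TARGET)
               for u, v in zip(p, q))

rng = random.Random(7)
pop = [(rng.uniform(5, 15), rng.uniform(20, 35), rng.uniform(1, 4))
       for _ in range(40)]
initial_best = min(cost(p) for p in pop)
for _ in range(80):
    pop.sort(key=cost)
    parents = pop[:10]                                  # elitist selection
    children = [tuple(g + rng.gauss(0.0, 0.3) for g in rng.choice(parents))
                for _ in range(30)]                     # Gaussian mutation
    pop = parents + children
best = min(pop, key=cost)
```

    Noisy observations would simply be folded into TARGET, which is how the anti-noise experiments in the paper degrade the cost surface the search must navigate.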

  15. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The result shows that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fit than the state space model with linear and Gaussian assumptions.
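    A Monte Carlo (bootstrap particle filter) estimate of a latent direct indicator from indirect measurements can be sketched as below. The degradation and measurement models are deliberately simple stand-ins; the point of the sequential Monte Carlo machinery is that neither needs to be linear or Gaussian.

```python
import math
import random

def particle_filter(obs, n=500, seed=3):
    """Bootstrap particle filter estimating a latent degradation
    indicator x from indirect measurements y. Toy model (illustrative):
    x' = x + 0.1 + N(0, 0.02), observed as y = x + N(0, 0.1)."""
    rng = random.Random(seed)
    particles = [0.0] * n
    estimates = []
    for y in obs:
        # propagate through the state equation (need not be linear/Gaussian)
        particles = [x + 0.1 + rng.gauss(0.0, 0.02) for x in particles]
        # weight each particle by the measurement likelihood
        weights = [math.exp(-0.5 * ((y - x) / 0.1) ** 2) for x in particles]
        total = sum(weights)
        estimates.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

# Synthetic data: a "true" degradation path and noisy indirect readings.
rng = random.Random(1)
truth, true_path, obs = 0.0, [], []
for _ in range(40):
    truth += 0.1 + rng.gauss(0.0, 0.02)
    true_path.append(truth)
    obs.append(truth + rng.gauss(0.0, 0.1))
est = particle_filter(obs)
```

    A remaining-useful-life distribution then follows by propagating the resampled particles forward until each crosses a failure threshold, giving a full distribution rather than a point estimate.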

  16. Are the Insomnia Severity Index and Pittsburgh Sleep Quality Index valid outcome measures for Cognitive Behavioral Therapy for Insomnia? Inquiry from the perspective of response shifts and longitudinal measurement invariance in their Chinese versions.

    PubMed

    Chen, Po-Yi; Jan, Ya-Wen; Yang, Chien-Ming

    2017-07-01

    The purpose of this study was to examine whether the Insomnia Severity Index (ISI) and Pittsburgh Sleep Quality Index (PSQI) are valid outcome measures for Cognitive Behavioral Therapy for Insomnia (CBT-I). Specifically, we tested whether the factorial parameters of the ISI and the PSQI could remain invariant against CBT-I, which is a prerequisite to using their change scores as an unbiased measure of the treatment outcome of CBT-I. A clinical data set including scores on the Chinese versions of the ISI and the PSQI obtained from 114 insomnia patients prior to and after a 6-week CBT-I program in Taiwan was analyzed. A series of measurement invariance (MI) tests were conducted to compare the factorial parameters of the ISI and the PSQI before and after the CBT-I treatment program. Most factorial parameters of the ISI remained invariant after CBT-I. However, the factorial model of the PSQI changed after CBT-I treatment. An extra loading with three residual correlations was added into the factorial model after treatment. The partial strong invariance of the ISI supports that it is a valid outcome measure for CBT-I. In contrast, various changes in the factor model of the PSQI indicate that it may not be an appropriate outcome measure for CBT-I. Some possible causes for the changes of the constructs of the PSQI following CBT-I are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Multi-Higgs doublet models: physical parametrization, sum rules and unitarity bounds

    NASA Astrophysics Data System (ADS)

    Bento, Miguel P.; Haber, Howard E.; Romão, J. C.; Silva, João P.

    2017-11-01

    If the scalar sector of the Standard Model is non-minimal, one might expect multiple generations of the hypercharge-1/2 scalar doublet analogous to the generational structure of the fermions. In this work, we examine the structure of a Higgs sector consisting of N Higgs doublets (where N ≥ 2). It is particularly convenient to work in the so-called charged Higgs basis, in which the neutral Higgs vacuum expectation value resides entirely in the first Higgs doublet, and the charged components of the remaining N − 1 Higgs doublets are mass-eigenstate fields. We elucidate the interactions of the gauge bosons with the physical Higgs scalars and the Goldstone bosons and show that they are determined by an N × 2N matrix. This matrix depends on (N − 1)(2N − 1) real parameters that are associated with the mixing of the neutral Higgs fields in the charged Higgs basis. Among these parameters, N − 1 are unphysical (and can be removed by rephasing the physical charged Higgs fields), and the remaining 2(N − 1)² parameters are physical. We also demonstrate a particularly simple form for the cubic interaction and some of the quartic interactions of the Goldstone bosons with the physical Higgs scalars. These results are applied in the derivation of Higgs coupling sum rules and tree-level unitarity bounds that restrict the size of the quartic scalar couplings. In particular, new applications to three Higgs doublet models with an order-4 CP symmetry and with a Z_3 symmetry, respectively, are presented.
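    The parameter counting quoted above is a one-line identity that can be checked mechanically: removing the N − 1 rephasing parameters from the (N − 1)(2N − 1) mixing parameters leaves 2(N − 1)² physical ones.

```python
# Check the counting: (N-1)(2N-1) neutral-mixing parameters in the
# charged Higgs basis, minus N-1 unphysical rephasings of the physical
# charged Higgs fields, leaves 2(N-1)^2 physical parameters.
for N in range(2, 20):
    total = (N - 1) * (2 * N - 1)
    unphysical = N - 1
    physical = total - unphysical
    assert physical == 2 * (N - 1) ** 2
```

    For N = 2 this gives 3 − 1 = 2 physical mixing parameters, consistent with the formula, and the physical count grows quadratically with each additional doublet.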

  18. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
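    The effect of perfect a priori knowledge on the bounds can be seen already in a two-parameter toy calculation: with both parameters unknown, the CRB of the first is [F⁻¹]₁₁, while fixing the second a priori tightens it to 1/F₁₁. The Fisher matrix below is hypothetical (think bottom cover and bathymetry in the shallow-water case).

```python
def crb_first_param(F):
    """For a 2x2 Fisher information matrix F, return the CRB of
    theta_1 both when theta_2 is unknown ([F^-1]_11) and when it is
    known perfectly a priori (1 / F_11)."""
    (f11, f12), (f21, f22) = F
    det = f11 * f22 - f12 * f21
    return f22 / det, 1.0 / f11

# Hypothetical Fisher information with correlated parameters:
F = [[4.0, 3.0],
     [3.0, 9.0]]
crb_joint, crb_known = crb_first_param(F)
```

    Here crb_joint = 1/3 versus crb_known = 1/4: whenever the off-diagonal information F₁₂ is nonzero, knowing one parameter (e.g. LiDAR-derived bathymetry) strictly tightens the bound on the other, which is the effect reported for bottom-cover retrieval in shallow waters.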

  19. [Modelling the impact of vaccination on the epidemiology of varicella zoster virus].

    PubMed

    Bonmarin, I; Santa-Olalla, P; Lévy-Bruhl, D

    2008-10-01

    The forthcoming availability of a combined MMR-varicella vaccine has re-stimulated the debate around universal infant vaccination against varicella. In France, the incidence of varicella is estimated at about 700,000 cases per year, with approximately 3500 hospitalisations and 15-25 deaths, the latter mainly occurring in those over 15 years of age. Vaccination would certainly decrease the overall incidence of the disease, but concerns remain that vaccination could lead to a shift in the average age at infection, followed by an increase in the incidence of severe cases and congenital varicella. In order to provide support for decision-making, a dynamic mathematical model of varicella virus transmission was used to predict the effect of different vaccination strategies and coverages on the epidemiology of varicella and zoster. A deterministic realistic age-structured model was adapted to the French situation. Epidemiological parameters were estimated from literature or surveillance data. Various vaccine coverages and vaccination strategies were investigated. A sensitivity analysis of varicella incidence predictions was performed to test the impact of changes in the vaccine parameters and age-specific mixing patterns. The model confirms that the overall incidence and morbidity of varicella would likely be reduced by mass vaccination of 12-month-old children. Whatever the coverage and the vaccine strategy, vaccination will cause a shift in age distribution with, for vaccination coverage up to at least 80% in the base-case analysis, an increased morbidity among adults and pregnant women. However, the total number of deaths and hospitalisations from varicella is predicted to remain below that expected without vaccination. The model is very sensitive to the matrix of contacts used and to the parameters describing vaccine effectiveness. Zoster incidence will increase over a number of decades, followed by a decline to below prevaccination levels. Mass varicella vaccination in France will result in an overall reduction of varicella incidence but will cause a shift in age distribution with an increase in adult cases. Due to the uncertainties in key parameter values, the exact magnitude of this shift is difficult to assess.

  20. Search for Muonic Dark Forces at BABAR

    NASA Astrophysics Data System (ADS)

    Godang, Romulus

    2017-04-01

    Many models of physics beyond the Standard Model predict the existence of light Higgs states, dark photons, and new gauge bosons mediating interactions between dark sectors and the Standard Model. Using the full data sample collected with the BABAR detector at the PEP-II e+e- collider, we report searches for a light non-Standard Model Higgs boson, a dark photon, and a new muonic dark force mediated by a gauge boson (Z') coupling only to the second and third lepton families. Our results significantly improve upon the current bounds and further constrain the remaining region of the allowed parameter space.

  1. Desorption modeling of hydrophobic organic chemicals from plastic sheets using experimentally determined diffusion coefficients in plastics.

    PubMed

    Lee, Hwang; Byun, Da-Eun; Kim, Ju Min; Kwon, Jung-Hwan

    2018-01-01

    To evaluate the rate of migration of hydrophobic organic chemicals (HOCs) from plastic debris, desorption of model HOCs from polyethylene (PE)/polypropylene (PP) films to water was measured using PE/PP films homogeneously loaded with the HOCs. The fractions of HOCs remaining in the PE/PP films were compared with those predicted using a model characterized by the mass transfer Biot number. The experimental data agreed with the model simulation, indicating that HOC desorption from plastic particles can generally be described by the model. For hexachlorocyclohexanes with lower plastic-water partition coefficients, desorption was dominated by diffusion in the plastic film, whereas desorption of chlorinated benzenes with higher partition coefficients was determined by diffusion in the aqueous boundary layer. Evaluation of the fraction of HOCs remaining in plastic films with respect to film thickness and desorption time showed that the partition coefficient between plastic and water is the most important parameter influencing the desorption half-life. Copyright © 2017 Elsevier Ltd. All rights reserved.
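
    The Biot-number framing above can be illustrated in its film-diffusion-controlled limit. The sketch below evaluates Crank's classic series solution for a plane sheet desorbing into a well-mixed bath and locates the desorption half-life by root finding; the diffusivity and film thickness are illustrative values, not the study's measured coefficients.

```python
import numpy as np
from scipy.optimize import brentq

def fraction_remaining(t, D, L, n_terms=200):
    """Fraction of solute left in a plane sheet of half-thickness L after
    desorption time t (Crank's series solution). This sketch assumes
    film-side diffusion control, i.e. the large-Biot-number limit."""
    k = 2 * np.arange(n_terms) + 1  # odd mode numbers
    return float(np.sum(8.0 / (k * np.pi) ** 2
                        * np.exp(-D * (k * np.pi / (2 * L)) ** 2 * t)))

# Illustrative values only (not the study's fitted coefficients):
D = 1e-13    # diffusivity in the polymer, m^2/s
L = 25e-6    # film half-thickness, m (a 50-um sheet)
half_life = brentq(lambda t: fraction_remaining(t, D, L) - 0.5, 1.0, 1e8)
```

    In the opposite (boundary-layer-controlled) regime the decay would instead be governed by the aqueous mass transfer coefficient and the partition coefficient, consistent with the chlorinated-benzene behaviour reported above.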

  2. An Australian stocks and flows model for asbestos.

    PubMed

    Donovan, Sally; Pickin, Joe

    2016-10-01

    All available data on asbestos consumption in Australia were collated in order to determine the most common asbestos-containing materials remaining in the built environment. The proportion of asbestos contained within each material and the types of products these materials are most commonly found in were also determined. The lifetimes of these asbestos-containing products were estimated in order to develop a model that projects stocks and flows of asbestos products in Australia through to the year 2100. The model is based on a Weibull distribution and was built in an Excel spreadsheet to make it user-friendly and accessible. The nature of the products under consideration means both their asbestos content and lifetime parameters are highly variable, and so for each of these a high and low estimate is presented along with the estimate used in the model. The user is able to vary the parameters in the model as better data become available. © The Author(s) 2016.
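
    A minimal sketch of the Weibull-based stocks-and-flows bookkeeping described above: each installation cohort is retired according to a Weibull survival curve, giving the remaining in-use stock and the annual end-of-life outflow. The shape and scale values below are placeholders, not the report's calibrated lifetime parameters.

```python
import numpy as np

def weibull_survival(age, shape, scale):
    """Probability a product installed `age` years ago is still in place."""
    return np.exp(-(age / scale) ** shape)

def stock_and_flows(installs, shape, scale, horizon):
    """Project remaining in-use stock and annual end-of-life outflow.
    `installs[y]` is the mass installed in year index y. Parameter values
    are placeholders, not the report's calibrated estimates."""
    years = np.arange(horizon)
    stock = np.zeros(horizon)
    outflow = np.zeros(horizon)
    for y0, mass in enumerate(installs):
        age = years[y0:] - y0
        surv = weibull_survival(age, shape, scale)
        stock[y0:] += mass * surv                        # still in place
        outflow[y0 + 1:] += mass * (surv[:-1] - surv[1:])  # retired each year
    return stock, outflow

stock, outflow = stock_and_flows(installs=[100, 50, 0], shape=2.5,
                                 scale=40.0, horizon=80)
```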

  3. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    NASA Astrophysics Data System (ADS)

    Murphy, M. J.; Simpson, R. L.; Urtiew, P. A.; Souers, P. C.; Garcia, F.; Garza, R. G.

    1996-05-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition + two growth terms) were found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model.

  4. Damage Tolerant Analysis of Cracked Al 2024-T3 Panels repaired with Single Boron/Epoxy Patch

    NASA Astrophysics Data System (ADS)

    Mahajan, Akshay D.; Murthy, A. Ramachandra; Nanda Kumar, M. R.; Gopinath, Smitha

    2018-06-01

    It is known that damage tolerant analysis has two objectives, namely, remaining life prediction and residual strength evaluation. To achieve these objectives, determination of accurate and reliable fracture parameters is very important. XFEM methodologies for fatigue and fracture analysis of cracked aluminium panels repaired with different patch shapes made of single boron/epoxy have been developed. Heaviside and asymptotic crack tip enrichment functions are employed to model the crack. XFEM formulations such as the displacement field formulation and the element stiffness matrix formulation are presented. The domain form of the interaction integral is employed to determine the stress intensity factor (SIF) of repaired cracked panels. Computed SIFs are incorporated in the Paris crack growth model to predict the remaining fatigue life. The residual strength has been computed by using the remaining life approach, which accounts for both the crack growth constants and the number of cycles to failure. From the various studies conducted, it is observed that the patch repair significantly reduces the SIF at the crack tip, and hence both the residual strength and the remaining life of the patched cracked panels are improved significantly. The predicted remaining life and residual strength will be useful for the design of structures/components under fatigue loading.
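
    The remaining-life step described above can be sketched by integrating the Paris law over crack length. All constants below (C, m, geometry factor Y, stress range) are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def remaining_life(a0, a_crit, d_sigma, C, m, Y=1.0, n_steps=10000):
    """Cycles to grow a crack from a0 to a_crit under the Paris law
    da/dN = C * (dK)^m, with dK = Y * d_sigma * sqrt(pi * a).
    Constants are illustrative, not the paper's fitted values."""
    a = np.linspace(a0, a_crit, n_steps)
    dNda = 1.0 / (C * (Y * d_sigma * np.sqrt(np.pi * a)) ** m)
    # trapezoidal integration of dN/da over the crack length
    return float(np.sum(0.5 * (dNda[1:] + dNda[:-1]) * np.diff(a)))

# A 2 mm crack grown to 20 mm under a 100 MPa stress range
# (units consistent in MPa and metres):
N = remaining_life(a0=0.002, a_crit=0.020, d_sigma=100.0, C=1e-11, m=3.0)
```

    A repair patch that lowers the SIF enters through a reduced effective Y (or d_sigma), which lengthens the integrated life — the qualitative effect reported above.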

  5. Local overfishing may be avoided by examining parameters of a spatio-temporal model

    PubMed Central

    Shackell, Nancy; Mills Flemming, Joanna

    2017-01-01

    Spatial erosion of stock structure through local overfishing can lead to stock collapse because fish often prefer certain locations, and fisheries tend to focus on those locations. Fishery managers are challenged to maintain the integrity of the entire stock and require scientific approaches that provide them with sound advice. Here we propose a Bayesian hierarchical spatio-temporal modelling framework for fish abundance data to estimate key parameters that define spatial stock structure: persistence (similarity of spatial structure over time), connectivity (coherence of temporal pattern over space), and spatial variance (variation across the seascape). The consideration of these spatial parameters in the stock assessment process can help identify the erosion of structure and assist in preventing local overfishing. We use Atlantic cod (Gadus morhua) in eastern Canada as a case study and examine the behaviour of these parameters from the height of the fishery through its collapse. We identify clear signals in parameter behaviour under circumstances of destructive stock erosion as well as for recovery of spatial structure even when combined with a non-recovery in abundance. Further, our model reveals that the spatial pattern of areas of high and low density persists over the 41 years of available data and identifies the remnant patches. Models of this sort are crucial to recovery plans if we are to identify and protect remaining sources of recolonization for Atlantic cod. Our method is immediately applicable to other exploited species. PMID:28886179


  7. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  8. Hot HB Stars in Globular Clusters - Physical Parameters and Consequences for Theory. VI. The Second Parameter Pair M3 and M13

    NASA Technical Reports Server (NTRS)

    Moehler, S.; Landsman, W. B.; Sweigart, A. V.; Grundahl, F.

    2002-01-01

    We present the results of spectroscopic analyses of hot horizontal branch (HB) stars in M13 and M3, which form a famous second parameter pair. From the spectra we derived - for the first time in M13 - atmospheric parameters (effective temperature and surface gravity) as well as abundances of helium, magnesium, and iron. Consistent with analyses of hot HB stars in other globular clusters we find evidence for helium depletion and iron enrichment in stars hotter than about 12,000 K in both M3 and M13. Accounting for the iron enrichment substantially improves the agreement with canonical evolutionary models, although the derived gravities and masses are still somewhat too low. This remaining discrepancy may be an indication that scaled-solar metal-rich model atmospheres do not adequately represent the highly non-solar abundance ratios found in blue HB stars with radiative levitation. We discuss the effects of an enhancement in the envelope helium abundance on the atmospheric parameters of the blue HB stars, as might be caused by deep mixing on the red giant branch or primordial pollution from an earlier generation of intermediate mass asymptotic giant branch stars.

  9. Surface density: a new parameter in the fundamental metallicity relation of star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Hashimoto, Tetsuya; Goto, Tomotsugu; Momose, Rieko

    2018-04-01

    Star-forming galaxies display a close relation among stellar mass, metallicity, and star formation rate (or molecular-gas mass). This is known as the fundamental metallicity relation (FMR) (or molecular-gas FMR), and it has profound implications for models of galaxy evolution. However, a significant residual scatter remains around the FMR. We show here that a fourth parameter, the surface density of stellar mass, reduces the dispersion around the molecular-gas FMR. In a principal component analysis of 29 physical parameters of 41 338 star-forming galaxies, the surface density of stellar mass is found to be the fourth most important parameter. The new 4D fundamental relation forms a tighter hypersurface that reduces the metallicity dispersion to 50 per cent of that of the molecular-gas FMR. We suggest that future analyses and models of galaxy evolution should consider the FMR in a 4D space that includes surface density. The dilution time-scale of gas inflow and the star-formation efficiency could explain the observational dependence on surface density of stellar mass.
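
    A toy sketch of the principal component analysis step described above, on synthetic data standing in for the 41 338-galaxy sample: correlated "parameters" load onto a common leading component, and the explained-variance fractions rank their importance.

```python
import numpy as np

def explained_variance(X):
    """Fraction of total variance carried by each principal component,
    from an SVD of the standardized data matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    s = np.linalg.svd(Z, compute_uv=False)
    return s ** 2 / np.sum(s ** 2)

# Toy stand-in: 1000 'galaxies' x 5 'parameters', where columns 1-2 are
# nearly redundant copies of column 0 (strongly correlated observables).
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 1))
X = np.hstack([base,
               base + 0.1 * rng.normal(size=(1000, 1)),
               base + 0.1 * rng.normal(size=(1000, 1)),
               rng.normal(size=(1000, 2))])
frac = explained_variance(X)
```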

  10. Angular distribution of cosmological parameters as a probe of inhomogeneities: a kinematic parametrisation

    NASA Astrophysics Data System (ADS)

    Carvalho, C. Sofia; Basilakos, Spyros

    2016-08-01

    We use a kinematic parametrisation of the luminosity distance to measure the angular distribution on the sky of time derivatives of the scale factor, in particular the Hubble parameter H0, the deceleration parameter q0, and the jerk parameter j0. We apply a recently published method to complement probing the inhomogeneity of the large-scale structure by means of the inhomogeneity in the cosmic expansion. This parametrisation is independent of the cosmological equation of state, which renders it adequate to test interpretations of the cosmic acceleration alternative to the cosmological constant. For the same analytical toy model of an inhomogeneous ensemble of homogeneous pixels, we derive the backreaction term in j0 due to the fluctuations of { H0,q0 } and measure it to be of order 10^-2 times the corresponding average over the pixels in the absence of backreaction. In agreement with that computed using a ΛCDM parametrisation of the luminosity distance, the backreaction effect on q0 remains below the detection threshold. Although the backreaction effect on j0 is about ten times that on q0, it is also below the detection threshold. Hence backreaction remains unobservable in both q0 and j0.
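
    A minimal sketch of a kinematic luminosity distance parametrisation of the kind described above, using the standard third-order expansion in (H0, q0, j0) for the flat case (Visser-type coefficients assumed; the paper's exact parametrisation may differ).

```python
C_KM_S = 299792.458  # speed of light, km/s

def d_L(z, H0, q0, j0):
    """Third-order kinematic expansion of the luminosity distance (Mpc),
    depending only on (H0 [km/s/Mpc], q0, j0) and on no equation of state."""
    return (C_KM_S * z / H0) * (1.0
            + 0.5 * (1.0 - q0) * z
            - (1.0 - q0 - 3.0 * q0 ** 2 + j0) * z ** 2 / 6.0)

# One sky 'pixel' with LambdaCDM-like kinematics:
d = d_L(0.1, H0=70.0, q0=-0.55, j0=1.0)
```

    Fitting this form independently in different sky pixels gives the angular distributions of H0, q0, and j0 without assuming a dark energy model.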

  11. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  12. Quantifying the economic importance of irrigation water reuse in a Chilean watershed using an integrated agent-based model

    NASA Astrophysics Data System (ADS)

    Arnold, R. T.; Troost, Christian; Berger, Thomas

    2015-01-01

    Irrigation with surface water enables Chilean agricultural producers to generate one of the country's most important economic exports. The Chilean water code established tradable water rights as a mechanism to allocate water amongst farmers and other water-use sectors. It remains contested whether this mechanism is effective, and many authors have raised equity concerns regarding its impact on water users. For example, speculative hoarding of water rights in expectation of their increasing value has been described. This paper demonstrates how farmers can hoard water rights as a risk management strategy for variable water supply, for example due to El Niño cycles or as a consequence of climate change. While farmers with insufficient water rights can rely on unclaimed water under conditions of normal water availability, drought years disproportionately reduce their supply of irrigation water and thereby farm profitability. This study uses a simulation model that consists of a hydrological balance component and a multiagent farm decision and production component. Both model components are parameterized with empirical data, while uncertain parameters are calibrated. The study demonstrates a thorough quantification of parameter uncertainty, using global sensitivity analysis and multiple behavioral parameter scenarios.

  13. Stiffness and Damping in Postural Control Increase with Age

    PubMed Central

    Cenciarini, Massimo; Loughlin, Patrick J.; Sparto, Patrick J.; Redfern, Mark S.

    2011-01-01

    Upright balance is believed to be maintained through active and passive mechanisms, both of which have been shown to be impacted by aging. A compensatory balance response often observed in older adults is increased co-contraction, which is generally assumed to enhance stability by increasing joint stiffness. We investigated the effect of aging on standing balance by fitting body sway data to a previously-developed postural control model that includes active and passive stiffness and damping parameters. Ten young (24 ± 3 y) and seven older (75 ± 5 y) adults were exposed during eyes-closed stance to perturbations consisting of lateral pseudorandom floor tilts. A least-squares fit of the measured body sway data to the postural control model found significantly larger active stiffness and damping model parameters in the older adults. These differences remained significant even after normalizing to account for different body sizes between the young and older adult groups. An age effect was also found for the normalized passive stiffness, but not for the normalized passive damping parameter. This concurrent increase in active stiffness and damping was shown to be more stabilizing than an increase in stiffness alone, as assessed by oscillations in the postural control model impulse response. PMID:19770083

  14. Figures of merit for present and future dark energy probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortonson, Michael J.; Huterer, Dragan; Hu, Wayne

    2010-09-15

    We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of ~10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis of principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by -1 ≤ w ≤ 1 improve for approximately 6 independent dark energy parameters, resulting in a reduction of the total allowed volume of principal component parameter space by a factor of ~100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.

  15. Modeling Patterns of Total Dissolved Solids Release from Central Appalachia, USA, Mine Spoils.

    PubMed

    Clark, Elyse V; Zipper, Carl E; Daniels, W Lee; Orndorff, Zenah W; Keefe, Matthew J

    2017-01-01

    Surface mining in the central Appalachian coalfields (USA) influences water quality because the interaction of infiltrated waters and O2 with freshly exposed mine spoils releases elevated levels of total dissolved solids (TDS) to streams. Modeling and predicting the short- and long-term TDS release potentials of mine spoils can aid in the management of current and future mining-influenced watersheds and landscapes. In this study, the specific conductance (SC, a proxy variable for TDS) patterns of 39 mine spoils during a sequence of 40 leaching events were modeled using a five-parameter nonlinear regression. Estimated parameter values were compared to six rapid spoil assessment techniques (RSATs) to assess predictive relationships between model parameters and RSATs. Spoil leachates reached maximum values, 1108 ± 161 μS cm⁻¹ on average, within the first three leaching events, then declined exponentially to a breakpoint at the 16th leaching event on average. After the breakpoint, SC release remained linear, with most spoil samples exhibiting declines in SC release with successive leaching events. The SC asymptote averaged 276 ± 25 μS cm⁻¹. Only three samples had SCs >500 μS cm⁻¹ at the end of the 40 leaching events. Model parameters varied with mine spoil rock and weathering type, and RSATs were predictive of four model parameters. Unweathered samples released higher SCs throughout the leaching period relative to weathered samples, and rock type influenced the rate of SC release. The RSATs for SC, total S, and neutralization potential may best predict certain phases of mine spoil TDS release. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
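
    The exponential-then-linear shape described above can be written as a five-parameter piecewise function; the sketch below is one plausible form consistent with the description, not necessarily the authors' exact regression equation, and the parameter values are illustrative.

```python
import numpy as np

def sc_model(n, a, b, c, d, n_b):
    """Leaching curve: SC follows a + b*exp(-c*n) up to breakpoint event
    n_b, then declines linearly with slope d from the breakpoint value.
    A plausible five-parameter form, not the study's exact equation."""
    n = np.asarray(n, dtype=float)
    exp_part = a + b * np.exp(-c * n)
    lin_part = a + b * np.exp(-c * n_b) + d * (n - n_b)
    return np.where(n <= n_b, exp_part, lin_part)

# Illustrative parameters (uS/cm): early peak, breakpoint at event 16,
# slow linear decline toward an asymptote near 300.
events = np.arange(1, 41)
sc = sc_model(events, a=300.0, b=800.0, c=0.2, d=-1.0, n_b=16.0)
```

    Such a function could be fit to the 40-event leachate series with `scipy.optimize.curve_fit`, giving one parameter set per spoil sample to compare against the RSATs.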

  16. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling designs for debugging experiments, the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  17. Concerning modeling of double-stage water evaporation cooling

    NASA Astrophysics Data System (ADS)

    Shatskiy, V. P.; Fedulova, L. I.; Gridneva, I. V.

    2018-03-01

    The need to set technical norms for production, and to maintain acceptable microclimate parameters such as temperature and humidity at the workplace, remains urgent. The use of particular cooling units should be economically justified, taking into account construction, assembly, operational, technological, and environmental requirements. Water evaporation coolers are simple to maintain, environmentally friendly, and quite cheap, but developing the most efficient solutions requires mathematical modeling of the heat and mass transfer processes that take place in them.

  18. Getting a feel for parameters: using interactive parallel plots as a tool for parameter identification in the new rainfall-runoff model WALRUS

    NASA Astrophysics Data System (ADS)

    Brauer, Claudia; Torfs, Paul; Teuling, Ryan; Uijlenhoet, Remko

    2015-04-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between complex, spatially distributed models often used in lowland catchments and simple, parametric models which have mostly been developed for mountainous catchments (Brauer et al., 2014ab). This parametric rainfall-runoff model can be used all over the world in both freely draining lowland catchments and polders with controlled water levels. The open source model code is implemented in R and can be downloaded from www.github.com/ClaudiaBrauer/WALRUS. The structure and code of WALRUS are simple, which facilitates detailed investigation of the effect of parameters on all model variables. WALRUS contains only four parameters requiring calibration; they are intended to have a strong, qualitative relation with catchment characteristics. Parameter estimation remains a challenge, however. The model structure contains three main feedbacks: (1) between groundwater and surface water; (2) between saturated and unsaturated zone; (3) between catchment wetness and (quick/slow) flowroute division. These feedbacks represent essential rainfall-runoff processes in lowland catchments, but increase the risk of parameter dependence and equifinality. Therefore, model performance should not only be judged based on a comparison between modelled and observed discharges, but also based on the plausibility of the internal modelled variables. Here, we present a method to analyse the effect of parameter values on internal model states and fluxes in a qualitative and intuitive way using interactive parallel plotting. We applied WALRUS to ten Dutch catchments with different sizes, slopes and soil types and both freely draining and polder areas. The model was run with a large number of parameter sets, which were created using Latin Hypercube Sampling. 
The model output was characterised in terms of several signatures, both measures of goodness of fit and statistics of internal model variables (such as the percentage of rain water travelling through the quickflow reservoir). End users can then eliminate parameter combinations with unrealistic outcomes based on expert knowledge using interactive parallel plots. In these plots, for instance, ranges can be selected for each signature and only model runs which yield signature values in these ranges are highlighted. The resulting selection of realistic parameter sets can be used for ensemble simulations. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geoscientific Model Development, 7, 2313-2332, www.geosci-model-dev.net/7/2313/2014/gmd-7-2313-2014.pdf C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrology and Earth System Sciences, 18, 4007-4028, www.hydrol-earth-syst-sci.net/18/4007/2014/hess-18-4007-2014.pdf

  19. Responder analysis without dichotomization.

    PubMed

    Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li

    2016-01-01

    In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
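
    A minimal sketch of the model-based responder analysis described above, assuming a normal model for the underlying clinical variable: the responder proportion is derived from Gaussian maximum likelihood estimates rather than from dichotomized counts. The cutoff and data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def responder_proportion(y, cutoff):
    """Model-based responder proportion: assume the clinical variable is
    Normal(mu, sigma), estimate (mu, sigma) by maximum likelihood, and
    derive P(Y > cutoff) instead of counting dichotomized responders."""
    mu, sigma = y.mean(), y.std()      # Gaussian MLEs
    return 1.0 - norm.cdf((cutoff - mu) / sigma)

# Hypothetical change scores for one arm (responder: score > 0):
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=4.0, size=200)
p_model = responder_proportion(y, cutoff=0.0)
p_dichot = (y > 0.0).mean()            # the usual dichotomized estimate
```

    Both estimators target the same proportion, but the model-based one uses the full continuous data and so is typically more efficient when the normality assumption holds.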

  20. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
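The tornado-diagram technique mentioned above can be sketched as a one-at-a-time sensitivity screen: swing each parameter across its range with the others held at base values, and rank parameters by the resulting output swing. The loss function here is a toy stand-in, not the UCERF3-TD portfolio loss:

```python
def tornado(f, base, ranges):
    """One-at-a-time sensitivity: output swing when each parameter moves
    across its range while all others stay at their base values."""
    swings = {}
    for name, (lo, hi) in ranges.items():
        point = dict(base)
        point[name] = lo
        f_lo = f(point)
        point[name] = hi
        f_hi = f(point)
        swings[name] = abs(f_hi - f_lo)
    # Largest swing first: these parameters must vary in the reduced model;
    # the rest can be fixed at representative values.
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical loss as a function of three logic-tree branch choices
loss = lambda p: 3.0 * p["a"] + 0.5 * p["b"] + 0.01 * p["c"]
ranking = tornado(loss, {"a": 1.0, "b": 1.0, "c": 1.0},
                  {"a": (0.0, 2.0), "b": (0.0, 2.0), "c": (0.0, 2.0)})
```

Parameters at the bottom of the ranking are candidates for fixing, which is how a 57,600-leaf tree can shrink to a few dozen leaves.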

  1. Modeling dynamic beta-gamma polymorphic transition in Tin

    NASA Astrophysics Data System (ADS)

    Chauvin, Camille; Montheillet, Frank; Petit, Jacques; CEA Gramat Collaboration; EMSE Collaboration

    2015-06-01

    Solid-solid phase transitions in metals have been studied by shock-wave techniques for many decades. Recent experiments have investigated the transition during isentropic compression experiments and shock-wave compression and have highlighted the strong influence of the loading rate on the transition. Complementary data obtained with velocity and temperature measurements around the beta-gamma polymorphic transition of tin in gas gun experiments have displayed the importance of the kinetics of the transition. But even though this phenomenon is known, modeling the kinetics remains complex and relies on empirical formulations. A multiphase EOS is available in our 1D Lagrangian code Unidim. We propose to present the influence of various kinetic laws (either empirical or involving nucleation and growth mechanisms) and their parameters (Gibbs free energy, temperature, pressure) on the transformation rate. We compare experimental and calculated velocity and temperature profiles and we underline the effects of the empirical parameters of these models.

  2. Calculation of diagnostic parameters of advanced serological and molecular tissue-print methods for detection of Citrus tristeza virus. A model for other plant pathogens

    USDA-ARS?s Scientific Manuscript database

    Citrus tristeza virus (CTV) is one of the most important virus diseases which affect citrus. Control of CTV in Spain and central California is achieved by planting virus-free citrus on CTV-tolerant or -resistant rootstocks. Quarantine and certification programs remain essential to avoid importation ...

  3. Application of advanced data assimilation techniques to the study of cloud and precipitation feedbacks in the tropical climate system

    NASA Astrophysics Data System (ADS)

    Posselt, Derek J.

    The research documented in this study centers around two topics: evaluation of the response of precipitating cloud systems to changes in the tropical climate system, and assimilation of cloud and precipitation information from remote-sensing platforms. The motivation for this work proceeds from the following outstanding problems: (1) Use of models to study the response of clouds to perturbations in the climate system is hampered by uncertainties in cloud microphysical parameterizations. (2) Though there is an ever-growing set of available observations, cloud and precipitation assimilation remains a difficult problem, particularly in the tropics. (3) Though it is widely acknowledged that cloud and precipitation processes play a key role in regulating the Earth's response to surface warming, the response of the tropical hydrologic cycle to climate perturbations remains largely unknown. The above issues are addressed in the following manner. First, Markov chain Monte Carlo (MCMC) methods are used to quantify the sensitivity of the NASA Goddard Cumulus Ensemble (GCE) cloud resolving model (CRM) to changes in its cloud microphysical parameters. TRMM retrievals of precipitation rate, cloud properties, and radiative fluxes and heating rates over the South China Sea are then assimilated into the GCE model to constrain cloud microphysical parameters to values characteristic of convection in the tropics, and the resulting observation-constrained model is used to assess the response of the tropical hydrologic cycle to surface warming. The major findings of this study are the following: (1) MCMC provides an effective tool with which to evaluate both model parameterizations and the assumption of Gaussian statistics used in optimal estimation procedures. (2) Statistics of the tropical radiation budget and hydrologic cycle can be used to effectively constrain CRM cloud microphysical parameters. 
(3) For 2D CRM simulations run with and without shear, the precipitation efficiency of cloud systems increases with increasing sea surface temperature, while the high cloud fraction and outgoing shortwave radiation decrease.
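The MCMC machinery underlying such a sensitivity study can be illustrated with a one-parameter random-walk Metropolis sampler on a toy posterior; this is a generic sketch, not the GCE/TRMM configuration:

```python
import numpy as np

def metropolis(log_post, theta0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step and accept
    with probability min(1, posterior ratio). Returns the full chain."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    lp = log_post(theta)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: a standard normal in one "microphysical" parameter,
# started deliberately far from the mode to show burn-in.
chain = metropolis(lambda t: -0.5 * t * t, theta0=3.0, n_steps=5000)
```

The post-burn-in samples approximate the posterior, and their spread (here, non-Gaussian or otherwise) is exactly what lets MCMC check the Gaussian assumption used in optimal estimation.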

  4. Ensemble Kalman Filter Data Assimilation in a Solar Dynamo Model

    NASA Astrophysics Data System (ADS)

    Dikpati, M.

    2017-12-01

    Despite great advancement in solar dynamo models since the first model by Parker in 1955, there remain many challenges in the quest to build a dynamo-based prediction scheme that can accurately predict the solar cycle features. One of these challenges is to implement modern data assimilation techniques, which have been used in oceanic and atmospheric prediction models. Development of data assimilation in solar models is in its early stages. Recently, observing system simulation experiments (OSSE's) have been performed using Ensemble Kalman Filter data assimilation, in the framework of the Data Assimilation Research Testbed of NCAR (NCAR-DART), for estimating parameters in a solar dynamo model. I will demonstrate how the selection of ensemble size, number of observations, amount of error in observations and the choice of assimilation interval play an important role in parameter estimation. I will also show how the results of parameter reconstruction improve when accuracy in low-latitude observations is increased, despite large error in polar region data. I will then describe how implementation of data assimilation in a solar dynamo model can bring more accuracy in the prediction of polar fields in North and South hemispheres during the declining phase of cycle 24. Recent evidence indicates that the strength of the Sun's polar field during the cycle minima might be a reliable predictor for the next sunspot cycle's amplitude; therefore it is crucial to accurately predict the polar field strength and pattern.
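A single Ensemble Kalman Filter analysis step for parameter estimation can be sketched as follows; the scalar toy problem with an identity observation operator stands in for the dynamo-model setup and does not reproduce it:

```python
import numpy as np

def enkf_parameter_update(ensemble, obs, obs_err, h):
    """One stochastic EnKF analysis step: each member's parameter is
    nudged by a Kalman gain built from ensemble covariances, using
    perturbed observations to preserve ensemble spread."""
    rng = np.random.default_rng(0)
    y = np.array([h(p) for p in ensemble])   # predicted observations
    cov_py = np.cov(ensemble, y)[0, 1]       # parameter-observation covariance
    var_y = y.var(ddof=1) + obs_err ** 2
    gain = cov_py / var_y                    # scalar Kalman gain
    perturbed = obs + obs_err * rng.standard_normal(len(ensemble))
    return ensemble + gain * (perturbed - y)

truth = 2.0                                   # "true" parameter value
prior = np.random.default_rng(1).normal(0.0, 1.0, 100)  # prior ensemble
post = enkf_parameter_update(prior, obs=truth, obs_err=0.1, h=lambda p: p)
```

Ensemble size enters through the sampling error in `cov_py`, which is one reason the selection of ensemble size and observation error matters for parameter reconstruction.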

  5. Finite element method (FEM) model of the mechanical stress on phospholipid membranes from shock waves produced in nanosecond electric pulses (nsEP)

    NASA Astrophysics Data System (ADS)

    Barnes, Ronald; Roth, Caleb C.; Shadaram, Mehdi; Beier, Hope; Ibey, Bennett L.

    2015-03-01

    The underlying mechanism(s) responsible for nanoporation of phospholipid membranes by nanosecond pulsed electric fields (nsEP) remains unknown. The passage of a high electric field through a conductive medium creates two primary contributing factors that may induce poration: the electric field interaction at the membrane and the shockwave produced from electrostriction of a polar submersion medium exposed to an electric field. Previous work has focused on the electric field interaction at the cell membrane, through such models as the transport lattice method. Our objective is to model the shock wave-cell membrane interaction induced by the density perturbation formed at the rising edge of a high-voltage pulse in a polar liquid, which results in a shock wave propagating away from the electrode toward the cell membrane. Utilizing previously reported cell membrane mechanical parameters and nsEP-generated shockwave parameters, an acoustic shock wave model based on the Helmholtz equation for sound pressure was developed and coupled to a cell membrane model with finite-element modeling in COMSOL. The acoustic structure interaction model was developed to illustrate the harmonic membrane displacements and stresses resulting from shockwave and membrane interaction based on Hooke's law. Poration is predicted by utilizing membrane mechanical breakdown parameters including cortical stress limits and hydrostatic pressure gradients.

  6. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
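Comparing exponential and power-law fits with the Kolmogorov-Smirnov method can be sketched as follows; the data are synthetic shifted-exponential "bout durations", and both fits use textbook MLE formulas with a known lower cutoff of 1:

```python
import numpy as np

def ks_stat(data, cdf):
    """Maximum deviation between the empirical CDF and a model CDF
    (a simple Kolmogorov-Smirnov distance)."""
    x = np.sort(data)
    ecdf = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(ecdf - cdf(x)))

rng = np.random.default_rng(2)
bouts = rng.exponential(scale=10.0, size=2000) + 1.0   # toy bouts, xmin = 1

# MLE fits of the two competing models on the same data (xmin known = 1)
lam = 1.0 / (bouts.mean() - 1.0)                 # shifted-exponential rate
alpha = 1.0 + len(bouts) / np.log(bouts).sum()   # continuous power-law exponent

d_exp = ks_stat(bouts, lambda x: 1.0 - np.exp(-lam * (x - 1.0)))
d_pow = ks_stat(bouts, lambda x: 1.0 - x ** (1.0 - alpha))
```

On mono-exponential data the exponential fit wins clearly; the "zone of mimicry" described above arises when multi-exponential data shrink this gap enough for the power law to be mistakenly accepted.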

  7. V3885 Sagittarius: A Comparison With a Range of Standard Model Accretion Disks

    NASA Technical Reports Server (NTRS)

    Linnell, Albert P.; Godon, Patrick; Hubeny, Ivan; Sion, Edward M; Szkody, Paula; Barrett, Paul E.

    2009-01-01

    A chi-squared analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M(solar mass), and the WD radius follows from standard theoretical models. The adopted inclination, i = 65 deg, is a literature consensus, and is subsequently supported by chi-squared analysis. The mass transfer rate is the remaining parameter to set the accretion disk T(sub eff) profile, and the Hipparcos parallax constrains that parameter to a mass transfer rate of (5.0 +/- 2.0) x 10(exp -9) M(solar mass)/yr by a comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 +/- 5000 K WD. The model thus provides realistic constraints on mass transfer and T(sub eff) for a large mass transfer system above the period gap.

  8. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    USGS Publications Warehouse

    Olea, R.A.

    2011-01-01

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
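One way to ease specification of the two shape parameters, as suggested above, is to derive them from the mode and a concentration parameter. This re-parameterization is one common choice, not necessarily the one the author proposes:

```python
import numpy as np

def beta_from_mode(mode, concentration):
    """Beta shape parameters (a, b) from the mode and a concentration
    k > 2; larger k gives a tighter distribution around the mode.
    Uses mode = (a - 1) / (a + b - 2) with a + b = k."""
    a = mode * (concentration - 2.0) + 1.0
    b = (1.0 - mode) * (concentration - 2.0) + 1.0
    return a, b

a, b = beta_from_mode(mode=0.3, concentration=10.0)
rng = np.random.default_rng(3)
# Rescale [0, 1] beta draws to a hypothetical bounded resource range, 50-200 units
samples = 50.0 + (200.0 - 50.0) * rng.beta(a, b, size=10000)
```

The modeler specifies only the extremes, the mode, and a spread, just as with a triangular distribution, but gains the beta family's flexibility in shape.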

  9. Molecular theory of smectic ordering in liquid crystals with nanoscale segregation of different molecular fragments

    NASA Astrophysics Data System (ADS)

    Gorkunov, M. V.; Osipov, M. A.; Kapernaum, N.; Nonnenmacher, D.; Giesselmann, F.

    2011-11-01

    A molecular statistical theory of the smectic A phase is developed taking into account specific interactions between different molecular fragments, which enables one to describe different microscopic scenarios of the transition into the smectic phase. The effects of nanoscale segregation are described using molecular models with different combinations of attractive and repulsive sites. These models have been used to calculate numerically the coefficients in the mean field potential as functions of molecular model parameters and the period of the smectic structure. The same coefficients are calculated also for a conventional smectic with a standard Gay-Berne interaction potential, which does not promote segregation. The free energy is minimized numerically to calculate the order parameters of the smectic A phases and to study the nature of the smectic transition in both systems. It has been found that in conventional materials the smectic order can be stabilized only when the orientational order is sufficiently high. In contrast, in materials with nanosegregation the smectic order develops mainly in the form of an orientational-translational wave while the nematic order parameter remains relatively small. Microscopic mechanisms of smectic ordering in both systems are discussed in detail, and the results for smectic order parameters are compared with experimental data for materials of various molecular structure.

  10. Inflation in the closed FLRW model and the CMB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonga, Béatrice; Gupt, Brajesh; Yokomizo, Nelson, E-mail: bpb165@psu.edu, E-mail: bgupt@gravity.psu.edu, E-mail: yokomizo@gravity.psu.edu

    2016-10-01

    Recent cosmic microwave background (CMB) observations put strong constraints on the spatial curvature via estimation of the parameter Ω_k assuming an almost scale invariant primordial power spectrum. We study the evolution of the background geometry and gauge-invariant scalar perturbations in an inflationary closed FLRW model and calculate the primordial power spectrum. We find that the inflationary dynamics is modified due to the presence of spatial curvature, leading to corrections to the nearly scale invariant power spectrum at the end of inflation. When evolved to the surface of last scattering, the resulting temperature anisotropy spectrum (C^TT_ℓ) shows a deficit of power at low multipoles (ℓ < 20). By comparing our results with the recent Planck data we discuss the role of spatial curvature in accounting for CMB anomalies and in the estimation of the parameter Ω_k. Since the curvature effects are limited to low multipoles, the Planck estimation of cosmological parameters remains robust under inclusion of positive spatial curvature.

  11. A strain-hardening bi-power law for the nonlinear behaviour of biological soft tissues.

    PubMed

    Nicolle, S; Vezin, P; Palierne, J-F

    2010-03-22

    Biological soft tissues exhibit a strongly nonlinear viscoelastic behaviour. Among parenchymous tissues, kidney and liver remain less studied than brain, and a first goal of this study is to report additional material properties of kidney and liver tissues in oscillatory shear and constant shear rate tests. Results show that the liver tissue is more compliant but more strain hardening than kidney. A wealth of multi-parameter mathematical models has been proposed for describing the mechanical behaviour of soft tissues. A second purpose of this work is to develop a new constitutive law capable of predicting our experimental data in both the linear and nonlinear viscoelastic regimes with as few parameters as possible. We propose a nonlinear strain-hardening fractional derivative model in which six parameters allow fitting the viscoelastic behaviour of kidney and liver tissues for strains ranging from 0.01 to 1 and strain rates from 0.0151 s(-1) to 0.7 s(-1). Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Taro; Kohri, Kazunori; White, Jonathan, E-mail: moritaro@post.kek.jp, E-mail: kohri@post.kek.jp, E-mail: jwhite@post.kek.jp

    We consider inflation in the system containing a Ricci scalar squared term and a canonical scalar field with a quadratic mass term. In the Einstein frame this model takes the form of a two-field inflation model with a curved field space, and under the slow-roll approximation contains four free parameters corresponding to the masses of the two fields and their initial positions. We investigate how the inflationary dynamics and predictions for the primordial curvature perturbation depend on these four parameters. Our analysis is based on the δN formalism, which allows us to determine predictions for the non-Gaussianity of the curvature perturbation as well as for quantities relating to its power spectrum. Depending on the choice of parameters, we find predictions that range from those of R^2 inflation to those of quadratic chaotic inflation, with the non-Gaussianity of the curvature perturbation always remaining small. Using our results we are able to put constraints on the masses of the two fields.

  13. Stability of radiomic features in CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.

    2016-12-01

    This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models for local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomics robustness was investigated regarding the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation (ICC). To ensure added value for our model, radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 and 0.7 for the ICC and correlation, respectively. The image discretization method using a fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones with 56-98% and 43-58% instability rates, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of CTP calculation factors none of the studied radiomic parameters were stable. After standardization with respect to non-standardizable factors ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations. 
Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.
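The selection scheme above (ICC threshold first, then one representative per Spearman-correlated group) can be sketched with toy features; the ICC values here are supplied as given numbers rather than computed from repeated measurements:

```python
import numpy as np

def rank(x):
    """Simple ranks (no tie handling) for a Spearman correlation."""
    order = x.argsort()
    r = np.empty(len(x), dtype=float)
    r[order] = np.arange(len(x))
    return r

def spearman(x, y):
    """Spearman correlation as the Pearson correlation of ranks."""
    return np.corrcoef(rank(x), rank(y))[0, 1]

def select_stable(features, icc, icc_min=0.9, corr_max=0.7):
    """Keep features with ICC >= icc_min; then, scanning in order of
    decreasing ICC, drop any feature strongly rank-correlated with an
    already-kept one, so each correlated group keeps its best member."""
    names = sorted((n for n in features if icc[n] >= icc_min),
                   key=lambda n: icc[n], reverse=True)
    kept = []
    for n in names:
        if all(abs(spearman(features[n], features[k])) < corr_max for k in kept):
            kept.append(n)
    return kept

# Toy features: f2 is a monotone copy of f1, f3 is independent, f4 is unstable
rng = np.random.default_rng(5)
f1 = np.arange(50, dtype=float)
features = {"f1": f1, "f2": 2.0 * f1 + 1.0,
            "f3": rng.standard_normal(50), "f4": rng.standard_normal(50)}
icc = {"f1": 0.95, "f2": 0.92, "f3": 0.91, "f4": 0.5}
kept = select_stable(features, icc)
```

With the paper's thresholds (ICC 0.9, correlation 0.7), `f4` fails the stability gate and `f2` is dropped as redundant with the higher-ICC `f1`.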

  14. An improved nuclear mass model: FRDM (2012)

    NASA Astrophysics Data System (ADS)

    Moller, Peter

    2011-10-01

    We have developed an improved nuclear mass model which we plan to finalize in 2012, so we designate it FRDM(2012). Relative to our previous mass table in 1995 we do a full four-dimensional variation of the shape coordinates EPS2, EPS3, EPS4, and EPS6, we consider axially asymmetric shape degrees of freedom and we vary the density symmetry parameter L. Other additional features are also implemented. With respect to the Audi 2003 database we now have an accuracy of 0.57 MeV. We have carefully tested the extrapolation properties of the new mass table by adjusting model parameters to limited data sets and testing on extended data sets, and find it to be highly reliable in new regions of nuclei. We discuss what the remaining differences between model calculations and experiment tell us about the limitations of the currently used effective single-particle potential and possible extensions. DOE No. DE-AC52-06NA25396.

  15. Relating multifrequency radar backscattering to forest biomass: Modeling and AIRSAR measurement

    NASA Technical Reports Server (NTRS)

    Sun, Guo-Qing; Ranson, K. Jon

    1992-01-01

    During the last several years, significant efforts in microwave remote sensing were devoted to relating forest parameters to radar backscattering coefficients. These and other studies showed that in most cases, the longer wavelength (i.e. P band) and cross-polarization (HV) backscattering had higher sensitivity and better correlation to forest biomass. This research examines this relationship in a northern forest area through both backscatter modeling and synthetic aperture radar (SAR) data analysis. The field measurements were used to estimate stand biomass from forest weight tables. The backscatter model described by Sun et al. was modified to simulate the backscattering coefficients with respect to stand biomass. The average number of trees per square meter or radar resolution cell, and the average tree height or diameter at breast height (dbh) in the forest stand are the driving parameters of the model. The remaining parameters, such as the soil surface properties and the orientation and size distributions of leaves and branches, remain unchanged in the simulations.

  16. Generation Mechanism of Alternans in Luo-Rudy Model

    NASA Astrophysics Data System (ADS)

    Kitajima, Hiroyuki; Ioka, Eri; Yazawa, Toru

    Electrical alternans is the beat-to-beat alternation of amplitude in the action potential of the cardiac cell. It has been associated with ventricular arrhythmias in many clinical studies; however, its dynamical mechanisms remain unknown. The reason is that we do not have realistic network models of the heart system. Recently, Yazawa clarified the network structure of the heart and the central nervous system in the crustacean heart. In this study, we construct a simple model of the heart system based on Yazawa’s experimental data. Using this model, we clarify that two parameters (the conductance of sodium ions and the free concentration of potassium ions in the extracellular compartment) play the key roles in generating alternans. In particular, we clarify that the inactivation gate of the time-independent potassium channel is the most important parameter. Moreover, interaction between the membrane potential and potassium ionic currents is significant for generating alternating rhythms. This result indicates that if the muscle cell has problems such as channelopathies, there is a great risk of generating alternans.

  17. Models and observations of Arctic melt ponds

    NASA Astrophysics Data System (ADS)

    Golden, K. M.

    2016-12-01

    During the Arctic melt season, the sea ice surface undergoes a striking transformation from vast expanses of snow covered ice to complex mosaics of ice and melt ponds. Sea ice albedo, a key parameter in climate modeling, is largely determined by the complex evolution of melt pond configurations. In fact, ice-albedo feedback has played a significant role in the recent declines of the summer Arctic sea ice pack. However, understanding melt pond evolution remains a challenge to improving climate projections. It has been found that as the ponds grow and coalesce, the fractal dimension of their boundaries undergoes a transition from 1 to about 2, around a critical pond size of about 100 square meters in area. As the ponds evolve they take complex, self-similar shapes with boundaries resembling space-filling curves. I will outline how mathematical models of composite materials and statistical physics, such as percolation and Ising models, are being used to describe this evolution and predict key geometrical parameters that agree very closely with observations.
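The fractal-dimension transition mentioned above is commonly estimated from area-perimeter scaling, P ~ A^(D/2), so D is twice the slope of log P versus log A. A sketch with smooth test shapes (circles), for which D should come out as 1:

```python
import numpy as np

def fractal_dimension(areas, perimeters):
    """Fractal dimension from area-perimeter scaling P ~ A**(D/2):
    D is twice the slope of log P regressed on log A. This is the
    standard area-perimeter estimate, not the paper's full analysis."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Smooth boundaries (circles): P = 2*pi*r, A = pi*r**2, so D = 1 exactly
radii = np.linspace(1.0, 100.0, 50)
d_smooth = fractal_dimension(np.pi * radii ** 2, 2.0 * np.pi * radii)
```

Applied to observed pond outlines, small ponds cluster near D = 1 while large coalesced ponds approach D = 2, with the crossover near the critical size.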

  18. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    NASA Astrophysics Data System (ADS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-04-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remains compatible with w=-1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.

  19. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of the current faulting processes as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes often a combination of geodetic (GPS, InSAR) and seismic data is used. A truly joint use of these data, however, usually takes place only on a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have been fixed already. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very limited a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on the empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. 
The estimation of the data errors and the fast forward modelling open the door for Bayesian inferences of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have been reported already, mostly without any model parameter uncertainty estimates. We here show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance, e.g. even using a large dataset comprising seismic and geodetic data the confidence interval of the fault dip remains as wide as about 20 degrees.

  20. Cerebellum-inspired neural network solution of the inverse kinematics problem.

    PubMed

    Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan

    2015-12-01

    The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution comparable with the cerebellar anatomy and function to solve the said problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We modified the proposed model for the simple two-segmented arm to prove the feasibility of the model under a basic condition. A fuzzy neural network through its learning method was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.
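For the two-segmented test arm, the IK problem has a closed-form solution that a learned solver can be checked against. This is the standard planar two-link geometry, not the cerebellum-inspired model itself:

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-segment arm
    (elbow-down solution): returns joint angles (q1, q2) reaching (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # cosine of elbow angle
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk(q1, q2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
```

Composing `fk(*ik_two_link(x, y, l1, l2), l1, l2)` should reproduce the target point, which is the basic feasibility check the two-segment case enables.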

  1. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path; these paths overlap to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
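The joint state-parameter idea above can be sketched with a minimal bootstrap particle filter in which each particle carries both the damage state and an unknown wear-rate parameter. The linear damage model and all numbers are illustrative assumptions, not the paper's centrifugal pump model:

```python
import math
import random
import statistics

def simulate_observations(steps, rate=0.05, noise=0.02, seed=1):
    # Synthetic "truth": damage grows linearly, observed with noise.
    rng = random.Random(seed)
    damage, obs = 0.0, []
    for _ in range(steps):
        damage += rate
        obs.append(damage + rng.gauss(0.0, noise))
    return obs

def particle_filter(observations, n=500, noise=0.02, seed=2):
    rng = random.Random(seed)
    # Augmented state per particle: (damage, wear rate).
    particles = [(0.0, rng.uniform(0.0, 0.2)) for _ in range(n)]
    for z in observations:
        # Propagate: damage grows by each particle's own rate (plus jitter).
        particles = [(d + r + rng.gauss(0.0, 0.005),
                      max(r + rng.gauss(0.0, 0.002), 0.0))
                     for d, r in particles]
        # Weight by the Gaussian likelihood of the measurement.
        weights = [math.exp(-(z - d) ** 2 / (2.0 * noise ** 2))
                   for d, _ in particles]
        total = sum(weights)
        # Resample proportionally to the weights.
        particles = rng.choices(particles,
                                weights=[w / total for w in weights], k=n)
    return statistics.mean(r for _, r in particles)

rate_estimate = particle_filter(simulate_observations(60))
```

The spread of the particles' rate values plays the role of the parameter uncertainty that the paper's variance control mechanism regulates.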

  2. What hadron collider is required to discover or falsify natural supersymmetry?

    NASA Astrophysics Data System (ADS)

    Baer, Howard; Barger, Vernon; Gainer, James S.; Huang, Peisi; Savoy, Michael; Serce, Hasan; Tata, Xerxes

    2017-11-01

    Weak scale supersymmetry (SUSY) remains a compelling extension of the Standard Model because it stabilizes the quantum corrections to the Higgs and W, Z boson masses. In natural SUSY models these corrections are, by definition, never much larger than the corresponding masses. Natural SUSY models all have an upper limit on the gluino mass, but this limit may be too high for observable signals even at the high-luminosity LHC. However, in models with gaugino mass unification, the wino is sufficiently light that supersymmetry discovery is possible in other channels over the entire natural SUSY parameter space with no worse than 3% fine-tuning. Here, we examine the SUSY reach in more general models with and without gaugino mass unification (specifically, natural generalized mirage mediation), and show that the high energy LHC (HE-LHC), a pp collider with √s = 33 TeV, will be able to detect the SUSY signal over the entire allowed mass range. Thus, HE-LHC would either discover or conclusively falsify natural SUSY with better than 3% fine-tuning, using a conservative measure that allows for correlations among the model parameters.

  3. Reinforcement learning in depression: A review of computational research.

    PubMed

    Chen, Chong; Takahashi, Taiki; Nakagawa, Shin; Inoue, Takeshi; Kusumi, Ichiro

    2015-08-01

    Despite being considered primarily a mood disorder, major depressive disorder (MDD) is characterized by cognitive and decision-making deficits. Recent research has employed computational models of reinforcement learning (RL) to address these deficits. The computational approach has the advantage of making explicit predictions about learning and behavior, specifying the process parameters of RL, differentiating between model-free and model-based RL, and enabling computational model-based functional magnetic resonance imaging and electroencephalography. With these merits, an emerging field of computational psychiatry has developed, and here we review specific studies that focused on MDD. Considerable evidence suggests that MDD is associated with impaired brain signals of reward prediction error and expected value ('wanting'), decreased reward sensitivity ('liking') and/or learning (be it model-free or model-based), although the causality remains unclear. These parameters may serve as valuable intermediate phenotypes of MDD, linking general clinical symptoms to underlying molecular dysfunctions. We believe future computational research at the clinical, systems, and cellular/molecular/genetic levels will propel us toward a better understanding of the disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
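The process parameters mentioned above can be illustrated with a minimal model-free RL (Rescorla-Wagner) update carrying an explicit reward-sensitivity parameter of the kind such studies fit to behavior. Parameter names and values are illustrative assumptions:

```python
def simulate_learning(rewards, alpha=0.3, reward_sensitivity=1.0):
    """Track expected value V via prediction-error updates.

    alpha              -- learning rate
    reward_sensitivity -- scales experienced reward ('liking'); values < 1
                          mimic the blunted reward sensitivity discussed above
    """
    v, trace = 0.0, []
    for r in rewards:
        pe = reward_sensitivity * r - v   # reward prediction error
        v += alpha * pe                   # value update
        trace.append(v)
    return trace

# Repeated unit rewards: value converges to the subjectively scaled reward.
healthy = simulate_learning([1.0] * 20, reward_sensitivity=1.0)
blunted = simulate_learning([1.0] * 20, reward_sensitivity=0.5)
```

In a fitted model, a lowered `reward_sensitivity` and a lowered `alpha` produce distinguishable learning curves, which is how the computational approach separates 'liking' from learning deficits.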

  4. Non-slow-roll dynamics in α-attractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, K. Sravan; Marto, J.; Moniz, P. Vargas

    2016-04-01

    In this paper we consider the α-attractor model and study inflation under non-slow-roll dynamics. More precisely, we follow the approach recently proposed by Gong and Sasaki [1] by assuming N = N(φ). Within this framework we obtain a family of functions describing the local shape of the potential during inflation. We study a specific model and find an inflationary scenario predicting an attractor at n_s ≈ 0.967 and r ≈ 5.5×10^−4. We further show that, under non-slow-roll dynamics, the α-attractor model can be broadened to a wider class of models that remain compatible with values of r < 0.1. We further explore the model parameter space with respect to large- and small-field inflation and conclude that the inflaton dynamics is connected to the α parameter, which is also related to the Kähler manifold curvature in the supergravity (SUGRA) embedding of this model. We also comment on the stabilization of the inflaton's trajectory.
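For context, the standard slow-roll predictions for α-attractors (textbook results quoted for comparison, not derived in this record) are

```latex
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12\alpha}{N^2},
```

so that N ≈ 60 e-folds reproduces the quoted n_s ≈ 0.967, while small α suppresses the tensor-to-scalar ratio r, consistent with the very small r found above.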

  5. Force-field parameters of the Psi and Phi around glycosidic bonds to oxygen and sulfur atoms.

    PubMed

    Saito, Minoru; Okazaki, Isao

    2009-12-01

    The Psi and Phi torsion angles around glycosidic bonds in a glycoside chain are the most important determinants of the conformation of the chain. We determined force-field parameters for the Psi and Phi torsion angles around a glycosidic bond bridged by a sulfur atom, as well as a bond bridged by an oxygen atom, in preparation for the next study, i.e., molecular dynamics free energy calculations for protein-sugar and protein-inhibitor complexes. First, we extracted the Psi or Phi torsion energy component from a quantum mechanics (QM) total energy by subtracting all the molecular mechanics (MM) force-field components except for the Psi or Phi torsion angle. The extracted Psi and Phi energy components (hereafter called "the remaining energy components") were calculated for simple sugar models and plotted as functions of the Psi and Phi angles. The remaining energy component curves of Psi and Phi were well represented by torsion force-field functions consisting of four and three cosine functions, respectively. To confirm the reliability of the force-field parameters and their compatibility with other force fields, we calculated adiabatic potential curves as functions of Psi and Phi for the model glycosides by adopting the Psi and Phi force-field parameters obtained and energetically optimizing the other degrees of freedom. The MM potential energy curves obtained for Psi and Phi well represented the QM adiabatic curves, as well as the differences between the curves for glycosidic oxygen and sulfur atoms. Our Psi and Phi force fields for glycosidic oxygen gave MM potential energy curves that more closely represented the respective QM curves than did those of the recently developed GLYCAM force-field. (c) 2009 Wiley Periodicals, Inc.
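The fitting step described above can be sketched as projecting a "remaining energy component" curve onto a cosine series on a uniform torsion-angle grid. The target curve and its amplitudes are illustrative assumptions, not the paper's values:

```python
import math

N = 72
psi = [-math.pi + 2.0 * math.pi * i / N for i in range(N)]

# Pretend this curve came from the QM-minus-MM subtraction step:
remaining = [1.2 * math.cos(p) + 0.4 * math.cos(2 * p) + 0.1 * math.cos(3 * p)
             for p in psi]

def cosine_coefficient(k):
    # Discrete orthogonality of cos(k*psi) on a uniform full-period grid.
    return 2.0 / N * sum(v * math.cos(k * p) for v, p in zip(remaining, psi))

coeffs = [cosine_coefficient(k) for k in range(1, 5)]   # four cosine terms
fit = [sum(a * math.cos(k * p) for a, k in zip(coeffs, range(1, 5)))
       for p in psi]
rmse = math.sqrt(sum((f - v) ** 2 for f, v in zip(fit, remaining)) / N)
```

In practice, MM torsion terms also carry phase offsets, so a least-squares fit with both sine and cosine terms (or amplitude-phase form) would be used; the projection above is the simplest special case.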

  6. Evolution of farm and manure management and their influence on ammonia emissions from agriculture in Switzerland between 1990 and 2010

    NASA Astrophysics Data System (ADS)

    Kupper, Thomas; Bonjour, Cyrill; Menzi, Harald

    2015-02-01

    The evolution of farm and manure management and their influence on ammonia (NH3) emissions from agriculture in Switzerland between 1990 and 2010 was modeled. In 2010, total agricultural NH3 emissions were 48,290 t N. Livestock contributed 90% (43,480 t N), with the remaining 10% (4760 t N) coming from arable and fodder crops. The emission stages of grazing, housing/exercise yard, manure storage and application produced 3%, 34%, 17% and 46%, respectively, of livestock emissions. Cattle, pigs, poultry, small ruminants, and horses and other equids accounted for 78%, 15%, 3%, 2% and 2%, respectively, of the emissions from livestock and manure management. Compared to 1990, total NH3 emissions from agriculture and from livestock decreased by 16% and 14%, respectively. This was mainly due to declining livestock numbers, since the emissions per animal increased for most livestock categories between 1990 and 2010. The production volume for milk and meat remained constant or increased slightly. Other factors contributing to the emission mitigation were increased grazing for cattle, the growing importance of low-emission slurry application techniques and a significant reduction in the use of mineral fertilizer. However, production parameters enhancing emissions, such as animal-friendly housing systems providing more surface area per animal and the total volume of slurry stores, increased during this time period. That such developments may counteract emission mitigation illustrates the challenge regulators face in balancing various aims while striving toward sustainable livestock production. A sensitivity analysis identified parameters related to the excretion of total ammoniacal nitrogen from dairy cows and to slurry application as the most sensitive technical parameters influencing emissions. Further improvements to emission models should therefore focus on these parameters.
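The stage structure above (grazing, housing, storage, application) is the core of a mass-flow emission model: each stage emits a fraction of the total ammoniacal nitrogen (TAN) that reaches it. A minimal sketch; all emission factors and shares are illustrative assumptions, not the Swiss inventory values:

```python
def nh3_emissions(tan_excreted, ef_grazing=0.01, ef_housing=0.15,
                  ef_storage=0.08, ef_application=0.25, grazing_share=0.1):
    """Return NH3-N losses (same unit as tan_excreted) per emission stage."""
    tan_grazing = tan_excreted * grazing_share
    tan_managed = tan_excreted - tan_grazing
    e_grazing = tan_grazing * ef_grazing
    e_housing = tan_managed * ef_housing
    tan = tan_managed - e_housing          # TAN remaining after housing losses
    e_storage = tan * ef_storage
    tan -= e_storage                       # TAN remaining for field application
    e_application = tan * ef_application
    return {"grazing": e_grazing, "housing": e_housing,
            "storage": e_storage, "application": e_application}

em = nh3_emissions(100.0)   # e.g. 100 t TAN excreted
total = sum(em.values())
```

Because the stages are chained, a mitigation measure at one stage leaves more TAN for downstream stages, which is why such models must track the nitrogen flow rather than apply independent factors.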

  7. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potential and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data.
While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
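The weight-selection idea above can be caricatured in a few lines: give each fundamental observable a sensitivity to each parameter class, then search for a weight vector whose combined observable maximizes target sensitivity relative to off-target sensitivity. The sensitivity values and the crude random search are assumptions standing in for real kernels and the paper's global search algorithm:

```python
import random

SENS = {  # observable -> (density, S velocity, P velocity) sensitivities
    "T1": (0.2, 1.0, 0.5),
    "T2": (0.5, 0.8, -0.5),
    "T3": (0.9, -0.6, 0.1),
    "T4": (0.4, 0.2, -0.1),
}

def combined(weights):
    # Sensitivity of the weighted observable to each parameter class.
    return [sum(w * s[i] for w, s in zip(weights, SENS.values()))
            for i in range(3)]

def score(weights, target=0):
    # Target sensitivity divided by summed off-target sensitivity.
    c = combined(weights)
    off = sum(abs(c[i]) for i in range(3) if i != target)
    return abs(c[target]) / (1e-9 + off)

rng = random.Random(0)
best_w, best_s = None, -1.0
for _ in range(20000):          # crude global search over the weight space
    w = [rng.uniform(-1, 1) for _ in SENS]
    s = score(w)
    if s > best_s:
        best_w, best_s = w, s

equal = score([1.0] * len(SENS))   # naive equal weighting, for comparison
```

With four observables and three parameter classes, a suitable weight vector can nearly null two off-target components, which is the toy analogue of eliminating P-velocity sensitivity above.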

  8. Reconstructing Mammalian Sleep Dynamics with Data Assimilation

    PubMed Central

    Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.

    2012-01-01

    Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
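The idea that a few measured variables can reconstruct the remaining hidden ones has a compact linear-algebra caricature: for a linear system x' = Ax, y = Cx, the rank of the observability matrix [C; CA; CA²] tells whether the hidden states are recoverable from the measured one. The 3-state influence chain below is an assumption, not the sleep-wake model:

```python
A = [[0.9, 0.1, 0.0],
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 0.7]]   # influence chain: x2 -> x1 -> x0

def matmul_row(row, m):
    # Row vector times 3x3 matrix.
    return [sum(row[k] * m[k][j] for k in range(3)) for j in range(3)]

def rank(mat):
    # Rank via Gaussian elimination.
    m = [row[:] for row in mat]
    r = 0
    for col in range(3):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12),
                     None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-12:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def observability_rank(measured_index):
    c = [1.0 if j == measured_index else 0.0 for j in range(3)]
    rows, row = [], c
    for _ in range(3):
        rows.append(row)
        row = matmul_row(row, A)
    return rank(rows)

rank_x0 = observability_rank(0)   # downstream state: sees the whole chain
rank_x2 = observability_rank(2)   # upstream state: blind to x0 and x1
```

The paper's metric of relative partial observability generalizes this yes/no rank test to nonlinear models within the UKF framework, grading how well each candidate measurement constrains the rest of the state.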

  9. Development of a hybrid (numerical-hydraulic) circulatory model: prototype testing and its response to IABP assistance.

    PubMed

    Ferrari, G; Kozarski, M; De Lazzari, C; Górczyńska, K; Tosti, G; Darowski, M

    2005-07-01

    Merging numerical and physical models of the circulation makes it possible to develop a new class of circulatory models defined as hybrid. This solution reduces costs, enhances flexibility and opens the way to many applications ranging from research to education and heart assist device testing. In the prototype described in this paper, a hydraulic model of the systemic arterial tree is connected to a lumped-parameter numerical model including the pulmonary circulation and the remaining parts of the systemic circulation. The hydraulic model consists of a characteristic resistance, a silicone rubber tube to allow the insertion of an Intra-Aortic Balloon Pump (IABP), and a lumped-parameter compliance. Two electro-hydraulic interfaces, realized by means of gear pumps driven by DC motors, connect the numerical section with both terminals of the hydraulic section. The lumped-parameter numerical model and the control system (including analog-to-digital and digital-to-analog converters) are developed in the LabVIEW environment. The behavior of the model is analyzed by means of ventricular pressure-volume loops and the time courses of arterial and ventricular pressures and flows in different circulatory conditions. A simulated pathological condition was set to test the IABP and verify the response of the system to this type of mechanical circulatory assistance. The results show that the model can represent hemodynamic relationships in different ventricular and circulatory conditions and is able to react to IABP assistance.
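The smallest lumped-parameter building block of such numerical circulation sections is a two-element Windkessel: compliance C charged by inflow, discharged through resistance R. A minimal Euler-integration sketch; R, C and the flow waveform are illustrative values, not the prototype's:

```python
import math

R = 1.0     # peripheral resistance (mmHg*s/mL)
C = 1.5     # arterial compliance (mL/mmHg)
dt = 0.001  # integration step (s)
p = 80.0    # initial arterial pressure (mmHg)

pressures = []
for step in range(5000):          # 5 s of simulated time
    t = step * dt
    # Pulsatile inflow: systolic ejection for the first 0.3 s of each 1 s beat.
    q_in = 300.0 * math.sin(math.pi * (t % 1.0) / 0.3) if (t % 1.0) < 0.3 else 0.0
    dp = (q_in - p / R) / C       # C dp/dt = q_in - p/R
    p += dp * dt
    pressures.append(p)

mean_p = sum(pressures[-1000:]) / 1000.0   # mean over the last beat
```

In the hybrid prototype, terms like `q_in` are not simulated but measured at the electro-hydraulic interfaces, which is what couples the physical and numerical sections.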

  10. A hierarchical stress release model for synthetic seismicity

    NASA Astrophysics Data System (ADS)

    Bebbington, Mark

    1997-06-01

    We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. 
It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
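The loading-trigger-transfer cycle described above can be sketched as a toy 1-D fault: stochastic stress input at nodes, events triggered when stress exceeds local strength, and partial transfer of the excess to neighbors. Rates, strengths and transfer fractions are illustrative assumptions, not the Middle America Trench calibration:

```python
import random

def simulate(n_nodes=50, steps=2000, input_rate=0.01, seed=3):
    rng = random.Random(seed)
    stress = [0.0] * n_nodes
    # Heterogeneous strength along the fault.
    strength = [1.0 + 0.2 * rng.random() for _ in range(n_nodes)]
    events = []
    for t in range(steps):
        # Tectonic loading: small random stress input at every node.
        for i in range(n_nodes):
            stress[i] += input_rate * rng.random() * 2
        # Trigger and cascade: drop stress, transfer part of the excess.
        for i in range(n_nodes):
            if stress[i] > strength[i]:
                excess = stress[i] - 0.1 * strength[i]   # ~90% stress drop
                stress[i] -= excess
                for j in (i - 1, i + 1):
                    if 0 <= j < n_nodes:
                        stress[j] += 0.4 * excess        # partial transfer
                events.append((t, i, excess))            # remainder dissipated
    return events

events = simulate()
```

Even this stripped-down version produces clustered event sequences, since stress transfer pushes neighbors closer to failure; the paper's hierarchical segment structure adds the characteristic large events on top of this node-level mechanism.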

  11. Optimization of low-level light therapy's illumination parameters for spinal cord injury in a rat model

    NASA Astrophysics Data System (ADS)

    Shuaib, Ali; Bourisly, Ali

    2018-02-01

    Spinal cord injury (SCI) can result in complete or partial loss of sensation and motor function due to interruption along the severed axonal tract(s). SCI can result in tetraplegia or paraplegia, which can have prohibitive lifetime medical costs and result in shorter life expectancy. A promising therapeutic technique that is currently in the experimental phase, and that has the potential to be used to treat SCI, is low-level light therapy (LLLT). Preclinical studies have shown that LLLT has reparative and regenerative capabilities on transected spinal cords, and that LLLT can enhance axonal sprouting in animal models. However, despite the promising effects of LLLT as a therapy for SCI, it remains difficult to compare published results due to the use of a wide range of illumination parameters (i.e., different wavelengths, fluences, beam types, and beam diameters) and the lack of standardized experimental protocols. Before any clinical application of LLLT for SCI treatment, it is crucial to standardize the illumination parameters and the efficacy of light delivery. Therefore, in this study we evaluate the light fluence distribution in a 3D voxelated SCI rat model for different illumination parameters (wavelengths: 660, 810, and 980 nm; beam types: Gaussian and flat; and beam diameters: 0.1, 0.2, and 0.3 cm) for LLLT using Monte Carlo simulation. This study provides an efficient approach to guide researchers in optimizing the illumination parameters for LLLT of spinal cord injury in an experimental model and will aid in the quantitative and qualitative standardization of LLLT-SCI treatment.

  12. Harmony search optimization in dimensional accuracy of die sinking EDM process using SS316L stainless steel

    NASA Astrophysics Data System (ADS)

    Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.

    2017-09-01

    Electric discharge machining (EDM) is one of the most widely used nonconventional machining processes for hard and difficult-to-machine materials. Owing to the large number of machining parameters in EDM and its complicated structure, the selection of optimal machining parameters remains a challenging task for researchers. This paper presents an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed using a regression approach with four input parameters (pulse on time, peak current, servo voltage and servo speed) and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with that of the regression analysis, and HS was found to give the better result, yielding the minimum DA value.
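A bare-bones Harmony Search can be sketched minimizing an assumed quadratic surrogate of DA over the four machining parameters. The surrogate function and the HS settings (harmony memory size, HMCR, PAR) are illustrative; the paper's regression coefficients are not reproduced here:

```python
import random

def da_model(x):
    # Assumed regression surrogate with its minimum at (2, 5, 40, 300).
    targets = (2.0, 5.0, 40.0, 300.0)
    return sum(((xi - t) / t) ** 2 for xi, t in zip(x, targets))

def harmony_search(bounds, iters=5000, hms=10, hmcr=0.9, par=0.3, seed=0):
    rng = random.Random(seed)
    # Harmony memory: hms random parameter vectors within bounds.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:                 # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                      # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=lambda i: da_model(memory[i]))
        if da_model(new) < da_model(memory[worst]):    # replace worst harmony
            memory[worst] = new
    return min(memory, key=da_model)

# Assumed bounds for pulse on time, peak current, servo voltage, servo speed.
best = harmony_search([(1, 10), (1, 20), (10, 100), (100, 500)])
```

In the paper's setting, `da_model` would be the fitted regression model, so each "harmony" is evaluated cheaply without further experiments.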

  13. Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.

    2014-01-01

    Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
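The lookup-table pipeline above can be sketched end to end: a (hypothetical) steady-state model maps a fault parameter to a steady output; the table inverts observations by nearest entry; a least-squares line through the inferred wear values gives the propagation parameter. The model and numbers are assumptions, not the I/P transducer physics:

```python
def steady_output(fault):           # assumed physics: output droops with wear
    return 10.0 - 4.0 * fault

# Tabulate fault -> steady output over fault = 0.00 .. 1.00.
table = [(f / 100.0, steady_output(f / 100.0)) for f in range(101)]

def estimate_fault(measured):
    # Invert via nearest table entry.
    return min(table, key=lambda row: abs(row[1] - measured))[0]

# Observed steady-state outputs at t = 0..4 (true wear rate 0.05 per step):
observed = [steady_output(0.05 * t) for t in range(5)]
wear = [estimate_fault(z) for z in observed]

# Least-squares slope = wear propagation parameter.
n = len(wear)
tbar = sum(range(n)) / n
wbar = sum(wear) / n
slope = sum((t - tbar) * (w - wbar) for t, w in zip(range(n), wear)) / \
        sum((t - tbar) ** 2 for t in range(n))
```

With the propagation parameter in hand, extrapolating the wear trajectory to a failure threshold yields the remaining-useful-life prediction described above.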

  14. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

    Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
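The combinatorial explosion described above is easy to make concrete: each Cormack-Jolly-Seber model pairs a survival (phi) sub-model with a recapture (p) sub-model, each built from a subset of candidate covariates. The covariate names are illustrative:

```python
from itertools import combinations, product

covariates = ["sex", "age", "mass", "year"]

def sub_models(covs):
    # All covariate subsets, including the intercept-only sub-model.
    return [subset for r in range(len(covs) + 1)
            for subset in combinations(covs, r)]

phi_models = sub_models(covariates)          # survival sub-models
p_models = sub_models(covariates)            # recapture sub-models
candidate_set = list(product(phi_models, p_models))
```

With only four covariates there are 2^4 = 16 sub-models per rate and 256 combined models, which is why search strategies over the model space, rather than exhaustive fitting, are needed.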

  15. A study on the predictability of acute lymphoblastic leukaemia response to treatment using a hybrid oncosimulator.

    PubMed

    Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S

    2018-02-06

    Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction requires precise model parameterization. In cases where the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used to estimate the model parameters for retrospective cases and to predict them for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated in predicting the pre-phase treatment outcome in ALL. This has been correctly achieved for a significant percentage of the patient cases tested (approx. 70% of patients). Moreover, the system is capable of declining to classify cases for which the results are not trustworthy enough. In that case, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and of VPH models in general.

  16. Investigation of the influence of spatial degrees of freedom on thermal infrared measurement

    NASA Astrophysics Data System (ADS)

    Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.

    2017-05-01

    Long Wavelength Infrared (LWIR) cameras can provide a representation of a part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools to detect features that cannot be seen by other imaging technologies. For instance, they enable defect detection in materials, detection of fever and anxiety in mammals, and many other features for numerous applications. However, the accuracy of thermal cameras can be affected by many parameters, the most critical being the relative position of the camera with respect to the object of interest. Several models have been proposed to minimize the influence of some of these parameters, but they are mostly related to specific applications. Because such models are based on prior information related to their context, their applicability to other contexts cannot be easily assessed. The few remaining models are mostly associated with a specific device. In this paper the authors study the influence of the camera position on measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters. In order to propose a study that is as accurate as possible, the position of the camera is represented by a five-dimensional model. The aim of this study is to investigate and attempt to introduce a model that is as independent of the device as possible.

  17. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work is to explore the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterization, in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas, in terms of the area under the ROC curve (AUCROC) and of the fraction of the area affected by landslide release. Thereby we test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (the fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to variation of the geotechnical parameterization within much of the tested ranges.
Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more conservative than those yielded with the infinite slope stability model. The sensitivity of AUCROC to variations in the geohydraulic parameters remains small as long as the calculated degree of saturation of the soils is sufficient to result in the prediction of a significant number of landslide release pixels. Due to the poor sensitivity of AUCROC to variations of the geotechnical and geohydraulic parameters, it is hard to optimize the parameters by statistical means. Instead, the results produced with many different combinations of parameters correspond reasonably well with the distribution of the observed landslide release areas, even though they vary considerably in terms of their conservativeness. Considering the uncertainty inherent in all geotechnical and geohydraulic data, and the impossibility of capturing the spatial distribution of the parameters in sufficient detail by means of laboratory tests, we conclude that landslide susceptibility maps yielded by catchment-scale physically-based models should not be interpreted in absolute terms. Assuming that our findings are generally valid, we suggest that efforts to develop better strategies for dealing with the uncertainties in the spatial variation of the key parameters should be given priority in future slope stability modelling efforts.
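The simpler of the two stability models compared above, the infinite slope model, has a standard closed-form factor of safety. A sketch with the textbook formula; the parameter values are illustrative, not the Quitite/Papagaio calibration:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m=1.0, gamma_w=9.81):
    """Infinite slope factor of safety.

    c: effective cohesion (kPa); phi_deg: effective friction angle (deg);
    gamma: soil unit weight (kN/m^3); z: soil depth (m);
    beta_deg: slope angle (deg); m: saturated fraction of the soil column.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    pore_pressure = m * gamma_w * z * math.cos(beta) ** 2
    resisting = c + (gamma * z * math.cos(beta) ** 2 - pore_pressure) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fs_dry = factor_of_safety(c=5.0, phi_deg=32.0, gamma=18.0, z=2.0,
                          beta_deg=35.0, m=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=32.0, gamma=18.0, z=2.0,
                          beta_deg=35.0, m=1.0)
```

The drop of the factor of safety from the dry to the fully saturated case illustrates why the saturation assumption dominates the conservativeness of the predictions discussed above.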

  18. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach the entire time histories of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method that jointly estimates the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramér-Rao lower bound (CRLB) theorem by computing the exact Fisher information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. 
Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or far-off initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
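    The batch idea (using the entire response time history at once, with gradients built from analytical response sensitivities) can be illustrated on a toy one-parameter model. A minimal sketch, not the paper's FE framework; the exponential-decay "structure" and all numbers are assumptions:

```python
# Batch estimation of a single time-invariant parameter from a full response
# time history, using the analytical response sensitivity for the gradient.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
theta_true = 1.3                     # unknown time-invariant "model parameter"
measured = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

def predicted(theta):
    return np.exp(-theta * t)

def batch_grad(theta):
    """Gradient of the batch sum-of-squares discrepancy w.r.t. theta,
    using the analytical response sensitivity d(predicted)/d(theta)."""
    resid = measured - predicted(theta)
    sensitivity = -t * np.exp(-theta * t)   # d(predicted)/d(theta)
    return -2.0 * np.sum(resid * sensitivity)

theta = 0.5                                 # deliberately poor initial estimate
for _ in range(2000):                       # simple gradient descent
    theta -= 0.01 * batch_grad(theta)
print(round(float(theta), 3))
```

The whole record is used in every iteration (the "batch"), in contrast to recursive filters that update the estimate sample by sample.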

  19. On the robustness of a Bayes estimate. [in reliability theory]

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
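    A minimal Monte Carlo sketch in the spirit of the study, not its actual code: for a Weibull model with known shape, the transformation y = x**k makes the data exponential with mean theta, so the inverted-gamma prior is conjugate. The prior hyperparameters and sample sizes below are illustrative assumptions.

```python
# Compare simulated mean squared errors of a conjugate Bayes estimator and the
# minimum variance unbiased (MVU) estimator of a Weibull scale-type parameter.
import numpy as np

rng = np.random.default_rng(1)
k, theta, n, trials = 2.0, 4.0, 10, 20000
a, b = 3.0, 8.0                 # inverted-gamma prior; prior mean b/(a-1) = 4.0

se_bayes, se_mvu = 0.0, 0.0
for _ in range(trials):
    x = theta ** (1.0 / k) * rng.weibull(k, size=n)   # Weibull sample
    s = np.sum(x ** k)                                # sufficient statistic
    bayes = (b + s) / (a + n - 1)                     # posterior mean of theta
    mvu = s / n                                       # unbiased estimator
    se_bayes += (bayes - theta) ** 2
    se_mvu += (mvu - theta) ** 2

print(se_bayes / trials, se_mvu / trials)
```

With a reasonably centred prior the Bayes estimator's simulated MSE comes out below the MVU estimator's, mirroring the "squared-error superior" finding reported above.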

  20. Nonlinear dynamics of planetary gears using analytical and finite element models

    NASA Astrophysics Data System (ADS)

    Ambarisha, Vijaya Kumar; Parker, Robert G.

    2007-05-01

    Vibration-induced gear noise and dynamic loads remain key concerns in many transmission applications that use planetary gears. Tooth separations at large vibrations introduce nonlinearity in geared systems. The present work examines the complex, nonlinear dynamic behavior of spur planetary gears using two models: (i) a lumped-parameter model, and (ii) a finite element model. The two-dimensional (2D) lumped-parameter model represents the gears as lumped inertias, the gear meshes as nonlinear springs with tooth contact loss and periodically varying stiffness due to changing tooth contact conditions, and the supports as linear springs. The 2D finite element model is developed from a unique finite element-contact analysis solver specialized for gear dynamics. Mesh stiffness variation excitation, corner contact, and gear tooth contact loss are all intrinsically considered in the finite element analysis. The dynamics of planetary gears show a rich spectrum of nonlinear phenomena. Nonlinear jumps, chaotic motions, and period-doubling bifurcations occur when the mesh frequency or any of its higher harmonics are near a natural frequency of the system. Responses from the dynamic analysis using analytical and finite element models are successfully compared qualitatively and quantitatively. These comparisons validate the effectiveness of the lumped-parameter model to simulate the dynamics of planetary gears. Mesh phasing rules to suppress rotational and translational vibrations in planetary gears are valid even when nonlinearity from tooth contact loss occurs. These mesh phasing rules, however, are not valid in the chaotic and period-doubling regions.

  1. iSCHRUNK--In Silico Approach to Characterization and Reduction of Uncertainty in the Kinetic Models of Genome-scale Metabolic Networks.

    PubMed

    Andreozzi, Stefano; Miskovic, Ljubisa; Hatzimanikatis, Vassily

    2016-01-01

    Accurate determination of physiological states of cellular metabolism requires detailed information about metabolic fluxes, metabolite concentrations and the distribution of enzyme states. Integration of fluxomics and metabolomics data, together with thermodynamics-based metabolic flux analysis, contributes to an improved understanding of the steady-state properties of metabolism. However, knowledge about kinetics and enzyme activities, though essential for a quantitative understanding of metabolic dynamics, remains scarce and involves uncertainty. Here, we present a computational methodology that allows us to determine and quantify the kinetic parameters that correspond to a certain physiology as described by a given metabolic flux profile and a given metabolite concentration vector. Though we initially determine kinetic parameters that involve a high degree of uncertainty, through the use of kinetic modeling and machine learning principles we are able to obtain more accurate ranges of kinetic parameters, and hence to reduce the uncertainty in the model analysis. We computed the distribution of kinetic parameters for glucose-fed E. coli producing 1,4-butanediol and discovered that the observed physiological state corresponds to a narrow range of kinetic parameters of only a few enzymes, whereas the kinetic parameters of other enzymes can vary widely. Furthermore, this analysis suggests which enzymes should be manipulated in order to engineer the reference state of the cell in a desired way. The proposed approach also lays the foundations for a novel class of approaches for efficient, non-asymptotic, uniform sampling of solution spaces. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
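    The pruning idea can be sketched with rejection sampling: draw kinetic parameters from wide prior ranges, keep only sets consistent with the observed physiology, and read the reduced ranges off the survivors. The Michaelis-Menten toy model and all numbers are illustrative assumptions, not the iSCHRUNK implementation.

```python
# Reduce kinetic-parameter uncertainty by keeping only parameter sets whose
# predicted steady-state flux matches an "observed" flux. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
s_obs, v_obs, tol = 2.0, 0.8, 0.05           # measured substrate conc. and flux

vmax = rng.uniform(0.5, 5.0, 50000)          # wide prior range for vmax
km = rng.uniform(0.1, 10.0, 50000)           # wide prior range for Km
v = vmax * s_obs / (km + s_obs)              # Michaelis-Menten flux

keep = np.abs(v - v_obs) < tol               # consistent with the physiology
print(keep.mean())                           # acceptance fraction
print(vmax[keep].min(), vmax[keep].max())    # reduced range for vmax
print(km[keep].min(), km[keep].max())        # Km stays poorly constrained
```

As in the abstract, the observed state pins down some parameters (here the lower end of vmax) while leaving others nearly as wide as their prior range.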

  2. Model invariance across genders of the Broad Autism Phenotype Questionnaire.

    PubMed

    Broderick, Neill; Wade, Jordan L; Meyer, J Patrick; Hull, Michael; Reeve, Ronald E

    2015-10-01

    ASD is one of the most heritable neuropsychiatric disorders, though comprehensive genetic liability remains elusive. To facilitate genetic research, researchers employ the concept of the broad autism phenotype (BAP), a milder presentation of traits in undiagnosed relatives. Research suggests that the BAP Questionnaire (BAPQ) demonstrates psychometric properties superior to other self-report measures. To examine evidence regarding validity of the BAPQ, the current study used confirmatory factor analysis to test the assumption of model invariance across genders. Results of the current study upheld model invariance at each level of parameter constraint; however, model fit indices suggested limited goodness-of-fit between the proposed model and the sample. Exploratory analyses investigated alternate factor structure models but ultimately supported the proposed three-factor structure model.

  3. Application of digital profile modeling techniques to ground-water solute transport at Barstow, California

    USGS Publications Warehouse

    Robson, Stanley G.

    1978-01-01

    This study investigated the use of a two-dimensional profile-oriented water-quality model for the simulation of head and water-quality changes through the saturated thickness of an aquifer. The profile model is able to simulate confined or unconfined aquifers with nonhomogeneous anisotropic hydraulic conductivity, nonhomogeneous specific storage and porosity, and nonuniform saturated thickness. An aquifer may be simulated under either steady or nonsteady flow conditions provided that the ground-water flow path along which the longitudinal axis of the model is oriented does not move in the aquifer during the simulation time period. The profile model parameters are more difficult to quantify than are the corresponding parameters for an areal-oriented water-quality model. However, the sensitivity of the profile model to the parameters may be such that the normal error of parameter estimation will not preclude obtaining acceptable model results. Although the profile model has the advantage of being able to simulate vertical flow and water-quality changes in a single- or multiple-aquifer system, the types of problems to which it can be applied are limited by the requirements that (1) the ground-water flow path remain oriented along the longitudinal axis of the model and (2) any subsequent hydrologic factors to be evaluated using the model must be located along the land-surface trace of the model. Simulation of hypothetical ground-water management practices indicates that the profile model is applicable to problem-oriented studies and can provide quantitative results applicable to a variety of management practices. In particular, simulations of the movement and dissolved-solids concentration of a zone of degraded ground-water quality near Barstow, Calif., indicate that halting subsurface disposal of treated sewage effluent in conjunction with pumping a line of fully penetrating wells would be an effective means of controlling the movement of degraded ground water.

  4. On the Way to Appropriate Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, M.

    2016-12-01

    When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: A unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? How can we use a quantified complexity to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: With more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) An explicit representation of model complexity as "degrees of freedom" of a model, e.g. the effective number of parameters. (2) Model complexity as code length, a.k.a. "Kolmogorov complexity": The longer the shortest model code, the higher its complexity (e.g. in bits). (3) Complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes theorem allows for incorporating all parts of the non-static concept of model complexity like data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
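    Category (3) can be made concrete for the simplest case. A sketch under the assumption of a single Gaussian parameter, whose differential entropy 0.5*log(2*pi*e*sigma^2) serves as the complexity measure; all numbers are illustrative:

```python
# Complexity as information entropy of parametric uncertainty: conditioning on
# more data sharpens the posterior and lowers this entropy-based measure,
# illustrating why model complexity is not a static concept.
import math

def gaussian_entropy(sigma):
    """Differential entropy of a Gaussian with standard deviation sigma."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

sigma_prior = 2.0                      # parametric uncertainty before data
n, sigma_noise = 25, 1.0               # n observations with unit noise
# Standard Gaussian conjugate update for the posterior standard deviation:
sigma_post = 1.0 / math.sqrt(1.0 / sigma_prior ** 2 + n / sigma_noise ** 2)

print(gaussian_entropy(sigma_prior))   # entropy (complexity) before data
print(gaussian_entropy(sigma_post))    # entropy (complexity) after data
```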

  5. Proton-pump inhibitor use does not affect semen quality in subfertile men.

    PubMed

    Keihani, Sorena; Craig, James R; Zhang, Chong; Presson, Angela P; Myers, Jeremy B; Brant, William O; Aston, Kenneth I; Emery, Benjamin R; Jenkins, Timothy G; Carrell, Douglas T; Hotaling, James M

    2018-01-01

    Proton-pump inhibitors (PPIs) are among the most widely used drugs worldwide. PPI use has recently been linked to adverse changes in semen quality in healthy men; however, the effects of PPI use on semen parameters remain largely unknown specifically in cases with male factor infertility. We examined whether PPI use was associated with detrimental effects on semen parameters in a large population of subfertile men. We retrospectively reviewed data from 12 257 subfertile men who had visited our fertility clinic from 2003 to 2013. Patients who reported using any PPIs for >3 months before semen sample collection were included; 7698 subfertile men taking no medication served as controls. Data were gathered on patient age, medication use, and conventional semen parameters; patients taking any known spermatotoxic medication were excluded. Linear mixed-effect regression models were used to test the effect of PPI use on semen parameters adjusting for age. A total of 248 patients (258 samples) used PPIs for at least 3 months before semen collection. In regression models, PPI use (either as the only medication or when used in combination with other nonspermatotoxic medications) was not associated with statistically significant changes in semen parameters. To our knowledge, this is the largest study to compare PPI use with semen parameters in subfertile men. Using PPIs was not associated with detrimental effects on semen quality in this retrospective study.

  6. Network topology and parameter estimation: from experimental design methods to gene regulatory network kinetics using a community based approach

    PubMed Central

    2014-01-01

    Background: Accurate estimation of parameters of biochemical models is required to characterize the dynamics of molecular processes. This problem is intimately linked to identifying the most informative experiments for accomplishing such tasks. While significant progress has been made, effective experimental strategies for parameter identification and for distinguishing among alternative network topologies remain unclear. We approached these questions in an unbiased manner using a unique community-based approach in the context of the DREAM initiative (Dialogue for Reverse Engineering Assessment of Methods). We created an in silico test framework under which participants could probe a network with hidden parameters by requesting a range of experimental assays; results of these experiments were simulated according to a model of network dynamics only partially revealed to participants. Results: We proposed two challenges; in the first, participants were given the topology and underlying biochemical structure of a 9-gene regulatory network and were asked to determine its parameter values. In the second challenge, participants were given an incomplete topology with 11 genes and asked to find three missing links in the model. In both challenges, a budget was provided to buy experimental data generated in silico with the model and mimicking the features of different common experimental techniques, such as microarrays and fluorescence microscopy. Data could be bought at any stage, allowing participants to implement an iterative loop of experiments and computation. Conclusions: A total of 19 teams participated in this competition. The results suggest that the combination of state-of-the-art parameter estimation and a varied set of experimental methods using a few datasets, mostly fluorescence imaging data, can accurately determine parameters of biochemical models of gene regulation. 
However, the task is considerably more difficult if the gene network topology is not completely defined, as in challenge 2. Importantly, we found that aggregating independent parameter predictions and network topology across submissions creates a solution that can be better than the one from the best-performing submission. PMID:24507381

  7. Seven and up: individual differences in male voice fundamental frequency emerge before puberty and remain stable throughout adulthood

    NASA Astrophysics Data System (ADS)

    Fouquet, Meddy; Pisanski, Katarzyna; Mathevon, Nicolas; Reby, David

    2016-10-01

    Voice pitch (the perceptual correlate of fundamental frequency, F0) varies considerably even among individuals of the same sex and age, communicating a host of socially and evolutionarily relevant information. However, due to the almost exclusive utilization of cross-sectional designs in previous studies, it remains unknown whether these individual differences in voice pitch emerge before, during or after sexual maturation, and whether voice pitch remains stable into adulthood. Here, we measured the F0 parameters of men who were recorded once every 7 years from age 7 to 56 as they participated in the British television documentary Up Series. Linear mixed models revealed significant effects of age on all F0 parameters, wherein F0 mean, minimum, maximum and the standard deviation of F0 showed sharp pubertal decreases between age 7 and 21, yet remained remarkably stable after age 28. Critically, men's pre-pubertal F0 at age 7 strongly predicted their F0 at every subsequent adult age, explaining up to 64% of the variance in post-pubertal F0. This finding suggests that between-individual differences in voice pitch that are known to play an important role in men's reproductive success are in fact largely determined by age 7, and may therefore be linked to prenatal and/or pre-pubertal androgen exposure.

  8. On the modeling of breath-by-breath oxygen uptake kinetics at the onset of high-intensity exercises: simulated annealing vs. GRG2 method.

    PubMed

    Bernard, Olivier; Alata, Olivier; Francaux, Marc

    2006-03-01

    Time-domain modeling of the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. 
Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
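    The discontinuous double-exponential model and a plain simulated-annealing fit can be sketched as follows. This is an illustrative toy, not the authors' implementation: the parameter values, proposal scales, and cooling schedule are all assumptions.

```python
# Fit a discontinuous double-exponential on-kinetics model by simulated
# annealing: random perturbations, uphill moves accepted with probability
# exp(-delta/T), geometric cooling. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(3)

def model(t, p):
    a0, a1, td1, tau1, a2, td2, tau2 = p
    y = np.full_like(t, a0)
    # Each component is zero before its time delay (discontinuous onset);
    # the clip avoids overflow in the branch np.where discards.
    y += np.where(t >= td1,
                  a1 * (1.0 - np.exp(np.clip(-(t - td1) / tau1, -700.0, 0.0))), 0.0)
    y += np.where(t >= td2,
                  a2 * (1.0 - np.exp(np.clip(-(t - td2) / tau2, -700.0, 0.0))), 0.0)
    return y

t = np.linspace(0.0, 360.0, 240)
p_true = np.array([1.0, 2.0, 15.0, 25.0, 0.5, 120.0, 60.0])  # A0,A1,td1,tau1,A2,td2,tau2
y = model(t, p_true) + 0.05 * rng.standard_normal(t.size)

def rss(p):
    return np.sum((y - model(t, p)) ** 2)

p = np.array([0.5, 1.0, 5.0, 10.0, 1.0, 100.0, 30.0])        # poor initial guess
best, best_cost, cost, T = p.copy(), rss(p), rss(p), 10.0
scale = np.array([0.1, 0.1, 2.0, 2.0, 0.1, 2.0, 2.0])        # proposal step sizes
for step in range(20000):
    cand = p + scale * rng.standard_normal(7)
    if cand[3] <= 0 or cand[6] <= 0:          # keep time constants positive
        continue
    c = rss(cand)
    if c < cost or rng.random() < np.exp(-(c - cost) / T):
        p, cost = cand, c
        if c < best_cost:
            best, best_cost = cand.copy(), c
    T *= 0.9995                               # geometric cooling

print(best_cost)
```

Because acceptance of uphill moves lets the chain escape local minima of the non-smooth residual surface, no gradient of the discontinuous model is ever needed, which is the practical appeal of SA here.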

  9. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%), some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
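    The two indices can be illustrated on a toy load-sharing model. The definitions follow the abstract (effect of a perturbation on the perturbed part vs. on the remaining parts), but the three-"muscle" model and its numbers are invented stand-ins, not the study's MS model:

```python
# Toy LSI/OSI computation: perturb one part's maximal force and measure the
# percent change in the force of that part (LSI) and of the others (OSI).
import numpy as np

def forces(fmax):
    """Distribute a fixed joint load across parts in proportion to strength."""
    load = 100.0
    return load * fmax / fmax.sum()

fmax = np.array([50.0, 30.0, 20.0])     # maximal isometric forces (illustrative)
base = forces(fmax)

i, delta = 0, 0.1                       # perturb part i by +10%
pert = fmax.copy()
pert[i] *= 1.0 + delta
new = forces(pert)

lsi = abs(new[i] - base[i]) / base[i] * 100.0               # perturbed part
others = [j for j in range(len(fmax)) if j != i]
osi = np.mean(np.abs(new[others] - base[others]) / base[others]) * 100.0
print(round(float(lsi), 2), round(float(osi), 2))
```

In this coupled toy, strengthening one part shifts load away from the rest, so the perturbation registers on both indices even though only one parameter changed.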

  10. Collider searches for extra dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landsberg, Greg; /Brown U.

    2004-12-01

    Searches for extra spatial dimensions remain among the most popular new directions in our quest for physics beyond the Standard Model. High-energy collider experiments of the current decade should be able to find an ultimate answer to the question of their existence in a variety of models. Until the start of the LHC in a few years, the Tevatron will remain the key player in this quest. In this paper, we review the most recent results from the Tevatron on searches for large, TeV⁻¹-size, and Randall-Sundrum extra spatial dimensions, which have reached a new level of sensitivity and currently probe the parameter space beyond the existing constraints. While no evidence for the existence of extra dimensions has been found so far, an exciting discovery might be just steps away.

  11. Gait Analysis Methods for Rodent Models of Arthritic Disorders: Reviews and Recommendations

    PubMed Central

    Lakes, Emily H.; Allen, Kyle D.

    2016-01-01

    Gait analysis is a useful tool to understand behavioral changes in preclinical arthritis models. While observational scoring and spatiotemporal gait parameters are the most widely performed gait analyses in rodents, commercially available systems can now provide quantitative assessments of spatiotemporal patterns. However, inconsistencies remain between testing platforms, and laboratories often select different gait pattern descriptors to report in the literature. Rodent gait can also be described through kinetic and kinematic analyses, but systems to analyze rodent kinetics and kinematics are typically custom made and often require sensitive, custom equipment. While the use of rodent gait analysis rapidly expands, it is important to remember that, while rodent gait analysis is a relatively modern behavioral assay, the study of quadrupedal gait is not new. Nearly all gait parameters are correlated, and a collection of gait parameters is needed to understand a compensatory gait pattern used by the animal. As such, a change in a single gait parameter is unlikely to tell the full biomechanical story; and to effectively use gait analysis, one must consider how multiple different parameters contribute to an altered gait pattern. The goal of this article is to review rodent gait analysis techniques and provide recommendations on how to use these technologies in rodent arthritis models, including discussions on the strengths and limitations of observational scoring, spatiotemporal, kinetic, and kinematic measures. Recognizing rodent gait analysis is an evolving tool, we also provide technical recommendations we hope will improve the utility of these analyses in the future. PMID:26995111

  12. Integrated modelling of crop production and nitrate leaching with the Daisy model.

    PubMed

    Manevski, Kiril; Børgesen, Christen D; Li, Xiaoxin; Andersen, Mathias N; Abrahamsen, Per; Hu, Chunsheng; Hansen, Søren

    2016-01-01

    An integrated modelling strategy was designed and applied to the Soil-Vegetation-Atmosphere Transfer model Daisy for simulation of crop production and nitrate leaching under a pedo-climatic and agronomic environment different from that of the model's original parameterisation. The points of significance and caution in the strategy are:
    • Model preparation should include detailed field data, owing to the high complexity of the soil and crop processes simulated with a process-based model, and should reflect the study objectives. Including interactions between parameters in a sensitivity analysis better accounts for the impacts of measured variables on the model outputs.
    • Model evaluation on several independent data sets increases robustness, at least on coarser time scales such as month or year. It produces a valuable platform for adapting the model to new crops or for improving the existing parameter set. On a daily time scale, validation for highly dynamic variables such as soil water transport remains challenging.
    • Model application is demonstrated with relevance for scientists and regional managers.
    The integrated modelling strategy is applicable to other process-based models similar to Daisy. It is envisaged that the strategy establishes model capability as a useful research and decision-making tool, and that it increases knowledge transferability, reproducibility and traceability.

  13. Dynamic model predicting overweight, obesity, and extreme obesity prevalence trends.

    PubMed

    Thomas, Diana M; Weedermann, Marion; Fuemmeler, Bernard F; Martin, Corby K; Dhurandhar, Nikhil V; Bredlau, Carl; Heymsfield, Steven B; Ravussin, Eric; Bouchard, Claude

    2014-02-01

    Obesity prevalence in the United States appears to be leveling, but the reasons behind the plateau remain unknown. Mechanistic insights can be provided from a mathematical model. The objective of this study is to model known multiple population parameters associated with changes in body mass index (BMI) classes and to establish conditions under which obesity prevalence will plateau. A differential equation system was developed that predicts population-wide obesity prevalence trends. The model considers both social and nonsocial influences on weight gain, incorporates other known parameters affecting obesity trends, and allows for country specific population growth. The dynamic model predicts that: obesity prevalence is a function of birthrate and the probability of being born in an obesogenic environment; obesity prevalence will plateau independent of current prevention strategies; and the US prevalence of overweight, obesity, and extreme obesity will plateau by about 2030 at 28%, 32%, and 9%, respectively. The US prevalence of obesity is stabilizing and will plateau, independent of current preventative strategies. This trend has important implications in accurately evaluating the impact of various anti-obesity strategies aimed at reducing obesity prevalence. Copyright © 2013 The Obesity Society.
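    The plateau behaviour can be reproduced by a stripped-down two-class version of such a model. The compartments, rates, and birth terms below are invented for illustration; the published model uses more BMI classes and country-specific parameters.

```python
# Toy two-compartment dynamic model: births into an obesogenic environment plus
# social transmission drive prevalence toward a stable plateau. Forward-Euler
# integration of the fractions (n + o is conserved at 1).
import numpy as np

def simulate(years=100.0, dt=0.01):
    n, o = 0.7, 0.3                    # population fractions: non-obese, obese
    beta, gamma = 0.08, 0.05           # social-transmission and remission rates (1/yr)
    birth, p_obesogenic = 0.01, 0.35   # birthrate; prob. of an obesogenic environment
    traj = []
    for _ in range(int(years / dt)):
        dn = birth * (1.0 - p_obesogenic) + gamma * o - beta * n * o - birth * n
        do = birth * p_obesogenic + beta * n * o - gamma * o - birth * o
        n += dt * dn
        o += dt * do
        traj.append(o)
    return np.array(traj)

traj = simulate()
print(round(float(traj[-1]), 3))       # prevalence approaches a stable plateau
```

The equilibrium is set by the balance of births, transmission, and remission, which mirrors the abstract's claim that the plateau depends on birthrate and the probability of being born into an obesogenic environment rather than on current prevention strategies.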

  14. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  15. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1995-08-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model.

  16. A Bayesian method for inferring transmission chains in a partially observed epidemic.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Ray, Jaideep

    2008-10-01

    We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
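    The generative side of this model (a binomial random graph plus Poisson-process transmission over links) can be prototyped in a few lines. The population size below matches the abstract, but the link probability, transmission rate, and unit-mean infectious periods are invented for illustration, and the sweep is a simplified single-exposure caricature rather than the paper's full event history:

```python
import random

random.seed(2)

N, P_LINK, BETA = 74, 0.08, 0.3   # population, link probability, transmission rate (assumed)

# binomial (Erdos-Renyi) social graph
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_LINK:
            adj[i].add(j)
            adj[j].add(i)

# transmission over a link as a Poisson process with rate BETA while the
# source remains infectious for an Exp(1)-distributed period
infected = {0}
times = {0: 0.0}
frontier = [0]
while frontier:
    src = frontier.pop()
    dur = random.expovariate(1.0)          # infectious period of src
    for nbr in adj[src]:
        if nbr in infected:
            continue
        t = random.expovariate(BETA)       # waiting time to first transmission event
        if t < dur:                        # transmission happens before recovery
            infected.add(nbr)
            times[nbr] = times[src] + t
            frontier.append(nbr)

outbreak_size = len(infected)
```

    Inference then amounts to treating P_LINK and BETA as unknowns and exploring their posterior with MCMC, conditioning simulations like this one on the observed symptom dates.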

  17. Engineering Hydrogel Microenvironments to Recapitulate the Stem Cell Niche.

    PubMed

    Madl, Christopher M; Heilshorn, Sarah C

    2018-06-04

    Stem cells are a powerful resource for many applications including regenerative medicine, patient-specific disease modeling, and toxicology screening. However, eliciting the desired behavior from stem cells, such as expansion in a naïve state or differentiation into a particular mature lineage, remains challenging. Drawing inspiration from the native stem cell niche, hydrogel platforms have been developed to regulate stem cell fate by controlling microenvironmental parameters including matrix mechanics, degradability, cell-adhesive ligand presentation, local microstructure, and cell-cell interactions. We survey techniques for modulating hydrogel properties and review the effects of microenvironmental parameters on maintaining stemness and controlling differentiation for a variety of stem cell types. Looking forward, we envision future hydrogel designs spanning a spectrum of complexity, ranging from simple, fully defined materials for industrial expansion of stem cells to complex, biomimetic systems for organotypic cell culture models.

  18. A probabilistic fatigue analysis of multiple site damage

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.

    1994-01-01

    The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
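    A stripped-down version of such a Monte Carlo fatigue simulation can be written with a closed-form integration of a Paris-type law da/dN = C(Δσ√(πa))^m between the initial and final crack lengths. All material constants and distribution parameters below are hypothetical, not the panel data of the study:

```python
import random
import math

random.seed(1)

# Paris-law constants (illustrative values only)
M = 3.0                    # Paris exponent
DSIG = 100.0               # stress range, MPa
A_F = 0.025                # final (failure) crack length, m

def cycles_to_failure(a0, c):
    """Closed-form integration of da/dN = c * (DSIG*sqrt(pi*a))**M for M != 2."""
    p = 1.0 - M / 2.0
    return (A_F**p - a0**p) / (c * (DSIG * math.sqrt(math.pi))**M * p)

# Monte Carlo: sample stochastic initial crack size and growth coefficient
lives = []
for _ in range(5000):
    a0 = random.lognormvariate(math.log(5e-4), 0.3)   # initial crack size, m
    c = random.lognormvariate(math.log(1e-11), 0.2)   # Paris coefficient
    lives.append(cycles_to_failure(a0, c))
```

    Comparing the spread of `lives` when only `a0` varies against the spread when only `c` varies reproduces the kind of sensitivity comparison the abstract describes (crack size dominating the remaining-life distribution).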

  19. Benefits of seasonal forecasts of crop yields

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Okada, M.; Nishimori, M.; Yokozawa, M.

    2017-12-01

    Major factors behind recent fluctuations in food prices include increased biofuel production and oil price fluctuations. In addition, several extreme climate events that reduced worldwide food production coincided with upward spikes in food prices. The stabilization of crop yields is one of the most important steps toward stabilizing food prices and thereby enhancing food security. Recent development of technologies related to crop modeling and seasonal weather forecasting has made it possible to forecast future crop yields for maize and soybean. However, the effective use of these technologies remains limited. Here we present the potential benefits of seasonal crop-yield forecasts on a global scale for the choice of planting day. For this purpose, we used a model (PRYSBI-2) that can accurately replicate past crop yields for both maize and soybean. This model system uses a Bayesian statistical approach to estimate the parameters of a basic process-based model of crop growth. The spatial variability of model parameters was considered by estimating the posterior distribution of the parameters from historical yield data using the Markov-chain Monte Carlo (MCMC) method with a resolution of 1.125° × 1.125°. The posterior distributions of model parameters were estimated for each spatial grid with 30 000 MCMC steps of 10 chains each. Using this model and the estimated parameter distributions, we were able to estimate not only crop yield but also levels of associated uncertainty. We found that the global average crop yield increased by about 30% as the result of the optimal selection of planting day and that the seasonal forecast of crop yield had a large benefit in and near the eastern part of Brazil and India for maize and the northern area of China for soybean. In these countries, the effects of El Niño and the Indian Ocean dipole are large. The results highlight the importance of developing a system to forecast global crop yields.
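    The Bayesian estimation step can be illustrated with a minimal random-walk Metropolis sampler for a single growth parameter of a toy linear "crop model" (a stand-in for PRYSBI-2; all data and values below are synthetic):

```python
import random
import math

random.seed(0)

# synthetic yield observations from y = k * gdd + noise, where k is the
# parameter to infer (gdd = growing degree days; numbers are invented)
TRUE_K, SIGMA = 0.8, 2.0
gdd = [10.0 * i for i in range(1, 21)]
obs = [TRUE_K * g + random.gauss(0.0, SIGMA) for g in gdd]

def log_post(k):
    # flat prior on k, Gaussian likelihood
    return -sum((y - k * g) ** 2 for y, g in zip(obs, gdd)) / (2.0 * SIGMA**2)

# random-walk Metropolis: a minimal stand-in for the full MCMC machinery
chain, k = [], 0.5
lp = log_post(k)
for _ in range(20000):
    cand = k + random.gauss(0.0, 0.05)
    lp_c = log_post(cand)
    if math.log(random.random()) < lp_c - lp:   # accept with prob min(1, ratio)
        k, lp = cand, lp_c
    chain.append(k)

# discard burn-in, then summarize the posterior
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

    The retained chain approximates the posterior of `k`; in the paper this is done per 1.125° grid cell with 10 chains, giving yield predictions with uncertainty rather than point estimates.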

  20. Applicability of Different Hydraulic Parameters to Describe Soil Detachment in Eroding Rills

    PubMed Central

    Wirtz, Stefan; Seeger, Manuel; Zell, Andreas; Wagner, Christian; Wagner, Jean-Frank; Ries, Johannes B.

    2013-01-01

    This study presents a comparison of experimental results with assumptions used in numerical models. The aim of the field experiments is to test the linear relationship between different hydraulic parameters and soil detachment. Correlations between shear stress, unit length shear force, stream power, unit stream power, and effective stream power on the one hand and the detachment rate on the other do not reveal a single parameter that consistently displays the best correlation. More importantly, the best fit varies not only from one experiment to another, but even between distinct measurement points. Different processes in rill erosion are responsible for the changing correlations; however, not all of these processes are considered in soil erosion models. Hence, hydraulic parameters alone are not sufficient to predict detachment rates: they predict the fluvial incision of the rill bottom, but the main sediment sources are not sufficiently represented in the model equations. The results of this study show that there is still a lack of understanding of the physical processes underlying soil erosion. The exerted forces, soil stability and its expression, and the abstraction of the detachment and transport processes in shallow flowing water still lack a clear description and established dependencies. PMID:23717669

  1. Spatiotemporal variation in reproductive parameters of yellow-bellied marmots.

    PubMed

    Ozgul, Arpat; Oli, Madan K; Olson, Lucretia E; Blumstein, Daniel T; Armitage, Kenneth B

    2007-11-01

    Spatiotemporal variation in reproductive rates is a common phenomenon in many wildlife populations, but the population dynamic consequences of spatial and temporal variability in different components of reproduction remain poorly understood. We used 43 years (1962-2004) of data from 17 locations and a capture-mark-recapture (CMR) modeling framework to investigate the spatiotemporal variation in reproductive parameters of yellow-bellied marmots (Marmota flaviventris), and its influence on the realized population growth rate. Specifically, we estimated and modeled breeding probabilities of two-year-old females (earliest age of first reproduction), >2-year-old females that have not reproduced before (subadults), and >2-year-old females that have reproduced before (adults), as well as the litter sizes of two-year-old and >2-year-old females. Most reproductive parameters exhibited spatial and/or temporal variation. However, reproductive parameters differed with respect to their relative influence on the realized population growth rate (lambda). Litter size had a stronger influence than did breeding probabilities on both spatial and temporal variations in lambda. Our analysis indicated that lambda was proportionately more sensitive to survival than recruitment. However, the annual fluctuation in litter size, abetted by the breeding probabilities, accounted for most of the temporal variation in lambda.

  2. A galloping quadruped model using left-right asymmetry in touchdown angles.

    PubMed

    Tanase, Masayasu; Ambe, Yuichi; Aoi, Shinya; Matsuno, Fumitoshi

    2015-09-18

    Among quadrupedal gaits, the galloping gait has specific characteristics in terms of locomotor behavior. In particular, it shows a left-right asymmetry in gait parameters such as touchdown angle and the relative phase of limb movements. In addition, asymmetric gait parameters show a characteristic dependence on locomotion speed. There are two types of galloping gaits in quadruped animals: the transverse gallop, often observed in horses; and the rotary gallop, often observed in dogs and cheetahs. These two gaits have different footfall sequences. Although these specific characteristics in quadrupedal galloping gaits have been observed and described in detail, the underlying mechanisms remain unclear. In this paper, we use a simple physical model with a rigid body and four massless springs and incorporate the left-right asymmetry of touchdown angles. Our simulation results show that our model produces stable galloping gaits for certain combinations of model parameters and explains these specific characteristics observed in the quadrupedal galloping gait. The results are then evaluated in comparison with the measured data of quadruped animals and the gait mechanisms are clarified from the viewpoint of dynamics, such as the roles of the left-right touchdown angle difference in the generation of galloping gaits and energy transfer during one gait cycle to produce two different galloping gaits.

  3. A new approach to estimate parameters of speciation models with application to apes.

    PubMed

    Becquet, Celine; Przeworski, Molly

    2007-10-01

    How populations diverge and give rise to distinct species remains a fundamental question in evolutionary biology, with important implications for a wide range of fields, from conservation genetics to human evolution. A promising approach is to estimate parameters of simple speciation models using polymorphism data from multiple loci. Existing methods, however, make a number of assumptions that severely limit their applicability, notably, no gene flow after the populations split and no intralocus recombination. To overcome these limitations, we developed a new Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. The approach uses summaries of polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and, importantly, allows for intralocus recombination. To illustrate its potential, we applied it to extensive polymorphism data from populations and species of apes, whose demographic histories are largely unknown. The isolation-migration model appears to provide a reasonable fit to the data. It suggests that the two chimpanzee species became reproductively isolated in allopatry approximately 850 Kya, while Western and Central chimpanzee populations split approximately 440 Kya but continued to exchange migrants. Similarly, Eastern and Western gorillas and Sumatran and Bornean orangutans appear to have experienced gene flow since their splits approximately 90 and over 250 Kya, respectively.

  4. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
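    The reported gain from adding molecular markers to clinical covariates can be illustrated by comparing the R² of nested ordinary-least-squares models. The sketch below uses synthetic data (cohort size borrowed from the abstract; marker names, effect sizes, and noise levels are invented) and a tiny normal-equations solver so it stays self-contained:

```python
import random

random.seed(3)

def ols_r2(X, y):
    """R^2 of the least-squares fit y ~ [1, X] via the normal equations."""
    n, k = len(y), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    # build normal equations (A^T A) b = A^T y
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    rhs = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for cc in range(c, k):
                M[r][cc] -= f * M[c][cc]
            rhs[r] -= f * rhs[c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (rhs[r] - sum(M[r][c] * b[c] for c in range(r + 1, k))) / M[r][r]
    pred = [sum(bi * ai for bi, ai in zip(b, row)) for row in A]
    ybar = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# synthetic cohort: donor age plus one hypothetical marker jointly drive eGFR
n = 76
age = [random.uniform(20, 70) for _ in range(n)]
marker = [random.gauss(0.0, 1.0) for _ in range(n)]
egfr = [90 - 0.5 * a + 8.0 * m + random.gauss(0.0, 10.0) for a, m in zip(age, marker)]

r2_clinical = ols_r2([[a] for a in age], egfr)                      # clinical only
r2_full = ols_r2([[a, m] for a, m in zip(age, marker)], egfr)       # + marker
```

    The difference `r2_full - r2_clinical` is the incremental variance explained, the quantity behind the paper's jump from roughly 17% to 33%.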

  5. A Novel Series Connected Batteries State of High Voltage Safety Monitor System for Electric Vehicle Application

    PubMed Central

    Jiaxi, Qiang; Lin, Yang; Jianhui, He; Qisheng, Zhou

    2013-01-01

    Batteries, as the main or auxiliary power source of an EV (Electric Vehicle), are usually connected in series at high voltage to improve drivability and energy efficiency. Today, more and more batteries are connected in series at high voltage; if there is any fault in the high voltage system (HVS), the consequences are serious and dangerous. Therefore, it is necessary to monitor the electric parameters of the HVS to ensure high voltage safety and protect personal safety. In this study, a high voltage safety monitor system is developed to address this critical issue. Four key electric parameters, including precharge, contact resistance, insulation resistance, and remaining capacity, are monitored and analyzed based on the equivalent models presented in this study. A high voltage safety controller that integrates the equivalent models and control strategy is developed. With the help of a hardware-in-the-loop system, the equivalent models integrated in the high voltage safety controller are validated, and the online electric parameter monitoring strategy is analyzed and discussed. The test results indicate that the high voltage safety monitor system designed in this paper is suitable for EV application. PMID:24194677
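    Of the four monitored parameters, precharge is the easiest to sketch: the DC-link voltage during precharge should follow the RC charging curve v(t) = U·(1 − exp(−t/RC)), and checking the measured voltage against this curve is one plausible validation of the precharge stage. The component values below are illustrative, not those of the paper:

```python
import math

# precharge of the DC-link capacitor through the precharge resistor
U, R, C = 400.0, 50.0, 2.0e-3     # pack voltage (V), precharge resistor (ohm), DC-link cap (F)
TAU = R * C                        # time constant, s

def v_precharge(t):
    """Ideal RC charging curve of the DC-link voltage during precharge."""
    return U * (1.0 - math.exp(-t / TAU))

# rule of thumb: the capacitor is ~95% charged after three time constants,
# after which the main contactor can safely close
t95 = 3.0 * TAU
v95 = v_precharge(t95)
```

    A monitor can flag a precharge fault (welded contactor, shorted DC link) when the measured voltage deviates substantially from `v_precharge(t)` during the precharge window.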

  6. A novel series connected batteries state of high voltage safety monitor system for electric vehicle application.

    PubMed

    Jiaxi, Qiang; Lin, Yang; Jianhui, He; Qisheng, Zhou

    2013-01-01

    Batteries, as the main or auxiliary power source of an EV (Electric Vehicle), are usually connected in series at high voltage to improve drivability and energy efficiency. Today, more and more batteries are connected in series at high voltage; if there is any fault in the high voltage system (HVS), the consequences are serious and dangerous. Therefore, it is necessary to monitor the electric parameters of the HVS to ensure high voltage safety and protect personal safety. In this study, a high voltage safety monitor system is developed to address this critical issue. Four key electric parameters, including precharge, contact resistance, insulation resistance, and remaining capacity, are monitored and analyzed based on the equivalent models presented in this study. A high voltage safety controller that integrates the equivalent models and control strategy is developed. With the help of a hardware-in-the-loop system, the equivalent models integrated in the high voltage safety controller are validated, and the online electric parameter monitoring strategy is analyzed and discussed. The test results indicate that the high voltage safety monitor system designed in this paper is suitable for EV application.

  7. Nonlinear-drifted Brownian motion with multiple hidden states for remaining useful life prediction of rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung

    2017-09-01

    Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. Twenty-six sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
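    A nonlinear-drifted Brownian motion and its first hitting time (the remaining-useful-life proxy) can be simulated directly. The power-law drift form and all parameter values below are assumptions for illustration, not the paper's fitted battery model:

```python
import random
import math

random.seed(4)

# degradation x(t) with nonlinear mean path a*t**b, i.e.
# dx = a*b*t**(b-1) dt + sigma dW; failure when x crosses THRESHOLD
A, B, SIG = 0.02, 1.5, 0.05        # hidden-state values (illustrative)
THRESHOLD = 1.0                    # failure threshold on the degradation signal
DT = 0.1                           # Euler-Maruyama time step

def first_hitting_time():
    t, x = 0.0, 0.0
    while x < THRESHOLD:
        drift = A * B * t ** (B - 1.0) if t > 0 else 0.0
        x += drift * DT + SIG * math.sqrt(DT) * random.gauss(0.0, 1.0)
        t += DT
        if t > 1e4:                # safety cap against pathological paths
            break
    return t

# Monte Carlo approximation of the first-hitting-time (RUL) distribution
rul_samples = [first_hitting_time() for _ in range(300)]
mean_rul = sum(rul_samples) / len(rul_samples)
```

    In the paper the drift parameters (here A and B) are themselves hidden states updated from measurements, so this simulation would be rerun with the current state estimates each time a new capacity reading arrives.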

  8. Non-Deterministic Modelling of Food-Web Dynamics

    PubMed Central

    Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam

    2014-01-01

    A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245

  9. Population growth of Yellowstone grizzly bears: Uncertainty and future monitoring

    USGS Publications Warehouse

    Harris, R.B.; White, Gary C.; Schwartz, C.C.; Haroldson, M.A.

    2007-01-01

    Grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem of the US Rocky Mountains have recently increased in numbers, but remain vulnerable due to isolation from other populations and predicted reductions in favored food resources. Harris et al. (2006) projected how this population might fare in the future under alternative survival rates, and in doing so estimated the rate of population growth, 1983–2002. We address issues that remain from that earlier work: (1) the degree of uncertainty surrounding our estimates of the rate of population change (λ); (2) the effect of correlation among demographic parameters on these estimates; and (3) how a future monitoring system using counts of females accompanied by cubs might usefully differentiate between short-term, expected, and inconsequential fluctuations versus a true change in system state. We used Monte Carlo re-sampling of beta distributions derived from the demographic parameters used by Harris et al. (2006) to derive distributions of λ during 1983–2002 given our sampling uncertainty. Approximate 95% confidence intervals were 0.972–1.096 (assuming females with unresolved fates died) and 1.008–1.115 (with unresolved females censored at last contact). We used well-supported models of Haroldson et al. (2006) and Schwartz et al. (2006a,b,c) to assess the strength of correlations among demographic processes and the effect of omitting them in projection models. Incorporating correlations among demographic parameters yielded point estimates of λ that were nearly identical to those from the earlier model that omitted correlations, but yielded wider confidence intervals surrounding λ. Finally, we suggest that fitting linear and quadratic curves to the trend suggested by the estimated number of females with cubs in the ecosystem, and using AICc model weights to infer population sizes and λ provides an objective means to monitoring approximate population trajectories in addition to demographic analysis.
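    The Monte Carlo re-sampling of beta distributions can be sketched as follows. The vital-rate means, precisions, and the scalar combination standing in for λ are placeholders, not the values or projection model of Harris et al. (2006):

```python
import random

random.seed(5)

# point estimates (mean, effective sample size) for two illustrative vital rates
RATES = {"adult_survival": (0.95, 200), "cub_recruitment": (0.30, 150)}

def beta_params(mean, n):
    """Moment-matched Beta(alpha, beta) with alpha + beta = n."""
    return mean * n, (1.0 - mean) * n

# propagate sampling uncertainty in the vital rates into lambda
lams = []
for _ in range(10000):
    s = random.betavariate(*beta_params(*RATES["adult_survival"]))
    f = random.betavariate(*beta_params(*RATES["cub_recruitment"]))
    lams.append(s + f)     # toy scalar model of lambda, not a matrix-model growth rate

lams.sort()
ci_lo, ci_hi = lams[250], lams[9750]   # approximate 95% interval
```

    The resulting percentile interval around λ is the analogue of the 0.972–1.096 and 1.008–1.115 intervals quoted above; incorporating correlations among the sampled rates widens it, as the abstract notes.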

  10. Dielectric elastomer for stretchable sensors: influence of the design and material properties

    NASA Astrophysics Data System (ADS)

    Jean-Mistral, C.; Iglesias, S.; Pruvost, S.; Duchet-Rumeau, J.; Chesné, S.

    2016-04-01

    Dielectric elastomers exhibit extended capabilities as flexible sensors for the detection of load distributions, pressure or large deformations. Tracking human finger or arm movements could be useful for reconstructing sporting gestures or for controlling a humanoid robot. New measurement methods have been proposed in a number of publications, improving the sensitivity and accuracy of the sensing method. Generally, however, the associated modelling remains simple (RC or RC transmission line). The material parameters are considered constant or assumed to have a negligible effect, which can lead to a serious reduction in accuracy. Comparisons between measurements and modelling therefore require care and skill, and can be tricky. Thus, we propose here a comprehensive model that takes into account the influence of the material properties on the performance of the dielectric elastomer sensor (DES). Various parameters influencing the characteristics of the sensors have been identified: dielectric constant and hyper-elasticity. The variations of these parameters as a function of strain affect the linearity and sensitivity of the sensor by a few percent. The sensitivity of the DES is also evaluated for changing geometrical parameters (initial thickness) and designs (rectangular and dog-bone shapes). We discuss the impact of the shape with regard to stress. Finally, DESs consisting of a silicone elastomer sandwiched between two highly conductive stretchable electrodes were manufactured and investigated. Classic and reliable LCR measurements are detailed. Experimental results validate our numerical model of a large-strain sensor (>50%).
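    The baseline RC-type model mentioned above reduces, for capacitance, to a parallel-plate formula. Under a pure-shear stretch (length ×λ, width fixed, thickness ×1/λ by incompressibility) the ideal capacitance scales as λ²; the geometry and permittivity below are assumed values for illustration:

```python
# ideal parallel-plate model of a dielectric elastomer sensor
EPS0 = 8.854e-12                  # vacuum permittivity, F/m
EPS_R = 2.8                       # assumed relative permittivity of the silicone
L0, W0, T0 = 0.05, 0.01, 100e-6   # initial length, width, thickness in metres (illustrative)

def capacitance(stretch):
    """C = eps0*eps_r*A/d under pure shear: area grows as stretch,
    thickness shrinks as 1/stretch, so C(stretch) = C0 * stretch**2."""
    area = (L0 * stretch) * W0
    thickness = T0 / stretch
    return EPS0 * EPS_R * area / thickness

c0 = capacitance(1.0)
c50 = capacitance(1.5)     # 50% strain
gain = c50 / c0            # ideal model predicts exactly 1.5**2 = 2.25
```

    Deviations of measured C(λ) from this λ² law are precisely what the paper attributes to strain-dependent dielectric constant and hyper-elasticity, the effects its more comprehensive model captures.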

  11. The impact of fluid topology on residual saturations - A pore-network model study

    NASA Astrophysics Data System (ADS)

    Doster, F.; Kallel, W.; van Dijke, R.

    2014-12-01

    In two-phase flow in porous media only fractions of the resident fluid are mobilised during a displacement process and, in general, a significant amount of the resident fluid remains permanently trapped. Depending on the application, entrapment is desirable (geological carbon storage), or it should be obviated (enhanced oil recovery, contaminant remediation). Despite its utmost importance for these applications, predictions of trapped fluid saturations for macroscopic systems, in particular under changing displacement conditions, remain challenging. The models that aim to represent trapping phenomena are typically empirical and require tracking of the history of the state variables. This exacerbates the experimental verification and the design of sophisticated displacement technologies that enhance or impede trapping. Recently, experiments [1] have suggested that a macroscopic normalized Euler number, quantifying the topology of fluid distributions, could serve as a parameter to predict residual saturations based on state variables. In these experiments the entrapment of fluids was visualised through 3D micro CT imaging. However, the experiments are notoriously time consuming and therefore only allow for a sparse sampling of the parameter space. Pore-network models represent porous media through an equivalent network structure of pores and throats. Under quasi-static capillary dominated conditions displacement processes can be modeled through simple invasion percolation rules. Hence, in contrast to experiments, pore-network models are fast and therefore allow full sampling of the parameter space. Here, we use pore-network modeling [2] to critically investigate the knowledge gained through observing and tracking the normalized Euler number. 
More specifically, we identify conditions under which (a) systems with the same saturations but different normalized Euler numbers lead to different residual saturations and (b) systems with the same saturations and the same normalized Euler numbers but different process histories yield the same residual saturations. Special attention is given to contact angle and process histories with varying drainage and imbibition periods. [1] Herring et al., Adv. Water. Resour., 62, 47-58 (2013) [2] Ryazanov et al., Transp. Porous Media, 80, 79-99 (2009).
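    A toy quasi-static invasion-percolation model on a 2D lattice (sites with random entry pressures standing in for the pore network; no trapping rule) shows how a residual saturation emerges from simple invasion rules. This is a caricature of the cited pore-network model, not a reimplementation of it:

```python
import random
import heapq

random.seed(6)

N = 30   # lattice size; each site gets a random capillary entry pressure
pressure = [[random.random() for _ in range(N)] for _ in range(N)]

# quasi-static invasion percolation: repeatedly invade the accessible site
# with the lowest entry pressure, starting from the left face, until the
# invading phase first reaches the right face (breakthrough)
invaded = [[False] * N for _ in range(N)]
heap = [(pressure[r][0], r, 0) for r in range(N)]
heapq.heapify(heap)
breakthrough = False
while heap and not breakthrough:
    p, r, c = heapq.heappop(heap)
    if invaded[r][c]:
        continue
    invaded[r][c] = True
    if c == N - 1:
        breakthrough = True
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N and not invaded[rr][cc]:
            heapq.heappush(heap, (pressure[rr][cc], rr, cc))

# fraction of the domain still occupied by the defending phase at breakthrough
residual = 1.0 - sum(map(sum, invaded)) / (N * N)
```

    Because each run is fast, parameter sweeps (lattice size, pressure distribution, invasion history) are cheap, which is exactly the advantage over micro-CT experiments that the abstract emphasizes.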

  12. Modeling the population dynamics and community impacts of Ambystoma tigrinum: A case study of phenotype plasticity.

    PubMed

    McCarthy, Maeve L; Wallace, Dorothy; Whiteman, Howard H; Rheingold, Evan T; Dunham, Ann M; Prosper, Olivia; Chen, Michelle; Hu-Wang, Eileen

    2017-06-01

    Phenotypic plasticity is the ability of an organism to change its phenotype in response to changes in the environment. General mathematical descriptions of the phenomenon rely on an abstract measure of "viability" that, in this study, is instantiated in the case of the Tiger Salamander, Ambystoma tigrinum. This organism has a point in its development when, upon maturing, it may take two very different forms. One is a terrestrial salamander (metamorph) that visits ponds to reproduce and eat, while the other is an aquatic form (paedomorph) that remains in the pond to breed and which consumes a variety of prey including its own offspring. A seven-dimensional nonlinear system of ordinary differential equations is developed, incorporating small (Z) and large (B) invertebrates, Ambystoma young of the year (Y), juveniles (J), terrestrial metamorphs (A) and aquatic paedomorphs (P). One parameter in the model controls the proportion of juveniles maturing into A versus P. Solutions are shown to remain non-negative. Every effort was made to justify parameters biologically through studies reported in the literature. A sensitivity analysis and equilibrium analysis of model parameters demonstrate that morphological choice is critical to the overall composition of the Ambystoma population. Various population viability measures were used to select optimal percentages of juveniles maturing into metamorphs, with optimal choices differing considerably depending on the viability measure. The model suggests that the criteria for viability for this organism vary, both from location to location and also in time. Thus, optimal responses change with spatiotemporal variation, which is consistent with other phenotypically plastic systems. Two competing hypotheses for the conditions under which metamorphosis occurs are examined in light of the model and data from an Ambystoma tigrinum population at Mexican Cut, Colorado. 
The model clearly supports one of these over the other for this data set. There appears to be a mathematical basis to the general tenet of spatiotemporal variation being important for the maintenance of polyphenisms, and our results suggest that such variation may have cascading effects on population, community, and perhaps ecosystem dynamics because it drives the production of a keystone, cannibalistic predator.
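    A reduced two-morph caricature of such a system (juveniles J maturing into metamorphs A with proportion q and into paedomorphs P otherwise) can be integrated with forward Euler. All rate constants below are invented for illustration, not the paper's biologically justified parameters, and the model is three-dimensional rather than seven:

```python
# forward-Euler integration of a reduced juvenile/metamorph/paedomorph model
Q = 0.6                                   # proportion of maturing juveniles becoming metamorphs
BIRTH, MATURE, MU_A, MU_P = 0.8, 0.5, 0.3, 0.35   # per-capita rates (invented)
DT, STEPS = 0.01, 5000                    # step size and number of steps (t = 50)

J, A, P = 10.0, 5.0, 5.0                  # initial abundances
for _ in range(STEPS):
    dJ = BIRTH * (A + P) - MATURE * J     # both adult forms produce juveniles
    dA = Q * MATURE * J - MU_A * A        # fraction Q matures terrestrially
    dP = (1.0 - Q) * MATURE * J - MU_P * P
    J, A, P = J + DT * dJ, A + DT * dA, P + DT * dP
```

    Sweeping Q and recording viability measures of the resulting trajectories is the kind of analysis used in the paper to show that morphological choice drives population composition.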

  13. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed for statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remains insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. 
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
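    The "best fit" selection can be mimicked by fitting candidate distributions by moments and comparing log-likelihoods. The sketch below (stdlib only, synthetic data standing in for reconstructed fill-distance samples) compares a normal fit on the data against a normal fit on the log of the data, i.e. a log-normal:

```python
import math
import random
import statistics

random.seed(7)

# synthetic "reconstructed" samples of an inflation parameter (log-normal by design)
samples = [random.lognormvariate(1.2, 0.25) for _ in range(60)]

def gauss_loglik(xs, mu, sigma):
    """Log-likelihood of xs under Normal(mu, sigma)."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma**2)
               - (x - mu) ** 2 / (2.0 * sigma**2) for x in xs)

# moment fits for the two candidate families
mu_n, sd_n = statistics.fmean(samples), statistics.stdev(samples)
logs = [math.log(x) for x in samples]
mu_l, sd_l = statistics.fmean(logs), statistics.stdev(logs)

ll_normal = gauss_loglik(samples, mu_n, sd_n)
# log-normal log-likelihood = normal log-likelihood of the logs minus the
# Jacobian term sum(log x)
ll_lognormal = gauss_loglik(logs, mu_l, sd_l) - sum(logs)
best = "lognormal" if ll_lognormal > ll_normal else "normal"
```

    With enough samples the likelihood comparison (or an information criterion built on it) substitutes for the engineering-judgment step; where samples remain too few, falling back to a uniform dispersion, as the memo does, is the conservative choice.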

  14. Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution

    NASA Astrophysics Data System (ADS)

    Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.

    2009-05-01

    Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most commonly used data sets: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ωk are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free, we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1 sigma uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter of the universe Ω_Λ, the baryon density of the universe Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_S, with most 1 sigma uncertainties better than 5%. More significant changes appear on other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_ch^2, Hubble constant H0 and σ8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches to large-scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained.
Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar or larger than statistical ones.

  15. Discontinuous hindcast simulations of estuarine bathymetric change: A case study from Suisun Bay, California

    USGS Publications Warehouse

    Ganju, Neil K.; Jaffe, Bruce E.; Schoellhamer, David H.

    2011-01-01

    Simulations of estuarine bathymetric change over decadal timescales require methods for idealization and reduction of forcing data and boundary conditions. Continuous simulations are hampered by computational and data limitations, and results are rarely evaluated with observed bathymetric change data. Bathymetric change data for Suisun Bay, California, span the 1867–1990 period, with five bathymetric surveys during that period. The four periods of bathymetric change were modeled using a coupled hydrodynamic-sediment transport model operated at the tidal timescale. The efficacy of idealization techniques was investigated by discontinuously simulating the four periods. The 1867–1887 period, used for calibration of wave energy and sediment parameters, was modeled with an average error of 37%, while the remaining periods were modeled with errors ranging from 23% to 121%. Variation in post-calibration performance is attributed to temporally variable sediment parameters and a lack of bathymetric and configuration data for portions of Suisun Bay and the Delta. Modifying seaward sediment delivery and bed composition resulted in large performance increases for post-calibration periods, suggesting that continuous simulation with constant parameters is unrealistic. Idealization techniques which accelerate morphological change should therefore be used with caution in estuaries where parameters may change on sub-decadal timescales. This study highlights the utility and shortcomings of estuarine geomorphic models for estimating past changes in forcing mechanisms such as sediment supply and bed composition. The results further stress the inherent difficulty of simulating estuarine changes over decadal timescales due to changes in configuration, benthic composition, and anthropogenic forcing such as dredging and channelization.

  16. Increase of Long-chain Branching by Thermo-oxidative Treatment of LDPE

    NASA Astrophysics Data System (ADS)

    Rolón-Garrido, Víctor H.; Luo, Jinji; Wagner, Manfred H.

    2011-07-01

    Low-density polyethylene (LDPE) was exposed to thermal and thermo-oxidative treatment at 170 °C, and subsequently characterized by linear-viscoelastic measurements and in uniaxial extension. The Molecular Stress Function (MSF) model was used to quantify the measured elongational viscosities. For the thermally treated samples, exposure times between 2 and 6 hours were applied. Formation of long-chain branching (LCB) was found to occur only during the first two hours of thermal treatment. At longer exposure times, no difference in the level of strain hardening was observed. This was quantified by use of the MSF model: the nonlinear parameter f_max^2 increased from f_max^2 = 14 for the virgin sample to f_max^2 = 22 for the samples thermally treated between 2 and 6 hours. For the thermo-oxidatively treated samples, which were exposed to air during thermal treatment between 30 and 90 minutes, the level of strain hardening increases drastically up to f_max^2 = 55 with increasing exposure times from 30 up to 75 min due to LCB formation, and then decreases for an exposure time of 90 minutes due to chain scission dominating LCB formation. The nonlinear parameter β of the MSF model was found to be β = 2 for all samples, indicating that the general type of the random branching structure remains the same under all thermal conditions. Consequently, only the parameter f_max^2 of the MSF model and the linear-viscoelastic spectra were required to describe the experimental observations quantitatively. The strain hardening index, which is sometimes used to quantify strain hardening, follows accurately the trend of the MSF model parameter f_max^2.

  17. A novel model of magnetorheological damper with hysteresis division

    NASA Astrophysics Data System (ADS)

    Yu, Jianqiang; Dong, Xiaomin; Zhang, Zonglun

    2017-10-01

    Due to the complex nonlinearity of magnetorheological (MR) behavior, the modeling of MR dampers is a challenge, and a simple and effective MR damper model remains a work in progress. A novel model of the MR damper is proposed in this study, based on a force-velocity hysteresis division method. With this division idea, a typical hysteresis loop of an MR damper can be divided into two curves: one is the backbone curve and the other is the branch curve. Exponential-family functions that capture the characteristics of the two curves simplify the model and improve the identification efficiency. To illustrate and validate the novel phenomenological model, a dual-end MR damper is designed and tested. Based on the experimental data, the characteristics of the two curves are investigated. To simplify parameter identification and obtain reversibility, the maximum force part, the non-dimensional backbone part and the non-dimensional branch part are derived from the two curves. The maximum force part and the non-dimensional parts are combined multiplicatively. The maximum force part is dependent on the current and the maximum velocity. The non-dominated sorting genetic algorithm II (NSGA-II), based on design of experiments (DOE), is employed to identify the parameters of the normalized shape functions. Comparative analysis is conducted based on the identification results. The analysis shows that the novel model, with few identification parameters, has higher accuracy and better predictive ability.
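The abstract does not give the identified shape functions, so the following is only an illustrative sketch of the division idea: an odd, saturating backbone curve plus a direction-dependent branch curve, both non-dimensional and scaled by a current- and peak-velocity-dependent maximum force. All functional forms and coefficients here are assumptions for illustration, not the paper's identified model.

```python
import math

def mr_damper_force(v, v_max, current, direction):
    """Illustrative hysteresis-division damper model.

    direction = +1 on one half of the hysteresis loop, -1 on the other;
    backbone and branch are exponential-family shapes normalized by the
    maximum force. Coefficients are hypothetical.
    """
    # hypothetical maximum-force part, dependent on current and peak velocity
    f_max = (200.0 + 800.0 * current) * (1.0 + 0.1 * v_max)
    u = v / v_max                                    # non-dimensional velocity
    backbone = math.tanh(3.0 * u)                    # odd, saturating backbone
    branch = 0.2 * (1.0 - math.exp(-2.0 * abs(u)))   # hysteresis half-width
    return f_max * (backbone + direction * branch)
```

Splitting the loop this way keeps each curve single-valued in velocity, which is what makes the identification tractable.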

  18. Cancer heterogeneity and multilayer spatial evolutionary games.

    PubMed

    Świerniak, Andrzej; Krześlak, Michał

    2016-10-13

    Evolutionary game theory (EGT) has been widely used to simulate tumour processes. In almost all studies on EGT models, analysis is limited to two or three phenotypes. Our model contains four main phenotypes. Moreover, in a standard approach only heterogeneity of populations is studied, while cancer cells remain homogeneous. The multilayer approach proposed in this paper enables the study of heterogeneity of single cells. In the extended model presented in this paper we consider four strategies (phenotypes) that can arise by mutations. We propose multilayer spatial evolutionary games (MSEG) played on multiple 2D lattices corresponding to the possible phenotypes. This enables simulation and investigation of heterogeneity at the player level in addition to the population level. Moreover, it allows modelling of interactions between arbitrarily many phenotypes resulting from mixtures of the basic traits. Different equilibrium points and scenarios (monomorphic and polymorphic populations) have been achieved depending on model parameters and the type of game played. However, there is a possibility of a stable quadromorphic population in MSEG games for the same set of parameters as for the mean-field game. The model assumes the existence of four possible phenotypes (strategies) in the population of cells that make up the tumour. Various parameters and relations between cells lead to complex analysis of this model and give diverse results. One of them is the possibility of stable coexistence of different tumour cells within the population, representing an almost arbitrary mixture of the basic phenotypes. This article was reviewed by Tomasz Lipniacki, Urszula Ledzewicz and Jacek Banasiak.

  19. Systems biology as a conceptual framework for research in family medicine; use in predicting response to influenza vaccination.

    PubMed

    Majnarić-Trtica, Ljiljana; Vitale, Branko

    2011-10-01

    To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge, because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters, selected in this way, were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that by using a systems biology approach it is possible to answer unresolved complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.
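A minimal sketch of the final modelling step described above (logistic regression on selected health parameters): the predictors, outcome rule, and data below are entirely synthetic, and the paper's data-mining feature-selection stage is omitted.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression (illustrative, not the
    paper's actual model or variables)."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi                 # gradient of the log-loss
            for j in range(n):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# hypothetical predictors: [scaled age, prior-exposure flag]; synthetic outcome
random.seed(0)
X = [[random.random(), random.randint(0, 1)] for _ in range(100)]
y = [1 if xi[1] == 1 and xi[0] < 0.7 else 0 for xi in X]
w, b = train_logistic(X, y)
acc = sum((predict(w, b, xi) > 0.5) == (yi == 1) for xi, yi in zip(X, y)) / len(X)
```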

  20. Revisions to some parameters used in stochastic-method simulations of ground motion

    USGS Publications Warehouse

    Boore, David; Thompson, Eric M.

    2015-01-01

    The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physical‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.
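As a point of reference for how such an amplitude spectrum is parameterized, a single-corner Brune-type acceleration source spectrum can be written as below; note that the cited updates actually use a generalized double-corner-frequency model, and the constant c here stands in for the path, site, and units terms.

```python
import math

def brune_spectrum(f, m0, fc, c=1.0):
    """Single-corner acceleration source spectrum (illustrative):
    displacement spectrum m0 / (1 + (f/fc)^2), times (2*pi*f)^2.
    Grows as f^2 at low frequency and flattens above the corner fc."""
    return c * (2.0 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
```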

  1. Battle Damage Modeling

    DTIC Science & Technology

    2010-05-01

    has been an increasing move towards armor systems which are both structural and protection components at the same time. Analysis of material response...the materials can move. As the FE analysis progresses the component will move while the mesh remains motionless (Figure 4). Individual nodes and cells...this parameter. This subroutine needs many inputs, such as the speed of sound in the material , the FE size mesh and the safety factor, which prevents

  2. The Rényi entropy H2 as a rigorous, measurable lower bound for the entropy of the interaction region in multi-particle production processes

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-10-01

    A model-independent lower bound on the entropy S of the multi-particle system produced in high energy collisions, provided by the measurable Rényi entropy H2, is shown to be very effective. Estimates show that the ratio H2/S remains close to one half for all realistic values of the parameters.

  3. Risk-based management of invading plant disease.

    PubMed

    Hyatt-Twynam, Samuel R; Parnell, Stephen; Stutt, Richard O J H; Gottwald, Tim R; Gilligan, Christopher A; Cunniffe, Nik J

    2017-05-01

    Effective control of plant disease remains a key challenge. Eradication attempts often involve removal of host plants within a certain radius of detection, targeting asymptomatic infection. Here we develop and test potentially more effective, epidemiologically motivated, control strategies, using a mathematical model previously fitted to the spread of citrus canker in Florida. We test risk-based control, which preferentially removes hosts expected to cause a high number of infections in the remaining host population. Removals then depend on past patterns of pathogen spread and host removal, which might be nontransparent to affected stakeholders. This motivates a variable radius strategy, which approximates risk-based control via removal radii that vary by location, but which are fixed in advance of any epidemic. Risk-based control outperforms variable radius control, which in turn outperforms constant radius removal. This result is robust to changes in disease spread parameters and initial patterns of susceptible host plants. However, efficiency degrades if epidemiological parameters are incorrectly characterised. Risk-based control including additional epidemiology can be used to improve disease management, but it requires good prior knowledge for optimal performance. This focuses attention on gaining maximal information from past epidemics, on understanding model transferability between locations and on adaptive management strategies that change over time. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  4. Modeling the human as a controller in a multitask environment

    NASA Technical Reports Server (NTRS)

    Govindaraj, T.; Rouse, W. B.

    1978-01-01

    Modeling the human as a controller of slowly responding systems with preview is considered. Along with control tasks, discrete noncontrol tasks occur at irregular intervals. In multitask situations such as these, it has been observed that humans tend to apply piecewise constant controls. It is believed that the magnitude of controls and the durations for which they remain constant are dependent directly on the system bandwidth, preview distance, complexity of the trajectory to be followed, and nature of the noncontrol tasks. A simple heuristic model of human control behavior in this situation is presented. The results of a simulation study, whose purpose was determination of the sensitivity of the model to its parameters, are discussed.

  5. Closed-loop stability of linear quadratic optimal systems in the presence of modeling errors

    NASA Technical Reports Server (NTRS)

    Toda, M.; Patel, R.; Sridhar, B.

    1976-01-01

    The well-known stabilizing property of linear quadratic state feedback design is utilized to evaluate the robustness of a linear quadratic feedback design in the presence of modeling errors. Two general conditions are obtained for allowable modeling errors such that the resulting closed-loop system remains stable. One of these conditions is applied to obtain two more particular conditions which are readily applicable to practical situations where a designer has information on the bounds of modeling errors. Relations are established between the allowable parameter uncertainty and the weighting matrices of the quadratic performance index, thereby enabling the designer to select appropriate weighting matrices to attain a robust feedback design.

  6. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
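A minimal sketch of the idea (particles carry the parameters of a degradation model, are filtered against the observed health index, and are then extrapolated to an alert threshold); the random-walk degradation model, noise levels, and data here are assumptions for illustration, not the paper's identified pump model.

```python
import math
import random

def estimate_rul(history, threshold, n_particles=500, horizon=200):
    """Sequential-Monte-Carlo sketch of remaining-useful-life estimation."""
    random.seed(42)
    # particles: (current index value, drift rate), drift prior is uniform
    particles = [(history[0], random.uniform(0.0, 0.2)) for _ in range(n_particles)]
    for obs in history[1:]:
        # propagate, then weight each particle by closeness to the observation
        particles = [(x + d + random.gauss(0, 0.05), d) for x, d in particles]
        weights = [math.exp(-((x - obs) ** 2) / (2 * 0.1 ** 2)) for x, _ in particles]
        total = sum(weights) or 1.0
        particles = random.choices(particles,
                                   weights=[w / total for w in weights],
                                   k=n_particles)      # multinomial resampling
    # extrapolate each particle to the alert threshold
    ruls = []
    for x, d in particles:
        t = 0
        while x < threshold and t < horizon:
            x += d
            t += 1
        ruls.append(t)
    return sum(ruls) / len(ruls)

# synthetic degradation index drifting upward at ~0.1 per step
history = [0.1 * t for t in range(20)]
rul = estimate_rul(history, threshold=4.0)
```

As in the paper's setup, the estimate should tighten as more of the degradation history becomes available.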

  7. Inference of epidemiological parameters from household stratified data

    PubMed Central

    Walker, James N.; Ross, Joshua V.

    2017-01-01

    We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456

  8. Two-Dimensional Wetting Transition Modeling with the Potts Model

    NASA Astrophysics Data System (ADS)

    Lopes, Daisiane M.; Mombach, José C. M.

    2017-12-01

    A droplet of a liquid deposited on a surface structured in pillars may have two states of wetting: (1) Cassie-Baxter (CB), in which the liquid remains on top of the pillars, also known as heterogeneous wetting, or (2) Wenzel, in which the liquid completely fills the cavities of the surface, also known as homogeneous wetting. Studies show that between these two states there is an energy barrier that, when overcome, results in a transition of states. The transition can be achieved by changes in geometry parameters of the surface, by vibrations of the surface or by evaporation of the liquid. In this paper, we present a comparison of two-dimensional simulations of the Cassie-Wenzel transition on pillar-structured surfaces using the cellular Potts model (CPM) with studies performed by Shahraz et al. In our work, we determine a transition diagram by varying surface parameters such as the interpillar distance (G) and the pillar height (H). Our results were compared to those obtained by Shahraz et al., obtaining good agreement.

  9. Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination

    PubMed Central

    Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos

    2010-01-01

    The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093

  10. Species-Independent Modeling of High-Frequency Ultrasound Backscatter in Hyaline Cartilage.

    PubMed

    Männicke, Nils; Schöne, Martin; Liukkonen, Jukka; Fachet, Dominik; Inkinen, Satu; Malo, Markus K; Oelze, Michael L; Töyräs, Juha; Jurvelin, Jukka S; Raum, Kay

    2016-06-01

    Apparent integrated backscatter (AIB) is a common ultrasound parameter used to assess cartilage matrix degeneration. However, the specific contributions of chondrocytes, proteoglycan and collagen to AIB remain unknown. To reveal these relationships, this work examined biopsies and cross sections of human, ovine and bovine cartilage with 40-MHz ultrasound biomicroscopy. Site-matched estimates of collagen concentration, proteoglycan concentration, collagen orientation and cell number density were employed in quasi-least-squares linear regression analyses to model AIB. A positive correlation (R^2 = 0.51, p < 10^-4) between AIB and a combination model of cell number density and collagen concentration was obtained for collagen orientations approximately perpendicular (>70°) to the sound beam direction. These findings indicate causal relationships between AIB and cartilage structural parameters and could aid in more sophisticated future interpretations of ultrasound backscatter. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  11. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed; it includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
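The SCS-CN quantities that CN4GA uses as calibration targets follow from the standard curve-number relations; a short sketch (the event depth and CN below are arbitrary examples, and the Green-Ampt calibration step itself is not shown):

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """SCS-CN direct runoff depth (mm) for an event rainfall depth p_mm.

    Potential retention S in mm uses the standard conversion
    S = 25400/CN - 254; initial abstraction Ia = lam * S (lam = 0.2 by
    convention); runoff Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Ia and the total runoff volume from these relations are the targets that
# CN4GA matches when calibrating the Green-Ampt hydraulic conductivity
q = scs_cn_runoff(p_mm=60.0, cn=75)
```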

  12. Bayesian estimation inherent in a Mexican-hat-type neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
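For context, the linear-Gaussian Bayesian estimator that the network dynamics are shown to implement reduces, in its simplest one-dimensional filtering form, to the Kalman update below; the parameters are illustrative and this is not the paper's network model.

```python
def kalman_1d(observations, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """1-D Kalman filter for a random-walk state with Gaussian observations,
    the textbook linear-Gaussian Bayesian estimator (illustrative parameters).
    q: process-noise variance, r: observation-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        p += q                  # predict: variance grows by process noise
        k = p / (p + r)         # gain balances prior vs. observation precision
        x += k * (z - x)        # update the estimate toward the observation
        p *= (1.0 - k)          # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

obs = [1.0, 1.1, 0.9, 1.05, 0.95] * 4    # noisy observations of a constant
est = kalman_1d(obs)
```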

  13. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
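The authors' scatter-search implementation is not reproduced here; as a self-contained stand-in for the global-optimization setting the abstract describes, the sketch below fits two parameters of a synthetic logistic-growth model by multistart stochastic local search (all model choices and tuning constants are assumptions for illustration).

```python
import random

def simulate(r, k, x0=0.1, dt=0.1, steps=50):
    """Euler integration of logistic growth dx/dt = r*x*(1 - x/k)."""
    xs, x = [], x0
    for _ in range(steps):
        x += dt * r * x * (1.0 - x / k)
        xs.append(x)
    return xs

def sse(params, data):
    """Sum-of-squares misfit between simulated and 'observed' trajectories."""
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), data))

def multistart_search(data, n_starts=30, n_iters=200):
    """Multistart stochastic local search over (r, k); a simple stand-in for
    scatter-search-style global optimization of a multimodal misfit."""
    random.seed(7)
    best, best_cost = None, float("inf")
    for _ in range(n_starts):
        p = [random.uniform(0.1, 2.0), random.uniform(0.5, 5.0)]
        cost, step = sse(p, data), 0.2
        for _ in range(n_iters):
            cand = [max(1e-3, pi + random.gauss(0, step)) for pi in p]
            c = sse(cand, data)
            if c < cost:
                p, cost = cand, c
            else:
                step *= 0.99          # shrink the neighbourhood as search stalls
        if cost < best_cost:
            best, best_cost = p, cost
    return best, best_cost

data = simulate(0.8, 2.0)             # synthetic "observations", true (r, k)
params, cost = multistart_search(data)
```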

  14. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  15. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  16. Improving a regional model using reduced complexity and parameter estimation.

    PubMed

    Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  17. Indirect adaptive output feedback control of a biorobotic AUV using pectoral-like mechanical fins.

    PubMed

    Naik, Mugdha S; Singh, Sahjendra N; Mittal, Rajat

    2009-06-01

This paper treats the question of servoregulation of autonomous underwater vehicles (AUVs) in the yaw plane using pectoral-like mechanical fins. The fins attached to the vehicle have oscillatory swaying and yawing motion. The bias angle of the angular motion of the fin is used for the purpose of control. Of course, the design approach considered here is applicable to AUVs for other choices of oscillation patterns of the fins, which produce periodic forces and moments. It is assumed that the vehicle parameters, hydrodynamic coefficients, as well as the fin forces and moments are unknown. For the trajectory control of the yaw angle, a sampled-data indirect adaptive control system using output (yaw angle) feedback is derived. The control system has a modular structure, which includes a parameter identifier and a stabilizer. For the control law derivation, an internal model of the exosignals (reference signal (constant or ramp) and constant disturbance) is included. Unlike the direct adaptive control scheme, the derived control law is applicable to minimum as well as nonminimum phase biorobotic AUVs (BAUVs). This is important, because for most of the fin locations on the vehicle, the model is nonminimum phase. In the closed-loop system, the yaw angle trajectory tracking error converges to zero and the remaining state variables remain bounded. Simulation results are presented which show that the derived modular control system accomplishes precise set point yaw angle control and turning maneuvers in spite of the uncertainties in the system parameters using only yaw angle feedback.

  18. Toward Detection of Exoplanetary Rings via Transit Photometry: Methodology and a Possible Candidate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aizawa, Masataka; Masuda, Kento; Suto, Yasushi

The detection of a planetary ring of exoplanets remains one of the most attractive, but challenging, goals in the field of exoplanetary science. We present a methodology that implements a systematic search for exoplanetary rings via transit photometry of long-period planets. This methodology relies on a precise integration scheme that we develop to compute a transit light curve of a ringed planet. We apply the methodology to 89 long-period planet candidates from the Kepler data so as to estimate, and/or set upper limits on, the parameters of possible rings. While the majority of our samples do not have sufficient signal-to-noise ratios (S/Ns) to place meaningful constraints on ring parameters, we find that six systems with higher S/Ns are inconsistent with the presence of a ring larger than 1.5 times the planetary radius, assuming a grazing orbit and a tilted ring. Furthermore, we identify five preliminary candidate systems whose light curves exhibit ring-like features. After removing four false positives due to the contamination from nearby stars, we identify KIC 10403228 as a reasonable candidate for a ringed planet. A systematic parameter fit of its light curve with a ringed planet model indicates two possible solutions corresponding to a Saturn-like planet with a tilted ring. There also remain two other possible scenarios accounting for the data: a circumstellar disk and a hierarchical triple. Due to large uncertain factors, we cannot choose one specific model among the three.

  19. Comparative analysis of the effects of electron and hole capture on the power characteristics of a semiconductor quantum-well laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolova, Z. N., E-mail: Zina.Sokolova@mail.ioffe.ru; Pikhtin, N. A.; Tarasov, I. S.

The operating characteristics of a semiconductor quantum-well laser calculated using three models are compared. These models are (i) a model not taking into account differences between the electron and hole parameters and using the electron parameters for both types of charge carriers; (ii) a model, which does not take into account differences between the electron and hole parameters and uses the hole parameters for both types of charge carriers; and (iii) a model taking into account the asymmetry between the electron and hole parameters. It is shown that, at the same velocity of electron and hole capture into an unoccupied quantum well, the laser characteristics, obtained using the three models, differ considerably. These differences are due to a difference between the filling of the electron and hole subbands in a quantum well. The electron subband is more occupied than the hole subband. As a result, at the same velocities of electron and hole capture into an empty quantum well, the effective electron-capture velocity is lower than the effective hole-capture velocity. Specifically, it is shown that for the laser structure studied the hole-capture velocity of 5 × 10⁵ cm/s into an empty quantum well and the corresponding electron-capture velocity of 3 × 10⁶ cm/s into an empty quantum well describe the rapid capture of these carriers, at which the light–current characteristic of the laser remains virtually linear up to high pump-current densities. However, an electron-capture velocity of 5 × 10⁵ cm/s and a corresponding hole-capture velocity of 8.4 × 10⁴ cm/s describe the slow capture of these carriers, causing significant sublinearity in the light–current characteristic.

  20. Modeling flash floods in ungauged mountain catchments of China: A decision tree learning approach for parameter regionalization

    NASA Astrophysics Data System (ADS)

    Ragettli, S.; Zhou, J.; Wang, H.; Liu, C.; Guo, L.

    2017-12-01

Flash floods in small mountain catchments are one of the most frequent causes of loss of life and property from natural hazards in China. Hydrological models can be a useful tool for the anticipation of these events and the issuing of timely warnings. One of the main challenges of setting up such a system is finding appropriate model parameter values for ungauged catchments. Previous studies have shown that the transfer of parameter sets from hydrologically similar gauged catchments is one of the best performing regionalization methods. However, a remaining key issue is the identification of suitable descriptors of similarity. In this study, we use decision tree learning to explore parameter set transferability in the full space of catchment descriptors. For this purpose, a semi-distributed rainfall-runoff model is set up for 35 catchments in ten Chinese provinces. Hourly runoff data from a total of 858 storm events are used to calibrate the model and to evaluate the performance of parameter set transfers between catchments. We then present a novel technique that uses the splitting rules of classification and regression trees (CART) for finding suitable donor catchments for ungauged target catchments. The ability of the model to detect flood events in assumed ungauged catchments is evaluated in a series of leave-one-out tests. We show that CART analysis increases the probability of detection of 10-year flood events in comparison to a conventional measure of physiographic-climatic similarity by up to 20%. Decision tree learning can outperform other regionalization approaches because it generates rules that optimally consider spatial proximity and physical similarity. Spatial proximity can be used as a selection criterion but is skipped when no similar gauged catchments are in the vicinity. 
We conclude that the CART regionalization concept is particularly suitable for implementation in sparsely gauged and topographically complex environments where a proximity-based regionalization concept is not applicable.
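
As a rough illustration of the splitting rules that CART relies on, the sketch below finds the single descriptor threshold that minimises Gini impurity for a toy donor-selection problem. The descriptor (drainage area), the data values, and the success labels are all hypothetical; a real application would grow a full tree over many physiographic-climatic descriptors.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 'transfer successful' labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(values, labels):
    """Exhaustively find the single descriptor threshold that minimises
    the weighted Gini impurity -- the splitting rule a CART tree learns
    at one node.  Returns (threshold, impurity)."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical training data: a catchment descriptor (drainage area, km^2)
# versus whether transferring a donor's parameter set gave acceptable
# flood simulations (1) or not (0).
area = [12, 18, 25, 40, 55, 80]
success = [1, 1, 1, 0, 0, 0]
threshold, impurity = best_split(area, success)
```

In this toy data the split "area <= 25 km^2" separates the two classes perfectly (impurity 0), which is exactly the kind of rule a regionalization tree would use to pick donor catchments.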

  1. Strong feedback limit of the Goodwin circadian oscillator

    NASA Astrophysics Data System (ADS)

    Woller, Aurore; Gonze, Didier; Erneux, Thomas

    2013-03-01

The three-variable Goodwin model constitutes a prototypical oscillator based on a negative feedback loop. It was used as a minimal model for circadian oscillations. Other core models for circadian clocks are variants of the Goodwin model. The Goodwin oscillator also appears in many studies of coupled oscillator networks because of its relative simplicity compared to other biophysical models involving a large number of variables and parameters. Because the synchronization properties of Goodwin oscillators still remain difficult to explore mathematically, further simplifications of the Goodwin model have been sought. In this paper, we investigate the strong negative feedback limit of Goodwin equations by using asymptotic techniques. We find that Goodwin oscillations approach a sequence of decaying exponentials that can be described in terms of a single-variable leaky integrate-and-fire model.
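
A minimal sketch of the three-variable Goodwin system described above, integrated with forward Euler. The parameter values (Hill coefficient n = 10, unit production rates, equal decay rates of 0.5) are illustrative choices, not taken from the paper; the Hill exponent n sets the feedback strength, and in the strong-feedback limit n → ∞ the repression term becomes an on/off switch.

```python
def goodwin_step(state, dt, n=10.0, v0=1.0, k=0.5):
    """One forward-Euler step of a three-variable Goodwin oscillator.

    x: mRNA, y: protein, z: repressor; z represses x production through
    a Hill function with cooperativity n.  All parameter values here are
    illustrative, not from the paper.
    """
    x, y, z = state
    dx = v0 / (1.0 + z ** n) - k * x
    dy = x - k * y
    dz = y - k * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

# Integrate from an arbitrary positive initial condition.
state = (0.1, 0.2, 0.3)
xs = []
for _ in range(20000):
    state = goodwin_step(state, 0.01)
    xs.append(state[0])
```

The trajectory stays positive and bounded (x is capped near v0/k); plotting `xs` against time shows the relaxation-type cycles whose strong-feedback limit the paper analyses.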

  2. a Comparison Between Two Ols-Based Approaches to Estimating Urban Multifractal Parameters

    NASA Astrophysics Data System (ADS)

    Huang, Lin-Shan; Chen, Yan-Guang

Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among various pending issues, the most significant one is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using an empirical study and comparative analysis, we demonstrate how to utilize the adequate linear regression to calculate multifractal parameters. The OLS regression analysis has two different approaches. One is that the intercept is fixed to zero, and the other is that the intercept is not limited. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain scale range of moment order, while the common regression method often leads to abnormal multifractal parameter values. A conclusion can be reached that fixing the intercept to zero is a more advisable regression method for multifractal parameter estimation, and the shapes of spectral curves and value ranges of fractal parameters can be employed to diagnose urban problems. This research is helpful for scientists to understand multifractal models and apply a more reasonable technique to multifractal parameter calculations.
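
The two regression approaches differ only in whether the intercept is estimated freely or pinned at zero. A minimal sketch, on hypothetical log-log scaling data where the true slope (playing the role of a fractal dimension) is 1.8:

```python
import math

def ols(x, y, zero_intercept=False):
    """Ordinary least squares fit of y on x; returns (intercept, slope).

    With zero_intercept=True the fit is y = b*x (intercept fixed at 0),
    the estimator the abstract recommends for multifractal scaling
    relations; otherwise the usual y = a + b*x fit is used.
    """
    n = len(x)
    if zero_intercept:
        b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
        return 0.0, b
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical log-log scaling data: log(measure) versus log(scale),
# with a true scaling exponent of 1.8 and no noise.
xs = [math.log(s) for s in (1, 2, 4, 8, 16)]
ys = [1.8 * xi for xi in xs]
a0, b0 = ols(xs, ys, zero_intercept=True)   # zero-intercept fit
a1, b1 = ols(xs, ys)                        # free-intercept fit
```

On clean data both estimators recover the slope; the paper's point is that on real urban data, over the appropriate range of moment orders, the zero-intercept form yields better-behaved parameter spectra.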

  3. Soil Parameters for Representing a Karst Geologic Terrain in the Noah Land-Surface Model over Tennessee and Kentucky

    NASA Astrophysics Data System (ADS)

    Sullivan, Z.; Fan, X.

    2015-12-01

Currently, the Noah Land-Surface Model (Noah-LSM) coupled with the Weather Research and Forecasting (WRF) model does not have a representation of the physical behavior of a karst terrain, found in a large area of Tennessee and Kentucky and 25% of land area worldwide. The soluble nature of the bedrock within a karst geologic terrain allows for the formation of caverns, joints, fissures, sinkholes, and underground streams which affect the hydrological behavior of the region. The Highland Rim of Tennessee and the Pennyroyal Plateau and Bluegrass region of Kentucky make up a larger karst area known as the Interior Low Plateau. The highly weathered upper portion of the karst terrain, known as the epikarst, allows for more rapid transport of water through the system. For this study, hydrological aspects, such as bedrock porosity and hydraulic conductivity, were chosen within this region in order to determine the most representative subsurface parameters for the Noah-LSM. These values, along with similar proxy values, were used to calculate and represent the remaining eight parameters within the SOILPARM.TBL for the WRF model. Hydraulic conductivity values for the karst bedrock within this region vary between around 10⁻⁷ and 10⁻⁵ m s⁻¹. A sand and clay soil type was used along with bedrock parameters to determine an average soil parameter type for the epikarst bedrock located within this region. Results from this study show parameters for an epikarst bedrock type displaying higher water transport through the system, similar to that of a sandy soil type, with a water retention similar to that of a loam-type soil. The physical nature of epikarst may lead to a decrease in latent heat values over this region and an increase in sensible heat values. This, in turn, may affect boundary layer growth, which could lead to convective development. 
Future modeling work can be conducted using these values by way of coupling the soil parameters with the karst regions of the Tennessee/Kentucky area.

  4. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 exploits the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary for different likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions depict an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. 
Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of model parameters credibly and can therefore be employed for further applications.
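
To make the informal/formal distinction concrete, the snippet below implements the Nash-Sutcliffe efficiency (informal, L1) and a Gaussian log-likelihood (the least-squares-based formal form underlying L5), under the usual assumption of independent, homoscedastic residuals. The hydrograph values are hypothetical, not from the Tamar watershed.

```python
import math

def nash_sutcliffe(obs, sim):
    """Informal likelihood: Nash-Sutcliffe efficiency.  1.0 is a perfect
    fit; 0.0 means the simulation is no better than the mean of the
    observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def gaussian_loglik(obs, sim, sigma):
    """Formal likelihood: Gaussian log-likelihood assuming independent,
    homoscedastic residuals with standard deviation sigma."""
    n = len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return -0.5 * n * math.log(2.0 * math.pi * sigma ** 2) \
        - sse / (2.0 * sigma ** 2)

obs = [2.0, 4.0, 6.0, 8.0]   # hypothetical flood hydrograph (m^3/s)
sim = [2.5, 3.5, 6.5, 7.5]   # hypothetical model output
```

The likelihood function L7 of the abstract further replaces the independence assumption with a first-order AR model of the residuals, which this sketch omits.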

  5. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  6. Predicting perturbation patterns from the topology of biological networks.

    PubMed

    Santolini, Marc; Barabási, Albert-László

    2018-06-20

High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase the recovery rate of the true perturbation patterns up to 80%. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation spread modeling, with direct implications for medicine and drug development.
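
A minimal sketch of a distance-based perturbation model of the kind described above: the predicted impact on each reachable node is assumed to decay with its shortest-path distance from the perturbed node. The 1/2^d decay law and the toy network are our assumptions for illustration, not the paper's.

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path lengths from `source` in an unweighted network."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def predicted_impact(adj, perturbed):
    """Distance-based perturbation model: impact decays as 1/2^d with
    network distance d from the perturbed node (illustrative decay law)."""
    return {node: 0.5 ** d
            for node, d in bfs_distances(adj, perturbed).items()}

# Toy directed interaction network (hypothetical): A -> B -> C, A -> D
adj = {"A": ["B", "D"], "B": ["C"], "C": [], "D": []}
impact = predicted_impact(adj, "A")
```

Ranking nodes by this score already reproduces the qualitative prediction of the abstract: nodes topologically closer to the perturbation are predicted to respond more strongly.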

  7. Geometric dependence of the parasitic components and thermal properties of HEMTs

    NASA Astrophysics Data System (ADS)

    Vun, Peter V.; Parker, Anthony E.; Mahon, Simon J.; Fattorini, Anthony

    2007-12-01

For integrated circuit design up to 50 GHz and beyond, accurate models of the transistor access structures and intrinsic structures are necessary for prediction of circuit performance. The circuit design process relies on optimising transistor geometry parameters such as unit gate width, number of gates, number of vias and gate-to-gate spacing. So the relationship between the electrical and thermal parasitic components in transistor access structures and transistor geometry is important to understand when developing models for transistors of differing geometries. Current approaches to describing the geometric dependence of models are limited to empirical methods which only describe a finite set of geometries and only include unit gate width and number of gates as variables. A better understanding of the geometric dependence is seen as a way to provide scalable models that remain accurate for continuous variation of all geometric parameters. Understanding the distribution of parasitic elements between the manifold, the terminal fingers, and the reference plane discontinuities is an issue identified as important in this regard. Examination of dc characteristics and thermal images indicates that gate-to-gate thermal coupling and increased thermal conductance at the gate ends affect the device's total thermal conductance. Consequently, a distributed thermal model is proposed which accounts for these effects. This work is seen as a starting point for developing comprehensive scalable models that will allow RF circuit designers to optimise circuit performance parameters such as total die area, maximum output power, power-added efficiency (PAE) and channel temperature/lifetime.

  8. Predicting the Impact of Multiwalled Carbon Nanotubes on the Cement Hydration Products and Durability of Cementitious Matrix Using Artificial Neural Network Modeling Technique

    PubMed Central

    Fakhim, Babak; Hassani, Abolfazl; Rashidi, Alimorad; Ghodousi, Parviz

    2013-01-01

In this study, the feasibility of using artificial neural network (ANN) modeling to predict the effect of MWCNTs on the amount of cement hydration products and on the quality of the hydration-product microstructure of cement paste was investigated. To determine the amount of cement hydration products, thermogravimetric analysis (TGA) was used. Two critical parameters of the TGA test are PHP loss and CH loss. In order to model the TGA test results, ANN modeling was performed on these parameters separately. In this study, 60% of the data are used for model calibration and the remaining 40% are used for model verification. Based on the highest efficiency coefficient and the lowest root mean square error, the best ANN model was chosen. The results of the TGA test implied that cement hydration is enhanced in the presence of the optimum percentage (0.3 wt%) of MWCNT. Moreover, since the efficiency coefficient of the modeling results of CH and PHP loss in both the calibration and verification stages was more than 0.96, it was concluded that the ANN could be used as an accurate tool for modeling the TGA results. Another finding of this study was that the ANN predictions at later ages were more precise. PMID:24489487

  9. Topsoil structure stability in a restored floodplain: Impacts of fluctuating water levels, soil parameters and ecosystem engineers.

    PubMed

    Schomburg, A; Schilling, O S; Guenat, C; Schirmer, M; Le Bayon, R C; Brunner, P

    2018-10-15

Ecosystem services provided by floodplains are strongly controlled by the structural stability of soils. The development of a stable structure in floodplain soils is affected by a complex and poorly understood interplay of hydrological, physico-chemical and biological processes. This paper aims at analysing the effects of fluctuating groundwater levels and of soil physico-chemical and biological parameters on soil structure stability in a restored floodplain. Water level fluctuations in the soil are modelled using a numerical surface-water-groundwater flow model and correlated to soil physico-chemical parameters and abundances of plants and earthworms. Causal relations and multiple interactions between the investigated parameters are tested through structural equation modelling (SEM). Fluctuating water levels in the soil did not directly affect the topsoil structure stability, but did so indirectly by affecting plant roots and soil parameters that in turn determine topsoil structure stability. These relations remain significant for mean annual days of complete and partial (>25%) water saturation. Ecosystem functioning of a restored floodplain might already be affected by the fluctuation of groundwater levels alone, and not only through complete flooding by surface water during a flood period. Surprisingly, abundances of earthworms did not show any relation to other variables in the SEM. These findings emphasise that earthworms have efficiently adapted to periodic stress and harsh environmental conditions. Variability of the topsoil structure stability is thus more strongly driven by the influence of fluctuating water levels on plants than by the abundance of earthworms. This knowledge about the functional network of soil engineering organisms, soil parameters and fluctuating water levels and how they affect soil structural stability is of fundamental importance to define management strategies of near-natural or restored floodplains in the future. 
Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Lopsided gauge mediation

    NASA Astrophysics Data System (ADS)

    de Simone, Andrea; Franceschini, Roberto; Giudice, Gian Francesco; Pappadopulo, Duccio; Rattazzi, Riccardo

    2011-05-01

It has been recently pointed out that the unavoidable tuning among supersymmetric parameters required to raise the Higgs boson mass beyond its experimental limit opens up new avenues for dealing with the so-called μ-Bμ problem of gauge mediation. In fact, it allows for accommodating, with no further parameter tuning, large values of Bμ and of the other Higgs-sector soft masses, as predicted in models where both μ and Bμ are generated at one-loop order. This class of models, called Lopsided Gauge Mediation, offers an interesting alternative to conventional gauge mediation and is characterized by a strikingly different phenomenology, with light higgsinos, very large Higgs pseudoscalar mass, and moderately light sleptons. We discuss general parametric relations involving the fine-tuning of the model and various observables such as the chargino mass and the value of tan β. We build an explicit model and we study the constraints coming from LEP and Tevatron. We show that in spite of new interactions between the Higgs and the messenger superfields, the theory can remain perturbative up to very large scales, thus retaining gauge coupling unification.

  11. A stress sensitivity model for the permeability of porous media based on bi-dispersed fractal theory

    NASA Astrophysics Data System (ADS)

    Tan, X.-H.; Liu, C.-Y.; Li, X.-P.; Wang, H.-Q.; Deng, H.

A stress sensitivity model for the permeability of porous media based on bidispersed fractal theory is established, considering the change of the flow path, the fractal geometry approach and the mechanics of porous media. It is noted that the two fractal parameters of the porous media construction behave differently when the stress changes. The tortuosity fractal dimension of the solid cluster, DcTσ, becomes larger with an increase of stress, whereas the pore fractal dimension of the solid cluster, Dcfσ, and that of the capillary bundle, Dpfσ, remain the same with an increase of stress. The definition of normalized permeability is introduced for analyzing the impacts of stress sensitivity on permeability. The normalized permeability is related to the solid cluster tortuosity dimension, pore fractal dimension, solid cluster maximum diameter, Young's modulus and Poisson's ratio. Every parameter has a clear physical meaning without the use of empirical constants. Predictions of the model agree with the obtained experimental data. Thus, the proposed model can precisely depict the flow of fluid in porous media under stress.

  12. Charge transfer in model peptides: obtaining Marcus parameters from molecular simulation.

    PubMed

    Heck, Alexander; Woiczikowski, P Benjamin; Kubař, Tomáš; Giese, Bernd; Elstner, Marcus; Steinbrecher, Thomas B

    2012-02-23

Charge transfer within and between biomolecules remains a highly active field of biophysics. Due to the complexities of real systems, model compounds are a useful alternative to study the mechanistic fundamentals of charge transfer. In recent years, such model experiments have been underpinned by molecular simulation methods as well. In this work, we study electron hole transfer in helical model peptides by means of molecular dynamics simulations. A theoretical framework to extract Marcus parameters of charge transfer from simulations is presented. We find that the peptides form stable helical structures with sequence dependent small deviations from ideal PPII helices. We identify direct exposure of charged side chains to solvent as a cause of high reorganization energies, significantly larger than typical for electron transfer in proteins. This, together with small direct couplings, makes long-range superexchange electron transport in this system very slow. In good agreement with experiment, direct transfer between the terminal amino acid side chains can be discounted in favor of a two-step hopping process if appropriate bridging groups exist. © 2012 American Chemical Society
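
The Marcus parameters extracted from such simulations feed the standard nonadiabatic Marcus rate expression, sketched below with illustrative (not paper-derived) values. It shows the effect the abstract describes: a large reorganization energy combined with a small electronic coupling gives a very slow transfer rate.

```python
import math

KB_EV = 8.617333262e-5      # Boltzmann constant, eV/K
HBAR_EV = 6.582119569e-16   # reduced Planck constant, eV*s

def marcus_rate(coupling, lam, dg, temp=300.0):
    """Nonadiabatic Marcus rate (s^-1) for charge transfer, given the
    electronic coupling |H_AB|, reorganization energy lam, and driving
    force dg (all in eV)."""
    kt = KB_EV * temp
    prefactor = (2.0 * math.pi / HBAR_EV) * coupling ** 2
    return prefactor / math.sqrt(4.0 * math.pi * lam * kt) \
        * math.exp(-(dg + lam) ** 2 / (4.0 * lam * kt))

# Illustrative values: a small coupling with a large reorganization
# energy (as for solvent-exposed charged side chains) versus the same
# coupling with a protein-typical, smaller reorganization energy.
slow = marcus_rate(coupling=1e-4, lam=1.2, dg=0.0)
fast = marcus_rate(coupling=1e-4, lam=0.4, dg=0.0)
```

The rate is maximal in the activationless case dg = -lam, and for fixed dg it drops steeply as lam grows, which is why the high reorganization energies found in these peptides suppress long-range superexchange transport.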

  13. [Mathematical model of micturition allowing a detailed analysis of free urine flowmetry].

    PubMed

    Valentini, F; Besson, G; Nelson, P

    1999-04-01

A mathematical model of micturition allowing precise analysis of uroflowmetry curves (VBN method) is described together with some of its applications. The physiology of micturition and possible diagnostic hypotheses able to explain the shape of the uroflowmetry curve can be expressed by a series of differential equations. Integration of the system allows the validity of these hypotheses to be tested by simulation. A theoretical uroflowmetry is calculated in less than 1 second and analysis of a dysuric uroflowmetry takes about 5 minutes. The efficacy of the model is due to its rapidity and the precision of the comparisons between measured and predicted values. The method has been applied to almost one thousand curves. The uroflowmetries of normal subjects are reproduced without adjustment with a quadratic error of less than 1%, while those of dysuric patients require identification of one or two adaptive parameters characteristic of the underlying disease. These parameters remain constant during the same session, but vary with the disease and/or the treatment. This model could become a tool for noninvasive urodynamic studies.

  14. Integration of Harvest and Time-to-Event Data Used to Estimate Demographic Parameters for White-tailed Deer

    NASA Astrophysics Data System (ADS)

    Norton, Andrew S.

    An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase flexibility of model parameters. I collected known fates data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e. prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e. 
integrated AAH model) about survival outside the hunting season from known fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
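    The core bookkeeping of an age-at-harvest model can be sketched with a deliberately simplified projection: abundance per age class survives a non-hunting season, suffers hunting-season mortality, and a reporting rate converts that mortality into expected age-at-harvest data. All rates and initial numbers below are invented for illustration and are not the dissertation's estimates:

```python
import numpy as np

def project_harvest(n0, s_nonhunt, s_hunt, report_rate, years):
    """Toy age-at-harvest projection. Survival is split into a
    non-hunting-season and a hunting-season component; expected reported
    harvest is hunting-season mortality times the reporting rate."""
    n = np.asarray(n0, dtype=float)
    harvests = []
    for _ in range(years):
        n_prehunt = n * s_nonhunt                      # survive to the hunt
        hunt_mortality = n_prehunt * (1.0 - s_hunt)
        harvests.append(report_rate * hunt_mortality)  # expected AAH data
        survivors = n_prehunt * s_hunt
        # age the population; recruits enter the youngest class (fixed rate here)
        n = np.concatenate(([0.6 * survivors.sum()], survivors[:-1]))
    return np.array(harvests)

h = project_harvest([1000, 600, 300], s_nonhunt=0.9, s_hunt=0.8,
                    report_rate=0.7, years=5)
```

    In the actual state-space formulation these quantities are latent and the observed harvest matrix is fit with likelihoods and hierarchical priors; the sketch only shows why reporting rates directly scale the data the model sees.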

  15. Study of charged stellar structures in f(R, T) gravity

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Siddiqa, Aisha

    2017-12-01

    This paper explores charged stellar structures whose pressure and density are related through a polytropic equation of state (p = ωρ^{σ}, where ω is the polytropic constant, p the pressure, ρ the density and σ the polytropic exponent) in the scenario of f(R,T) gravity (where R is the Ricci scalar and T is the trace of the energy-momentum tensor). The Einstein-Maxwell field equations are solved together with the hydrostatic equilibrium equation for f(R,T) = R + 2λT, where λ is the coupling constant, also called the model parameter. We discuss different features of such configurations (like pressure, mass and charge) using graphical behavior for two values of σ. It is found that the effects of the model parameter λ on the different quantities remain the same for both cases. The energy conditions are satisfied and the stellar configurations are stable in each case.

  16. A Computational Framework to Control Verification and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2010-01-01

    This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
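    Of the robustness metrics listed, the failure probability is the simplest to approximate numerically: sample the uncertain parameters from their probabilistic model and count how often a closed-loop requirement is violated. A minimal Monte Carlo sketch, with a hypothetical scalar requirement function and Gaussian parameter uncertainty (neither taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(requirement, sampler, n=100_000):
    """Monte Carlo estimate of the probability that an uncertain parameter
    vector violates a requirement expressed as g(p) <= 0."""
    p = sampler(n)
    return np.mean(requirement(p) > 0.0)

# Toy example: an uncertain loop gain k ~ N(1, 0.1) and a requirement
# g(k) = k - 1.25 <= 0, i.e. the design fails when k exceeds 1.25.
sampler = lambda n: rng.normal(1.0, 0.1, size=n)
pf = failure_probability(lambda k: k - 1.25, sampler)
```

    The upper bounds to the failure probability discussed in the paper serve the same role when sampling the verification model is too expensive.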

  17. Bio-heat transfer model of deep brain stimulation-induced temperature changes

    NASA Astrophysics Data System (ADS)

    Elwassif, Maged M.; Kong, Qingjun; Vazquez, Maribel; Bikson, Marom

    2006-12-01

    There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS-induced temperature changes. The parameters investigated include stimulation waveform, lead selection, brain tissue electrical and thermal conductivities, blood perfusion, metabolic heat generation during the stimulation and lead thermal conductivity/heat dissipation through the electrode. Our results show that clinical DBS protocols will increase the temperature of surrounding tissue by up to 0.8 °C depending on stimulation/tissue parameters.
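    The standard starting point for such finite element studies is the Pennes bio-heat equation, which balances conduction, blood perfusion and heat sources. A much cruder 1-D explicit finite-difference sketch of the same balance conveys the mechanism; every parameter value below is illustrative and not taken from the study:

```python
import numpy as np

def pennes_1d(n=101, length=0.02, dt=0.1, steps=5000,
              k=0.5, rho_c=3.7e6, w_b=8.0e3, t_a=37.0, q=2.0e4):
    """Explicit 1-D Pennes bio-heat solver:
    rho*c dT/dt = k d2T/dx2 + w_b (T_a - T) + q(x),
    with q(x) a crude stand-in for stimulation-induced heating deposited
    near an electrode at the left boundary (illustrative values only)."""
    dx = length / (n - 1)
    t = np.full(n, t_a)          # start at body temperature
    src = np.zeros(n)
    src[:5] = q                  # heat only the ~1 mm next to the electrode
    for _ in range(steps):
        lap = (t[2:] - 2.0 * t[1:-1] + t[:-2]) / dx**2
        t[1:-1] += dt / rho_c * (k * lap + w_b * (t_a - t[1:-1]) + src[1:-1])
        t[0] = t[1]              # insulated electrode face
        t[-1] = t_a              # far boundary held at body temperature
    return t

temps = pennes_1d()              # temps.max() - 37.0 is the peak rise (deg C)
```

    Even in this toy setting the perfusion term caps the temperature rise near the electrode at a fraction of a degree, consistent in spirit with the sub-degree increases the paper reports for clinical DBS protocols.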

  18. Optimization principles and the figure of merit for triboelectric generators.

    PubMed

    Peng, Jun; Kang, Stephen Dongmin; Snyder, G Jeffrey

    2017-12-01

    Energy harvesting with triboelectric nanogenerators is a burgeoning field, with a growing portfolio of creative application schemes attracting much interest. Although power generation capability and its optimization are among the most important subjects, a satisfactory elemental model that illustrates the basic principles and sets the optimization guideline remains elusive. We use a simple model to clarify that the energy generation mechanism is electrostatic induction, but with a time-varying character that makes the optimal matching for power generation more restrictive. By combining multiple parameters into dimensionless variables, we pinpoint the optimum condition with only two independent parameters, leading to predictions of the maximum limit of power density, which allows us to derive the triboelectric material and device figure of merit. We reveal the importance of optimizing device capacitance, not only load resistance, and of minimizing the impact of parasitic capacitance. Optimized capacitances can lead to an overall increase in power density of more than 10 times.

  19. Customised search and comparison of in situ, satellite and model data for ocean modellers

    NASA Astrophysics Data System (ADS)

    Hamre, Torill; Vines, Aleksander; Lygre, Kjetil

    2014-05-01

    For the ocean modelling community, the amount of available data from historical and upcoming in situ sensor networks and satellite missions provides a rich opportunity to validate and improve their simulation models. However, the problem of making the different data interoperable and intercomparable remains, due to, among other factors, differences in the terminology and formats used by different data providers and the different granularity of e.g. in situ data and ocean models. The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. In the project, one specific objective has been to improve the technology for accessing historical plankton and associated environmental data sets, along with earth observation data and simulation outputs. To this end, we have developed a web portal enabling ocean modellers to easily search for in situ or satellite data overlapping in space and time, and compare the retrieved data with their model results. The in situ data are retrieved from a geo-spatial repository containing both historical and new physical, biological and chemical parameters for the Southern Ocean, Atlantic, Nordic Seas and the Arctic. Satellite-derived quantities of similar parameters from the same areas are retrieved from another geo-spatial repository established in the project. Both repositories are accessed through standard interfaces, using the Open Geospatial Consortium (OGC) Web Map Service (WMS) and Web Feature Service (WFS), and OPeNDAP protocols, respectively. While the developed data repositories use standard terminology to describe the parameters, the measured in situ biological parameters in particular are too fine-grained to be immediately useful for modelling purposes. Therefore, the plankton parameters were grouped according to category, size and, where available, by element. 
This grouping was reflected in the web portal's graphical user interface, where the groups and subgroups were organized in a tree structure, enabling the modeller to quickly get an overview of available data, going into more detail (subgroups) if needed or staying at a higher level of abstraction (merging the parameters below) if this provided a better base for comparison with the model parameters. Once the modeller had decided on a suitable level of detail, the system would retrieve the available in situ parameters. The modellers could then select among the pre-defined models or upload their own model forecast file (in NetCDF/CF format) for comparison with the retrieved in situ data. The comparison can be shown in different kinds of plots (e.g. scatter plots) or through simple statistical measures, or near-coincident values of in situ and model points can be exported for further analysis in the modeller's own tools. During data search and presentation, the modeller can determine both the query criteria and what associated metadata to include in the display and export of the retrieved data. Satellite-derived parameters can be queried and compared with model results in the same manner. With the developed prototype system, we have demonstrated that a customised tool for searching, presenting, comparing and exporting ocean data from multiple platforms (in situ, satellite, model) makes it easy to compare model results with independent observations. With further enhancement of functionality and inclusion of more data, we believe the resulting system can greatly benefit the wider community of ocean modellers looking for data and tools to validate their models.
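    The standard interfaces mentioned above make such queries reproducible outside the portal. For illustration, an OGC WFS GetFeature request can be assembled as a key-value encoded URL; the endpoint and layer name below are hypothetical, while the query parameters follow the standard WFS key-value encoding:

```python
from urllib.parse import urlencode

# Hypothetical GreenSeas-style endpoint and feature type name.
base_url = "https://example.org/greenseas/wfs"
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "plankton_observations",
    # bounding box (min lon, min lat, max lon, max lat), here the Nordic Seas
    "bbox": "-10.0,60.0,20.0,80.0",
    "outputFormat": "application/json",
}
query_url = base_url + "?" + urlencode(params)
```

    A time filter would be added in the same manner, so that the returned features overlap the model run in both space and time.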

  20. The absorption and first-pass metabolism of [14C]-1,3-dinitrobenzene in the isolated vascularly perfused rat small intestine.

    PubMed

    Adams, P C; Rickert, D E

    1996-11-01

    We tested the hypothesis that the small intestine is capable of the first-pass, reductive metabolism of xenobiotics. A simplified version of the isolated vascularly perfused rat small intestine was developed to test this hypothesis with 1,3-dinitrobenzene (1,3-DNB) as a model xenobiotic. Both 3-nitroaniline (3-NA) and 3-nitroacetanilide (3-NAA) were formed and absorbed following intralumenal doses of 1,3-DNB (1.8 or 4.2 mumol) to the isolated vascularly perfused rat small intestine. Dose, fasting, or antibiotic pretreatment had no effect on the absorption and metabolism of 1,3-DNB in this model system. The failure of antibiotic pretreatment to alter the metabolism of 1,3-DNB indicated that 1,3-DNB metabolism was mammalian rather than microfloral in origin. All data from experiments initiated with lumenal 1,3-DNB were fit to a pharmacokinetic model (model A). ANOVA revealed that dose, fasting, or antibiotic pretreatment had no statistically significant effect on the model-dependent parameters. 3-NA (1.5 mumol) was administered to the lumen of the isolated vascularly perfused rat small intestine to evaluate model A predictions for the absorption and metabolism of this metabolite. All data from experiments initiated with 3-NA were fit to a pharmacokinetic model (model B). Comparison of corresponding model-dependent pharmacokinetic parameters (i.e. those parameters which describe the same processes in models A and B) revealed quantitative differences. Evidence for significant quantitative differences in the pharmacokinetics or metabolism of formed versus preformed 3-NA in rat small intestine may require better definition of the rate constants used to describe tissue and lumenal processes, or identification and incorporation of the remaining unidentified metabolites into the models.
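    Compartmental models like "model A" above are systems of linear first-order ODEs. A toy sketch in the same spirit, with a simplified structure and invented rate constants (not the fitted values), shows how lumenal parent drug partitions between absorption and reductive metabolism, with the amine metabolite subsequently acetylated:

```python
def simulate_firstpass(dose, k_abs=0.05, k_met=0.08, k_acet=0.03,
                       dt=0.1, t_end=120.0):
    """Toy first-order compartment scheme (illustrative rate constants):
    lumenal parent drug is absorbed (k_abs) or reduced to an amine
    (k_met), which in turn is acetylated (k_acet). Forward-Euler steps."""
    n = int(t_end / dt)
    lumen, parent_vasc, amine, acetyl = dose, 0.0, 0.0, 0.0
    for _ in range(n):
        d_lumen = -(k_abs + k_met) * lumen
        d_parent = k_abs * lumen
        d_amine = k_met * lumen - k_acet * amine
        d_acetyl = k_acet * amine
        lumen += d_lumen * dt
        parent_vasc += d_parent * dt
        amine += d_amine * dt
        acetyl += d_acetyl * dt
    return lumen, parent_vasc, amine, acetyl

state = simulate_firstpass(dose=1.8)   # 1.8 umol intralumenal dose
```

    Because every term moves mass from one compartment to another, the four amounts always sum to the dose, which is a useful sanity check when fitting such models.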

  1. Addressing the impact of environmental uncertainty in plankton model calibration with a dedicated software system: the Marine Model Optimization Testbed (MarMOT 1.1 alpha)

    NASA Astrophysics Data System (ADS)

    Hemmings, J. C. P.; Challenor, P. G.

    2012-04-01

    A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. 
An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
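    The ensemble-based weighting described above can be sketched as a calibration cost function in which each model-data difference is scaled by the simulation error variance estimated from an ensemble of runs perturbed with plausible environmental errors. The data below are toy values, not MarMOT output:

```python
import numpy as np

def weighted_misfit(model, obs, ensemble):
    """Misfit in which each model-data difference is down-weighted where
    the ensemble says environment-induced simulation error is large."""
    sigma2 = ensemble.var(axis=0, ddof=1)   # expected error variance per point
    return np.sum((model - obs) ** 2 / sigma2)

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 12))   # stand-in annual cycle
# ensemble of simulations whose error variance varies over the year
noise_scale = 0.1 + 0.4 * np.abs(truth)
ensemble = truth + rng.normal(0.0, noise_scale, size=(50, 12))
obs = truth + rng.normal(0.0, 0.05, size=12)
j = weighted_misfit(truth, obs, ensemble)
```

    Points where environmental uncertainty inflates the expected error variance contribute less to the cost, which is why individual model-data differences carry varying significance over the annual cycle.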

  2. Integrative approaches for modeling regulation and function of the respiratory system.

    PubMed

    Ben-Tal, Alona; Tawhai, Merryn H

    2013-01-01

    Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system-which comprises the lungs and the neural circuitry that controls their ventilation-have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely-through necessity-on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. Copyright © 2013 Wiley Periodicals, Inc.

  3. Closed-loop control of epileptiform activities in a neural population model using a proportional-derivative controller

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Song; Wang, Mei-Li; Li, Xiao-Li; Ernst, Niebur

    2015-03-01

    Epilepsy is believed to be caused by a lack of balance between excitation and inhibition in the brain. A promising strategy for the control of the disease is closed-loop brain stimulation. How to determine the stimulation control parameters for effective and safe treatment protocols remains, however, an unsolved question. To constrain the complex dynamics of the biological brain, we use a neural population model (NPM). We propose that a proportional-derivative (PD) type closed-loop control can successfully suppress epileptiform activities. First, we determine stability using root loci, which reveals that the dynamical mechanism underlying epilepsy in the NPM is the loss of homeostatic control caused by the lack of balance between excitation and inhibition. Then, we design a PD-type closed-loop controller to stabilize the unstable NPM such that the homeostatic equilibria are maintained; we show that epileptiform activities are successfully suppressed. A graphical approach is employed to determine the stabilizing region of the PD controller in the parameter space, providing a theoretical guideline for the selection of the PD control parameters. Furthermore, we establish the relationship between the control parameters and the model parameters in the form of stabilizing regions to help understand the mechanism of suppressing epileptiform activities in the NPM. Simulations show that the PD-type closed-loop control strategy can effectively suppress epileptiform activities in the NPM. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473208, 61025019, and 91132722), ONR MURI N000141010278, and NIH grant R01EY016281.
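    The effect of PD feedback on an unstable system can be illustrated on a deliberately simple surrogate, a scalar unstable plant rather than the NPM itself (all numbers below are illustrative):

```python
def simulate(kp, kd, steps=400, dt=0.01):
    """Discrete PD control of a toy unstable first-order plant
    x' = a*x + u, where a > 0 mimics a loss of excitation-inhibition
    balance. The controller u = -kp*x - kd*x' uses a finite-difference
    estimate of the derivative."""
    a = 4.0
    x, x_prev = 1.0, 1.0
    for _ in range(steps):
        deriv = (x - x_prev) / dt
        u = -kp * x - kd * deriv
        x_prev = x
        x = x + dt * (a * x + u)
    return x

uncontrolled = simulate(kp=0.0, kd=0.0)   # open loop: the state diverges
controlled = simulate(kp=10.0, kd=0.1)    # PD feedback restores equilibrium
```

    Choosing (kp, kd) inside the stabilizing region pulls the unstable pole into the stable half-plane, which is the one-dimensional analogue of the stabilizing-region analysis in the paper.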

  4. Atomistic Models of General Anesthetics for Use in in Silico Biological Studies

    PubMed Central

    2015-01-01

    While small molecules have been used to induce anesthesia in a clinical setting for well over a century, a detailed understanding of the molecular mechanism remains elusive. In this study, we utilize ab initio calculations to develop a novel set of CHARMM-compatible parameters for the ubiquitous modern anesthetics desflurane, isoflurane, sevoflurane, and propofol for use in molecular dynamics (MD) simulations. The parameters generated were rigorously tested against known experimental physicochemical properties including dipole moment, density, enthalpy of vaporization, and free energy of solvation. In all cases, the anesthetic parameters were able to reproduce experimental measurements, signifying the robustness and accuracy of the atomistic models developed. The models were then used to study the interaction of anesthetics with the membrane. Calculation of the potential of mean force for inserting the molecules into a POPC bilayer revealed a distinct energetic minimum of 4–5 kcal/mol relative to aqueous solution at the level of the glycerol backbone in the membrane. The location of this minimum within the membrane suggests that anesthetics partition to the membrane prior to binding their ion channel targets, giving context to the Meyer–Overton correlation. Moreover, MD simulations of these drugs in the membrane give rise to computed membrane structural parameters, including atomic distribution, deuterium order parameters, dipole potential, and lateral stress profile, that indicate partitioning of anesthetics into the membrane at the concentration range studied here, which does not appear to perturb the structural integrity of the lipid bilayer. These results signify that an indirect, membrane-mediated mechanism of channel modulation is unlikely. PMID:25303275

  5. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, current approaches often implicitly assume that the transfer functions are known. In fact, in most cases these hypothesized transfer functions cannot be measured and remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are also estimated. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. 
This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for equations. The parametrization of the transfer functions is then achieved through a second optimization routine. The contribution explores different aspects of the described procedure through a set of experiments, which can be divided into three categories: (1) the inference of transfer functions from directly measurable parameters; (2) the estimation of global parameters for given transfer functions from runoff data; and (3) the estimation of sets of completely unknown transfer functions from runoff data. The conducted tests reveal different potentials and limits of the procedure. Concretely, it is shown that cases (1) and (2) work remarkably well. Case (3) is much more dependent on the setup; in general, much more data are needed in that case to derive transfer function estimates, even for simple models and setups. References: - Chomsky, N. (1956): Three Models for the Description of Language. IRE Transactions on Information Theory, 2(3), pp. 113-124 - O'Neill, M. (2001): Grammatical Evolution. IEEE Transactions on Evolutionary Computation, Vol. 5, No. 4 - Samaniego, L.; Kumar, R.; Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resources Research, Vol. 46, W05523, doi:10.1029/2008WR007327
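    The use of a grammar as a search space can be illustrated with a toy context-free grammar over spatial predictors. As in grammatical evolution, a genotype of integers is mapped to production choices (codon modulo number of productions), so each genotype derives one candidate transfer function. Grammar, symbols and the depth cap below are invented for illustration:

```python
# A tiny context-free grammar: nonterminals map to lists of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<coef>", "*", "<pred>"], ["<coef>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<pred>": [["elevation"], ["slope"], ["clay_fraction"]],
    "<coef>": [["c0"], ["c1"], ["c2"]],
}

def derive(symbol, genotype, pos=0, depth=0):
    """Expand `symbol` by consuming genotype codons modulo the number of
    productions; the depth cap forces terminal-leaning productions so
    every derivation is finite."""
    if symbol not in GRAMMAR:
        return symbol, pos            # terminal: emit as-is
    rules = GRAMMAR[symbol]
    if depth > 6:
        rule = rules[-1]              # fall back to the least recursive rule
    else:
        rule = rules[genotype[pos % len(genotype)] % len(rules)]
        pos += 1
    parts = []
    for s in rule:
        text, pos = derive(s, genotype, pos, depth + 1)
        parts.append(text)
    return " ".join(parts), pos

expr, _ = derive("<expr>", [3, 7, 1, 4, 2, 9, 5, 0])
```

    The global coefficients c0, c1, c2 remain free symbols; estimating them against runoff data is the second optimization step the contribution describes.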

  6. Hot HB Stars in Globular Clusters: Physical Parameters and Consequences for Theory. VI; The Second Parameter Pair M 3 and M 13

    NASA Technical Reports Server (NTRS)

    Moehler, S.; Landsman, W. B.; Sweigart, A. V.; Grundahl, F.

    2003-01-01

    We present the results of spectroscopic analyses of hot horizontal branch (HB) stars in M 13 and M 3, which form a famous "second parameter" pair. From the spectra and Stromgren photometry we derived - for the first time in M 13 - atmospheric parameters (effective temperature and surface gravity). For stars with Stromgren temperatures between 10,000 and 12,000 K we found excellent agreement between the atmospheric parameters derived from Stromgren photometry and those derived from Balmer line profile fits. However, for cooler stars there is a disagreement in the parameters derived by the two methods, for which we have no satisfactory explanation. Stars hotter than 12,000 K show evidence for helium depletion and iron enrichment, both in M 3 and M 13. Accounting for the iron enrichment substantially improves the agreement with canonical evolutionary models, although the derived gravities and masses are still somewhat too low. This remaining discrepancy may be an indication that scaled-solar metal-rich model atmospheres do not adequately represent the highly non-solar abundance ratios found in blue HB stars affected by diffusion. We discuss the effects of an enhancement in the envelope helium abundance on the atmospheric parameters of the blue HB stars, as might be caused by deep mixing on the red giant branch or primordial pollution from an earlier generation of intermediate mass asymptotic giant branch stars. Key words. Stars: atmospheres - Stars: evolution - Stars: horizontal branch - Globular clusters: individual: M 3 - Globular clusters: individual: M 13

  7. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

    Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification is implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 Cerro Negro (Nicaragua), 2011 Kirishima-Shinmoedake (Japan), and 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
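    The core of such an inversion is a damped least-squares update. A minimal Levenberg-Marquardt loop on a toy exponential deposit-thinning forward model conveys the idea; this is a sketch of the algorithm, not PEST++ or Tephra2, and all numbers are invented:

```python
import numpy as np

def thickness_model(params, dist):
    """Toy forward model: deposit thickness T = T0 * exp(-dist / L)."""
    t0, length = params
    return t0 * np.exp(-dist / length)

def levenberg_marquardt(f, p0, dist, obs, n_iter=100, lam=1e-2):
    """Minimal Levenberg-Marquardt loop with a finite-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = f(p, dist) - obs
        jac = np.empty((dist.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            jac[:, j] = (f(p + dp, dist) - f(p, dist)) / dp[j]
        a = jac.T @ jac + lam * np.eye(p.size)   # damped normal equations
        step = np.linalg.solve(a, -jac.T @ r)
        if np.sum((f(p + step, dist) - obs) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.7         # accept: relax damping
        else:
            lam *= 2.0                           # reject: increase damping
    return p

dist = np.linspace(1.0, 30.0, 15)        # distance from vent (km)
true_p = np.array([120.0, 8.0])          # thickness at vent (cm), e-fold length
obs = thickness_model(true_p, dist)      # synthetic, noise-free "isopach" data
est = levenberg_marquardt(thickness_model, [50.0, 15.0], dist, obs)
```

    The regularized PEST++ workflow adds Tikhonov constraints and subspace truncation on top of this update, and propagates the posterior covariance to quantify which parameters (e.g. column height) remain poorly constrained.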

  8. On the analysis of incoherent scatter radar data from non-thermal ionospheric plasma - Effects of measurement noise and an inexact theory

    NASA Astrophysics Data System (ADS)

    Suvanto, K.

    1990-07-01

    Statistical inversion theory is employed to estimate parameter uncertainties in incoherent scatter radar studies of non-Maxwellian ionospheric plasma. Measurement noise and the inexact nature of the plasma model are considered as potential sources of error. In most of the cases investigated here, it is not possible to determine electron density, line-of-sight ion and electron temperatures, ion composition, and two non-Maxwellian shape factors simultaneously. However, if the molecular ion velocity distribution is highly non-Maxwellian, all these quantities can sometimes be retrieved from the data. This theoretical result supports the validity of the only successful non-Maxwellian, mixed-species fit discussed in the literature. A priori information on one of the parameters, e.g., the electron density, often reduces the parameter uncertainties significantly and makes composition fits possible even if the six-parameter fit cannot be performed. However, small (less than 0.5) non-Maxwellian shape factors remain difficult to distinguish.

  9. Reducing streamflow forecast uncertainty: Application and qualitative assessment of the upper klamath river Basin, Oregon

    USGS Publications Warehouse

    Hay, L.E.; McCabe, G.J.; Clark, M.P.; Risley, J.C.

    2009-01-01

    The accuracy of streamflow forecasts depends on the uncertainty associated with future weather and the accuracy of the hydrologic model that is used to produce the forecasts. We present a method for streamflow forecasting where hydrologic model parameters are selected based on the climate state. Parameter sets for a hydrologic model are conditioned on an atmospheric pressure index defined using mean November through February (NDJF) 700-hectoPascal geopotential heights over northwestern North America [Pressure Index from Geopotential heights (PIG)]. The hydrologic model is applied in the Sprague River basin (SRB), a snowmelt-dominated basin located in the Upper Klamath basin in Oregon. In the SRB, the majority of streamflow occurs during March through May (MAM). Water years (WYs) 1980-2004 were divided into three groups based on their respective PIG values (high, medium, and low PIG). Low (high) PIG years tend to have higher (lower) than average MAM streamflow. Four parameter sets were calibrated for the SRB, each using a different set of WYs. The initial set used WYs 1995-2004 and the remaining three used WYs defined as high-, medium-, and low-PIG years. Two sets of March, April, and May streamflow volume forecasts were made using Ensemble Streamflow Prediction (ESP). The first set of ESP simulations used the initial parameter set. Because the PIG is defined using NDJF pressure heights, forecasts starting in March can be made using the PIG parameter set that corresponds with the year being forecasted. The second set of ESP simulations used the parameter set associated with the given PIG year. Comparison of the ESP sets indicates that more accuracy and less variability in volume forecasts may be possible when the ESP is conditioned using the PIG. This is especially true during the high-PIG years (low-flow years). ?? 2009 American Water Resources Association.
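    Operationally, conditioning the forecast on the climate state amounts to selecting, at forecast time, the parameter set calibrated on years in the same PIG category as the forecast year. A small sketch of that selection step, with hypothetical tercile cut-offs and parameter values (not the study's calibrated values):

```python
def select_parameter_set(pig_value, pig_cuts, parameter_sets):
    """Pick the hydrologic-model parameter set whose calibration years
    match the climate state (PIG category) of the forecast year."""
    low_cut, high_cut = pig_cuts
    if pig_value <= low_cut:
        return parameter_sets["low"]      # low-PIG (high-flow) years
    if pig_value >= high_cut:
        return parameter_sets["high"]     # high-PIG (low-flow) years
    return parameter_sets["medium"]

# Illustrative parameter sets; melt_rate is a stand-in model parameter.
sets = {
    "low": {"melt_rate": 2.4},
    "medium": {"melt_rate": 1.8},
    "high": {"melt_rate": 1.2},
}
chosen = select_parameter_set(pig_value=-0.7, pig_cuts=(-0.5, 0.5),
                              parameter_sets=sets)
```

    Because the PIG is defined from NDJF pressure heights, the category is known before the March-May forecast period begins, which is what makes the conditioned ESP runs feasible.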

  10. Nonlinear spherical perturbations in quintessence models of dark energy

    NASA Astrophysics Data System (ADS)

    Pratap Rajvanshi, Manvendra; Bagla, J. S.

    2018-06-01

Observations have confirmed the accelerated expansion of the universe. The accelerated expansion can be modelled by invoking a cosmological constant or a dynamical model of dark energy. A key difference between these models is that the equation of state parameter w for dark energy differs from -1 in dynamical dark energy (DDE) models. Further, the equation of state parameter is not constant for a general DDE model. Such differences can be probed using the variation of scale factor with time by measuring distances. Another significant difference between the cosmological constant and DDE models is that the latter must cluster. Linear perturbation analysis indicates that perturbations in quintessence models of dark energy do not grow to have a significant amplitude at small length scales. In this paper we study the response of quintessence dark energy to non-linear perturbations in dark matter. We use a fully relativistic model for spherically symmetric perturbations. In this study we focus on thawing models. We find that in response to non-linear perturbations in dark matter, dark energy perturbations grow at a faster rate than expected in linear perturbation theory. We find that the dark energy perturbation remains localised and does not diffuse out to larger scales. The dominant drivers of the evolution of dark energy perturbations are the local Hubble flow and a suppression of gradients of the scalar field. We also find that the equation of state parameter w changes in response to perturbations in dark matter such that it also becomes a function of position. The variation of w in space is correlated with the density contrast for matter. Variation of w and perturbations in dark energy are more pronounced in response to large scale perturbations in matter, while the dependence on the amplitude of matter perturbations is much weaker.

  11. Localization of (photo)respiration and CO2 re-assimilation in tomato leaves investigated with a reaction-diffusion model

    PubMed Central

    Berghuijs, Herman N. C.; Yin, Xinyou; Ho, Q. Tri; Verboven, Pieter; Nicolaï, Bart M.

    2017-01-01

The rate of photosynthesis depends on the CO2 partial pressure near Rubisco, Cc, which is commonly calculated by models using the overall mesophyll resistance. Such models do not explain the difference between the CO2 level in the intercellular air space and Cc mechanistically. This problem can be overcome by reaction-diffusion models for CO2 transport, production and fixation in leaves. However, most reaction-diffusion models are complex and unattractive for procedures that require a large number of runs, like parameter optimisation. This study provides a simpler reaction-diffusion model. It is parameterized by both leaf physiological and leaf anatomical data. The anatomical data consisted of the thickness of the cell wall, cytosol and stroma, and the area ratios of mesophyll exposed to the intercellular air space to leaf surfaces and exposed chloroplast to exposed mesophyll surfaces. The model was used directly to estimate photosynthetic parameters from a subset of the measured light and CO2 response curves; the remaining data were used for validation. The model predicted light and CO2 response curves reasonably well for 15-day-old tomato (cv. Admiro) leaves, if (photo)respiratory CO2 release was assumed to take place in the inner cytosol or in the gaps between the chloroplasts. The model was also used to calculate the fraction of CO2 produced by (photo)respiration that is re-assimilated in the stroma, and this fraction ranged from 56 to 76%. In future research, the model should be further validated to better understand how the re-assimilation of (photo)respired CO2 is affected by environmental conditions and physiological parameters. PMID:28880924

  12. Localization of (photo)respiration and CO2 re-assimilation in tomato leaves investigated with a reaction-diffusion model.

    PubMed

    Berghuijs, Herman N C; Yin, Xinyou; Ho, Q Tri; Retta, Moges A; Verboven, Pieter; Nicolaï, Bart M; Struik, Paul C

    2017-01-01

The rate of photosynthesis depends on the CO2 partial pressure near Rubisco, Cc, which is commonly calculated by models using the overall mesophyll resistance. Such models do not explain the difference between the CO2 level in the intercellular air space and Cc mechanistically. This problem can be overcome by reaction-diffusion models for CO2 transport, production and fixation in leaves. However, most reaction-diffusion models are complex and unattractive for procedures that require a large number of runs, like parameter optimisation. This study provides a simpler reaction-diffusion model. It is parameterized by both leaf physiological and leaf anatomical data. The anatomical data consisted of the thickness of the cell wall, cytosol and stroma, and the area ratios of mesophyll exposed to the intercellular air space to leaf surfaces and exposed chloroplast to exposed mesophyll surfaces. The model was used directly to estimate photosynthetic parameters from a subset of the measured light and CO2 response curves; the remaining data were used for validation. The model predicted light and CO2 response curves reasonably well for 15-day-old tomato (cv. Admiro) leaves, if (photo)respiratory CO2 release was assumed to take place in the inner cytosol or in the gaps between the chloroplasts. The model was also used to calculate the fraction of CO2 produced by (photo)respiration that is re-assimilated in the stroma, and this fraction ranged from 56 to 76%. In future research, the model should be further validated to better understand how the re-assimilation of (photo)respired CO2 is affected by environmental conditions and physiological parameters.

  13. A multidisciplinary effort to assign realistic source parameters to models of volcanic ash-cloud transport and dispersion during eruptions

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne C.; Servranckx, R.; Webley, P.; Barsotti, S.; Dean, K.; Durant, A.; Ewert, John W.; Neri, A.; Rose, W.I.; Schneider, David J.; Siebert, L.; Stunder, B.; Swanson, G.; Tupper, A.; Volentik, A.; Waythomas, Christopher F.

    2009-01-01

    During volcanic eruptions, volcanic ash transport and dispersion models (VATDs) are used to forecast the location and movement of ash clouds over hours to days in order to define hazards to aircraft and to communities downwind. Those models use input parameters, called “eruption source parameters”, such as plume height H, mass eruption rate Ṁ, duration D, and the mass fraction m63 of erupted debris finer than about 4ϕ or 63 μm, which can remain in the cloud for many hours or days. Observational constraints on the value of such parameters are frequently unavailable in the first minutes or hours after an eruption is detected. Moreover, observed plume height may change during an eruption, requiring rapid assignment of new parameters. This paper reports on a group effort to improve the accuracy of source parameters used by VATDs in the early hours of an eruption. We do so by first compiling a list of eruptions for which these parameters are well constrained, and then using these data to review and update previously studied parameter relationships. We find that the existing scatter in plots of H versus Ṁ yields an uncertainty within the 50% confidence interval of plus or minus a factor of four in eruption rate for a given plume height. This scatter is not clearly attributable to biases in measurement techniques or to well-recognized processes such as elutriation from pyroclastic flows. Sparse data on total grain-size distribution suggest that the mass fraction of fine debris m63 could vary by nearly two orders of magnitude between small basaltic eruptions (∼ 0.01) and large silicic ones (> 0.5). We classify eleven eruption types; four types each for different sizes of silicic and mafic eruptions; submarine eruptions; “brief” or Vulcanian eruptions; and eruptions that generate co-ignimbrite or co-pyroclastic flow plumes. For each eruption type we assign source parameters. We then assign a characteristic eruption type to each of the world's ∼ 1500 Holocene volcanoes. 
These eruption types and associated parameters can be used for ash-cloud modeling in the event of an eruption, when no observational constraints on these parameters are available.
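The H-versus-Ṁ relationship at the heart of this effort is a power-law fit. Below is a hedged sketch of inverting one commonly cited form of it, H = 2.00 V^0.241 (H in km above the vent, V the dense-rock-equivalent volumetric flow in m³/s), with an assumed DRE magma density of 2500 kg/m³; the exact coefficients should be checked against the paper:

```python
def mass_eruption_rate(plume_height_km, dre_density=2500.0):
    """Estimate mass eruption rate (kg/s) from plume height (km above vent).

    Inverts the power-law fit H = 2.00 * V**0.241, where V is the
    dense-rock-equivalent volumetric flow in m^3/s. The coefficients and
    the DRE density are assumed values for illustration.
    """
    V = (plume_height_km / 2.00) ** (1.0 / 0.241)  # DRE volumetric flow, m^3/s
    return V * dre_density                          # mass eruption rate, kg/s
```

Given the scatter reported in the abstract, any single estimate from such a fit carries an uncertainty of roughly a factor of four in eruption rate.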

  14. Theory-Based Parameterization of Semiotics for Measuring Pre-literacy Development

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.

    2013-09-01

A probabilistic model was applied to the problem of measuring pre-literacy in young children. First, semiotic philosophy and contemporary cognition research were conceptually integrated to establish theoretical foundations for rating 14 characteristics of children's drawings and narratives (N = 120). Then the ratings were transformed with a Rasch model, which estimated linear item parameter values that accounted for 79 percent of rater variance. Principal Component Analysis of the item residual matrix confirmed that the variance remaining after item calibration was largely unsystematic. Validation analyses found positive correlations between semiotic measures and preschool literacy outcomes. Practical implications of a semiotics dimension for preschool practice were discussed.
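Under the Rasch model used here, the probability of a positive rating depends only on the difference between a child's latent measure θ and an item's difficulty b. A minimal sketch (variable names are illustrative):

```python
import math

def rasch_probability(theta, b):
    """Rasch model probability of a positive rating:
    P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is the person's latent measure (here, pre-literacy)
    and b is the item difficulty, both on the same logit scale.
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When theta equals b the probability is exactly 0.5, which is what makes the estimated item parameters interpretable as locations on a linear scale.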

  15. Sinking a Granular Raft

    NASA Astrophysics Data System (ADS)

    Protière, Suzie; Josserand, Christophe; Aristoff, Jeffrey M.; Stone, Howard A.; Abkarian, Manouk

    2017-03-01

We report experiments that yield new insights on the behavior of granular rafts at an oil-water interface. We show that these particle aggregates can float or sink depending on dimensionless parameters that take into account the particle density and size and the densities of the two fluids. We characterize the raft shape and stability and propose a model to predict its shape and the maximum length for which it remains afloat. Finally, we find that wrinkles and folds appear along the raft due to compression under its own weight, which can trigger destabilization. These features are characteristic of an elastic instability, which we discuss, including the limitations of our model.

  16. Informing soil models using pedotransfer functions: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Romano, Nunzio

    2015-04-01

Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternative methods to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues faced by PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, and PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that coefficients of PTFs based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remain to be explored. 5. 
Estimating the variability of soil model parameters becomes increasingly important as newer modeling technologies, such as data assimilation, ensemble modeling, and model abstraction, become progressively more popular. Variability PTFs rely on the spatio-temporal dynamics of soil variables, and that opens new sources of PTF inputs stemming from technology advances such as monitoring networks, remote and proximal sensing, and omics. 6. Burgeoning PTF development has so far not affected several persisting regional knowledge gaps. Remarkably little effort has been put so far into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well-expressed shrink-swell behavior, and soils affected by freeze-thaw cycles. 7. Soils from tropical regions are quite often treated as a pseudo-entity to which a single PTF can be applied. This assumption will not be needed as more regional data are accumulated and analyzed. 8. Other advances in regional PTFs will be possible due to the presence of large databases on region-specific useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface. 9. Most flux models in soils, be it water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improve PTF usability. 10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example in weather or climate models. Soil hydraulic parameters in these models cannot be measured, and the efficiency of the pedotransfer can be evaluated only in terms of its utility. There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters. 11. 
A spatially coarse scale often implies a coarse temporal support, and that may lead to including in PTFs other environmental variables such as topographic, weather, and management attributes. 12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs. 13. Further exploration is needed to use PTFs as a source of hypotheses on, and insights into, relationships between soil processes and soil composition as well as between soil structure and soil functioning. PTFs are empirical relationships, and their accuracy outside the database used for PTF development is essentially unknown. Therefore they should never be considered an ultimate source of parameters in soil modeling. Rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.
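As a concrete illustration of the kind of relationship a PTF encodes, here is a minimal linear PTF mapping easily measured soil properties to a model parameter. All coefficients and the output interpretation below are placeholders, not fitted values from any published PTF:

```python
def linear_ptf(sand_pct, clay_pct, org_matter_pct,
               coeffs=(0.40, -0.0020, 0.0030, 0.010)):
    """Generic linear pedotransfer function.

    Returns, e.g., a volumetric water content estimate from texture and
    organic matter. The default coefficients are placeholders chosen only
    to illustrate the form intercept + sum(coefficient * predictor).
    """
    b0, b_sand, b_clay, b_om = coeffs
    return b0 + b_sand * sand_pct + b_clay * clay_pct + b_om * org_matter_pct
```

Real PTFs differ in their predictors, functional form, and fitting method (regression trees, neural networks, etc.), but they all reduce to this pattern of estimating a hard-to-measure parameter from routinely available data.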

  17. Electroweak vacuum stability in classically conformal B - L extension of the standard model

    DOE PAGES

    Das, Arindam; Okada, Nobuchika; Papapietro, Nathan

    2017-02-23

Here, we consider the minimal U(1) B - L extension of the standard model (SM) with classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the mass for the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem of the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level and perform parameter scans. We also identify a parameter region which not only solves the vacuum instability problem but also satisfies the recent ATLAS and CMS bounds from the search for a Z' boson resonance at the LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive the naturalness bound on the model parameters required to realize the electroweak scale without fine-tuning.

  18. Electroweak vacuum stability in classically conformal B - L extension of the standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Arindam; Okada, Nobuchika; Papapietro, Nathan

Here, we consider the minimal U(1) B - L extension of the standard model (SM) with classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the mass for the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem of the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level and perform parameter scans. We also identify a parameter region which not only solves the vacuum instability problem but also satisfies the recent ATLAS and CMS bounds from the search for a Z' boson resonance at the LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive the naturalness bound on the model parameters required to realize the electroweak scale without fine-tuning.

  19. Traversable geometric dark energy wormholes constrained by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-he

    2016-09-01

In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study the Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (or z < 0.109), i.e., when the null energy condition (NEC) is clearly violated, the wormholes will exist (open). Subsequently, six specific solutions of static and spherically symmetric traversable wormholes supported by the RDE fluid are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluid is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.

  20. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    PubMed

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress τ_y, consistency index K, and flow behavior index n) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with a high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa·s^n for K, and 0.0288 Pa·s for μ_app, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
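The Herschel-Bulkley model referenced above writes the shear stress as τ = τ_y + K·γ̇^n, from which the apparent viscosity follows as τ/γ̇. A short sketch:

```python
def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Shear stress (Pa) of a Herschel-Bulkley fluid:
    tau = tau_y + K * shear_rate**n,
    with yield stress tau_y (Pa), consistency index K (Pa*s^n),
    and flow behavior index n (dimensionless)."""
    return tau_y + K * shear_rate ** n

def apparent_viscosity(shear_rate, tau_y, K, n):
    """Apparent viscosity (Pa*s) at a given shear rate: tau / shear_rate."""
    return herschel_bulkley_stress(shear_rate, tau_y, K, n) / shear_rate
```

With tau_y = 0 and n = 1 the model reduces to a Newtonian fluid, which is a quick sanity check on any implementation.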

  1. Mathematical quantification of the induced stress resistance of microbial populations during non-isothermal stresses.

    PubMed

    Garre, Alberto; Huertas, Juan Pablo; González-Tejedor, Gerardo A; Fernández, Pablo S; Egea, Jose A; Palop, Alfredo; Esnoz, Arturo

    2018-02-02

This contribution presents a mathematical model to describe non-isothermal microbial inactivation processes, taking into account the acclimation of the microbial cell to thermal stress. The model extends the log-linear inactivation model with a variable and model parameters quantifying the induced thermal resistance. The model has been tested on cells of Escherichia coli against two families of non-isothermal profiles with different constant heating rates. One family was composed of monophasic profiles, consisting of a non-isothermal heating stage from 35 to 70°C; the other was composed of biphasic profiles, consisting of a non-isothermal heating stage followed by a holding period at a constant temperature of 57.5°C. Lower heating rates resulted in a higher thermal resistance of the bacterial population, reflected in a higher D-value. The parameter estimation was performed in two steps. First, the D- and z-values were estimated from the isothermal experiments. Next, the parameters describing the acclimation were estimated using one of the biphasic profiles. This set of parameters was able to describe the remaining experimental data. Finally, a methodology for constructing diagrams that illustrate the magnitude of the induced thermal resistance is presented and illustrated for a biphasic temperature profile with a linear heating phase and a holding phase. Such a diagram visualizes how the shape of the temperature profile (heating rate and holding temperature) affects the acclimation of the cell to thermal stress, and can be used by industry in the design of inactivation treatments that take this acclimation into account. Copyright © 2017 Elsevier B.V. All rights reserved.
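The D- and z-values estimated in the first step define the classical log-linear (Bigelow) submodel: D(T) = D_ref · 10^(-(T - T_ref)/z), and the log10 reduction after holding time t at constant temperature T is t/D(T). A minimal sketch of that baseline, without the paper's acclimation extension:

```python
def d_value(T, D_ref, T_ref, z):
    """Decimal reduction time D (min) at temperature T (deg C), from the
    reference value D_ref at T_ref and the z-value (Bigelow model):
    D(T) = D_ref * 10**(-(T - T_ref) / z)."""
    return D_ref * 10.0 ** (-(T - T_ref) / z)

def log10_reduction(t, T, D_ref, T_ref, z):
    """log10(N0/N) after holding t minutes at constant temperature T.
    No acclimation term: the paper's extension adds a variable that
    increases the effective resistance under slow heating."""
    return t / d_value(T, D_ref, T_ref, z)
```

Raising the temperature by one z-value cuts D tenfold, which is why slow heating (more time spent at sublethal temperatures, allowing acclimation) matters for the observed resistance.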

  2. Assessing tiger population dynamics using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Hines, J.E.

    2006-01-01

Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, “robust design” capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of γ″ = γ′ = 0.10 ± 0.069 (values are estimated mean ± SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 ± 0.051, and the estimated probability that a newly caught animal was a transient was τ = 0.18 ± 0.11. During the period when the sampled area was of constant size, the estimated population size Nt varied from 17 ± 1.7 to 31 ± 2.1 tigers, with a geometric mean rate of annual population change estimated as λ = 1.03 ± 0.020, representing a 3% annual increase. The estimated recruitment of new animals, Bt, varied from 0 ± 3.0 to 14 ± 2.9 tigers. Population density estimates, D, ranged from 7.33 ± 0.8 tigers/100 km2 to 21.73 ± 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. 
Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
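The geometric mean rate of annual population change λ reported above follows from the yearly abundance estimates as (N_T/N_1)^(1/(T-1)). A minimal sketch:

```python
def geometric_mean_lambda(abundances):
    """Geometric mean annual rate of population change from a series of
    yearly abundance estimates N_1..N_T:
    lambda = (N_T / N_1) ** (1 / (T - 1)).
    A value above 1 indicates growth (e.g., 1.03 is a 3% annual increase)."""
    T = len(abundances)
    return (abundances[-1] / abundances[0]) ** (1.0 / (T - 1))
```

This point estimate ignores the sampling uncertainty in each N_t; the paper's estimate of λ = 1.03 ± 0.020 comes from the full capture-recapture likelihood, not from a raw ratio like this.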

  3. Assessing tiger population dynamics using photographic capture-recapture sampling.

    PubMed

    Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Hines, James E

    2006-11-01

    Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of gamma" = gamma' = 0.10 +/- 0.069 (values are estimated mean +/- SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 +/- 0.051, and the estimated probability that a newly caught animal was a transient was tau = 0.18 +/- 0.11. During the period when the sampled area was of constant size, the estimated population size N(t) varied from 17 +/- 1.7 to 31 +/- 2.1 tigers, with a geometric mean rate of annual population change estimated as lambda = 1.03 +/- 0.020, representing a 3% annual increase. The estimated recruitment of new animals, B(t), varied from 0 +/- 3.0 to 14 +/- 2.9 tigers. Population density estimates, D, ranged from 7.33 +/- 0.8 tigers/100 km2 to 21.73 +/- 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. 
Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.

  4. A terrestrial biosphere model optimized to atmospheric CO2 concentration and above ground woody biomass

    NASA Astrophysics Data System (ADS)

    Saito, M.; Ito, A.; Maksyutov, S. S.

    2013-12-01

    This study documents an optimization of a prognostic biosphere model (VISIT; Vegetation Integrative Similator for Trace gases) to observations of atmospheric CO2 concentration and above ground woody biomass by using a Bayesian inversion method combined with an atmospheric tracer transport model (NIES-TM; National Institute for Environmental Studies / Frontier Research Center for Global Change (NIES/FRCGC) off-line global atmospheric tracer transport model). The assimilated observations include 74 station records of surface atmospheric CO2 concentration and aggregated grid data sets of above ground woody biomass (AGB) and net primary productivity (NPP) over the globe. Both the biosphere model and the atmospheric transport model are used at a horizontal resolution of 2.5 deg x 2.5 deg grid with temporal resolutions of a day and an hour, respectively. The atmospheric transport model simulates atmospheric CO2 concentration with nine vertical levels using daily net ecosystem CO2 exchange rate (NEE) from the biosphere model, oceanic CO2 flux, and fossil fuel emission inventory. The models are driven by meteorological data from JRA-25 (Japanese 25-year ReAnalysis) and JCDAS (JMA Climate Data Assimilation System). Statistically optimum physiological parameters in the biosphere model are found by iterative minimization of the corresponding Bayesian cost function. We select thirteen physiological parameter with high sensitivity to NEE, NPP, and AGB for the minimization. Given the optimized physiological parameters, the model shows error reductions in seasonal variation of the CO2 concentrations especially in the northern hemisphere due to abundant observation stations, while errors remain at a few stations that are located in coastal coastal area and stations in the southern hemisphere. The model also produces moderate estimates of the mean magnitudes and probability distributions in AGB and NPP for each biome. 
However, the model fails to simulate the terrestrial vegetation composition in some grid cells. These misfits are assumed to derive from the simplified representation in the biosphere model, which omits the impacts of land-use change and fire disturbance as well as the seasonal variability of the physiological parameters.
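
    The Bayesian optimization described above can be sketched in miniature: the cost is a prior-misfit term plus an observation-misfit term, minimized iteratively. The linear three-observation "forward model" and all covariances below are invented stand-ins for the VISIT/NIES-TM chain, not the study's actual setup.

```python
import numpy as np
from scipy.optimize import minimize

def bayesian_cost(p, p_prior, B_inv, y_obs, R_inv, forward):
    """Bayesian cost: prior misfit plus observation misfit."""
    dp = p - p_prior
    dy = y_obs - forward(p)
    return 0.5 * dp @ B_inv @ dp + 0.5 * dy @ R_inv @ dy

# Toy linear forward model standing in for the biosphere + transport chain:
# maps two "physiological parameters" to three "observations".
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
forward = lambda p: A @ p

p_true = np.array([1.2, 0.8])
y_obs = forward(p_true)                # noise-free synthetic observations
p_prior = np.array([1.0, 1.0])
B_inv = np.eye(2) / 0.5**2             # prior covariance (0.5)^2 I
R_inv = np.eye(3) / 0.1**2             # observation covariance (0.1)^2 I

res = minimize(bayesian_cost, p_prior,
               args=(p_prior, B_inv, y_obs, R_inv, forward))
print(res.x)                           # pulled from the prior toward p_true
```

    With tight observation errors relative to the prior, the minimizer lands close to the parameters that generated the synthetic observations, which is the behavior the study exploits.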

  5. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they are still frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
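
    Conclusion (4) can be illustrated with a toy linear-reservoir calibration: the same model calibrated with a coarse fixed-step explicit Euler scheme recovers a biased rate parameter relative to a fine-step solve. The bucket model and all numbers are illustrative, not drawn from the paper's six models.

```python
import numpy as np

def euler_bucket(k, dt, T=10.0, S0=1.0):
    """Fixed-step explicit Euler for the linear reservoir dS/dt = -k*S."""
    S, out = S0, []
    for _ in range(int(T / dt)):
        S = S + dt * (-k * S)
        out.append(S)
    return np.array(out)

# Synthetic "observations": the analytic solution with k_true = 0.9.
k_true = 0.9
t = np.arange(1, 101) * 0.1
obs = np.exp(-k_true * t)

def sse(k, dt):
    sim = euler_bucket(k, dt)
    tt = np.arange(1, len(sim) + 1) * dt
    return np.sum((np.interp(t, tt, sim) - obs) ** 2)

ks = np.linspace(0.3, 1.3, 101)
best_fine = ks[np.argmin([sse(k, 0.01) for k in ks])]   # dt = 0.01
best_coarse = ks[np.argmin([sse(k, 1.0) for k in ks])]  # dt = 1.0
print(best_fine, best_coarse)  # the coarse explicit scheme biases k
```

    The fine-step calibration recovers the true rate, while the coarse scheme's truncation error drags the objective-function minimum to a substantially different parameter value.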

  6. With-in host dynamics of L. monocytogenes and thresholds for distinct infection scenarios.

    PubMed

    Rahman, Ashrafur; Munther, Daniel; Fazil, Aamir; Smith, Ben; Wu, Jianhong

    2018-05-26

    The case fatality and illness rates associated with L. monocytogenes continue to pose a serious public health burden despite the significant efforts and control protocols administered by private and public sectors. Owing to advances in surveillance and improvements in detection methodology, the sources, transmission routes, growth potential in food processing units and storage, and the effects of pH and temperature are well understood. However, the within-host growth and transmission mechanisms of L. monocytogenes, particularly within the human host, remain unclear, largely due to the limited access to scientific experimentation on the human population. In order to provide insight into the human immune response to the infection caused by L. monocytogenes, we develop a within-host mathematical model. The model explains, in terms of biological parameters, the states of asymptomatic infection, mild infection and systemic infection leading to listeriosis. The activation and proliferation of T-cells are found to be critical determinants of susceptibility to the infection. Utilizing stability analysis and numerical simulation, the ranges of the critical parameters relative to infection states are established. Bifurcation analysis shows the impact of differences in these parameters on the dynamics of the model. Finally, we present model applications in regard to predicting the risk potential of listeriosis relative to the susceptible human population. Copyright © 2018. Published by Elsevier Ltd.
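
    A hedged sketch of the kind of within-host threshold behavior described above, using a generic pathogen/T-cell ODE pair (not the authors' equations; all parameters are invented): weak T-cell activation lets the bacterial load settle high, while strong activation controls it.

```python
import numpy as np

def final_load(a, r=1.0, K=100.0, k=1.0, d=1.0, dt=0.001, t_end=40.0):
    """Forward-Euler integration of a toy pathogen (B) / T-cell (T) pair:
    dB/dt = r*B*(1 - B/K) - k*B*T   (growth minus immune killing)
    dT/dt = a*B - d*T               (activation/proliferation minus decay)
    """
    B, T = 0.1, 0.1
    for _ in range(int(t_end / dt)):
        dB = r * B * (1 - B / K) - k * B * T
        dT = a * B - d * T
        B, T = max(B + dt * dB, 0.0), max(T + dt * dT, 0.0)
    return B

low_activation = final_load(a=0.01)   # weak activation: high bacterial load
high_activation = final_load(a=1.0)   # strong activation: load controlled
print(low_activation, high_activation)
```

    Sweeping the activation rate `a` in such a model traces out the kind of parameter thresholds separating asymptomatic, mild, and systemic infection states that the paper establishes via stability and bifurcation analysis.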

  7. Power-law modeling based on least-squares minimization criteria.

    PubMed

    Hernández-Bermejo, B; Fairén, V; Sorribas, A

    1999-10-01

    The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, their parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law over a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the steady state predicted using the least-squares power-law is closer to the actual steady state of the target system.
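
    The contrast between the Taylor-derived and least-squares power laws can be reproduced on a Michaelis-Menten rate law (a standard test case, assumed here purely for illustration):

```python
import numpy as np

Vmax, Km = 1.0, 1.0
v = lambda S: Vmax * S / (Km + S)      # target Michaelis-Menten rate law

# Classical power law: Taylor expansion in log space at operating point S0.
S0 = 1.0
g_taylor = Km / (Km + S0)              # local log-log slope (kinetic order)
a_taylor = v(S0) / S0 ** g_taylor

# Least-squares power law: regress log v on log S over a wide range.
S = np.logspace(-1, 1, 50)             # concentrations 0.1 .. 10
g_ls, log_a = np.polyfit(np.log(S), np.log(v(S)), 1)
a_ls = np.exp(log_a)

# Maximum relative error of each representation over the range.
err = lambda a, g: np.max(np.abs(a * S ** g - v(S)) / v(S))
print(err(a_taylor, g_taylor), err(a_ls, g_ls))
```

    The tangent expansion is exact at the operating point but degrades toward the edges of the concentration range, while the regression spreads the error across the whole range, mirroring the paper's observation.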

  8. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
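
    A minimal SROM in the spirit described above: a continuous random variable (here a lognormal stand-in for a shear modulus) is replaced by a few samples with probabilities chosen to match its low-order moments. The sample locations, weights, and the nonnegative-least-squares formulation are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Target random quantity: a lognormal "shear modulus" with known E[X^q].
mu, sigma = 0.0, 0.5
target_moments = np.array([np.exp(q * mu + q**2 * sigma**2 / 2)
                           for q in (1, 2, 3)])

# SROM: fixed sample locations x_k, probabilities p_k to be determined.
x = np.exp(mu + sigma * np.linspace(-2.5, 2.5, 9))

# Nonnegative least squares on sum(p)=1 plus the first three moments.
A = np.vstack([np.ones_like(x), x, x**2, x**3])
b = np.concatenate([[1.0], target_moments])
w = np.array([10.0, 1.0, 1.0, 1.0])    # weight the normalization row
p, _ = nnls(A * w[:, None], b * w)

srom_moments = np.array([(p * x**q).sum() for q in (1, 2, 3)])
print(srom_moments, target_moments)    # discrete SROM matches the moments
```

    Once the random input is reduced to the pairs (x_k, p_k), every downstream stochastic computation becomes a small weighted sum of deterministic solver calls, which is the non-intrusiveness the abstract emphasizes.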

  9. Agent-Based Model with Asymmetric Trading and Herding for Complex Financial Systems

    PubMed Central

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    Background For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. Methods To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors’ asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. Results With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. Conclusions We reveal that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors’ trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. 
These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. PMID:24278146

  10. Agent-based model with asymmetric trading and herding for complex financial systems.

    PubMed

    Chen, Jun-Jie; Zheng, Bo; Tan, Lei

    2013-01-01

    For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors' asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. We reveal that for the leverage and anti-leverage effects, both the investors' asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors' trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. 
These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries.
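
    The leverage effect targeted by the model can be measured with the standard return-volatility correlation estimator. The synthetic return series below uses a simple volatility-feedback rule as a stand-in for the agent-based dynamics; all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
eps = rng.standard_normal(n)

# Synthetic returns with a leverage-style feedback: an EWMA of past
# returns (s) lowers volatility when positive and raises it when negative.
r = np.empty(n)
s = 0.0
for t in range(n):
    vol = 1.0 - 0.4 * np.tanh(s)
    r[t] = vol * eps[t]
    s = 0.97 * s + 0.1 * r[t]

def leverage(r, t):
    """Return-volatility correlation L(t) = <r(0) r(t)^2> / <r^2>^2."""
    z = (r ** 2).mean() ** 2
    return (r[:-t] * r[t:] ** 2).mean() / z

print(leverage(r, 1), leverage(r, 50))   # negative and decaying with lag
```

    A negative L(t) that decays with lag is the leverage effect; an anti-leverage market would show positive values, which in this toy setup corresponds to flipping the sign of the feedback term.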

  11. Numerical Modelling of Effects of Biphasic Layers of Corrosion Products to the Degradation of Magnesium Metal In Vitro

    PubMed Central

    Ahmed, Safia K.; Ward, John P.; Liu, Yang

    2017-01-01

    Magnesium (Mg) is becoming increasingly popular as an orthopaedic implant material. Its mechanical properties are closer to those of bone than those of other implant materials, allowing for more natural healing under the stresses experienced during recovery. Being biodegradable, it also eliminates the requirement of further surgery to remove the hardware. However, Mg rapidly corrodes in clinically relevant aqueous environments, compromising its use. This problem can be addressed by alloying the Mg, but challenges remain in optimising the properties of the material for clinical use. In this paper, we present a mathematical model to provide a systematic means of quantitatively predicting Mg corrosion in aqueous environments, providing a means of informing standardisation of in vitro investigation of Mg alloy corrosion to determine implant design parameters. The model describes corrosion through reactions with water, to produce magnesium hydroxide Mg(OH)2, and subsequently with carbon dioxide to form magnesium carbonate MgCO3. The corrosion products produce distinct protective layers around the magnesium block that are modelled as porous media. The resulting model of advection–diffusion equations with multiple moving boundaries was solved numerically using asymptotic expansions to deal with singular cases. The model has few free parameters, and it is shown that these can be tuned to predict a full range of corrosion rates, reflecting differences between pure magnesium and magnesium alloys. Data from practicable in vitro experiments can be used to calibrate the model’s free parameters, from which model simulations using in vivo relevant geometries provide a cheap first step in optimising Mg-based implant materials. PMID:29267244

  12. A Diffusion Model Analysis of Episodic Recognition in Individuals with a Family History for Alzheimer Disease: The Adult Children Study

    PubMed Central

    Aschenbrenner, Andrew J.; Balota, David A.; Gordon, Brian A.; Ratcliff, Roger; Morris, John C.

    2015-01-01

    Objective A family history of Alzheimer disease (AD) increases the risk of developing AD and can influence the accumulation of well-established AD biomarkers. There is some evidence that family history can influence episodic memory performance even in cognitively normal individuals. We attempted to replicate the effect of family history on episodic memory and used a specific computational model of binary decision making (the diffusion model) to understand precisely how family history influences cognition. Finally, we assessed the sensitivity of model parameters to family history controlling for standard neuropsychological test performance. Method Across two experiments, cognitively healthy participants from the Adult Children Study completed an episodic recognition test consisting of high and low frequency words. The diffusion model was applied to decompose accuracy and reaction time into latent parameters which were analyzed as a function of family history. Results In both experiments, individuals with a family history of AD exhibited lower recognition accuracy and this occurred in the absence of an apolipoprotein E (APOE) ε4 allele. The diffusion model revealed this difference was due to changes in the quality of information accumulation (the drift rate) and not differences in response caution or other model parameters. This difference remained after controlling for several standard neuropsychological tests. Conclusions These results confirm that the presence of a family history of AD confers a subtle cognitive deficit in episodic memory as reflected by decreased drift rate that cannot be attributed to APOE. This measure may serve as a novel cognitive marker of preclinical AD. PMID:26192539
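
    The diffusion model decomposition used above can be sketched by direct simulation: lowering only the drift rate reduces accuracy while response caution (the boundary separation) is held fixed. Parameters are illustrative, not fitted values from the study.

```python
import numpy as np

def ddm(v, a=1.0, s=1.0, dt=0.002, n=2000, seed=1):
    """Simulate the diffusion model: evidence starts at a/2 and drifts at
    rate v with noise s until it crosses 0 (error) or a (correct)."""
    rng = np.random.default_rng(seed)
    correct, rts = 0, []
    for _ in range(n):
        x, t = a / 2, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct += x >= a
        rts.append(t)
    return correct / n, np.mean(rts)

acc_hi, rt_hi = ddm(v=1.5)   # higher drift rate: fast and accurate
acc_lo, rt_lo = ddm(v=0.5)   # lower drift rate: accuracy drops at the
                             # same boundary (response-caution) setting
print(acc_hi, acc_lo)
```

    Fitting such a model to accuracy and reaction-time distributions is what lets the drift rate be separated from response caution, which is how the study isolates the family-history effect.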

  13. Rough parameter dependence in climate models and the role of Ruelle-Pollicott resonances.

    PubMed

    Chekroun, Mickaël David; Neelin, J David; Kondrashov, Dmitri; McWilliams, James C; Ghil, Michael

    2014-02-04

    Despite the importance of uncertainties encountered in climate model simulations, the fundamental mechanisms at the origin of sensitive behavior of long-term model statistics remain unclear. Variability of turbulent flows in the atmosphere and oceans exhibits recurrent large-scale patterns. These patterns, while evolving irregularly in time, manifest characteristic frequencies across a large range of time scales, from intraseasonal through interdecadal. Based on modern spectral theory of chaotic and dissipative dynamical systems, the associated low-frequency variability may be formulated in terms of Ruelle-Pollicott (RP) resonances. RP resonances encode information on the nonlinear dynamics of the system, and an approach for estimating them--as filtered through an observable of the system--is proposed. This approach relies on an appropriate Markov representation of the dynamics associated with a given observable. It is shown that, within this representation, the spectral gap--defined as the distance between the subdominant RP resonance and the unit circle--plays a major role in the roughness of parameter dependences. The model statistics are the most sensitive for the smallest spectral gaps; such small gaps turn out to correspond to regimes where the low-frequency variability is more pronounced, whereas autocorrelations decay more slowly. The present approach is applied to analyze the rough parameter dependence encountered in key statistics of an El-Niño-Southern Oscillation model of intermediate complexity. Theoretical arguments, however, strongly suggest that such links between model sensitivity and the decay of correlation properties are not limited to this particular model and could hold much more generally.
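
    The Markov representation underlying the RP-resonance estimate can be sketched with an Ulam-type construction: bin an observable's time series, estimate the transition matrix, and read the spectral gap off its subdominant eigenvalue. The AR(1) surrogates below are illustrative stand-ins, not the ENSO model.

```python
import numpy as np

def spectral_gap(series, nbins=20):
    """Ulam-type Markov representation of an observable: bin the series,
    estimate the transition matrix, return 1 - |subdominant eigenvalue|."""
    edges = np.quantile(series, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, series) - 1, 0, nbins - 1)
    P = np.zeros((nbins, nbins))
    for i, j in zip(idx[:-1], idx[1:]):
        P[i, j] += 1.0
    P /= P.sum(axis=1, keepdims=True)
    ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - ev[1]

rng = np.random.default_rng(3)
def ar1(phi, n=50_000):
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

g_slow = spectral_gap(ar1(0.98))   # slowly decorrelating observable
g_fast = spectral_gap(ar1(0.3))    # rapidly decorrelating observable
print(g_slow, g_fast)              # small gap <-> slow correlation decay
```

    The slowly decorrelating series yields a subdominant eigenvalue near the unit circle (small gap), the regime where the paper finds long-term statistics most sensitive to parameter changes.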

  14. Rough parameter dependence in climate models and the role of Ruelle-Pollicott resonances

    PubMed Central

    Chekroun, Mickaël David; Neelin, J. David; Kondrashov, Dmitri; McWilliams, James C.; Ghil, Michael

    2014-01-01

    Despite the importance of uncertainties encountered in climate model simulations, the fundamental mechanisms at the origin of sensitive behavior of long-term model statistics remain unclear. Variability of turbulent flows in the atmosphere and oceans exhibits recurrent large-scale patterns. These patterns, while evolving irregularly in time, manifest characteristic frequencies across a large range of time scales, from intraseasonal through interdecadal. Based on modern spectral theory of chaotic and dissipative dynamical systems, the associated low-frequency variability may be formulated in terms of Ruelle-Pollicott (RP) resonances. RP resonances encode information on the nonlinear dynamics of the system, and an approach for estimating them—as filtered through an observable of the system—is proposed. This approach relies on an appropriate Markov representation of the dynamics associated with a given observable. It is shown that, within this representation, the spectral gap—defined as the distance between the subdominant RP resonance and the unit circle—plays a major role in the roughness of parameter dependences. The model statistics are the most sensitive for the smallest spectral gaps; such small gaps turn out to correspond to regimes where the low-frequency variability is more pronounced, whereas autocorrelations decay more slowly. The present approach is applied to analyze the rough parameter dependence encountered in key statistics of an El-Niño–Southern Oscillation model of intermediate complexity. Theoretical arguments, however, strongly suggest that such links between model sensitivity and the decay of correlation properties are not limited to this particular model and could hold much more generally. PMID:24443553

  15. Seasonal Influenza Forecasting in Real Time Using the Incidence Decay With Exponential Adjustment Model.

    PubMed

    Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N

    2017-01-01

    Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were unreliable when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
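
    The IDEA model is simple enough to fit in closed form: since log I(t) = t·log R0 − t²·log(1+d), both parameters come from a linear least-squares fit, and the projected peak follows directly. The synthetic season below is illustrative (noise-free invented values), not the Canadian surveillance data.

```python
import numpy as np

# IDEA model: I(t) = (R0 / (1 + d)^t)^t, with t in generation intervals.
def idea(R0, d, t):
    return (R0 / (1 + d) ** t) ** t

R0_true, d_true = 1.4, 0.02        # illustrative values only
t = np.arange(1, 16)               # first 15 generation intervals
obs = idea(R0_true, d_true, t)     # noise-free synthetic incidence

# log I(t) = t*log(R0) - t^2*log(1+d): linear in (t, t^2), so fit directly.
X = np.column_stack([t, -t**2])
coef, *_ = np.linalg.lstsq(X, np.log(obs), rcond=None)
R0_hat, d_hat = np.exp(coef[0]), np.exp(coef[1]) - 1

# Projected peak: where d/dt [t*log(R0) - t^2*log(1+d)] = 0.
t_peak = np.log(R0_hat) / (2 * np.log(1 + d_hat))
print(R0_hat, d_hat, t_peak)
```

    With noisy weekly counts, the same fit is simply re-run as each new count arrives, which is the weekly-update procedure the study followed.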

  16. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.

  17. Power spectrum and non-Gaussianities in anisotropic inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Anindya; Kovetz, Ely D.; Paban, Sonia, E-mail: anindya@physics.utexas.edu, E-mail: elykovetz@gmail.com, E-mail: paban@physics.utexas.edu

    2014-06-01

    We study the planar regime of curvature perturbations for single field inflationary models in an axially symmetric Bianchi I background. In a theory with standard scalar field action, the power spectrum for such modes has a pole as the planarity parameter goes to zero. We show that constraints from back reaction lead to a strong lower bound on the planarity parameter for high-momentum planar modes and use this bound to calculate the signal-to-noise ratio of the anisotropic power spectrum in the CMB, which in turn places an upper bound on the Hubble scale during inflation allowed in our model. We find that non-Gaussianities for these planar modes are enhanced for the flattened triangle and the squeezed triangle configurations, but show that the estimated values of the f_NL parameters remain well below the experimental bounds from the CMB for generic planar modes (other, more promising signatures are also discussed). For a standard action, f_NL from the squeezed configuration turns out to be larger compared to that from the flattened triangle configuration in the planar regime. However, in a theory with higher derivative operators, non-Gaussianities from the flattened triangle can become larger than the squeezed configuration in a certain limit of the planarity parameter.

  18. Caenorhabditis elegans vulval cell fate patterning

    NASA Astrophysics Data System (ADS)

    Félix, Marie-Anne

    2012-08-01

    The spatial patterning of three cell fates in a row of competent cells is exemplified by vulva development in the nematode Caenorhabditis elegans. The intercellular signaling network that underlies fate specification is well understood, yet quantitative aspects remain to be elucidated. Quantitative models of the network allow us to test the effect of parameter variation on the cell fate pattern output. Among the parameter sets that allow us to reach the wild-type pattern, two general developmental patterning mechanisms of the three fates can be found: sequential inductions and morphogen-based induction, the former being more robust to parameter variation. Experimentally, the vulval cell fate pattern is robust to stochastic and environmental challenges, and minor variants can be detected. The exception is the fate of the anterior cell, P3.p, which is sensitive to stochastic variation and spontaneous mutation, and is also evolving the fastest. Other vulval precursor cell fates can be affected by mutation, yet little natural variation can be found, suggesting stabilizing selection. Despite this fate pattern conservation, different Caenorhabditis species respond differently to perturbations of the system. In the quantitative models, different parameter sets can reconstitute their response to perturbation, suggesting that network variation among Caenorhabditis species may be quantitative. Network rewiring likely occurred at longer evolutionary scales.

  19. Micromechanical investigation of sand migration in gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Uchida, S.; Klar, A.; Cohen, E.

    2017-12-01

    Past field gas production tests from hydrate-bearing sediments have indicated that sand migration is an important phenomenon that needs to be considered for successful long-term gas production. The authors previously developed a continuum-based analytical thermo-hydro-mechanical sand migration model that can be applied to predict wellbore responses during gas production. However, the parameters involved in the model still need to be calibrated and studied thoroughly, and it remains a challenge to conduct well-defined laboratory experiments of sand migration, especially in hydrate-bearing sediments. Taking advantage of the capability of the micromechanical modelling approach through the discrete element method (DEM), this work presents a first step towards quantifying one of the model parameters, which governs stress reduction due to grain detachment. Grains represented by DEM particles are randomly removed from an isotropically loaded DEM specimen, and statistical analyses reveal that linear proportionality exists between the normalized volume of detached solids and the normalized stress reduction. DEM specimens with different porosities (different packing densities) are also considered, and statistical analyses show that there is a clear transition between loose-sand and dense-sand behavior, characterized by the relative density.

  20. MODELING THE SOLAR WIND AT THE ULYSSES , VOYAGER , AND NEW HORIZONS SPACECRAFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T. K.; Pogorelov, N. V.; Zank, G. P.

The outer heliosphere is a dynamic region shaped largely by the interaction between the solar wind and the interstellar medium. While interplanetary magnetic field and plasma observations by the Voyager spacecraft have significantly improved our understanding of this vast region, modeling the outer heliosphere still remains a challenge. We simulate the three-dimensional, time-dependent solar wind flow from 1 to 80 astronomical units (au), where the solar wind is assumed to be supersonic, using a two-fluid model in which protons and interstellar neutral hydrogen atoms are treated as separate fluids. We use 1 day averages of the solar wind parameters from the OMNI data set as inner boundary conditions to reproduce time-dependent effects in a simplified manner which involves interpolation in both space and time. Our model generally agrees with Ulysses data in the inner heliosphere and Voyager data in the outer heliosphere. Ultimately, we present the model solar wind parameters extracted along the trajectory of the New Horizons spacecraft. We compare our results with in situ plasma data taken between 11 and 33 au and at the closest approach to Pluto on 2015 July 14.

  1. Ground Motion Prediction Models for Caucasus Region

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. Model estimation is carried out in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.

  2. Dynamic Model Predicting Overweight, Obesity, and Extreme Obesity Prevalence Trends

    PubMed Central

    Thomas, Diana M.; Weedermann, Marion; Fuemmeler, Bernard F.; Martin, Corby K.; Dhurandhar, Nikhil V.; Bredlau, Carl; Heymsfield, Steven B.; Ravussin, Eric; Bouchard, Claude

    2013-01-01

    Objective Obesity prevalence in the United States (US) appears to be leveling, but the reasons behind the plateau remain unknown. Mechanistic insights can be provided from a mathematical model. The objective of this study is to model known multiple population parameters associated with changes in body mass index (BMI) classes and to establish conditions under which obesity prevalence will plateau. Design and Methods A differential equation system was developed that predicts population-wide obesity prevalence trends. The model considers both social and non-social influences on weight gain, incorporates other known parameters affecting obesity trends, and allows for country specific population growth. Results The dynamic model predicts that: obesity prevalence is a function of birth rate and the probability of being born in an obesogenic environment; obesity prevalence will plateau independent of current prevention strategies; and the US prevalence of obesity, overweight, and extreme obesity will plateau by about 2030 at 28%, 32%, and 9%, respectively. Conclusions The US prevalence of obesity is stabilizing and will plateau, independent of current preventative strategies. This trend has important implications in accurately evaluating the impact of various anti-obesity strategies aimed at reducing obesity prevalence. PMID:23804487

  3. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies in modeling microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  4. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1996-05-01

The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, Al, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/Al/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code NLQPEB/GLO was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model. © 1996 American Institute of Physics.

  5. "Horseshoe" Structures in the Debris Disks of Planet-Hosting Binary Stars

    NASA Astrophysics Data System (ADS)

    Demidova, T. V.

    2018-03-01

    The formation of a planetary system from the protoplanetary disk leads to destruction of the latter; however, a debris disk can remain in the form of asteroids and cometary material. The motion of planets can cause the formation of coorbital structures from the debris disk matter. Previous calculations have shown that such a ring-like structure is more stable if there is a binary star in the center of the system, as opposed to a single star. To analyze the properties of the coorbital structure, we have calculated a grid of models of binary star systems with a circumbinary planet moving in a planetesimal disk. The calculations are performed considering circular orbits of the stars and the planet; the mass and position of the planet, as well as the mass ratio of the stars, are varied. The analysis of the models shows that the width of the coorbital ring and its stability significantly depend on the initial parameters of the problem. Additionally, the empirical dependences of the width of the coorbital structure on the parameters of the system have been obtained, and the parameters of the models with the most stable coorbital structures have been determined. The results of the present study can be used for the search of planets around binary stars with debris disks.

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF, and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
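The pulse superposition described above can be sketched numerically. The following is a minimal, hypothetical illustration (all parameter values invented, not from the paper): it synthesizes a shot-noise signal from exponentially decaying pulses with Poisson arrivals and exponentially distributed amplitudes, for which the stationary distribution is a Gamma distribution with shape γ = τ_d/τ_w, so the intermittency parameter can be recovered from the first two moments.

```python
import numpy as np

# Hypothetical sketch (invented parameter values): a shot-noise process built
# from uncorrelated, exponentially decaying pulses with Poisson arrivals and
# exponentially distributed amplitudes. For this choice the stationary PDF is
# a Gamma distribution with shape gamma = tau_d / tau_w.
rng = np.random.default_rng(0)
tau_d, tau_w, dt, N = 1.0, 0.1, 0.01, 300_000   # pulse decay, mean wait, step, samples
decay = np.exp(-dt / tau_d)

# Deposit pulse amplitudes at Poisson arrival times on the time grid.
n_pulses = rng.poisson(N * dt / tau_w)
injected = np.zeros(N)
np.add.at(injected, rng.integers(0, N, n_pulses), rng.exponential(1.0, n_pulses))

# A recursive exponential filter turns the impulses into decaying pulses.
signal = np.empty(N)
acc = 0.0
for i in range(N):
    acc = acc * decay + injected[i]
    signal[i] = acc

gamma_est = signal.mean() ** 2 / signal.var()   # Gamma shape = mean^2 / variance
print(gamma_est)                                # should be near tau_d / tau_w = 10
```

For non-positive-definite amplitude distributions the same synthetic-data approach applies, but, as the abstract notes, the fit must then target the empirical characteristic function rather than a closed-form PDF.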

  7. Towards a supersymmetric description of the Fermi Galactic center excess

    DOE PAGES

    Cahill-Rowley, M.; Gainer, J. S.; Hewett, J. L.; ...

    2015-02-10

We attempt to build a model that describes the Fermi galactic gamma-ray excess (FGCE) within a UV-complete Supersymmetric framework; we find this to be highly non-trivial. At the very least a successful Supersymmetric explanation must have several important ingredients in order to fit the data and satisfy other theoretical and experimental constraints. Under the assumption that a single annihilation mediator is responsible for both the observed relic density as well as the FGCE, we show that the requirements are not easily satisfied in many TeV-scale SUSY models, but can be met with some model building effort in the general NMSSM with ~ 10 parameters beyond the MSSM. We find that the data selects a particular region of the parameter space with a mostly singlino lightest Supersymmetric particle and a relatively light CP-odd Higgs boson that acts as the mediator for dark matter annihilation. We study the predictions for various observables within this parameter space, and find that searches for this light CP-odd state at the LHC, as well as searches for the direct detection of dark matter, are likely to be quite challenging. It is possible that a signature could be observed in the flavor sector; however, indirect detection remains the best probe of this scenario.

  8. Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network.

    PubMed

    Pan, Wei-Xing; Schmidt, Robert; Wickens, Jeffery R; Hyland, Brian I

    2005-06-29

    Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments.
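The role of eligibility traces (λ) and learning rate (α) described above can be illustrated with a minimal tabular TD(λ) simulation. This is not the authors' model code; all state-space and parameter values are invented for illustration. With long-lasting traces and a low learning rate, the prediction error at reward time decays gradually across trials, so reward responses persist while values at the cue develop.

```python
import numpy as np

# Minimal tabular TD(lambda) sketch (illustrative values, not the authors'
# model): a cue at step 0 is followed by a reward at the final step. With
# long-lasting eligibility traces (lam near 1) and a low learning rate
# (alpha), the prediction error at reward time shrinks only gradually.
def td_lambda(n_trials, n_steps=10, alpha=0.05, lam=0.9, gamma=0.98):
    V = np.zeros(n_steps + 1)            # state values; terminal value stays 0
    delta_at_reward = []
    for _ in range(n_trials):
        e = np.zeros(n_steps + 1)        # eligibility traces
        for s in range(n_steps):
            r = 1.0 if s == n_steps - 1 else 0.0   # reward on the last step
            delta = r + gamma * V[s + 1] - V[s]    # TD prediction error
            e[s] += 1.0                  # mark the current state as eligible
            V += alpha * delta * e       # update all eligible states
            e *= gamma * lam             # decay the traces
            if r:
                delta_at_reward.append(delta)
    return V, delta_at_reward

V, deltas = td_lambda(200)
print(deltas[0], deltas[-1])   # reward-time error decays gradually across trials
```

With λ near zero the reward-time error would instead vanish as soon as the immediately preceding state acquires value, reproducing the stepwise latency shift of earlier TD accounts that the recordings contradict.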

  9. Laser effects based optimal laser parameter identifications for paint removal from metal substrate at 1064 nm: a multi-pulse model

    NASA Astrophysics Data System (ADS)

    Han, Jinghua; Cui, Xudong; Wang, Sha; Feng, Guoying; Deng, Guoliang; Hu, Ruifeng

    2017-10-01

Paint removal by laser ablation is favoured among cleaning techniques due to its high efficiency. How to predict the optimal laser parameters without producing damage to the substrate remains challenging for accurate paint stripping. On the basis of ablation morphologies, and combining experiments with numerical modelling, the underlying mechanisms and the optimal conditions for paint removal by laser ablation are thoroughly investigated. Our studies suggest that laser paint removal is dominated by the laser vaporization effect, the thermal stress effect, and the laser plasma effect, of which the thermal stress effect is the most favoured while the laser plasma effect should be avoided during removal operations. Based on the thermodynamic equations, we numerically evaluated the spatial distribution of temperature as well as thermal stress in the paint and substrate under irradiation by a laser pulse at 1064 nm. The obtained curves of paint thickness vs. threshold fluence can serve as a reference for laser parameter selection for paint layers of different thicknesses. A multi-pulse model is proposed and validated under a constant laser fluence to completely remove a thicker paint layer. The investigations and methods proposed here may inform efficient paint removal operations while lowering the risk of substrate damage.

  10. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
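As a concrete, hypothetical illustration of NHPP-based SRM fitting (synthetic data, not from the paper): the classical Goel-Okumoto model has mean value function m(t) = a(1 − e^{−bt}), and fitting it to cumulative fault counts yields the expected total number of faults a, and hence the number remaining. The equilibrium-distribution variants proposed in the paper modify the fault-detection time distribution but are fitted in the same spirit.

```python
import numpy as np

# Hypothetical sketch: fit the Goel-Okumoto NHPP mean value function
# m(t) = a * (1 - exp(-b t)) to synthetic cumulative fault-count data by
# least squares (a coarse grid search, for clarity over speed).
t = np.arange(1.0, 11.0)                                  # test weeks (synthetic)
faults = np.array([12, 21, 27, 32, 35, 38, 40, 41, 42, 43], float)

def mvf(t, a, b):
    return a * (1.0 - np.exp(-b * t))

best = None
for a_try in np.linspace(40.0, 60.0, 201):
    for b_try in np.linspace(0.05, 1.0, 191):
        sse = np.sum((faults - mvf(t, a_try, b_try)) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a_try, b_try)
sse, a, b = best

remaining = a - faults[-1]   # expected number of faults not yet detected
print(round(a, 1), round(b, 2), round(remaining, 1))
```

In practice maximum likelihood estimation is preferred over least squares for NHPP models; the grid search above merely makes the fitting idea explicit.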

  11. Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)

    NASA Technical Reports Server (NTRS)

    Perry, Bruce; Anderson, Molly

    2014-01-01

Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn is dependent on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include: relaxing the assumption of total condensation; incorporating dynamic simulation capability for the buildup of dissolved inert gases in condensers; examining CDS operation with more complex feeds; and extending heat transfer analysis to all surfaces.

  12. Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Fischbach, S. R.

    2015-01-01

Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework. Employing this more accurate global energy balance requires a higher fidelity model of the SRM flow-field and acoustic mode shapes. The current industry standard analysis tool utilizes a one-dimensional analysis of the time-dependent fluid dynamics along with a quasi-three-dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one-dimensional homogeneous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. This work builds upon previous efforts to verify the use of the acoustic velocity potential equation (AVPE) laid out by Campos. The acoustic velocity potential (ψ) describing the acoustic wave motion in the presence of an inhomogeneous steady high-speed flow is defined by ∇²ψ − (λ/c)²ψ − M·[M·∇(∇ψ)] − 2(λM/c + M·∇M)·∇ψ − 2λψ[M·∇(1/c)] = 0, with M the Mach vector, c the speed of sound, and λ the complex eigenvalue.
The study requires one-way coupling of the CFD High Mach Number Flow (HMNF) and mathematics modules. The HMNF module evaluates the gas flow inside an SRM using St. Robert's law to model the solid propellant burn rate, slip boundary conditions, and the supersonic outflow condition. Results from the HMNF model are verified by comparing the pertinent ballistics parameters with the industry standard code outputs (i.e., pressure drop, axial velocity, exit velocity). These results are then used by the coefficient form of the mathematics module to determine the complex eigenvalues of the AVPE. The mathematics model is truncated at the nozzle sonic line, where a zero flux boundary condition is self-satisfying. The remaining boundaries are modeled with a zero flux boundary condition, assuming zero acoustic absorption on all surfaces. The one-way coupled analysis is performed four times utilizing geometries determined through traditional SRM modeling procedures. The results of the steady-state CFD and AVPE analyses are used to calculate the linear acoustic growth rate as defined by Flandro and Jacob. In order to verify the process implemented within COMSOL, we first employ the Culick theory and compare the results with the industry standard. After the process is verified, the Flandro/Jacob energy balance theory is employed and the results are displayed.

  13. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence.

    PubMed

    Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B

    2018-01-01

Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer, while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and the flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
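The calibration step described above can be sketched as follows (synthetic signals and invented constants, not the authors' data or code): a power-law model, flow = k·envelope^n, is fitted in log-log space on a single calibration recording and then used to predict flow from audio alone.

```python
import numpy as np

# Hypothetical sketch of the calibration step: fit a power-law relationship
# flow = k * envelope**n between an inhalation's acoustic (amplitude)
# envelope and the simultaneously recorded flow signal, via log-log least
# squares on one recording. All signal values here are synthetic.
rng = np.random.default_rng(2)
envelope = np.linspace(0.05, 1.0, 200)                  # calibration envelope (a.u.)
flow = 120.0 * envelope ** 0.6 * (1 + 0.02 * rng.standard_normal(200))  # L/min

# log(flow) = log(k) + n * log(envelope): a straight-line fit in log-log space.
n, log_k = np.polyfit(np.log(envelope), np.log(flow), 1)
k = np.exp(log_k)

# Predict the flow profile of a new inhalation from its audio envelope alone.
new_envelope = np.array([0.1, 0.4, 0.9])
predicted_flow = k * new_envelope ** n
print(round(k, 1), round(n, 2))
```

In the study's leave-one-out scheme, this single-recording fit would be repeated with each of the 15 recordings as the calibration signal, testing against the remaining 14.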

  14. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    PubMed Central

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer, while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and the flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430

  15. DEVELOPMENT OF A POPULATION BALANCE MODEL TO SIMULATE FRACTIONATION OF GROUND SWITCHGRASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, L.J.; Bi, X.T.; Lau, A.K.

The population balance model represents a time-dependent formulation of mass conservation for a ground biomass that flows through a set of sieves. The model is suitable for predicting the change in size and distribution of ground biomass while taking into account the flow rate processes of particles through a grinder. This article describes the development and application of this model to a switchgrass grinding operation. The mass conservation formulation of the model contains two parameters: breakage rate and breakage ratio. A laboratory knife mill was modified to act as a batch or flow-through grinder. The ground switchgrass was analyzed over a set of six Tyler sieves with apertures ranging from 5.66 mm (top sieve) to 1 mm (bottom sieve). The breakage rate was estimated from the sieving tests. For estimating the breakage ratio, each of the six fractions was further ground and sieved to 11 fractions on a set of sieves with apertures ranging from 5.66 to 0.25 mm (and pan). These data formed a matrix of values for determining the breakage ratio. Using the two estimated parameters, the transient population balance model was solved numerically. Results indicated that the population balance model generally underpredicted the fractions remaining on sieves with 5.66, 4.00, and 2.83 mm apertures and overpredicted fractions remaining on sieves with 2.00, 1.41, and 1.00 mm apertures. These trends were similar for both the batch and flow-through grinder configurations. The root mean square of residuals (RSE), representing the difference between experimental and simulated mass of fractions, was 0.32 g for batch grinding and 0.1 g for flow-through grinding. The breakage rate exhibited a linear function of the logarithm of particle size, with a regression coefficient of 0.99.
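A minimal numerical sketch of such a transient population balance follows (illustrative rates and breakage ratios, not the paper's calibrated values): each sieve class loses mass at its breakage rate S_i and receives a breakage-ratio-weighted share of the mass broken out of coarser classes, with the breakage rate taken as linear in the logarithm of particle size, as the paper's regression found.

```python
import numpy as np

# Hypothetical sketch (illustrative values, not the paper's calibration) of
# the transient population balance for grinding:
#   dm_i/dt = -S_i m_i + sum_{j<i} b_{ij} S_j m_j
# where S_i is the breakage rate of sieve class i and b_{ij} the breakage
# ratio (fraction of material broken out of class j landing in class i).
sizes = np.array([5.66, 4.00, 2.83, 2.00, 1.41, 1.00])   # Tyler apertures, mm
S = 0.2 + 0.5 * np.log(sizes / sizes[-1]) / np.log(sizes[0] / sizes[-1])  # 1/min
S[-1] = 0.0                                # the finest class does not break further

b = np.zeros((6, 6))
for j in range(5):
    for i in range(j + 1, 6):
        b[i, j] = 1.0 / (5 - j)            # uniform split into all finer classes

m = np.array([100.0, 0, 0, 0, 0, 0])       # g; all mass starts on the top sieve
dt, t_end = 0.01, 10.0                     # min
for _ in range(int(t_end / dt)):
    m = m + dt * (-S * m + b @ (S * m))    # forward Euler step; mass is conserved

print(m.round(2), round(m.sum(), 2))
```

Because each column of b sums to one (and the finest class has zero breakage rate), total mass is conserved at every step, mirroring the mass-conservation formulation described in the abstract.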

  16. Norms and values in sociohydrological models

    NASA Astrophysics Data System (ADS)

    Roobavannan, Mahendran; van Emmerik, Tim H. M.; Elshafei, Yasmina; Kandasamy, Jaya; Sanderson, Matthew R.; Vigneswaran, Saravanamuthu; Pande, Saket; Sivapalan, Murugesu

    2018-02-01

    Sustainable water resources management relies on understanding how societies and water systems coevolve. Many place-based sociohydrology (SH) modeling studies use proxies, such as environmental degradation, to capture key elements of the social component of system dynamics. Parameters of assumed relationships between environmental degradation and the human response to it are usually obtained through calibration. Since these relationships are not yet underpinned by social-science theories, confidence in the predictive power of such place-based sociohydrologic models remains low. The generalizability of SH models therefore requires major advances in incorporating more realistic relationships, underpinned by appropriate hydrological and social-science data and theories. The latter is a critical input, since human culture - especially values and norms arising from it - influences behavior and the consequences of behaviors. This paper reviews a key social-science theory that links cultural factors to environmental decision-making, assesses how to better incorporate social-science insights to enhance SH models, and raises important questions to be addressed in moving forward. This is done in the context of recent progress in sociohydrological studies and the gaps that remain to be filled. The paper concludes with a discussion of challenges and opportunities in terms of generalization of SH models and the use of available data to allow future prediction and model transfer to ungauged basins.

  17. Concurrent progressive ratio schedules: Effects of reinforcer probability on breakpoint and response allocation.

    PubMed

    Jarmolowicz, David P; Sofis, Michael J; Darden, Alexandria C

    2016-07-01

Although progressive ratio (PR) schedules have been used to explore the effects of a range of reinforcer parameters (e.g., magnitude, delay), the effects of reinforcer probability remain underexplored. The present project used independently progressing concurrent PR PR schedules to examine effects of reinforcer probability on PR breakpoint (the highest completed ratio prior to a session-terminating 300-s pause) and response allocation. The probability of reinforcement on one lever remained at 100% across all conditions, while the probability of reinforcement on the other lever was systematically manipulated (i.e., 100%, 50%, 25%, 12.5%, and a replication of 25%). Breakpoints systematically decreased with decreasing reinforcer probabilities, while breakpoints on the control lever remained unchanged. Patterns of switching between the two levers were well described by a choice-by-choice unit price model that accounted for the hyperbolic discounting of the value of probabilistic reinforcers. Copyright © 2016 Elsevier B.V. All rights reserved.
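The hyperbolic discounting of probabilistic reinforcers referred to above is commonly written V = A / (1 + hθ), where θ = (1 − p)/p is the odds against delivery. A tiny sketch (the discounting parameter h is illustrative, not fitted to the study's data):

```python
# Hyperbolic probability discounting sketch: the subjective value of a
# probabilistic reinforcer falls with the odds against its delivery,
# theta = (1 - p) / p. The discounting parameter h is illustrative.
def discounted_value(amount, p, h=2.0):
    theta = (1.0 - p) / p                # odds against reinforcement
    return amount / (1.0 + h * theta)

for p in (1.0, 0.5, 0.25, 0.125):        # the probabilities used in the study
    print(p, round(discounted_value(1.0, p), 3))
```

Dividing the fixed response requirement by such a discounted value yields a subjective unit price, which is the kind of quantity the choice-by-choice unit price model compares across levers.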

  18. A CATALOG OF BULGE+DISK DECOMPOSITIONS AND UPDATED PHOTOMETRY FOR 1.12 MILLION GALAXIES IN THE SLOAN DIGITAL SKY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simard, Luc; McConnachie, Alan W.; Trevor Mendel, J.

We perform two-dimensional, point-spread-function-convolved, bulge+disk decompositions in the g and r bandpasses on a sample of 1,123,718 galaxies from the Legacy area of the Sloan Digital Sky Survey Data Release Seven. Four different decomposition procedures are investigated which make improvements to sky background determinations and object deblending over the standard SDSS procedures that lead to more robust structural parameters and integrated galaxy magnitudes and colors, especially in crowded environments. We use a set of science-based quality assurance metrics, namely, the disk luminosity-size relation, the galaxy color-magnitude diagram, and the galaxy central (fiber) colors to show the robustness of our structural parameters. The best procedure utilizes simultaneous, two-bandpass decompositions. Bulge and disk photometric errors remain below 0.1 mag down to bulge and disk magnitudes of g ≈ 19 and r ≈ 18.5. We also use and compare three different galaxy fitting models: a pure Sérsic model, an n_b = 4 bulge + disk model, and a Sérsic (free n_b) bulge + disk model. The most appropriate model for a given galaxy is determined by the F-test probability. All three catalogs of measured structural parameters, rest-frame magnitudes, and colors are publicly released here. These catalogs should provide an extensive comparison set for a wide range of observational and theoretical studies of galaxies.

  19. A global resource allocation strategy governs growth transition kinetics of Escherichia coli

    PubMed Central

    Erickson, David W; Schink, Severin J.; Patsalo, Vadim; Williamson, James R.; Gerland, Ulrich; Hwa, Terence

    2018-01-01

    A grand challenge of systems biology is to predict the kinetic responses of living systems to perturbations starting from the underlying molecular interactions. Changes in the nutrient environment have long been used to study regulation and adaptation phenomena in microorganisms [1-3], and they remain a topic of active investigation [4-11]. Although much is known about the molecular interactions that govern the regulation of key metabolic processes in response to applied perturbations [12-17], they are insufficiently quantified for predictive bottom-up modelling. Here we develop a top-down approach, expanding the recently established coarse-grained proteome allocation models [15,18-20] from steady-state growth into the kinetic regime. Using only qualitative knowledge of the underlying regulatory processes and imposing the condition of flux balance, we derive a quantitative model of bacterial growth transitions that is independent of inaccessible kinetic parameters. The resulting flux-controlled regulation model accurately predicts the time course of gene expression and biomass accumulation in response to carbon upshifts and downshifts (for example, diauxic shifts) without adjustable parameters. As predicted by the model and validated by quantitative proteomics, cells exhibit suboptimal recovery kinetics in response to nutrient shifts owing to a rigid strategy of protein synthesis allocation, which is not directed towards alleviating specific metabolic bottlenecks. Our approach does not rely on kinetic parameters, and therefore points to a theoretical framework for describing a broad range of such kinetic processes without detailed knowledge of the underlying biochemical reactions. PMID:29072300

  20. Stochastic, goal-oriented rapid impact modeling of uncertainty and environmental impacts in poorly-sampled sites using ex-situ priors

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Tan, Benjamin; Chen, Ziyang; Sege, Jon; Wang, Changhong; Rubin, Yoram

    2018-01-01

    Modeling the uncertainty associated with subsurface dynamics has long been a major research topic, and its significance is widely recognized for real-life applications. Despite the huge effort invested in the area, major obstacles still remain on the way from theory to applications. Particularly problematic is the confusion between modeling uncertainty and modeling spatial variability, which translates into a (mis)conception, in fact an inconsistency: it suggests that the two are equivalent and, as such, that modeling uncertainty requires a lot of data. This paper investigates this challenge against the backdrop of a 7 km deep underground tunnel in China, where environmental impacts are of major concern. We approach the data challenge by pursuing a new concept for Rapid Impact Modeling (RIM), which bypasses altogether the need to estimate posterior distributions of model parameters, focusing instead on detailed stochastic modeling of impacts, conditional on all available information, including prior, ex-situ information as well as in-situ measurements. A foundational element of RIM is the construction of informative priors for target parameters using ex-situ data, relying on ensembles of well-documented sites pre-screened for geological and hydrological similarity to the target site. The ensembles are built around two sets of similarity criteria: a physically-based set and an additional set covering epistemic criteria. In another variation on common Bayesian practice, we update the priors to obtain conditional distributions of the target (environmental impact) dependent variables rather than the hydrological variables. This recognizes that goal-oriented site characterization is in many cases more useful in applications than parameter-oriented characterization.

  1. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential, at minimum to guarantee the lifetime specified by the manufacturer and, ideally, to extend it. A precondition for lifetime extension is accurate estimation of SoH, as well as estimation and prediction of Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. This contribution discusses the modeling and selection of suitable lifetime models from a database based on current SoH conditions. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both are capable of modeling stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Here, SoH estimation is conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate SoH estimation, and the model has a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. The accuracy of the proposed models/approaches is assessed by calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), and the related advantages and disadvantages are discussed. The approach is verified using cross-fold validation, exchanging training and test data. The newly introduced data-driven parametric models can be easily established and provide detailed information about remaining useful/consumed lifetime, valid for systems under constant load with stochastically occurring damage.

  2. Development of a finite element model of the middle ear.

    PubMed

    Williams, K R; Blayney, A W; Rice, H J

    1996-01-01

    A representative finite element model of the healthy ear is developed, commencing with a description of the decoupled isotropic tympanic membrane. This model was shown to vibrate in a manner similar to that found both numerically (1, 2) and experimentally (8). The introduction of a fibre system into the membrane matrix significantly altered the modes of vibration. The first mode remains a piston-like movement, as for the isotropic membrane. However, higher modes show a simpler vibration pattern similar to the second mode but with a varying axis of movement and lower amplitudes. The introduction of a malleus and incus does not change the natural frequencies or mode shapes of the membrane for certain support conditions. When constraints are imposed along the ossicular chain by simulation of a cochlear impedance term, significantly altered modes can occur. More recently, a revised model of the ear has been developed by the inclusion of the outer ear canal. This discretisation uses geometries extracted from a nuclear magnetic resonance scan of a healthy subject and a crude inner ear model with stiffness parameters ultimately fixed through a parameter tuning process. The subsequently tuned model showed behaviour consistent with previous findings and should provide a good basis for subsequent modelling of diseased ears and assessment of the performance of middle ear prostheses.

  3. Hepatic transporter drug-drug interactions: an evaluation of approaches and methodologies.

    PubMed

    Williamson, Beth; Riley, Robert J

    2017-12-01

    Drug-drug interactions (DDIs) continue to account for 5% of hospital admissions and therefore remain a major regulatory concern. Effective, quantitative prediction of DDIs will reduce unexpected clinical findings and encourage projects to frontload DDI investigations rather than concentrating on risk management ('manage the baggage') later in drug development. A key challenge in DDI prediction is the discrepancies between reported models. Areas covered: The current synopsis focuses on four recent influential publications on hepatic drug transporter DDIs using static models that tackle interactions with individual transporters and in combination with other drug transporters and metabolising enzymes. These models vary in their assumptions (including input parameters), transparency, reproducibility and complexity. In this review, these facets are compared and contrasted, with recommendations made as to their application. Expert opinion: Over the past decade, static models have evolved from simple [I]/K_i models to incorporate victim and perpetrator disposition mechanisms including the absorption rate constant, the fraction of the drug metabolised/eliminated and/or clearance concepts. Nonetheless, models that comprise additional parameters and complexity do not necessarily outperform simpler models with fewer inputs. Further, consideration of the property space to exploit some drug target classes has also highlighted the fine balance required between frontloading and back-loading studies to design out or 'manage the baggage'.

  4. Transmission probabilities and durations of immunity for three pathogenic group B Streptococcus serotypes

    PubMed Central

    Percha, Bethany; Newman, M. E. J.; Foxman, Betsy

    2012-01-01

    Group B Streptococcus (GBS) remains a major cause of neonatal sepsis and is an emerging cause of invasive bacterial infections. The 9 known serotypes vary in virulence, and there is little cross-immunity. Key parameters for planning an effective vaccination strategy, such as average length of immunity and transmission probabilities by serotype, are unknown. We simulated GBS spread in a population using a computational model with parameters derived from studies of GBS sexual transmission in a college dormitory. Here we provide estimates of the duration of immunity relative to the transmission probabilities for the 3 GBS serotypes most associated with invasive disease: Ia, III, and V. We also place upper limits on the durations of immunity for serotype Ia (570 days), III (1125 days) and V (260 days). Better transmission estimates are required to establish the epidemiological parameters of GBS infection and determine the best vaccination strategies to prevent GBS disease. PMID:21605704
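    The simulation-based estimation described above can be illustrated with a toy agent-based SIRS-style model in which immunity wanes after a fixed duration. All parameter values below are hypothetical and are not the GBS estimates reported in the study:

```python
import random

def simulate_sirs(n=200, beta=0.002, infectious_days=30,
                  immunity_days=300, steps=1000, seed=42):
    """Toy agent-based SIRS model: S -> I by contact with any infected
    host, I -> R after infectious_days, R -> S after immunity_days.
    beta is the per-infected, per-step transmission probability to each
    susceptible (hypothetical value). Returns the prevalence series."""
    random.seed(seed)
    state = ['S'] * n              # 'S', 'I', or 'R'
    timer = [0] * n                # steps left in the I or R state
    state[0], timer[0] = 'I', infectious_days
    prevalence = []
    for _ in range(steps):
        n_inf = state.count('I')   # infected count at start of step
        for i in range(n):
            if state[i] == 'S':
                # chance of escaping all infected contacts: (1-beta)^n_inf
                if random.random() < 1 - (1 - beta) ** n_inf:
                    state[i], timer[i] = 'I', infectious_days
            else:
                timer[i] -= 1
                if timer[i] <= 0:
                    if state[i] == 'I':
                        state[i], timer[i] = 'R', immunity_days
                    else:
                        state[i] = 'S'
        prevalence.append(state.count('I') / n)
    return prevalence

prev = simulate_sirs()
print(f"final prevalence: {prev[-1]:.2f}")
```

    Sweeping beta and immunity_days in such a toy model traces the trade-off the study estimates: longer immunity and lower transmission probability both depress equilibrium prevalence.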

  5. SEE rate estimation based on diffusion approximation of charge collection

    NASA Astrophysics Data System (ADS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction for the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  6. A mathematical model of electrolyte and fluid transport across corneal endothelium.

    PubMed

    Fischbarg, J; Diecke, F P J

    2005-01-01

    To predict the behavior of a transporting epithelium by intuitive means can be complex and frustrating; as the number of parameters to be considered increases beyond a few, the task becomes impossible. The alternative is to model epithelial behavior mathematically. For that to be feasible, it has been presumed that a large amount of experimental information is required, so that known values can be used for the majority of kinetic parameters. In the present case, however, we model corneal endothelial behavior beginning with experimental values for only five of eleven parameters. The remaining parameter values are calculated assuming cellular steady state and using algebraic software. With that as a base, as in preceding treatments but with a distribution of channels/transporters suited to the endothelium, temporal cell and tissue behavior are computed by a program written in Basic that monitors changes in chemical and electrical driving forces across the cell membranes and the paracellular pathway. We find that the program reproduces quite well the behaviors experimentally observed for the translayer electrical potential difference and the rate of fluid transport (a) in the steady state, (b) after perturbations by changes in ambient conditions (HCO3-, Na+, and Cl- concentrations), and (c) after challenge by inhibitors (ouabain, DIDS, Na+- and Cl(-)-channel inhibitors). In addition, we have used the program to compare predictions of translayer fluid transport by two competing theories, electro-osmosis and local osmosis. Only predictions using electro-osmosis fit all the experimental data.

  7. Study of stellar structures in f(R,T) gravity

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Siddiqa, Aisha

    This paper is devoted to the study of compact objects whose pressure and density are related through a polytropic equation of state (EoS) or the MIT bag model (for quark stars) in the background of f(R,T) gravity. We solve the field equations together with the hydrostatic equilibrium equation numerically for the model f(R,T) = R + αR^2 + λT and discuss the physical properties of the resulting solution. It is observed that for both types of stars (polytropic and quark stars), the effects of the model parameters α and λ remain the same. We also find that the energy conditions are satisfied and the stellar configurations are stable for both EoS.

  8. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity/variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness/fidelity.

  9. Effective electrostatic interactions among charged thermo-responsive microgels immersed in a simple electrolyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González-Mozuelos, P.

    This work explores the nature and thermodynamic behavior of the effective electrostatic interactions among charged microgels immersed in a simple electrolyte, taking special interest in the effects due to the thermally induced variation of the microgel size while the remaining parameters (microgel charge and concentration, plus the amount of added salt) are kept constant. To this end, the rigorous approach obtained from applying the precise methodology of the dressed ion theory to the proper definition of the effective direct correlation functions, which emerge from tracing-out the degrees of freedom of the microscopic ions, is employed to provide an exact description of the parameters characterizing such interactions: screening length, effective permittivity, and renormalized charges. A model solution with three components is assumed: large permeable anionic spheres for the microgels, plus small charged hard spheres of equal size for the monovalent cations and anions. The two-body correlations among the components of this model suspension, used as the input for the determination of the effective interaction parameters, are here calculated by using the hyper-netted chain approximation. It is then found that at finite microgel concentrations the values of these parameters change as the microgel size increases, even though the ionic strength of the supporting electrolyte and the bare charge of the microgels remain fixed during this process. The variation of the screening length, as well as that of the effective permittivity, is rather small, but still interesting in view of the fact that the corresponding Debye length stays constant. The renormalized charges, in contrast, increase markedly as the microgels swell. 
The ratio of the renormalized charge to the corresponding analytic result obtained in the context of an extended linear response theory allows us to introduce an effective charge that accounts for the non-linear effects induced by the short-ranged association of microions to the microgels. The behavior of these effective charges as a function of the amount of added salt and the macroion charge, size, and concentration reveals the interplay among all these system parameters.
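    The contrast drawn above between the fixed Debye length and the varying effective screening parameters can be grounded in the standard Debye expression for a simple electrolyte. A sketch with illustrative (hypothetical) values, not those of the model suspension:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_AVO = 6.02214076e23        # Avogadro's number, 1/mol

def debye_length(ionic_strength_molar, eps_r=78.5, temp=298.15):
    """Debye screening length (m) for a simple electrolyte:
    kappa^2 = e^2 * sum_i(n_i z_i^2) / (eps_r * eps0 * kB * T),
    where for a 1:1 salt sum_i(n_i z_i^2) = 2 * I per cubic metre."""
    n_sum = 2 * ionic_strength_molar * 1000 * N_AVO   # ions/m^3
    kappa2 = E_CHARGE**2 * n_sum / (eps_r * EPS0 * K_B * temp)
    return 1.0 / math.sqrt(kappa2)

# 10 mM monovalent salt in water at 25 C
print(f"Debye length: {debye_length(0.010) * 1e9:.2f} nm")
```

    For a 1:1 electrolyte the familiar shortcut λ_D ≈ 0.304 nm / √(I [M]) gives the same ~3 nm at 10 mM; in the setting above this length stays fixed while the effective screening length and renormalized charges vary with microgel size.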

  10. Survival strategies in semi-arid climate for isohydric and anisohydric species

    NASA Astrophysics Data System (ADS)

    Guerin, M. F.; Gentine, P.; Uriarte, M.

    2013-12-01

    Understanding survival strategies in drylands remains a challenging problem centered on the interrelationship between local hydrology, plant physiology, and climate. Carbon starvation and hydraulic failure are thought to be the two main factors leading to drought-induced mortality besides biotic perturbation. To better comprehend mortality, the abiotic mechanisms triggering it are studied in a tractable model of the soil-plant-atmosphere continuum emphasizing the roles of soil hydraulic properties, photosynthesis, embolism, leaf gas exchange, and climate. In particular, the role of the frequency vs. the intensity of droughts is highlighted within the model. The analysis of the model includes a differentiation between isohydric and anisohydric tree regulation and is supported by an extensive dataset of Piñon and Juniper growing in a semi-arid ecosystem. A reduced number of parameters is achieved with allometric equations characterizing the trees' main traits and their hydraulic controls. Leaf area, sapwood area, and tree height are used to derive the capacitance, conductance, and photosynthetic abilities of the plant. A parameter sensitivity analysis is performed, highlighting the roles of root:shoot ratio, rooting depth, photosynthetic capacity, quantum efficiency, and, most importantly, water use efficiency. Analytic development emphasizes two regimes of transpiration/photosynthesis, denoted stage-I (no embolism) and stage-II (embolism dominated), in analogy with the stage I/stage II terminology for evaporation (Philip, 1957). Anisohydric species tend to remain in stage-I, during which they can still assimilate carbon at full potential, thus avoiding carbon starvation. Isohydric species tend to remain longer in stage-II. The effects of drought intensity/frequency on these two stages are described. 
Figure: sensitivity of Piñon stage 1 (top left), stage 2 (top right), total cavitation duration (sum of stages 1 and 2; bottom left), and time to carbon starvation (defined as the zero-crossing of NSC content; bottom right) to Leaf Area Index (LAI) and root:shoot area.

  11. Using data to inform soil microbial carbon model structure and parameters

    NASA Astrophysics Data System (ADS)

    Hagerty, S. B.; Schimel, J.

    2016-12-01

    There is increasing consensus that explicitly representing microbial mechanisms in soil carbon models can improve predictions of future soil carbon stocks. However, which microbial mechanisms must be represented in these new models, and how, remains under debate. One of the major challenges in developing microbially explicit soil carbon models is that there is little data available to validate model structure. Empirical studies of microbial mechanisms often fail to capture the full range of microbial processes, from the cellular processes that occur within minutes to hours of substrate consumption to community turnover, which may occur over weeks or longer. We added isotopically labeled 14C-glucose to soil incubated in the lab and traced its movement into the microbial biomass, carbon dioxide, and K2SO4-extractable carbon pools. We measured the concentration of 14C in each of these pools at 1, 3, 6, 24, and 72 hours and at 7, 14, and 21 days. We used these data to compare fits among models that match our conceptual understanding of microbial carbon transformations and to estimate the microbial parameters that control the fate of soil carbon. Over 90% of the added glucose was consumed within the first hour after it was added, and the concentration of the label was highest in biomass at this time. After the first hour, the label in biomass declined, with the rate of label loss from the biomass slowing after 24 hours; because of this, models representing the microbial biomass as two pools fit best. Recovery of the label decreased with incubation time, from nearly 80% in the first hour to 67% after three weeks, indicating that carbon moves into unextractable pools in the soil, likely as microbial products and necromass sorb to soil particles, and that these mechanisms must be represented in microbial models. This data-fitting exercise demonstrates how isotopic data can be useful in validating model structure and estimating microbial model parameters. Future studies can apply this inverse modeling approach to compare the response of microbial parameters to changes in environmental conditions.
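    The two-pool representation of microbial biomass that fit the 14C data best can be written as a sum of two first-order decays of the label. A sketch with hypothetical pool fractions and rate constants (not the fitted values from the incubation):

```python
import math

def two_pool_label(t_hours, frac_fast=0.6, k_fast=0.05, k_slow=0.002):
    """Fraction of the 14C label remaining in microbial biomass at time
    t, modeled as a fast and a slow pool turning over at first-order
    rates k_fast and k_slow (1/h). All parameter values are
    illustrative."""
    return (frac_fast * math.exp(-k_fast * t_hours)
            + (1 - frac_fast) * math.exp(-k_slow * t_hours))

# sampling times used in the incubation (hours)
for t in (1, 3, 6, 24, 72, 7 * 24, 14 * 24, 21 * 24):
    print(f"t = {t:4d} h  label remaining = {two_pool_label(t):.3f}")
```

    Fitting frac_fast, k_fast, and k_slow to the measured pool time series (e.g. by least squares) is the inverse-modeling step the abstract describes.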

  12. SU-E-I-07: Response Characteristics and Signal Conversion Modeling of KV Flat-Panel Detector in Cone Beam CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yu; Cao, Ruifen; Pei, Xi

    2015-06-15

    Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters, balancing image quality against radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time, and number of frames. The averaged responses of a square area in the center were analyzed using Origin8.0. The response characteristics for each scanning parameter were depicted by different fitting types. The transmission measured for 10 cm of water was compared to Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values at the image center all decreased with increasing image acquisition parameter presets. The fitting types adopted were linear, quadratic polynomial, Gauss, and logarithmic fitting, with fitting R-Square values of 0.992, 0.995, 0.997, and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the radiological thickness change prediction error was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with increasing scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could be used effectively for tumor shape change and radiological thickness prediction. 
    Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138)

  13. Comparison of Immature Platelet Count to Established Predictors of Platelet Reactivity During Thienopyridine Therapy.

    PubMed

    Stratz, Christian; Bömicke, Timo; Younas, Iris; Kittel, Anja; Amann, Michael; Valina, Christian M; Nührenberg, Thomas; Trenk, Dietmar; Neumann, Franz-Josef; Hochholzer, Willibald

    2016-07-19

    Previous data suggest that reticulated platelets significantly affect antiplatelet response to thienopyridines. It is unknown whether parameters describing reticulated platelets can predict antiplatelet response to thienopyridines. The authors sought to determine the extent to which parameters describing reticulated platelets can predict antiplatelet response to thienopyridine loading compared with established predictors. This study randomized 300 patients undergoing elective coronary stenting to loading with clopidogrel 600 mg, prasugrel 30 mg, or prasugrel 60 mg. Adenosine diphosphate (ADP)-induced platelet reactivity was assessed by impedance aggregometry before loading (intrinsic platelet reactivity) and again on day 1 after loading. Multiple parameters of reticulated platelets were assessed by automated whole blood flow cytometry: absolute immature platelet count (IPC), immature platelet fraction, and highly fluorescent immature platelet fraction. Each parameter of reticulated platelets correlated significantly with ADP-induced platelet reactivity (p < 0.01 for all 3 parameters). In a multivariable model including all 3 parameters, only IPC remained a significant predictor of platelet reactivity (p < 0.001). In models adjusting each of the 3 parameters for known predictors of on-treatment platelet reactivity including cytochrome P450 2C19 (CYP2C19) polymorphisms, age, body mass index, diabetes, and intrinsic platelet reactivity, only IPC prevailed as an independent predictor (p = 0.001). In this model, IPC was the strongest predictor of on-treatment platelet reactivity followed by intrinsic platelet reactivity. IPC is the strongest independent platelet count-derived predictor of antiplatelet response to thienopyridine treatment. 
Given its easy availability, together with its even stronger association with on-treatment platelet reactivity compared with known predictors, including the CYP2C19*2 polymorphism, IPC may become the preferred predictor of antiplatelet response to thienopyridine treatment. (Impact of Extent of Clopidogrel-Induced Platelet Inhibition During Elective Stent Implantation on Clinical Event Rate-Advanced Loading Strategies [ExcelsiorLOAD]; DRKS00006102). Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  14. An Open Source Simulation Model for Soil and Sediment Bioturbation

    PubMed Central

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches limits the application of such models to a limited range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted by experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data is routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest the predictive power of the approach. PMID:22162997
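    The rule-based lattice framework described above can be sketched as a minimal 1-D depth-lattice random walk in which each particle is displaced with a parameterisable probability per time step. The probability and lattice size below are hypothetical, not fitted values from the case study:

```python
import random

def bioturbate(depths, n_steps=1000, p_move=0.1, max_depth=49, seed=0):
    """Rule-based lattice mixing: each time step, every particle moves
    one cell up or down with probability p_move (reflecting boundaries
    at the surface and at max_depth). Returns the final depths."""
    random.seed(seed)
    depths = list(depths)
    for _ in range(n_steps):
        for i, d in enumerate(depths):
            if random.random() < p_move:
                d += random.choice((-1, 1))
                depths[i] = min(max(d, 0), max_depth)
    return depths

# a tracer layer deposited at the sediment surface (depth cell 0)
final = bioturbate([0] * 500)
mean_depth = sum(final) / len(final)
print(f"mean tracer depth after mixing: {mean_depth:.1f} cells")
```

    Replacing the fixed ±1 step with draws from a species-specific, parameterisable probability density function over displacement distances recovers the framework's approach of fitting per-species parameters to experimental mixing profiles.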

  16. Nonlinear Geometric Effects in Mechanical Bistable Morphing Structures

    NASA Astrophysics Data System (ADS)

    Chen, Zi; Guo, Qiaohang; Majidi, Carmel; Chen, Wenzhe; Srolovitz, David J.; Haataja, Mikko P.

    2012-09-01

    Bistable structures associated with nonlinear deformation behavior, exemplified by the Venus flytrap and slap bracelet, can switch between different functional shapes upon actuation. Despite numerous efforts in modeling such large deformation behavior of shells, the roles of mechanical and nonlinear geometric effects on bistability remain elusive. We demonstrate, through both theoretical analysis and tabletop experiments, that two dimensionless parameters control bistability. Our work classifies the conditions for bistability, and extends the large deformation theory of plates and shells.

  17. Radiotracer Technology in Mixing Processes for Industrial Applications

    PubMed Central

    Othman, N.; Kamarudin, S. K.

    2014-01-01

    Many problems associated with the mixing process remain unsolved and result in poor mixing performance. The residence time distribution (RTD) and the mixing time are the most important parameters that determine the homogenisation achieved in the mixing vessel, and both are discussed in detail in this paper. In addition, this paper reviews the current problems associated with conventional tracers, mathematical models, and computational fluid dynamics simulations involved in radiotracer experiments, as well as hybrid radiotracer techniques. PMID:24616642
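    As a concrete illustration of the RTD analysis discussed above, the mean residence time is the first moment of the tracer response curve. A minimal sketch with hypothetical tracer data and trapezoidal integration:

```python
def mean_residence_time(times, conc):
    # Mean residence time = first moment of the tracer response curve:
    # t_bar = integral(t * C(t) dt) / integral(C(t) dt), trapezoidal rule.
    num = den = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        num += 0.5 * (times[i] * conc[i] + times[i + 1] * conc[i + 1]) * dt
        den += 0.5 * (conc[i] + conc[i + 1]) * dt
    return num / den

# Hypothetical symmetric tracer pulse centred at t = 5
t = [0, 2, 4, 6, 8, 10]
c = [0, 1, 3, 3, 1, 0]
t_bar = mean_residence_time(t, c)
```

    Comparing `t_bar` with the nominal vessel volume divided by the flow rate is a standard check for dead zones or bypassing in the mixing vessel.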

  18. Climatic impact on isovolumetric weathering of a coarse-grained schist in the northern Piedmont Province of the central Atlantic states

    USGS Publications Warehouse

    Cleaves, E.T.

    1993-01-01

    The possible impact of periglacial climates on the rate of chemical weathering of a coarse-grained plagioclase-muscovite-quartz schist has been determined for a small watershed near Baltimore, Maryland. The isovolumetric chemical weathering model formulated from the geochemical mass balance study of the watershed shows that the weathering front advances at a velocity of 9.1 m/m.y., if the modern environmental parameters remain the same back through time. However, recent surficial geological mapping demonstrates that periglacial climates have impacted the area. Such an impact significantly affects two key chemical weathering parameters, the concentration of CO2 in the soil and groundwater moving past the weathering front. Depending upon the assumptions used in the model, the rate of saprolitization varies from 2.2 to 5.3 m/m.y. The possible impact of periglacial processes suggested by the chemical weathering rates indicates a need to reconsider theories of landscape evolution as they apply to the northern Piedmont Province of the mid-Atlantic states. I suggest that from the Late Miocene to the present the major rivers have become incised in their present locations; this incision has enhanced groundwater circulation and chemical weathering such that crystalline rocks beneath interfluvial areas remain mantled by saprolite; and the saprolite mantle has been partially stripped as periglacial conditions alternate with humid-temperate conditions. © 1993.

  19. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.

    2013-12-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that this leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. 
Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., groundwater reservoir constant) and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate calibration of parameters related to land surface processes (e.g., the saturated conductivity of the soil), which is not possible when calibrating on discharge alone. For the upstream area up to 40000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30 % in the RMSE for discharge simulations, compared to calibration on discharge alone. For discharge in the downstream area, model performance is unchanged or slightly decreased by the assimilation of remotely sensed soil moisture, most probably due to the greater relative importance of routing and of the groundwater contribution in downstream areas. When microwave soil moisture is used for calibration the RMSE of soil moisture simulations decreases from 0.072 m3m-3 to 0.062 m3m-3. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas, particularly if discharge observations are sparse.
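    The dual state and parameter filtering idea can be illustrated on a toy model. The sketch below is not LISFLOOD: it uses a hypothetical linear reservoir (storage S, discharge q = k·S) and a scalar ensemble Kalman filter with perturbed observations, in which each augmented ensemble member (S, k) lets discharge observations update the parameter k as well as the state.

```python
import random
random.seed(0)

def enkf_calibrate(n_ens=200, steps=60, k_true=0.3, obs_err=0.05):
    # Toy linear reservoir: S' = (1 - k) * S + rain, observed q = k * S.
    # Each ensemble member carries the augmented state (S, k).
    ens = [[10.0, random.uniform(0.05, 0.9)] for _ in range(n_ens)]
    S_true = 10.0
    for _ in range(steps):
        rain = random.uniform(0.0, 2.0)
        S_true = (1.0 - k_true) * S_true + rain
        obs = k_true * S_true + random.gauss(0.0, obs_err)
        for m in ens:  # forecast step, with small parameter jitter
            m[0] = (1.0 - m[1]) * m[0] + rain
            m[1] = min(max(m[1] + random.gauss(0.0, 0.01), 0.01), 0.99)
        y = [m[1] * m[0] for m in ens]  # predicted discharges
        ybar = sum(y) / n_ens
        var_y = sum((v - ybar) ** 2 for v in y) / (n_ens - 1)
        innov = [obs + random.gauss(0.0, obs_err) - v for v in y]
        for j in range(2):  # analysis for state (j=0) and parameter (j=1)
            xbar = sum(m[j] for m in ens) / n_ens
            cov_xy = sum((m[j] - xbar) * (y[i] - ybar)
                         for i, m in enumerate(ens)) / (n_ens - 1)
            gain = cov_xy / (var_y + obs_err ** 2)
            for i, m in enumerate(ens):
                m[j] += gain * innov[i]
    return sum(m[1] for m in ens) / n_ens

k_est = enkf_calibrate()  # ensemble-mean parameter estimate
```

    The same mechanism, applied per grid cell with a distributed model, is what allows soil moisture observations to constrain land-surface parameters that discharge alone cannot identify.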

  20. SU-E-T-248: An Extended Generalized Equivalent Uniform Dose Accounting for Dose-Range Dependency of Radio-Biological Parameters.

    PubMed

    Troeller, A; Soehn, M; Yan, D

    2012-06-01

    Introducing an extended, phenomenological, generalized equivalent uniform dose (eEUD) that incorporates multiple volume-effect parameters for different dose-ranges. The generalized EUD (gEUD) was introduced as an estimate of the EUD that incorporates a single, tissue-specific parameter - the volume-effect-parameter (VEP) 'a'. As a purely phenomenological concept, its radio-biological equivalency to a given inhomogeneous dose distribution is not a priori clear and mechanistic models based on radio-biological parameters are assumed to better resemble the underlying biology. However, for normal organs mechanistic models are hard to derive, since the structural organization of the tissue plays a significant role. Consequently, phenomenological approaches might be especially useful in order to describe dose-response for normal tissues. However, the single parameter used to estimate the gEUD may not suffice in accurately representing more complex biological effects that have been discussed in the literature. For instance, radio-biological parameters and hence the effects of fractionation are known to be dose-range dependent. Therefore, we propose an extended phenomenological eEUD formula that incorporates multiple VEPs accounting for dose-range dependency. The eEUD introduced is a piecewise polynomial expansion of the gEUD formula. In general, it allows for an arbitrary number of VEPs, each valid for a certain dose-range. We proved that the formula fulfills required mathematical and physical criteria such as invertibility of the underlying dose-effect and continuity in dose. Furthermore, it contains the gEUD as a special case, if all VEPs are equal to 'a' from the gEUD model. The eEUD is a concept that expands the gEUD such that it can theoretically represent dose-range dependent effects. Its practicality, however, remains to be shown. 
As a next step, this will be done by estimating the eEUD from patient data using maximum-likelihood based NTCP modelling in the same way it is commonly done for the gEUD. © 2012 American Association of Physicists in Medicine.
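    For reference, the gEUD that the abstract builds on has the closed form gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a): a = 1 recovers the mean dose and large positive 'a' approaches the maximum dose. A minimal sketch for equal-volume voxels follows (the piecewise eEUD extension itself is not reproduced here, since the abstract does not give its explicit form):

```python
def gEUD(doses, a):
    # Generalised EUD for equal-volume voxels: (mean of d_i ** a) ** (1 / a).
    # The volume-effect parameter 'a' controls whether the metric behaves
    # like a mean (a = 1), a max (a -> +inf) or a min (a -> -inf).
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)

dvh = [2.0, 2.0, 4.0]        # toy dose values in Gy
mean_dose = gEUD(dvh, 1)     # arithmetic mean of the distribution
serial_like = gEUD(dvh, 20)  # large 'a': dominated by the hottest voxel
```

    The eEUD generalises this by letting 'a' take different values over different dose ranges, while reducing to the expression above when all range-specific parameters are equal.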

  1. Use of plant trait data in the ISBA-A-gs model

    NASA Astrophysics Data System (ADS)

    Calvet, Jean-Christophe

    2014-05-01

    ISBA-A-gs is a CO2-responsive LSM (Calvet et al., 1998; Gibelin et al., 2006), able to simulate the diurnal cycle of carbon and water vapour fluxes, together with LAI and soil moisture evolution. The various components of ISBA-A-gs are based to a large extent on meta-analyses of trait data. (1) Photosynthesis: ISBA-A-gs uses the model of Goudriaan et al. (1985) modified by Jacobs (1994) and Jacobs et al. (1996). The main parameter is mesophyll conductance (gm). Leaf-level photosynthesis observations were used together with canopy level flux observations to derive gm together with other key parameters of the Jacobs model, including in drought conditions. This permitted implementing detailed representations of the soil moisture stress. Two different types of drought responses are distinguished for both herbaceous vegetation (Calvet, 2000) and forests (Calvet et al., 2004), depending on the evolution of the water use efficiency (WUE) under moderate stress: WUE increases in the early soil water stress stages in the case of the drought-avoiding response, whereas WUE decreases or remains stable in the case of the drought-tolerant response. (2) Plant growth: the leaf biomass is provided by a growth model (Calvet et al., 1998; Calvet and Soussana, 2001) driven by photosynthesis. In contrast to other land surface models, no GDD-based phenology model is used in ISBA-A-gs, as the vegetation growth and senescence are entirely driven by photosynthesis. The leaf biomass is supplied with the carbon assimilated by photosynthesis, and decreased by a turnover and a respiration term. Turnover is increased by a deficit in photosynthesis. The leaf onset is triggered by sufficient photosynthesis levels and a minimum LAI value is prescribed. The maximum annual value of LAI is prognostic, i.e. it can be predicted by the model. LAI is derived from leaf biomass using SLA values. The latter are derived from the leaf nitrogen concentration using plasticity parameters. 
(3) CO2 effect: the photosynthesis model is able to represent the antitranspirant effect of CO2. The plant growth model represents the fertilization effect of CO2. However, the nitrogen dilution triggered by the CO2 increase has to be represented. A pragmatic solution consists in decreasing the leaf nitrogen concentration parameter in response to CO2, using existing meta-analyses of this parameter (Calvet et al., 2008). The TRY database could be used to improve the current parameterizations, together with the mapping of the model parameters.

  2. Supervised machine learning for analysing spectra of exoplanetary atmospheres

    NASA Astrophysics Data System (ADS)

    Márquez-Neila, Pablo; Fisher, Chloe; Sznitman, Raphael; Heng, Kevin

    2018-06-01

    The use of machine learning is becoming ubiquitous in astronomy [1-3], but remains rare in the study of the atmospheres of exoplanets. Given the spectrum of an exoplanetary atmosphere, a multi-parameter space is swept through in real time to find the best-fit model [4-6]. Known as atmospheric retrieval, this technique originates in the Earth and planetary sciences [7]. Such methods are very time-consuming, and by necessity there is a compromise between physical and chemical realism and computational feasibility. Machine learning has previously been used to determine which molecules to include in the model, but the retrieval itself was still performed using standard methods [8]. Here, we report an adaptation of the 'random forest' method of supervised machine learning [9,10], trained on a precomputed grid of atmospheric models, which retrieves full posterior distributions of the abundances of molecules and the cloud opacity. The use of a precomputed grid allows a large part of the computational burden to be shifted offline. We demonstrate our technique on a transmission spectrum of the hot gas-giant exoplanet WASP-12b using a five-parameter model (temperature, a constant cloud opacity and the volume mixing ratios or relative abundances of molecules of water, ammonia and hydrogen cyanide) [11]. We obtain results consistent with the standard nested-sampling retrieval method. We also estimate the sensitivity of the measured spectrum to the model parameters, and we are able to quantify the information content of the spectrum. Our method can be straightforwardly applied using more sophisticated atmospheric models to interpret an ensemble of spectra without having to retrain the random forest.
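    To make the precomputed-grid idea concrete without a machine-learning library, the sketch below replaces the random forest with the simplest possible retrieval: a toy, hypothetical forward model is evaluated offline over a parameter grid, and the online step reduces to a cheap least-squares search over the stored spectra. The forward model and all parameter names are invented for illustration.

```python
import math

def forward(temp, log_h2o, wavelengths):
    # Hypothetical toy forward model: one Gaussian absorption feature whose
    # depth scales with abundance (10 ** log_h2o) and temperature.
    return [1.0 - (10 ** log_h2o) * 1e3 * (temp / 1500.0) *
            math.exp(-(((w - 1.4) / 0.05) ** 2)) for w in wavelengths]

waves = [1.1 + 0.01 * i for i in range(60)]
grid = [(t, x) for t in range(900, 1601, 100) for x in (-5, -4, -3)]
library = {p: forward(p[0], p[1], waves) for p in grid}  # offline, reusable

def retrieve(spectrum):
    # Online step: nearest grid point in a least-squares sense.
    return min(library, key=lambda p: sum((m - o) ** 2
                                          for m, o in zip(library[p], spectrum)))

best = retrieve(forward(1200, -4, waves))
```

    A trained random forest plays the same role as `library` here, but interpolates between grid points and returns posterior distributions rather than a single best match.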

  3. Understanding hydrological and nitrogen interactions by sensitivity analysis of a catchment-scale nitrogen model

    NASA Astrophysics Data System (ADS)

    Medici, Chiara; Wade, Andrew; Frances, Felix

    2010-05-01

    Nitrogen is present in both terrestrial and aquatic ecosystems and research is needed to understand its storage, transportation and transformations in river catchments world-wide because of its importance in controlling plant growth and freshwater trophic status (Vitousek et al. 2009; Chu et al. 2008; Schlesinger et al 2006; Ocampo et al. 2006; Green et al., 2004; Arheimer et al., 1996). Numerous mathematical models have been developed to describe the nitrogen dynamics, but there is a substantial gap between the outputs now expected from these models and what modellers are able to provide with scientific justification (McIntyre et al., 2005). In fact, models will always necessarily be simplifications of reality; hence simplifying assumptions are sources of uncertainty that must be well understood for accurate interpretation of model results. Therefore, estimating prediction uncertainties in water quality modelling is becoming increasingly appreciated (Dean et al., 2009, Kruger et al., 2007, Rode et al., 2007). In this work the lumped LU4-N model (Medici et al., 2008; Medici et al., EGU2009-7497) is subjected to an extensive regionalised sensitivity analysis (GSA, based on Monte Carlo simulations) in application to the Fuirosos catchment, Catalonia. The main results are: 1) the hydrological model is greatly affected by the maximum static storage water content (Hu_max), which defines the amount of water held in soil that can leave the catchment only by evapotranspiration. 
Thus, it defines also the amount of water not retained that is free to move and supplies the other model tanks; 2) the use of several objective functions in order to take into account different hydrograph characteristics helped to constrain parameter values; 3) concerning nitrogen, only relatively mild acceptance criteria could be adopted in order to obtain a sufficient number of behavioural parameter sets for the statistical analysis; 4) stream water concentrations are sensitive to the shallow aquifer parameters, especially the nitrification constant (Knitr-aquif), and also to certain soil parameters, like the mineralization constant (Kmin), the annual maximum ammonium uptake (MaxUPNH4) and the mineralization, nitrification and immobilisation thresholds (Umin, Unitr and Uimmob). Moreover, the results give a clear indication that the hydrological model greatly affects the streamwater nitrate and ammonium concentrations; 5) the results show that the LU4-N model succeeded in achieving near-optimum fits simultaneously to flow and nitrate, but not ammonium; 6) however, the optimum flow model has not produced a near-optimum nitrate model. The analysis of this result indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters instead of calibrating all parameters together, may not be the best strategy, as pointed out for another study by McIntyre et al. (2005); 7) a final analysis also supports the idea that a satisfactory nitrogen simulation requires the flow to be acceptably represented, which leads to the conclusion that observed stream concentrations may indirectly help to calibrate the rainfall-runoff model, or at least the parameters to which they are sensitive.
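    The regionalised (Monte Carlo) sensitivity analysis used here can be sketched generically: sample parameters, split the runs into behavioural and non-behavioural sets by an acceptance criterion, and rank parameters by the Kolmogorov-Smirnov distance between the two marginal distributions. The toy model and parameter names below are hypothetical, not LU4-N.

```python
import bisect
import random

random.seed(42)

def toy_model(hu_max, k_nitr):
    # Hypothetical response: strongly controlled by the storage parameter,
    # only weakly by the nitrification constant.
    return 100.0 / (1.0 + hu_max) + 0.1 * k_nitr + random.gauss(0.0, 1.0)

samples = [(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
           for _ in range(2000)]
behavioural = [p for p in samples if toy_model(*p) > 30.0]  # acceptance rule

def ks_distance(idx):
    # KS distance between the behavioural and full marginals of parameter
    # idx: the larger the distance, the more the criterion constrains it.
    full = sorted(p[idx] for p in samples)
    beh = sorted(p[idx] for p in behavioural)
    return max(abs((i + 1) / len(full) - bisect.bisect_right(beh, x) / len(beh))
               for i, x in enumerate(full))

d_storage, d_nitr = ks_distance(0), ks_distance(1)
```

    A large distance for the storage parameter and a small one for the nitrification constant would reproduce, in miniature, the kind of ranking reported in the abstract.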

  4. Asymptotic solutions for the case of nearly symmetric gravitational lens systems

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Pelgrims, V.; Surdej, J.

    2012-08-01

    Gravitational lensing provides a powerful tool to determine the Hubble parameter H0 from the measurement of the time delay Δt between two lensed images of a background variable source. Nevertheless, knowledge of the deflector mass distribution constitutes a hurdle. We propose in the present work interesting solutions for the case of nearly symmetric gravitational lens systems. For the case of a small misalignment between the source, the deflector and the observer, we first consider power-law (ɛ) axially symmetric models for which we derive an analytical relation between the amplification ratio and source position which is independent of the power-law slope ɛ. According to this relation, we deduce an expression for H0 also irrespective of the value ɛ. Secondly, we consider the power-law axially symmetric lens models with an external large-scale gravitational field, the shear γ, resulting in the so-called ɛ-γ models, for which we deduce simple first-order equations linking the model parameters and the lensed image positions, the latter being observable quantities. We also deduce simple relations between H0 and observable quantities only. From these equations, we may estimate the value of the Hubble parameter in a robust way. Nevertheless, comparison between the ɛ-γ and singular isothermal ellipsoid (SIE) models leads to the conclusion that these models remain most often distinct. Therefore, even for the case of a small misalignment, use of the first-order equations and precise astrometric measurements of the positions of the lensed images with respect to the centre of the deflector enables one to discriminate between these two families of models. Finally, we confront the models with numerical simulations to evaluate the intrinsic error of the first-order expressions used when deriving the model parameters under the assumption of a quasi-alignment between the source, the deflector and the observer. 
From these same simulations, we estimate for the case of the ɛ-γ family of models that the standard deviation affecting H0 is ? which merely reflects the adopted astrometric uncertainties on the relative image positions, typically ? arcsec. In conclusion, we stress the importance of getting very accurate measurements of the relative positions of the multiple lensed images and of the time delays for the case of nearly symmetric gravitational lens systems, in order to derive robust and precise values of the Hubble parameter.

  5. Challenges in Materials Transformation Modeling for Polyolefins Industry

    NASA Astrophysics Data System (ADS)

    Lai, Shih-Yaw; Swogger, Kurt W.

    2004-06-01

    Unlike most published polymer processing and/or forming research, the transformation of polyolefins to fabricated articles often involves non-confined flow or so-called free surface flow (e.g. fiber spinning, blown films, and cast films) in which elongational flow takes place during a fabrication process. Obviously, the characterization and validation of extensional rheological parameters and their use to develop rheological constitutive models are the focus of polyolefins materials transformation research. Unfortunately, there are challenges that remain with limited validation for non-linear, non-isothermal constitutive models for polyolefins. Further complexity arises in the transformation of polyolefins in the elongational flow system as it involves stress-induced crystallization process. The complicated nature of elongational, non-linear rheology and non-isothermal crystallization kinetics make the development of numerical methods very challenging for the polyolefins materials forming modeling. From the product based company standpoint, the challenges of materials transformation research go beyond elongational rheology, crystallization kinetics and its numerical modeling. In order to make models useful for the polyolefin industry, it is critical to develop links between molecular parameters to both equipment and materials forming parameters. The recent advances in the constrained geometry catalysis and materials sciences understanding (INSITE technology and molecular design capability) has made industrial polyolefinic materials forming modeling more viable due to the fact that the molecular structure of the polymer can be well predicted and controlled during the polymerization. 
In this paper, we will discuss inter-relationship (models) among molecular parameters such as polymer molecular weight (Mw), molecular weight distribution (MWD), long chain branching (LCB), short chain branching (SCB or comonomer types and distribution) and their effects on shear and elongational rheologies, on tie-molecules probabilities, on non-isothermal stress-induced crystallization, on crystalline/amorphous orientation vs. mechanical property relationship, etc. All of the above mentioned inter-relationships (models) are critical to the successful development of a knowledge based industrial model. Dow Polyolefins and Elastomers business is one of the world's largest polyolefin resin producers with the most advanced INSITE technology and a "6-Day model" molecular design capability. Dow also offers one of the broadest polyolefinic product ranges and applications to the market.

  6. The biomechanics of simple steatosis and steatohepatitis

    NASA Astrophysics Data System (ADS)

    Parker, K. J.; Ormachea, J.; Drage, M. G.; Kim, H.; Hah, Z.

    2018-05-01

    Magnetic resonance and ultrasound elastography techniques are now important tools for staging high-grade fibrosis in patients with chronic liver disease. However, uncertainty remains about the effects of simple accumulation of fat (steatosis) and inflammation (steatohepatitis) on the parameters that can be measured using different elastographic techniques. To address this, we examine the rheological models that are capable of capturing the dominant viscoelastic behaviors associated with fat and inflammation in the liver, and quantify the resulting changes in shear wave speed and viscoelastic parameters. Theoretical results are shown to match measurements in phantoms and animal studies reported in the literature. These results are useful for better design of elastographic studies of fatty liver disease and steatohepatitis, potentially leading to improved diagnosis of these conditions.
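    As an example of the kind of rheological calculation involved, the shear wave phase velocity in a Kelvin-Voigt material follows from the storage modulus G′ = μ and loss modulus G″ = ωη via the standard relation c = sqrt(2(G′² + G″²) / (ρ(G′ + sqrt(G′² + G″²)))). The specific parameter values below are illustrative, not taken from the paper.

```python
import math

def shear_wave_speed(mu, eta, omega, rho=1000.0):
    # Kelvin-Voigt model: G' = mu (elasticity), G'' = omega * eta (viscosity).
    gp, gpp = mu, omega * eta
    gmag = math.hypot(gp, gpp)  # |G*| = sqrt(G'^2 + G''^2)
    return math.sqrt(2.0 * (gp ** 2 + gpp ** 2) / (rho * (gp + gmag)))

omega = 2.0 * math.pi * 100.0                     # 100 Hz shear wave
c_elastic = shear_wave_speed(4000.0, 0.0, omega)  # reduces to sqrt(mu/rho)
c_viscous = shear_wave_speed(4000.0, 2.0, omega)  # viscosity adds dispersion
```

    The frequency dependence of `c_viscous` is what lets multi-frequency elastography separate the elastic and viscous contributions that fat and inflammation change in different ways.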

  7. Scheme variations of the QCD coupling

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon

    2017-03-01

    The Quantum Chromodynamics (QCD) coupling αs is a central parameter in the Standard Model of particle physics. However, it depends on theoretical conventions related to renormalisation and hence is not an observable quantity. In order to capture this dependence in a transparent way, a novel definition of the QCD coupling, denoted by â, is introduced, whose running is explicitly renormalisation scheme invariant. The remaining renormalisation scheme dependence is related to transformations of the QCD scale Λ, and can be parametrised by a single parameter C. Hence, we call â the C-scheme coupling. The dependence on C can be exploited to study and improve perturbative predictions of physical observables. This is demonstrated for the QCD Adler function and hadronic decays of the τ lepton.

  8. Tracking signal test to monitor an intelligent time series forecasting model

    NASA Astrophysics Data System (ADS)

    Deng, Yan; Jaraiedi, Majid; Iskander, Wafik H.

    2004-03-01

    Extensive research has been conducted on the subject of Intelligent Time Series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and serve as a statistical measure for keeping the forecasting model up-to-date. The proposed monitoring procedure is effective in the detection of nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from the existing patterns. This powerful detection device will result in improved forecast accuracy in the long run. An example data set has been used to demonstrate the application of the proposed method.
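    A common form of such a test is Trigg's smoothed-error tracking signal: the ratio of the exponentially smoothed forecast error to the smoothed mean absolute deviation, which stays near zero for unbiased forecasts and approaches ±1 under sustained bias. A minimal sketch with synthetic errors (the smoothing constant and threshold are illustrative choices, not the paper's):

```python
import random

def tracking_signal(errors, alpha=0.1):
    # Trigg's tracking signal: smoothed error / smoothed MAD, bounded in [-1, 1].
    e_s = mad = 0.0
    signals = []
    for e in errors:
        e_s = alpha * e + (1.0 - alpha) * e_s
        mad = alpha * abs(e) + (1.0 - alpha) * mad
        signals.append(e_s / mad if mad else 0.0)
    return signals

random.seed(3)
errors = [random.gauss(0.0, 1.0) for _ in range(200)]  # unbiased period
errors += [random.gauss(3.0, 1.0) for _ in range(50)]  # model becomes biased
sig = tracking_signal(errors)
```

    Monitoring would flag the forecasting model for re-estimation once the absolute signal exceeds a chosen threshold (often around 0.5).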

  9. EOSlib, Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Nathan; Menikoff, Ralph

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
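    As an example of the kind of utility calculation mentioned above, a Hugoniot curve follows from the Rankine-Hugoniot energy jump condition e − e₀ = ½(p + p₀)(v₀ − v). For an ideal gas, where e = pv/(γ − 1), this can be solved in closed form. The sketch below is a generic textbook calculation, not EOSlib's API.

```python
def hugoniot_pressure(v, v0=1.0, p0=1.0, gamma=1.4):
    # Ideal-gas Hugoniot: substituting e = p*v/(gamma - 1) into
    # e - e0 = 0.5*(p + p0)*(v0 - v) and solving for p gives
    # p = p0 * ((g+1)*v0 - (g-1)*v) / ((g+1)*v - (g-1)*v0).
    g = gamma
    return p0 * ((g + 1.0) * v0 - (g - 1.0) * v) / \
               ((g + 1.0) * v - (g - 1.0) * v0)

p_ref = hugoniot_pressure(1.0)     # no compression: the initial state
p_strong = hugoniot_pressure(0.2)  # strong shock, v near the limiting volume
```

    The pressure diverges as v approaches v₀(γ − 1)/(γ + 1), reproducing the well-known maximum ideal-gas shock compression ratio of (γ + 1)/(γ − 1), i.e. 6 for γ = 1.4.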

  10. Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower

    NASA Astrophysics Data System (ADS)

    Fujii, Kenzo; Yamamoto, Toru

    In atmospheric distillation processes, the stabilization of processes is required in order to optimize the crude-oil composition that corresponds to product market conditions. However, the process control systems sometimes fall into unstable states in the case where unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena. This remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, which is one type of network model. According to the GMDH network, the model structure can be determined systematically. However, the least squares method has been commonly utilized in determining weight coefficients (model parameters). Full estimation accuracy cannot be expected with this approach, because the sum of squared errors between the measured values and the estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute values of the errors is introduced and the Levenberg-Marquardt method is employed in order to determine model parameters. The effectiveness of the proposed method is evaluated by the foaming prediction in the crude oil switching operation in the atmospheric distillation process.
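    The motivation for replacing the sum of squared errors with the sum of absolute errors is robustness to outliers. The sketch below illustrates this on a straight-line fit (not a GMDH network), using iteratively reweighted least squares as a simple stand-in for the Levenberg-Marquardt minimisation used in the paper:

```python
def fit_line(xs, ys, weights=None):
    # Closed-form weighted least squares for y = a*x + b.
    w = weights or [1.0] * len(xs)
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return a, (sy - a * sx) / sw

def fit_line_l1(xs, ys, iters=50, eps=1e-6):
    # Iteratively reweighted least squares: weighting each point by
    # 1/|residual| approximates the sum-of-absolute-errors criterion.
    a, b = fit_line(xs, ys)
    for _ in range(iters):
        w = [1.0 / max(abs(y - (a * x + b)), eps) for x, y in zip(xs, ys)]
        a, b = fit_line(xs, ys, w)
    return a, b

xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]  # true slope 2
ys[9] = 60.0                      # one gross outlier
a_l2, _ = fit_line(xs, ys)        # dragged away by the outlier
a_l1, _ = fit_line_l1(xs, ys)     # stays near the true slope
```

    The same robustness argument applies to fitting the weight coefficients of a GMDH layer when the training data contain occasional anomalous measurements.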

  11. Skeletal Characterization of the Fgfr3 Mouse Model of Achondroplasia Using Micro-CT and MRI Volumetric Imaging.

    PubMed

    Shazeeb, Mohammed Salman; Cox, Megan K; Gupta, Anurag; Tang, Wen; Singh, Kuldeep; Pryce, Cynthia T; Fogle, Robert; Mu, Ying; Weber, William D; Bangari, Dinesh S; Ying, Xiaoyou; Sabbagh, Yves

    2018-01-11

    Achondroplasia, the most common form of dwarfism, affects more than a quarter million people worldwide and remains an unmet medical need. Achondroplasia is caused by mutations in the fibroblast growth factor receptor 3 (FGFR3) gene which results in over-activation of the receptor, interfering with normal skeletal development leading to disproportional short stature. Multiple mouse models have been generated to study achondroplasia. The characterization of these preclinical models has been primarily done with 2D measurements. In this study, we explored the transgenic model expressing mouse Fgfr3 containing the achondroplasia mutation G380R under the Col2 promoter (Ach). Survival and growth rate of the Ach mice were reduced compared to wild-type (WT) littermates. Axial skeletal defects and abnormalities of the sternebrae and vertebrae were observed in the Ach mice. Further evaluation of the Ach mouse model was performed by developing 3D parameters from micro-computed tomography (micro-CT) and magnetic resonance imaging (MRI). The 3-week-old mice showed greater differences between the Ach and WT groups compared to the 6-week-old mice for all parameters. Deeper understanding of skeletal abnormalities of this model will help guide future studies for evaluating novel and effective therapeutic approaches for the treatment of achondroplasia.

  12. Universalities of thermodynamic signatures in topological phases

    PubMed Central

    Kempkes, S. N.; Quelle, A.; Smith, C. Morais

    2016-01-01

    Topological insulators (superconductors) are materials that host symmetry-protected metallic edge states in an insulating (superconducting) bulk. Although they are well understood, a thermodynamic description of these materials remained elusive, firstly because the edges yield a non-extensive contribution to the thermodynamic potential, and secondly because topological field theories involve non-local order parameters, and cannot be captured by the Ginzburg-Landau formalism. Recently, this challenge has been overcome: by using Hill thermodynamics to describe the Bernevig-Hughes-Zhang model in two dimensions, it was shown that at the topological phase transition the thermodynamic potential does not scale extensively due to boundary effects. Here, we extend this approach to different topological models in various dimensions (the Kitaev chain and Su-Schrieffer-Heeger model in one dimension, the Kane-Mele model in two dimensions and the Bernevig-Hughes-Zhang model in three dimensions) at zero temperature. Surprisingly, all models exhibit the same universal behavior in the order of the topological-phase transition, depending on the dimension. Moreover, we derive the topological phase diagram at finite temperature using this thermodynamic description, and show that it displays a good agreement with the one calculated from the Uhlmann phase. Our work reveals unexpected universalities and opens the path to a thermodynamic description of systems with a non-local order parameter. PMID:27929041

  13. Universalities of thermodynamic signatures in topological phases.

    PubMed

    Kempkes, S N; Quelle, A; Smith, C Morais

    2016-12-08

    Topological insulators (superconductors) are materials that host symmetry-protected metallic edge states in an insulating (superconducting) bulk. Although they are well understood, a thermodynamic description of these materials remained elusive, firstly because the edges yield a non-extensive contribution to the thermodynamic potential, and secondly because topological field theories involve non-local order parameters, and cannot be captured by the Ginzburg-Landau formalism. Recently, this challenge has been overcome: by using Hill thermodynamics to describe the Bernevig-Hughes-Zhang model in two dimensions, it was shown that at the topological phase transition the thermodynamic potential does not scale extensively due to boundary effects. Here, we extend this approach to different topological models in various dimensions (the Kitaev chain and Su-Schrieffer-Heeger model in one dimension, the Kane-Mele model in two dimensions and the Bernevig-Hughes-Zhang model in three dimensions) at zero temperature. Surprisingly, all models exhibit the same universal behavior in the order of the topological-phase transition, depending on the dimension. Moreover, we derive the topological phase diagram at finite temperature using this thermodynamic description, and show that it displays a good agreement with the one calculated from the Uhlmann phase. Our work reveals unexpected universalities and opens the path to a thermodynamic description of systems with a non-local order parameter.

  14. Model calibration criteria for estimating ecological flow characteristics

    USGS Publications Warehouse

    Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William J.; Seibert, Jan; Breuer, Lutz; Kraft, Philipp

    2016-01-01

    Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
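    As a sketch of the kind of combined objective function evaluated here, the snippet below averages Nash-Sutcliffe efficiency (NSE) on flows with NSE on log-flows, a common way to balance high-flow and low-flow performance in a single criterion. The function names and the equal weighting are illustrative assumptions, not HBV-light's actual calibration options.

```python
import math

# Hypothetical combined calibration objective: equal-weight blend of NSE on
# flows and NSE on log-flows (illustrative, not HBV-light's implementation).

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def combined_objective(obs, sim):
    """Equal-weight blend of NSE and log-NSE (flows must be positive)."""
    return 0.5 * nse(obs, sim) + 0.5 * nse([math.log(o) for o in obs],
                                           [math.log(s) for s in sim])

obs = [1.0, 2.0, 8.0, 3.0, 1.5]
print(combined_objective(obs, obs))  # a perfect simulation scores exactly 1.0
```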

  15. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To this end, a remaining-dischargeable-time (RDT) prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and the state of charge (SOC) at every prediction point. In addition, a discrete wavelet transform is employed to capture statistical information about the past dynamics of the input currents, which is used to predict the future battery currents. Finally, the RDT is predicted from the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimates and that the predicted RDT can help address range anxiety.
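    Once the SOC and the future load current are available, the final RDT step reduces to Coulomb counting. The sketch below is a minimal stand-in for that last step, assuming a known capacity and a predicted current sequence; the function name, cut-off, and values are illustrative, not the paper's UKF/RLS machinery.

```python
# Minimal RDT sketch: step an estimated state of charge (SOC) forward by
# Coulomb counting under predicted future currents until a cut-off SOC.

def predict_rdt(soc, capacity_ah, future_current_a, dt_s=1.0, soc_cutoff=0.05):
    """Seconds until the SOC falls to the cut-off under the predicted load."""
    t = 0.0
    for i_a in future_current_a:
        if soc <= soc_cutoff:
            break
        soc -= i_a * dt_s / (capacity_ah * 3600.0)  # Coulomb-counting step
        t += dt_s
    return t

# A 2 Ah cell at 50% SOC under a constant 2 A load: the usable 0.45 * 2 Ah
# should last about 0.45 h, i.e. roughly 1620 s.
rdt = predict_rdt(0.5, 2.0, [2.0] * 10000)
print(rdt)
```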

  16. Transition-state theory predicts clogging at the microscale

    NASA Astrophysics Data System (ADS)

    Laar, T. Van De; Klooster, S. Ten; Schroën, K.; Sprakel, J.

    2016-06-01

    Clogging is one of the main failure mechanisms encountered in industrial processes such as membrane filtration. Our understanding of the factors that govern the build-up of fouling layers and the emergence of clogs is largely incomplete, so that prevention of clogging remains an immense and costly challenge. In this paper we use a microfluidic model combined with quantitative real-time imaging to explore the influence of pore geometry and particle interactions on suspension clogging in constrictions, two crucial factors which remain relatively unexplored. We find a distinct dependence of the clogging rate on the entrance angle to a membrane pore which we explain quantitatively by deriving a model, based on transition-state theory, which describes the effect of viscous forces on the rate with which particles accumulate at the channel walls. With the same model we can also predict the effect of the particle interaction potential on the clogging rate. In both cases we find excellent agreement between our experimental data and theory. A better understanding of these clogging mechanisms and the influence of design parameters could form a stepping stone to delay or prevent clogging by rational membrane design.
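    The core of a transition-state-theory rate is an Arrhenius-like factor in the barrier height. The sketch below illustrates only that structure, with the viscous-drag contribution of the entrance geometry folded into the barrier as an additive term; this additive form and all numbers are assumptions for illustration, not the paper's derived model.

```python
import math

# Illustrative transition-state-theory deposition rate: particles cross an
# energy barrier (in units of kT) into the "attached" state, with an extra
# viscous-drag term set by the pore entrance geometry (assumed additive here).

def deposition_rate(attempt_freq_hz, barrier_kt, drag_kt):
    """Rate at which particles accumulate at the channel wall."""
    return attempt_freq_hz * math.exp(-(barrier_kt + drag_kt))

# A geometry with stronger viscous hindrance clogs more slowly:
shallow = deposition_rate(1e3, 5.0, 0.5)
steep = deposition_rate(1e3, 5.0, 3.0)
print(shallow > steep)  # True
```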

  17. Observational properties of massive black hole binary progenitors

    NASA Astrophysics Data System (ADS)

    Hainich, R.; Oskinova, L. M.; Shenar, T.; Marchant, P.; Eldridge, J. J.; Sander, A. A. C.; Hamann, W.-R.; Langer, N.; Todt, H.

    2018-01-01

    Context. The first directly detected gravitational waves (GW 150914) were emitted by two coalescing black holes (BHs) with masses of ≈ 36 M⊙ and ≈ 29 M⊙. Several scenarios have been proposed to put this detection into an astrophysical context. The evolution of an isolated massive binary system is among commonly considered models. Aims: Various groups have performed detailed binary-evolution calculations that lead to BH merger events. However, the question remains open as to whether binary systems with the predicted properties really exist. The aim of this paper is to help observers to close this gap by providing spectral characteristics of massive binary BH progenitors during a phase where at least one of the companions is still non-degenerate. Methods: Stellar evolution models predict fundamental stellar parameters. Using these as input for our stellar atmosphere code (Potsdam Wolf-Rayet), we compute a set of models for selected evolutionary stages of massive merging BH progenitors at different metallicities. Results: The synthetic spectra obtained from our atmosphere calculations reveal that progenitors of massive BH merger events start their lives as O2-3V stars that evolve to early-type blue supergiants before they undergo core-collapse during the Wolf-Rayet phase. When the primary has collapsed, the remaining system will appear as a wind-fed high-mass X-ray binary. Based on our atmosphere models, we provide feedback parameters, broad band magnitudes, and spectral templates that should help to identify such binaries in the future. Conclusions: While the predicted parameter space for massive BH binary progenitors is partly realized in nature, none of the known massive binaries match our synthetic spectra of massive BH binary progenitors exactly. Comparisons of empirically determined mass-loss rates with those assumed by evolution calculations reveal significant differences. 
The consideration of the empirical mass-loss rates in evolution calculations will possibly entail a shift of the maximum in the predicted binary-BH merger rate to higher metallicities, that is, more candidates should be expected in our cosmic neighborhood than previously assumed.

  18. Interpreting a CMS excess in lljj + missing transverse momentum with the golden cascade of the minimal supersymmetric standard model

    NASA Astrophysics Data System (ADS)

    Allanach, Ben; Kvellestad, Anders; Raklev, Are

    2015-06-01

    The CMS experiment recently reported an excess consistent with an invariant mass edge in opposite-sign same-flavor leptons, when produced in conjunction with at least two jets and missing transverse momentum. We provide an interpretation of the edge in terms of (anti)squark pair production followed by the "golden cascade" decay for one of the squarks: q̃ → χ̃₂⁰ q → l̃ l q → χ̃₁⁰ q l l in the minimal supersymmetric standard model. A simplified model involving binos, winos, an on-shell slepton, and the first two generations of squarks fits the event rate and the invariant mass edge. We check consistency with a recent ATLAS search in a similar region, finding that much of the good-fit parameter space is still allowed at the 95% confidence level (C.L.). However, a combination of other LHC searches, notably two-lepton stop pair searches and jets plus missing transverse momentum, rules out all of the remaining parameter space at the 95% C.L.

  19. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms of deep brain stimulation remain unclear and under debate; hence, the selection of stimulation parameters is challenging. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed that does not require detailed knowledge of the physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, the dominant characteristic of the proposed method, namely its independence from an accurate model, can be further verified.
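    The learning idea above can be sketched in a few lines: a P-type iterative learning control (ILC) update corrects the input after each trial using the previous trial's tracking error. The scalar "plant" below is purely illustrative, not a thalamic neuron model, and the gains are assumed values.

```python
# Minimal P-type ILC sketch: u_{k+1} = u_k + gamma * e_k across repeated
# trials, driving a toy plant (not a neuron model) toward a reference.

def run_trial(u, plant_gain=0.8):
    return [plant_gain * ui for ui in u]      # toy memoryless plant

def ilc_update(u, e, gamma=0.5):
    return [ui + gamma * ei for ui, ei in zip(u, e)]

ref = [1.0] * 20                              # desired response over one trial
u = [0.0] * 20
for _ in range(30):                           # repeat the trial and learn
    e = [r - y for r, y in zip(ref, run_trial(u))]
    u = ilc_update(u, e)

max_err = max(abs(r - y) for r, y in zip(ref, run_trial(u)))
print(max_err < 1e-3)  # True: error contracts by (1 - gain * gamma) per trial
```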

  20. Synchronization of glycolytic oscillations in a yeast cell population.

    PubMed

    Danø, S; Hynne, F; De Monte, S; d'Ovidio, F; Sørensen, P G; Westerhoff, H

    2001-01-01

    The mechanism of active phase synchronization in a suspension of oscillatory yeast cells has remained a puzzle for almost half a century. The difficulty of the problem stems from the fact that the synchronization phenomenon involves the entire metabolic network of glycolysis and fermentation, and consequently it cannot be addressed at the level of a single enzyme or a single chemical species. In this paper it is shown how this system in a CSTR (continuous flow stirred tank reactor) can be modelled quantitatively as a population of Stuart-Landau oscillators interacting by exchange of metabolites through the extracellular medium, thus reducing the complexity of the problem without sacrificing the biochemical realism. The parameters of the model can be derived by a systematic expansion from any full-scale model of the yeast cell kinetics with a supercritical Hopf bifurcation. Some parameter values can also be obtained directly from analysis of perturbation experiments. In the mean-field limit, equations for the study of populations having a distribution of frequencies are used to simulate the effect of the inherent variations between cells.
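    The population model described above can be sketched directly: each cell is a Stuart-Landau oscillator coupled to the mean field, standing in for metabolite exchange through the medium. The coupling strength, frequency spread, and integration settings below are illustrative assumptions, not parameters derived from any yeast kinetics model.

```python
import cmath
import math
import random

# Mean-field-coupled Stuart-Landau population (illustrative parameters):
#   dz_j/dt = (1 + i*w_j - |z_j|^2) z_j + K (Z - z_j),  Z = mean of all z_j.

random.seed(1)
n, K, dt, steps = 50, 0.5, 0.01, 4000
w = [1.0 + random.gauss(0.0, 0.1) for _ in range(n)]   # heterogeneous frequencies
z = [cmath.exp(2j * math.pi * random.random()) for _ in range(n)]  # random phases

for _ in range(steps):
    Z = sum(z) / n                                     # shared "medium" field
    z = [zj + dt * ((1 + 1j * wj - abs(zj) ** 2) * zj + K * (Z - zj))
         for zj, wj in zip(z, w)]

order = abs(sum(zj / abs(zj) for zj in z)) / n         # phase-coherence order parameter
print(order > 0.8)  # the population phase-locks for coupling above threshold
```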

  1. Exploring information transmission in gene networks using stochastic simulation and machine learning

    NASA Astrophysics Data System (ADS)

    Park, Kyemyung; Prüstel, Thorsten; Lu, Yong; Narayanan, Manikandan; Martins, Andrew; Tsang, John

    How gene regulatory networks operate robustly despite environmental fluctuations and biochemical noise is a fundamental question in biology. Mathematically the stochastic dynamics of a gene regulatory network can be modeled using chemical master equation (CME), but nonlinearity and other challenges render analytical solutions of CMEs difficult to attain. While approaches of approximation and stochastic simulation have been devised for simple models, obtaining a more global picture of a system's behaviors in high-dimensional parameter space without simplifying the system substantially remains a major challenge. Here we present a new framework for understanding and predicting the behaviors of gene regulatory networks in the context of information transmission among genes. Our approach uses stochastic simulation of the network followed by machine learning of the mapping between model parameters and network phenotypes such as information transmission behavior. We also devised ways to visualize high-dimensional phase spaces in intuitive and informative manners. We applied our approach to several gene regulatory circuit motifs, including both feedback and feedforward loops, to reveal underexplored aspects of their operational behaviors. This work is supported by the Intramural Program of NIAID/NIH.
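    The stochastic-simulation half of such a pipeline can be illustrated with a minimal Gillespie (stochastic simulation algorithm) run of a birth-death gene-expression model; the rates and the model itself are illustrative stand-ins, not the circuits studied in the work.

```python
import random

# Minimal Gillespie (SSA) sketch for a birth-death gene-expression model,
# the kind of CME-level simulation whose parameter-to-phenotype mapping is
# then learned by a machine-learning step. Rates are illustrative.

def gillespie_birth_death(k_prod, k_deg, t_end, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        a_prod, a_deg = k_prod, k_deg * x     # reaction propensities
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)         # exponential waiting time
        if rng.random() < a_prod / a_total:   # pick which reaction fires
            x += 1                            # production
        else:
            x -= 1                            # degradation
    return x

# The stationary mean copy number of a birth-death process is k_prod / k_deg.
samples = [gillespie_birth_death(10.0, 0.1, 200.0, seed=s) for s in range(20)]
mean_x = sum(samples) / len(samples)
print(mean_x)  # scatters around 100
```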

  2. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing✩

    PubMed Central

    Seppala, Jonathan E.; Migler, Kalman D.

    2016-01-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded through a rastering nozzle, welded onto neighboring layers, and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging, in conjunction with the necessary reflection corrections and calibration procedures, to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperature can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step toward strategies to control and model the printing process and toward models that correlate critical part strength with material and processing parameters. PMID:29167755

  3. Characterizing the size and shape of sea ice floes

    PubMed Central

    Gherardi, Marco; Lagomarsino, Marco Cosentino

    2015-01-01

    Monitoring drift ice in the Arctic and Antarctic regions directly and by remote sensing is important for the study of climate, but a unified modeling framework is lacking. Hence, interpretation of the data, as well as the decision of what to measure, represent a challenge for different fields of science. To address this point, we analyzed, using statistical physics tools, satellite images of sea ice from four different locations in both the northern and southern hemispheres, and measured the size and the elongation of ice floes (floating pieces of ice). We find that (i) floe size follows a distribution that can be characterized with good approximation by a single length scale , which we discuss in the framework of stochastic fragmentation models, and (ii) the deviation of their shape from circularity is reproduced with remarkable precision by a geometric model of coalescence by freezing, based on random Voronoi tessellations, with a single free parameter expressing the shape disorder. Although the physical interpretations remain open, this advocates the parameters and as two independent indicators of the environment in the polar regions, which are easily accessible by remote sensing. PMID:26014797

  4. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing.

    PubMed

    Seppala, Jonathan E; Migler, Kalman D

    2016-10-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded through a rastering nozzle, welded onto neighboring layers, and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging, in conjunction with the necessary reflection corrections and calibration procedures, to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperature can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step toward strategies to control and model the printing process and toward models that correlate critical part strength with material and processing parameters.

  5. Effects of climate change on evapotranspiration over the Okavango Delta water resources

    NASA Astrophysics Data System (ADS)

    Moses, Oliver; Hambira, Wame L.

    2018-06-01

    In semi-arid developing countries, most poor people depend on contaminated surface or groundwater resources because they do not have access to safe, centrally supplied water. These water resources are threatened by several factors, including high evapotranspiration rates. In the Okavango Delta region of north-western Botswana, communities with insufficient centrally supplied water rely mainly on the surface water resources of the Delta. The Delta loses about 98% of its water through evapotranspiration; the remaining 2%, however, sustains the communities that the mains supply cannot adequately serve. To understand the effects of climate change on evapotranspiration over the Okavango Delta water resources, this study analysed trends in the main climatic parameters needed as input variables in evapotranspiration models, using the Mann-Kendall test. Trend analysis is crucial because it reveals the direction of trends in the climatic parameters, which helps determine the effects of climate change on evapotranspiration. The climatic parameters of interest were wind speed, solar radiation and relative humidity, on which very little research has been conducted in the Okavango Delta region. The analysis focused mainly on wind speed, which had relatively longer data records than the other two parameters. Generally, statistically significant increasing trends were found, suggesting that climate change is likely to further increase evapotranspiration over the Okavango Delta water resources.
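    The Mann-Kendall test used above is simple enough to sketch: the S statistic counts concordant minus discordant pairs, and the normalized Z score flags a significant monotonic trend (|Z| > 1.96 at the 5% level). This minimal version omits the tie correction to the variance, which a production analysis would include.

```python
import math

# Minimal Mann-Kendall trend test (no tie correction in the variance).

def mann_kendall_z(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])        # concordant minus discordant
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

rising = [i + 0.1 * ((-1) ** i) for i in range(30)]   # noisy increasing series
print(mann_kendall_z(rising) > 1.96)  # True: significant upward trend
```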

  6. Influence of the coating level on the heterogeneous ozonolysis kinetics and product yields of chlorpyrifos ethyl adsorbed on sand particles.

    PubMed

    El Masri, Ahmad; Laversin, Hélène; Chakir, Abdelkhaleq; Roth, Estelle

    2016-12-01

    The heterogeneous oxidation of chlorpyrifos ethyl (CLP)-coated sand particles by gaseous ozone was studied. Mono-size sand was coated with CLP at coating levels between 10 and 100 μg g⁻¹ and exposed to ozone. Results were analyzed using Gas Surface Reaction and Surface Layer Reaction models. Kinetic parameters derived from these models led to several conclusions. The equilibrium constant of O3 between the gas phase and the CLP-coated sand was independent of the sand contamination level; ozone appears to have similar affinity for coated and uncoated sand surfaces. Meanwhile, the kinetic parameters decreased with increasing coating level. Chlorpyrifos oxon (CLPO) was identified and quantified as an ozonolysis product; its yield remains constant (53 ± 10%) across the different coating levels. The key parameter influencing the reactivity of CLP towards ozone was the CLP coating level. This dependence strongly influences the lifetime, with respect to ozone, of CLP coated on sand particles, which could reach several years at high contamination levels. Copyright © 2016 Elsevier Ltd. All rights reserved.
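    Gas-surface reaction models of this kind typically take a Langmuir-Hinshelwood form, in which the observed pseudo-first-order rate is proportional to the fractional ozone surface coverage and saturates at high ozone. The sketch below shows only that functional form; the numerical values are assumptions, not the fitted constants from this study.

```python
# Illustrative Langmuir-Hinshelwood rate: saturating dependence on [O3].

def observed_rate(k_max, k_eq, o3_conc):
    """Pseudo-first-order loss rate of the adsorbed pesticide."""
    theta = k_eq * o3_conc / (1.0 + k_eq * o3_conc)   # fractional O3 coverage
    return k_max * theta

low = observed_rate(1e-3, 2e-15, 1e13)    # dilute regime: nearly linear in [O3]
high = observed_rate(1e-3, 2e-15, 1e16)   # near saturation: approaches k_max
print(low < high < 1e-3)  # True
```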

  7. N-body simulations of terrestrial planet formation under the influence of a hot Jupiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogihara, Masahiro; Kobayashi, Hiroshi; Inutsuka, Shu-ichiro, E-mail: omasahiro@oca.eu, E-mail: ogihara@nagoya-u.jp

    We investigate the formation of multiple-planet systems in the presence of a hot Jupiter (HJ) using extended N-body simulations performed simultaneously with semianalytic calculations. Our primary aims are to describe the planet formation process starting from planetesimals using high-resolution simulations, and to examine how the architecture of planetary systems depends on input parameters (e.g., disk mass, disk viscosity). We observe that protoplanets that arise from oligarchic growth and undergo type I migration stop migrating when they join a chain of resonant planets outside the orbit of an HJ. The formation of a resonant chain is almost independent of our model parameters and is thus a robust process. At the end of our simulations, several terrestrial planets remain at around 0.1 AU. The formed planets are not of equal mass; the largest planet constitutes more than 50% of the total mass in the close-in region, a result that is also relatively insensitive to parameters. In previous work, we found a new physical mechanism of induced migration of the HJ, called crowding-out. If the HJ opens up a wide gap in the disk (e.g., owing to low disk viscosity), crowding-out becomes less efficient and the HJ remains. We also discuss angular momentum transfer between the planets and the disk.

  8. Fish bioaccumulation and biomarkers in environmental risk assessment: a review.

    PubMed

    van der Oost, Ron; Beyer, Jonny; Vermeulen, Nico P E

    2003-02-01

    In this review, a wide array of bioaccumulation markers and biomarkers, used to demonstrate exposure to and effects of environmental contaminants, has been discussed in relation to their feasibility in environmental risk assessment (ERA). Fish bioaccumulation markers may be applied in order to elucidate the aquatic behavior of environmental contaminants, as bioconcentrators to identify certain substances with low water levels and to assess exposure of aquatic organisms. Since it is virtually impossible to predict the fate of xenobiotic substances with simple partitioning models, the complexity of bioaccumulation should be considered, including toxicokinetics, metabolism, biota-sediment accumulation factors (BSAFs), organ-specific bioaccumulation and bound residues. Since it remains hard to accurately predict bioaccumulation in fish, even with highly sophisticated models, analyses of tissue levels are required. The most promising fish bioaccumulation markers are body burdens of persistent organic pollutants, like PCBs and DDTs. Since PCDD and PCDF levels in fish tissues are very low as compared with the sediment levels, their value as bioaccumulation markers remains questionable. Easily biodegradable compounds, such as PAHs and chlorinated phenols, do not tend to accumulate in fish tissues in quantities that reflect the exposure. Semipermeable membrane devices (SPMDs) have been successfully used to mimic bioaccumulation of hydrophobic organic substances in aquatic organisms. 
In order to assess exposure to or effects of environmental pollutants on aquatic ecosystems, the following suite of fish biomarkers may be examined: biotransformation enzymes (phase I and II), oxidative stress parameters, biotransformation products, stress proteins, metallothioneins (MTs), MXR proteins, hematological parameters, immunological parameters, reproductive and endocrine parameters, genotoxic parameters, neuromuscular parameters, physiological, histological and morphological parameters. All fish biomarkers are evaluated for their potential use in ERA programs, based upon six criteria that have been proposed in the present paper. This evaluation demonstrates that phase I enzymes (e.g. hepatic EROD and CYP1A), biotransformation products (e.g. biliary PAH metabolites), reproductive parameters (e.g. plasma VTG) and genotoxic parameters (e.g. hepatic DNA adducts) are currently the most valuable fish biomarkers for ERA. The use of biomonitoring methods in the control strategies for chemical pollution has several advantages over chemical monitoring. Many of the biological measurements form the only way of integrating effects on a large number of individual and interactive processes in aquatic organisms. Moreover, biological and biochemical effects may link the bioavailability of the compounds of interest with their concentration at target organs and intrinsic toxicity. The limitations of biomonitoring, such as confounding factors that are not related to pollution, should be carefully considered when interpreting biomarker data. Based upon this overview there is little doubt that measurements of bioaccumulation and biomarker responses in fish from contaminated sites offer great promises for providing information that can contribute to environmental monitoring programs designed for various aspects of ERA.

  9. Plane Wave SH₀ Piezoceramic Transduction Optimized Using Geometrical Parameters.

    PubMed

    Boivin, Guillaume; Viens, Martin; Belanger, Pierre

    2018-02-10

    Structural health monitoring is a prominent alternative to the scheduled maintenance of safety-critical components. The nondispersive nature and the through-thickness mode shape of the fundamental shear horizontal guided wave mode (SH0) make it a particularly attractive candidate for ultrasonic guided wave structural health monitoring. However, plane-wave excitation of SH0 at a high level of purity remains challenging because the fundamental Lamb modes (A0 and S0) also exist below the cutoff frequency-thickness product of the high-order modes. This paper presents a piezoelectric transducer concept optimized for plane SH0 wave transduction based on the transducer geometry. The transducer parameters were initially explored using a simple analytical model; a 3D multiphysics finite element model was then used to refine the transducer design. Finally, an experimental validation was conducted with a 3D laser Doppler vibrometer system. The analytical model, the finite element model, and the experimental measurements showed excellent agreement. The modal selectivity of SH0 within a 20° beam opening angle at the design frequency of 425 kHz in a 1.59 mm aluminum plate was 23 dB, and the angle of the 6 dB wavefront was 86°.
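    Because SH0 is nondispersive, its phase velocity equals the bulk shear velocity c = sqrt(G / rho), so the design wavelength follows directly from the operating frequency. The sketch below uses typical handbook values for aluminum, which are assumptions, not the plate properties reported in the paper.

```python
import math

# SH0 phase velocity and wavelength from textbook aluminum properties.

G = 26e9        # shear modulus of aluminum, Pa (typical handbook value)
rho = 2700.0    # density of aluminum, kg/m^3
f = 425e3       # design frequency, Hz

c_sh0 = math.sqrt(G / rho)     # ~3.1 km/s, independent of frequency
wavelength = c_sh0 / f
print(round(c_sh0), "m/s,", round(wavelength * 1e3, 2), "mm")
```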

  10. Quantification of downscaled precipitation uncertainties via Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nury, A. H.; Sharma, A.; Marshall, L. A.

    2017-12-01

    Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several downscaling approaches have been developed, using either dynamical or statistical methods. Outputs from dynamical downscaling are frequently not transferable across regions because of significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, yielding hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising both from the downscaled model parameters and from the use of different reanalysis products to infer those parameters; this uncertainty in turn affects simulation performance at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs and characterizes downscaling uncertainties by evaluating reanalysis datasets against observational rainfall data over Australia. A consistent technique for quantifying downscaling uncertainties by means of this Bayesian framework is proposed. The results suggest that there are differences in downscaled precipitation occurrences and extremes.
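    The inference pattern can be illustrated with a much simpler model: a conjugate Beta-Bernoulli posterior for the daily precipitation-occurrence probability, which carries parameter uncertainty as a full posterior rather than a point estimate. The paper's downscaling model is far richer; this is only a sketch of the Bayesian idea, with illustrative counts.

```python
# Conjugate Beta posterior for a wet-day probability (illustrative sketch).

def posterior_beta(wet_days, total_days, a0=1.0, b0=1.0):
    """Beta posterior mean and variance for the wet-day probability."""
    a = a0 + wet_days
    b = b0 + total_days - wet_days
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return mean, var

mean, var = posterior_beta(30, 100)
print(round(mean, 3), round(var ** 0.5, 3))  # ~0.304 +/- ~0.045
```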

  11. Uncertainty Quantification in Tsunami Early Warning Calculations

    NASA Astrophysics Data System (ADS)

    Anunziato, Alessandro

    2016-04-01

    The objective of tsunami calculations is to estimate the impact of waves caused by large seismic events on the coasts and to determine potential inundation areas. In the case of early warning systems, i.e. systems that should make it possible to anticipate the effects and react accordingly (for instance, by ordering the evacuation of areas at risk), this must be done in a very short time (minutes) to be effective. In reality, this estimation involves several uncertainty factors that make the prediction extremely difficult. The very first estimates of the seismic parameters are not very precise: the uncertainty in the seismic components (location, magnitude and depth) decreases with time, because as time passes more and more seismic signals can be used and the event characterization becomes more precise. On the other hand, other parameters necessary for a calculation (e.g. the fault mechanism) are difficult to estimate accurately even after hours (and in some cases remain unknown), so this uncertainty persists in the estimated impact evaluations. When a quick tsunami calculation is needed (early warning systems), the ability to account for possible future variations of the conditions and establish a "worst case scenario" is particularly important. The consequence is that the number of uncertain parameters is so large that it is not easy to assess the relative importance of each of them and their effect on the predicted results. In general, the complexity of system computer codes stems from the multitude of different models that are assembled into a single program to give the global response for a particular phenomenon. Each of these models has an associated uncertainty, derived from applying the model to single cases and/or separate-effect test cases. 
The difficulty in predicting a tsunami calculation response is further increased by imperfect knowledge of the initial and boundary conditions, so that the response can change even with small variations of the input. The paper analyses a number of potential events in the Mediterranean Sea and the Atlantic Ocean; for each of them a large number of calculations is performed (Monte Carlo simulation) in order to identify the relative importance of each uncertain parameter. It is shown that even though the variation in the estimate is reduced after several hours, it still remains and in some cases can lead to different conclusions if this information is used for alerting. The cases considered are a mild event in the Hellenic arc (Mag. 6.9), a medium event in Algeria (Mag. 7.2) and a quite relevant event in the Gulf of Cadiz (Mag. 8.2).
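    The Monte Carlo procedure described above can be sketched minimally: draw the uncertain source parameter (here only the magnitude) many times and propagate each draw through a toy wave-height relation, so the input spread shows up directly as a spread in the predicted impact. The height relation is purely illustrative, not a tsunami propagation model.

```python
import random
import statistics

# Monte Carlo propagation of source-parameter uncertainty (illustrative).

random.seed(42)

def toy_wave_height(magnitude):
    return 10.0 ** (magnitude - 7.0)          # illustrative scaling only

draws = [random.gauss(7.2, 0.2) for _ in range(5000)]  # magnitude uncertainty
heights = [toy_wave_height(m) for m in draws]

print(round(statistics.mean(heights), 2), round(statistics.stdev(heights), 2))
```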

  12. Laboratory parameter-based machine learning model for excluding non-alcoholic fatty liver disease (NAFLD) in the general population.

    PubMed

    Yip, T C-F; Ma, A J; Wong, V W-S; Tse, Y-K; Chan, H L-Y; Yuen, P-C; Wong, G L-H

    2017-08-01

    Non-alcoholic fatty liver disease (NAFLD) affects 20%-40% of the general population in developed countries and is an increasingly important cause of hepatocellular carcinoma. Electronic medical records facilitate large-scale epidemiological studies, but existing NAFLD scores often require clinical and anthropometric parameters that may not be captured in those databases. We aimed to develop and validate a laboratory-parameter-based machine learning model to detect NAFLD in the general population. We randomly divided 922 subjects from a population screening study into training and validation groups; NAFLD was diagnosed by proton-magnetic resonance spectroscopy. On the basis of machine learning from 23 routine clinical and laboratory parameters after elastic net regularization, we evaluated logistic regression, ridge regression, AdaBoost and decision tree models, and compared the areas under the receiver-operating characteristic curve (AUROC) of the models in the validation group. Six predictors were selected: alanine aminotransferase, high-density lipoprotein cholesterol, triglyceride, haemoglobin A1c, white blood cell count and the presence of hypertension. The NAFLD ridge score achieved an AUROC of 0.87 (95% CI 0.83-0.90) and 0.88 (0.84-0.91) in the training and validation groups respectively. Using dual cut-offs of 0.24 and 0.44, the NAFLD ridge score achieved 92% (86%-96%) sensitivity and 90% (86%-93%) specificity, with corresponding negative and positive predictive values of 96% (91%-98%) and 69% (59%-78%), and 87% overall accuracy among the 70% of subjects who were classifiable in the validation group; 30% of subjects remained indeterminate. The NAFLD ridge score is a simple and robust reference, comparable to existing NAFLD scores, for excluding NAFLD in epidemiological studies. © 2017 John Wiley & Sons Ltd.
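    The dual cut-off rule described above is easy to sketch: scores below the lower threshold rule NAFLD out, scores above the upper threshold rule it in, and the band in between is reported as indeterminate. The label strings are illustrative, not the paper's terminology.

```python
# Dual cut-off classification (thresholds from the abstract: 0.24 and 0.44).

def classify(score, low=0.24, high=0.44):
    if score < low:
        return "NAFLD excluded"
    if score > high:
        return "NAFLD likely"
    return "indeterminate"

print([classify(s) for s in (0.10, 0.30, 0.60)])
# ['NAFLD excluded', 'indeterminate', 'NAFLD likely']
```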

  13. Interactions of donor sources and media influence the histo-morphological quality of full-thickness skin models.

    PubMed

    Lange, Julia; Weil, Frederik; Riegler, Christoph; Groeber, Florian; Rebhan, Silke; Kurdyn, Szymon; Alb, Miriam; Kneitz, Hermann; Gelbrich, Götz; Walles, Heike; Mielke, Stephan

    2016-10-01

    Human artificial skin models are increasingly employed as non-animal test platforms for research and medical purposes. However, the overall histopathological quality of such models may vary significantly. Therefore, the effects of manufacturing protocols and donor sources on the quality of skin models built up from fibroblasts and keratinocytes derived from juvenile foreskins were studied. Histo-morphological parameters such as epidermal thickness, number of epidermal cell layers, dermal thickness, dermo-epidermal adhesion and absence of cellular nuclei in the corneal layer were obtained and scored accordingly. In total, 144 full-thickness skin models derived from 16 different donors, built up in triplicate under three different culture conditions, were successfully generated. In univariate analysis, both media and donor age affected the quality of skin models significantly. Both parameters remained statistically significant in multivariate analyses. Performing general linear model analyses, we could show that individual medium-donor interactions influence the quality. These observations suggest that the optimal choice of media may differ from donor to donor, and they coincide with findings in which significant inter-individual variations of growth rates in keratinocytes and fibroblasts have been described. Thus, the consideration of individual medium-donor interactions may improve the overall quality of human organ models, thereby forming a reproducible test platform for sophisticated clinical research. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are the most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPC/E) in reproducing a comprehensive set of liquid bulk properties over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water, a characteristic dependence of hydration free energy on the sign of the solute charge, in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3P-FB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  15. USING A PHENOMENOLOGICAL MODEL TO TEST THE COINCIDENCE PROBLEM OF DARK ENERGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Yun; Zhu Zonghong; Alcaniz, J. S.

    2010-03-01

    By assuming a phenomenological form for the ratio of the dark energy and matter densities, ρ_X ∝ ρ_m a^ξ, we discuss the cosmic coincidence problem in light of current observational data. Here, ξ is a key parameter denoting the severity of the coincidence problem. In this scenario, ξ = 3 and ξ = 0 correspond to ΛCDM and to the self-similar solution without the coincidence problem, respectively. Hence, any solution with a scaling parameter 0 < ξ < 3 makes the coincidence problem less severe. In addition, the standard cosmology without interaction between dark energy and dark matter is characterized by ξ + 3ω_X = 0, where ω_X is the equation of state of the dark energy component, whereas the inequality ξ + 3ω_X ≠ 0 represents non-standard cosmology. We place observational constraints on the parameters (Ω_X,0, ω_X, ξ) of this model, where Ω_X,0 is the present value of the dark energy density parameter Ω_X, by using the Constitution set (397 Type Ia supernovae, hereafter SNe Ia), the cosmic microwave background shift parameter from the five-year Wilkinson Microwave Anisotropy Probe, and the Sloan Digital Sky Survey baryon acoustic peak. Combining the three samples, we get Ω_X,0 = 0.72 ± 0.02, ω_X = -0.98 ± 0.07, and ξ = 3.06 ± 0.35 at the 68.3% confidence level. The result shows that the ΛCDM model remains a good fit to the recent observational data and that, within this simple phenomenological model, the coincidence problem indeed exists and is quite severe. We further constrain the model with the transition redshift (deceleration/acceleration). It shows that if the transition from deceleration to acceleration happens at redshift z > 0.73, then within the framework of this model the interaction between dark energy and dark matter is necessary.
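    The scaling ansatz ρ_X/ρ_m ∝ a^ξ is easy to check numerically; a minimal sketch (function name hypothetical) confirming that ξ = 3 reproduces the ΛCDM behavior, where ρ_m ∝ a^-3 and ρ_Λ is constant so their ratio grows as a^3:

```python
def density_ratio(a, r0, xi):
    """Phenomenological ansatz: rho_X / rho_m = r0 * a**xi,
    with a the scale factor (a = 1 today) and r0 the present-day ratio.
    xi = 3 reproduces LambdaCDM; xi = 0 is the self-similar solution."""
    return r0 * a ** xi
```

    For the best-fit Ω_X,0 = 0.72, the present-day ratio is r0 = 0.72/0.28 ≈ 2.57, and at a = 0.5 a ΛCDM universe (ξ = 3) has a ratio eight times smaller.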

  16. Inheritance of astigmatism: evidence for a major autosomal dominant locus.

    PubMed Central

    Clementi, M; Angi, M; Forabosco, P; Di Gianantonio, E; Tenconi, R

    1998-01-01

    Although astigmatism is a frequent refractive error, its mode of inheritance remains uncertain. Complex segregation analysis was performed, by the POINTER and COMDS programs, with data from a geographically well-defined sample of 125 nuclear families of individuals affected by astigmatism. POINTER could not distinguish between alternative genetic models, and only the hypothesis of no familial transmission could be rejected. After inclusion of the severity parameter, COMDS results defined a genetic model for corneal astigmatism and provided evidence for single-major-locus inheritance. These results suggest that genetic linkage studies could be implemented and that they should be limited to multiplex families with severely affected individuals. PMID:9718344

  17. Anisotropic inflation with derivative couplings

    NASA Astrophysics Data System (ADS)

    Holland, Jonathan; Kanno, Sugumi; Zavala, Ivonne

    2018-05-01

    We study anisotropic power-law inflationary solutions when the inflaton and its derivative couple to a vector field. This type of coupling is motivated by D-brane inflationary models, in which the inflaton, and a vector field living on the D-brane, couple disformally (derivatively). We start by studying a phenomenological model where we show the existence of anisotropic solutions and demonstrate their stability via a dynamical system analysis. Compared to the case without a derivative coupling, the anisotropy is reduced and thus can be made consistent with current limits, while the value of the slow-roll parameter remains almost unchanged. We also discuss solutions for more general cases, including D-brane-like couplings.

  18. Linear-stability theory of thermocapillary convection in a model of float-zone crystal growth

    NASA Technical Reports Server (NTRS)

    Neitzel, G. P.; Chang, K.-T.; Jankowski, D. F.; Mittelmann, H. D.

    1992-01-01

    Linear-stability theory has been applied to a basic state of thermocapillary convection in a model half-zone to determine values of the Marangoni number above which instability is guaranteed. The basic state must be determined numerically since the half-zone is of finite, O(1) aspect ratio with two-dimensional flow and temperature fields. This, in turn, means that the governing equations for disturbance quantities will remain partial differential equations. The disturbance equations are treated by a staggered-grid discretization scheme. Results are presented for a variety of parameters of interest in the problem, including both terrestrial and microgravity cases.

  19. Supersymmetry without prejudice at the LHC

    NASA Astrophysics Data System (ADS)

    Conley, John A.; Gainer, James S.; Hewett, JoAnne L.; Le, My Phuong; Rizzo, Thomas G.

    2011-07-01

    The discovery and exploration of Supersymmetry in a model-independent fashion will be a daunting task due to the large number of soft-breaking parameters in the MSSM. In this paper, we explore the capability of the ATLAS detector at the LHC (√s = 14 TeV, 1 fb⁻¹) to find SUSY within the 19-dimensional pMSSM subspace of the MSSM using their standard missing transverse energy and long-lived particle searches, which were essentially designed for mSUGRA. To this end, we employ a set of ~71k previously generated model points in the 19-dimensional parameter space that satisfy all of the existing experimental and theoretical constraints. Employing ATLAS-generated SM backgrounds and following their approach in each of 11 missing energy analyses as closely as possible, we explore all of these 71k model points for a possible SUSY signal. To test our analysis procedure, we first verify that we faithfully reproduce the published ATLAS results for the signal distributions for their benchmark mSUGRA model points. We then show that, requiring all sparticle masses to lie below 1 (3) TeV, almost all (two-thirds) of the pMSSM model points are discovered with a significance S > 5 in at least one of these 11 analyses, assuming a 50% systematic error on the SM background. If this systematic error can be reduced to only 20%, this parameter-space coverage is increased. These results indicate that the ATLAS SUSY search strategy is robust under a broad class of Supersymmetric models. We then explore in detail the properties of the kinematically accessible model points that remain unobservable by these search analyses, in order to ascertain problematic cases that may arise in general SUSY searches.
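    The abstract's point that shrinking the background systematic from 50% to 20% increases coverage can be illustrated with a common simplified significance estimator, S = s/√(b + (f·b)²), where f is the fractional systematic on the background. This estimator is an assumption for illustration, not necessarily the exact ATLAS prescription used in the paper.

```python
import math

def significance(s, b, sys_frac):
    """Approximate signal significance with a fractional systematic
    uncertainty on the background: S = s / sqrt(b + (sys_frac * b)**2).
    (A common simplified estimator; detector analyses use more
    elaborate likelihood-based definitions.)"""
    return s / math.sqrt(b + (sys_frac * b) ** 2)
```

    With s = 50 signal events on b = 100 background events, the significance more than doubles when the systematic drops from 50% to 20%, which is why some borderline model points become discoverable.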

  20. Seasonal Influenza Forecasting in Real Time Using the Incidence Decay With Exponential Adjustment Model

    PubMed Central

    Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan

    2017-01-01

    Background: Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. Methods: We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results: The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size was underestimated when pre-peak data were used. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions: A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
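    The IDEA model referenced above is commonly written I(t) = (R0/(1+d)^t)^t, with t measured in serial intervals, R0 the basic reproduction number, and d the discount (control) parameter that damps growth over time. A minimal sketch, with parameter values illustrative rather than the paper's fits:

```python
def idea_incidence(t, R0, d):
    """IDEA model: incident cases in serial interval t,
    I(t) = (R0 / (1 + d)**t)**t.
    With d = 0 this reduces to simple exponential growth R0**t;
    d > 0 makes incidence rise, peak, and decay."""
    return (R0 / (1.0 + d) ** t) ** t
```

    Fitting just these two parameters each week to cumulative case counts is what allows the peak and final size to be projected in real time.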

  1. Onset of mortality increase with age and age trajectories of mortality from all diseases in the four Nordic countries.

    PubMed

    Dolejs, Josef; Marešová, Petra

    2017-01-01

    The answer to the question "At what age does aging begin?" is tightly related to the question "Where is the onset of mortality increase with age?" Age affects mortality rates from all diseases differently than it affects mortality rates from nonbiological causes. Mortality increase with age in adult populations has been modeled by many authors, but little attention has been given to mortality decrease with age after birth. Nonbiological causes are excluded, and the category "all diseases" is studied. It is analyzed in Denmark, Finland, Norway, and Sweden during the period 1994-2011, and all possible models are screened. Age trajectories of mortality are analyzed separately before and after the age category where mortality reaches its minimal value. The resulting age trajectories for all diseases showed a strong minimum, which was hidden in total mortality. The inverse proportion between mortality and age fitted in 54 of 58 cases before the mortality minimum. The Gompertz model with two parameters fitted the mortality increase with age in 17 of 58 cases after the mortality minimum, and the Gompertz model with a small positive quadratic term fitted the data in the remaining 41 cases. The mean age at which mortality reached its minimal value was 8 (95% confidence interval 7.05-8.95) years. The figures depict an age at which the human population has a minimal risk of death from biological causes. The inverse proportion and the Gompertz model fitted the data on the two sides of the mortality minimum, and three parameters determined the shape of the age-mortality trajectory. Life expectancy should be determined by the two standard Gompertz parameters and also by the single parameter in the model c/x. All-disease mortality represents an alternative tool to study the impact of age. All results are based on published data.
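    The three-parameter shape described above (an inverse-proportion term c/x dominating in childhood plus a Gompertz term a·exp(b·x) dominating in adulthood) can be sketched by summing the two terms and locating the minimum numerically. The parameter values below are illustrative only, not the paper's estimates:

```python
import math

def mortality(age, a, b, c):
    """Combined hazard: c/x (falling branch before the minimum)
    plus the two-parameter Gompertz term a*exp(b*x) (rising branch
    after the minimum). Illustrative parameters, not fitted values."""
    return c / age + a * math.exp(b * age)

# Locate the minimum-mortality age on a grid from 1 to 60 years.
ages = [x / 10 for x in range(10, 600)]
min_age = min(ages, key=lambda x: mortality(x, 3e-5, 0.09, 1e-3))
```

    With these toy parameters the minimum falls in late childhood, qualitatively matching the reported mean of about 8 years; the exact location depends entirely on a, b and c.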

  2. Predicting the heat of vaporization of iron at high temperatures using time-resolved laser-induced incandescence and Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Sipkens, Timothy A.; Hadwin, Paul J.; Grauer, Samuel J.; Daun, Kyle J.

    2018-03-01

    Competing theories have been proposed to account for how the latent heat of vaporization of liquid iron varies with temperature, but experimental confirmation remains elusive, particularly at high temperatures. We propose time-resolved laser-induced incandescence measurements on iron nanoparticles combined with Bayesian model plausibility, as a novel method for evaluating these relationships. Our approach scores the explanatory power of candidate models, accounting for parameter uncertainty, model complexity, measurement noise, and goodness-of-fit. The approach is first validated with simulated data and then applied to experimental data for iron nanoparticles in argon. Our results justify the use of Román's equation to account for the temperature dependence of the latent heat of vaporization of liquid iron.

  3. Acceleration of ultrahigh-energy cosmic rays in starburst superwinds

    NASA Astrophysics Data System (ADS)

    Anchordoqui, Luis Alfredo

    2018-03-01

    The sources of ultrahigh-energy cosmic rays (UHECRs) have been stubbornly elusive. However, the latest report of the Pierre Auger Observatory provides a compelling indication of a possible correlation between the arrival directions of UHECRs and nearby starburst galaxies. We argue that if starbursts are sources of UHECRs, then particle acceleration in the large-scale terminal shock of the superwind that flows from the starburst engine represents the best known concept model in the market. We investigate new constraints on the model and readjust free parameters accordingly. We show that UHECR acceleration above about 10^11 GeV remains consistent with observation. We also show that the model could accommodate hard source spectra, as required by Auger data. We demonstrate how neutrino emission can be used as a discriminator among acceleration models.
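    As a rough cross-check on acceleration to ~10^11 GeV, the standard Hillas estimate bounds the rigidity-limited energy of any magnetized accelerator: E_max ≈ Z e β B R, numerically about 0.9 Z β EeV per microgauss·kiloparsec. This generic estimate is not the paper's detailed terminal-shock model, and the function name is illustrative:

```python
def hillas_emax_eV(Z, beta, B_muG, R_kpc):
    """Hillas estimate of the maximum rigidity-limited energy (in eV):
    E_max ~ Z e beta B R, i.e. roughly 0.9e18 eV for a proton (Z = 1)
    with beta = 1, B = 1 microgauss and R = 1 kpc.
    Z: charge number; beta: shock velocity / c;
    B_muG: magnetic field in microgauss; R_kpc: size in kiloparsecs."""
    return 0.9e18 * Z * beta * B_muG * R_kpc
```

    The estimate scales linearly with charge, which is one reason heavier nuclei (and hence hard source spectra) ease the energy requirement on any candidate source.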

  4. Slow secondary relaxation in a free-energy landscape model for relaxation in glass-forming liquids

    NASA Astrophysics Data System (ADS)

    Diezemann, Gregor; Mohanty, Udayan; Oppenheim, Irwin

    1999-02-01

    Within the framework of a free-energy landscape model for the relaxation in supercooled liquids the primary (α) relaxation is modeled by transitions among different free-energy minima. The secondary (β) relaxation then corresponds to intraminima relaxation. We consider a simple model for the reorientational motions of the molecules associated with both processes and calculate the dielectric susceptibility as well as the spin-lattice relaxation times. The parameters of the model can be chosen in a way that both quantities show a behavior similar to that observed in experimental studies on supercooled liquids. In particular we find that it is not possible to obtain a crossing of the time scales associated with α and β relaxation. In our model these processes always merge at high temperatures and the α process remains above the merging temperature. The relation to other models is discussed.

  5. Numerical Modeling System for Shoreline Change.

    DTIC Science & Technology

    1986-10-01

    ...waves and currents remains essentially unchanged, the behavior of a beach fill can be estimated (James 1975; Shore Protection Manual (SPM) 1984). ... [OCR fragment of Eq. (15), combining the refraction (KR) and shoaling (Ks) coefficients, omitted] ... where KD is the diffraction coefficient and θ is the geometric angle for a line at an angle to the x-axis. For the value of the longshore sand transport parameter, K1 in Eq. (5a), see Komar and Inman (1979) and the Shore Protection Manual.

  6. Clustering and optimal arrangement of enzymes in reaction-diffusion systems.

    PubMed

    Buchner, Alexander; Tostevin, Filipe; Gerland, Ulrich

    2013-05-17

    Enzymes within biochemical pathways are often colocalized, yet the consequences of specific spatial enzyme arrangements remain poorly understood. We study the impact of enzyme arrangement on reaction efficiency within a reaction-diffusion model. The optimal arrangement transitions from a cluster to a distributed profile as a single parameter, which controls the probability of reaction versus diffusive loss of pathway intermediates, is varied. We introduce the concept of enzyme exposure to explain how this transition arises from the stochastic nature of molecular reactions and diffusion.
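    The trade-off between reaction and diffusive loss that drives the cluster-versus-distributed transition can be illustrated with a toy 1D lattice model (not the paper's reaction-diffusion system): absorbing boundaries represent diffusive loss of the intermediate, and a per-visit reaction probability p at downstream-enzyme sites plays the role of the single control parameter. A relaxation (Gauss-Seidel) solve gives the probability of reacting before being lost:

```python
def reaction_probability(N, enzyme_sites, p, source, sweeps=20000):
    """Probability that an intermediate released at `source` reacts at a
    downstream-enzyme site before diffusing out of the 1D domain.
    Sites 0 and N-1 absorb (loss); on an enzyme site the particle reacts
    with probability p per visit, otherwise it hops to a random neighbor.
    u[i] solves: u_i = p_i + (1 - p_i) * (u_{i-1} + u_{i+1}) / 2."""
    u = [0.0] * N
    for _ in range(sweeps):
        for i in range(1, N - 1):
            react = p if i in enzyme_sites else 0.0
            u[i] = react + (1 - react) * 0.5 * (u[i - 1] + u[i + 1])
    return u[source]

# Compare a clustered and a distributed arrangement of three enzymes
# for an intermediate released at site 10 of a 21-site domain.
clustered = reaction_probability(21, {11, 12, 13}, 0.5, 10)
distributed = reaction_probability(21, {5, 10, 15}, 0.5, 10)
```

    Sweeping p in such a toy model is one way to see how the optimal arrangement can shift as reaction probability competes with loss, which is the qualitative effect the abstract describes.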

  7. Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models

    USGS Publications Warehouse

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.

  8. Temperature and pressure correlation for volume of gas hydrates with crystal structures sI and sII

    NASA Astrophysics Data System (ADS)

    Vinš, Václav; Jäger, Andreas; Hielscher, Sebastian; Span, Roland; Hrubý, Jan; Breitkopf, Cornelia

    The temperature and pressure correlations for the volume of gas hydrates forming crystal structures sI and sII, developed in a previous study [Fluid Phase Equilib. 427 (2016) 268-281] focused on the modeling of pure gas hydrates relevant to CCS (carbon capture and storage), were revised and modified in this study for the modeling of mixed hydrates. A universal reference state at a temperature of 273.15 K and a pressure of 1 Pa is used in the new correlation. Coefficients for the thermal expansion, together with the reference lattice parameter, were simultaneously correlated to both the temperature data and the pressure data for the lattice parameter. A two-stage Levenberg-Marquardt algorithm was employed for the parameter optimization. The pressure dependence, described in terms of the bulk modulus, remained unchanged compared to the original study. A constant value of the bulk modulus, B0 = 10 GPa, was employed for all selected hydrate formers. The new correlation is in good agreement with the experimental data over wide temperature and pressure ranges, from 0 K to 293 K and from 0 to 2000 MPa, respectively. Compared to the original correlation used for the modeling of pure gas hydrates, the new correlation provides significantly better agreement with the experimental data for sI hydrates. The results of the new correlation are comparable to those of the old correlation in the case of sII hydrates. In addition, the new correlation is suitable for the modeling of mixed hydrates.
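    A volume correlation of the general kind described (universal reference state at 273.15 K and 1 Pa, constant bulk modulus B0 = 10 GPa) might be sketched as follows. The polynomial order and coefficient names here are assumptions for illustration; the published correlation defines its own functional form and fitted coefficients.

```python
import math

def hydrate_volume(T, p, V0, alpha1, alpha2, B0=10e9):
    """Sketch of a T,p volume correlation about the reference state
    T0 = 273.15 K, p0 = 1 Pa:
      thermal expansion:  exp(alpha1*dT + alpha2*dT**2)
      compression:        exp(-(p - p0) / B0)  (constant bulk modulus)
    Returns V0 exactly at the reference state."""
    T0, p0 = 273.15, 1.0
    dT = T - T0
    return V0 * math.exp(alpha1 * dT + alpha2 * dT ** 2) * math.exp(-(p - p0) / B0)
```

    With B0 = 10 GPa, raising the pressure to 2000 MPa compresses the lattice by roughly exp(-0.2) ≈ 18%, which sets the scale of the pressure effect over the stated fitting range.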

  9. Dynamical Behavior of a Meteor in an Atmosphere: Theory vs. Observations

    NASA Astrophysics Data System (ADS)

    Gritsevich, Maria

    Up to now, the only quantities that directly follow from the available meteor observations are the brightness, the height above sea level, the length along the trajectory, and, as a consequence, the velocity as a function of time. Other important parameters, such as the meteoroid's mass, shape, bulk and grain density, and temperature, remain unknown and must be found based on physical theories and special experiments. In this study I consider modern methods for evaluating meteoroid parameters from observational data, and some of their applications. In particular, the study models the meteoroid's mass and other properties from the aerodynamic point of view, i.e. from the rate of body deceleration in the atmosphere, as opposed to the conventionally used luminosity [1]. An analytical model of the atmospheric entry is calculated for registered meteors using published observational data, evaluating parameters that describe the drag, ablation and rotation rate of the meteoroid along the luminous segment of the trajectory. One of the special features of this approach is the possibility of considering a change in body shape during its motion in the atmosphere. Correct mathematical modelling of meteor events is necessary for further studies of the consequences of collisions of cosmic bodies with the Earth [2]. It also helps us to estimate the key parameters of the meteoroids, including deceleration, pre-entry mass, terminal mass, ablation coefficient, effective destruction enthalpy, and heat-transfer coefficient. With this information, one can use models for the dust influx onto Earth to estimate the number of meteors detected by a camera of a given sensitivity. References: 1. Gritsevich M. I. Determination of Parameters of Meteor Bodies Based on Flight Observational Data // Advances in Space Research, 44, p. 323-334, 2009. 2. Gritsevich M. I., Stulov V. P. and Turchak L. I. Classification of Consequences for Collisions of Natural Cosmic Bodies with the Earth // Doklady Physics, 54, p. 499-503, 2009.

  10. Fate of Large-Scale Structure in Modified Gravity After GW170817 and GRB170817A

    NASA Astrophysics Data System (ADS)

    Amendola, Luca; Kunz, Martin; Saltas, Ippocratis D.; Sawicki, Ignacy

    2018-03-01

    The coincident detection of gravitational waves (GW) and a gamma-ray burst from a merger of neutron stars has placed an extremely stringent bound on the speed of GWs. We showed previously that the presence of gravitational slip (η) in cosmology is intimately tied to modifications of GW propagation. This new constraint implies that the only remaining viable source of gravitational slip is a conformal coupling to gravity in scalar-tensor theories, while viable vector-tensor theories cannot now generate gravitational slip at all. We discuss structure formation in the remaining viable models, demonstrating that (i) the dark-matter growth rate must now be at least as fast as in general relativity (GR), with the possible exception of beyond-Horndeski models, and (ii) if there is any scale dependence at all in the slip parameter, it is such that it takes the GR value at large scales. We present a consistency relation that must be violated if gravity is modified.

  11. Mechanical characterization of atherosclerotic arteries using finite-element modeling: feasibility study on mock arteries.

    PubMed

    Pazos, Valérie; Mongrain, Rosaire; Tardif, Jean-Claude

    2010-06-01

    Clinical studies on lipid-lowering therapy have shown that changing the composition of lipid pools significantly reduced the risk of cardiac events associated with plaque rupture. It has also been shown that changing the composition of the lipid pool affects its mechanical properties. However, knowledge about the mechanical properties of human atherosclerotic lesions remains limited due to the difficulty of the experiments. This paper aims to assess the feasibility of characterizing a lipid pool embedded in the wall of a pressurized vessel using finite-element simulations and an optimization algorithm. Finite-element simulations of inflation experiments were used together with a nonlinear least-squares algorithm to estimate the material model parameters of the wall and of the inclusion. An optimal fit between the simulated and the real experiment was sought with the parameter estimation algorithm. The method was first tested on a single-layer polyvinyl alcohol (PVA) cryogel stenotic vessel and then applied to a double-layered PVA cryogel stenotic vessel with a lipid inclusion.

  12. On the existence of vapor-liquid phase transition in dusty plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kundu, M.; Sen, A.; Ganesh, R.

    2014-10-15

    The phenomenon of phase transition in a dusty-plasma system (DPS) has attracted some attention in the past. Earlier, Farouki and Hamaguchi [J. Chem. Phys. 101, 9876 (1994)] demonstrated the existence of a liquid-to-solid transition in a DPS in which the dust particles interact through a Yukawa potential. However, the question of the existence of a vapor-liquid (VL) transition in such a system has remained unanswered and relatively unexplored so far. We have investigated this problem by performing extensive molecular dynamics simulations, which show that the VL transition does not have a critical curve in the pressure versus volume diagram for a large range of the Yukawa screening parameter κ and the Coulomb coupling parameter Γ. Thus, the VL phase transition is found to be super-critical, meaning that this transition is continuous in the dusty plasma model given by Farouki and Hamaguchi. We provide an approximate analytic explanation of this finding by means of a simple model calculation.
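    The screened interaction underlying the Farouki-Hamaguchi model is the Yukawa pair potential, U(r) = (Q²/4πε₀r)·exp(-r/λ_D); the dimensionless parameters in the abstract are then κ = a/λ_D (screening) and Γ = Q²/(4πε₀ a k_B T) (Coulomb coupling), with a the mean inter-grain spacing. A minimal sketch in SI units (grain charge and lengths illustrative):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def yukawa_energy(r, Q, lambda_D):
    """Yukawa (screened Coulomb) pair energy between two dust grains
    of charge Q (coulombs) at separation r (meters), with Debye
    screening length lambda_D (meters)."""
    return Q ** 2 / (4 * math.pi * EPS0 * r) * math.exp(-r / lambda_D)
```

    Stronger screening (smaller λ_D, i.e. larger κ) exponentially suppresses the interaction at fixed separation, which is why the phase behavior is controlled by the (κ, Γ) pair rather than by Q alone.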

  13. Study of a dry room in a battery manufacturing plant using a process model

    NASA Astrophysics Data System (ADS)

    Ahmed, Shabbir; Nelson, Paul A.; Dees, Dennis W.

    2016-09-01

    The manufacture of lithium-ion batteries requires some processing steps to be carried out in a dry room, where the moisture content should remain below 100 parts per million. The design and operation of such a dry room add to the cost of the battery. This paper studies the humidity management of the air to and from the dry room to understand the impact of design and operating parameters on the energy demand and on the contribution to the battery manufacturing cost. The study was conducted with the help of a process model for a dry room with a volume of 16,000 cubic meters. For a defined base-case scenario, it was found that the dry-room operation has an energy demand of approximately 400 kW. The paper explores some trade-offs in design and operating parameters by comparing humidity reduction through quenching of the make-up air with reduction at the desiccant wheel, and by examining the impact of heat recovery from the desiccant regeneration cycle.

  14. Multi-Dielectric Brownian Dynamics and Design-Space-Exploration Studies of Permeation in Ion Channels.

    PubMed

    Siksik, May; Krishnamurthy, Vikram

    2017-09-01

    This paper proposes a multi-dielectric Brownian dynamics simulation framework for design-space-exploration (DSE) studies of ion-channel permeation. The goal of such DSE studies is to estimate the channel modeling parameters that minimize the mean-squared error between the simulated and expected "permeation characteristics." To address this computational challenge, we use a methodology based on statistical inference that utilizes knowledge of the channel structure to prune the design space. We demonstrate the proposed framework and DSE methodology using a case study based on the KcsA ion channel, in which the design space is successfully reduced from a 6-D space to a 2-D space. Our results show that the channel dielectric map computed using the framework matches that computed directly using molecular dynamics to within an error of 7%. Finally, the scalability and resolution of the model are explored, and it is shown that the memory requirements for DSE remain constant as the number of parameters (degree of heterogeneity) increases.

  15. Alternative approaches to predicting methane emissions from dairy cows.

    PubMed

    Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J

    2003-12-01

    Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives that were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
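    The modified Mitscherlich (monomolecular) form referred to above is commonly written CH4 = a - (a + b)·exp(-c·x), where x is an intake measure (e.g. dry matter or metabolizable energy intake) and the steepness parameter c can be tied to the dietary starch-to-ADF ratio. A sketch with placeholder parameters; the fitted values from the paper are not reproduced here:

```python
import math

def methane_mitscherlich(intake, a, b, c):
    """Monomolecular (Mitscherlich) methane response:
      CH4 = a - (a + b) * exp(-c * intake)
    a: asymptotic methane output; b: offset fixing the intercept at
    zero intake (CH4 = -b); c: steepness, which may be set from the
    dietary starch:ADF ratio. Placeholder parameterization."""
    return a - (a + b) * math.exp(-c * intake)
```

    Unlike a linear regression, this form saturates at high intake, which is one reason the Mitscherlich variants extrapolated better across diet types and intake levels in the evaluation.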

  16. Chandra Contaminant Migration Model

    NASA Technical Reports Server (NTRS)

    Swartz, Douglas A.; O'Dell, Steve L.

    2014-01-01

    High volatility cleans OBFs and low volatility produces a high build-up at OBF centers; only a narrow (factor of 2 or less) volatility range produces the observed spatial pattern. Simulations predict less accumulation above outer S-array CCDs; this may explain, in part, gratings/imaging C/MnL discrepancies. Simulations produce a change in center accumulation due solely to DH heater ON/OFF temperature change, but a 2nd contaminant and perhaps a change in source rate are also required. Emissivity E may depend on thickness, adding another model parameter. Additional physics, e.g., surface migration, is not warranted at this time. At t approx. 14 yrs, the model produced 0.22 grams of contaminant, with 0.085 grams remaining within the ACIS cavity and 7 percent (6 mg) on the OBFs.

  17. Complex networks: Effect of subtle changes in nature of randomness

    NASA Astrophysics Data System (ADS)

    Goswami, Sanchari; Biswas, Soham; Sen, Parongama

    2011-03-01

    In two different classes of network models, namely, the Watts-Strogatz type and the Euclidean type, subtle changes have been introduced in the randomness. In the Watts-Strogatz type network, rewiring has been done in different ways and although the qualitative results remain the same, finite differences in the exponents are observed. In the Euclidean type networks, where at least one finite phase transition occurs, two models differing in a similar way have been considered. The results show a possible shift in one of the phase transition points but no change in the values of the exponents. The WS and Euclidean type models are equivalent for extreme values of the parameters; we compare their behaviour for intermediate values.

  18. Determination of in vivo mechanical properties of long bones from their impedance response curves

    NASA Technical Reports Server (NTRS)

    Borders, S. G.

    1981-01-01

    A mathematical model consisting of a uniform, linear, visco-elastic, Euler-Bernoulli beam to represent the ulna or tibia of the vibrating forearm or leg system is developed. The skin and tissue compressed between the probe and bone is represented by a spring in series with the beam. The remaining skin and tissue surrounding the bone is represented by a visco-elastic foundation with mass. An extensive parametric study is carried out to determine the effect of each parameter of the mathematical model on its impedance response. A system identification algorithm is developed and programmed on a digital computer to determine the parametric values of the model which best simulate the data obtained from an impedance test.

  19. DFTB Parameters for the Periodic Table, Part 2: Energies and Energy Gradients from Hydrogen to Calcium.

    PubMed

    Oliveira, Augusto F; Philipsen, Pier; Heine, Thomas

    2015-11-10

    In the first part of this series, we presented a parametrization strategy to obtain high-quality electronic band structures on the basis of density-functional-based tight-binding (DFTB) calculations and published a parameter set called QUASINANO2013.1. Here, we extend our parametrization effort to include the remaining terms that are needed to compute the total energy and its gradient, commonly referred to as repulsive potential. Instead of parametrizing these terms as a two-body potential, we calculate them explicitly from the DFTB analogues of the Kohn-Sham total energy expression. This strategy requires only two further numerical parameters per element. Thus, the atomic configuration and four real numbers per element are sufficient to define the DFTB model at this level of parametrization. The QUASINANO2015 parameter set allows the calculation of energy, structure, and electronic structure of all systems composed of elements ranging from H to Ca. Extensive benchmarks show that the overall accuracy of QUASINANO2015 is comparable to that of well-established methods, including PM7 and hand-tuned DFTB parameter sets, while coverage of a much larger range of chemical systems is available.

  20. Resolution analysis of marine seismic full waveform data by Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.

    2015-12-01

    The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov Chain Monte Carlo method (RJ-MCMC) for the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly quantified and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible such that the layer boundaries are free to move around. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off with the number of layers, providing an estimate of uncertainty in elastic parameters (compressional and shear velocities Vp and Vs as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being clustered around the true model to those that contain little resolution of any particular features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlight the challenges in solving a highly non-linear problem.
The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells for parameterization), which could ease the computational burden of evaluating forward models with many parameters.
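
    The birth/death mechanics of a trans-dimensional sampler of this kind can be sketched for a toy 1D piecewise-constant profile. This is a heavily simplified illustration, not the authors' implementation: it uses a single fixed noise level, draws birth proposals from uniform priors (in which case the acceptance ratio reduces to a likelihood ratio), and omits the full prior bookkeeping of a production RJ-MCMC.

```python
import math
import random

random.seed(0)

# Toy data: a two-layer "true" profile observed with noise at fixed depths.
depths = [0.1 * i for i in range(50)]
truth = [2.0 if z < 2.5 else 3.0 for z in depths]
data = [v + random.gauss(0.0, 0.1) for v in truth]

def predict(model, z):
    """model: sorted list of (top_depth, value); a value holds from its
    top depth down to the next interface."""
    v = model[0][1]
    for top, val in model:
        if z >= top:
            v = val
    return v

def log_like(model, sigma=0.1):
    return -0.5 * sum((d - predict(model, z)) ** 2
                      for z, d in zip(depths, data)) / sigma ** 2

def step(model):
    """One birth/death/perturb move. New interfaces and values are drawn
    from uniform priors, so the trans-dimensional acceptance ratio
    reduces to the likelihood ratio (a standard simplification)."""
    move = random.choice(["birth", "death", "perturb"])
    prop = list(model)
    if move == "birth" and len(prop) < 10:
        prop.append((random.uniform(0.0, 5.0), random.uniform(1.0, 4.0)))
        prop.sort()
    elif move == "death" and len(prop) > 1:
        prop.pop(random.randrange(1, len(prop)))  # keep the surface layer
    else:
        i = random.randrange(len(prop))
        top, val = prop[i]
        prop[i] = (top, min(4.0, max(1.0, val + random.gauss(0.0, 0.1))))
    if math.log(random.random()) < log_like(prop) - log_like(model):
        return prop
    return model

model = [(0.0, 2.5)]
layer_counts = []
for it in range(3000):
    model = step(model)
    if it > 1000:
        layer_counts.append(len(model))
```

    After burn-in, the histogram of `layer_counts` approximates the posterior over the number of layers, which is how depth resolution is traded off against model complexity without explicit regularization.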

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Haixia; Li, Bo; Huang, Zhenghua

    How the solar corona is heated to high temperatures remains an unsolved mystery in solar physics. In the present study we analyze observations of 50 whole active region loops taken with the Extreme-ultraviolet Imaging Spectrometer on board the Hinode satellite. Eleven loops were classified as cool loops (<1 MK) and 39 as warm loops (1–2 MK). We study their plasma parameters, such as densities, temperatures, filling factors, nonthermal velocities, and Doppler velocities. We combine spectroscopic analysis with linear force-free magnetic field extrapolation to derive the 3D structure and positioning of the loops, their lengths and heights, and the magnetic field strength along the loops. We use density-sensitive line pairs from Fe xii, Fe xiii, Si x, and Mg vii ions to obtain electron densities by taking special care of intensity background subtraction. The emission measure loci method is used to obtain the loop temperatures. We find that the loops are nearly isothermal along the line of sight. Their filling factors are between 8% and 89%. We also compare the observed parameters with the theoretical Rosner–Tucker–Vaiana (RTV) scaling law. We find that most of the loops are in an overpressure state relative to the RTV predictions. In a follow-up study, we will report a heating model of a parallel-cascade-based mechanism and will compare the model parameters with the loop plasma and structural parameters derived here.

  2. Beyond Control Panels: Direct Manipulation for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Bradel, Lauren; North, Chris

    2013-07-19

    Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to enable users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels, to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regards to visual analytics, advancing the Science of Interaction.

  3. Multiscale digital Arabidopsis predicts individual organ and whole-organism growth.

    PubMed

    Chew, Yin Hoon; Wenden, Bénédicte; Flis, Anna; Mengin, Virginie; Taylor, Jasper; Davey, Christopher L; Tindal, Christopher; Thomas, Howard; Ougham, Helen J; de Reffye, Philippe; Stitt, Mark; Williams, Mathew; Muetzelfeldt, Robert; Halliday, Karen J; Millar, Andrew J

    2014-09-30

    Understanding how dynamic molecular networks affect whole-organism physiology, analogous to mapping genotype to phenotype, remains a key challenge in biology. Quantitative models that represent processes at multiple scales and link understanding from several research domains can help to tackle this problem. Such integrated models are more common in crop science and ecophysiology than in the research communities that elucidate molecular networks. Several laboratories have modeled particular aspects of growth in Arabidopsis thaliana, but it was unclear whether these existing models could productively be combined. We test this approach by constructing a multiscale model of Arabidopsis rosette growth. Four existing models were integrated with minimal parameter modification (leaf water content and one flowering parameter used measured data). The resulting framework model links genetic regulation and biochemical dynamics to events at the organ and whole-plant levels, helping to understand the combined effects of endogenous and environmental regulators on Arabidopsis growth. The framework model was validated and tested with metabolic, physiological, and biomass data from two laboratories, for five photoperiods, three accessions, and a transgenic line, highlighting the plasticity of plant growth strategies. The model was extended to include stochastic development. Model simulations gave insight into the developmental control of leaf production and provided a quantitative explanation for the pleiotropic developmental phenotype caused by overexpression of miR156, which was an open question. Modular, multiscale models, assembling knowledge from systems biology to ecophysiology, will help to understand and to engineer plant behavior from the genome to the field.

  4. Estimating Soil and Root Parameters of Biofuel Crops using a Hydrogeophysical Inversion

    NASA Astrophysics Data System (ADS)

    Kuhl, A.; Kendall, A. D.; Van Dam, R. L.; Hyndman, D. W.

    2017-12-01

    Transpiration is the dominant pathway for continental water exchange to the atmosphere, and therefore a crucial aspect of modeling water balances at many scales. The root water uptake dynamics that control transpiration are dependent on soil water availability, as well as the root distribution. However, the root distribution is determined by many factors beyond the plant species alone, including climate conditions and soil texture. Despite the significant contribution of transpiration to global water fluxes, modeling the complex critical zone processes that drive root water uptake remains a challenge. Geophysical tools such as electrical resistivity (ER) have been shown to be highly sensitive to water dynamics in the unsaturated zone. ER data can be temporally and spatially robust, covering large areas or long time periods non-invasively, which is an advantage over in-situ methods. Previous studies have shown the value of using hydrogeophysical inversions to estimate soil properties. Others have used hydrological inversions to estimate both soil properties and root distribution parameters. In this study, we combine these two approaches to create a coupled hydrogeophysical inversion that estimates root and retention curve parameters for a HYDRUS model. To test the feasibility of this new approach, we estimated daily water fluxes and root growth for several biofuel crops at a long-term ecological research site in Southwest Michigan, using monthly ER data from 2009 through 2011. Time domain reflectometry data at seven depths were used to validate modeled soil moisture estimates throughout the model period. This hydrogeophysical inversion method shows promise for improving root distribution and transpiration estimates across a wide variety of settings.
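
    One common way to couple a hydrologic model to ER data, and a plausible ingredient of an inversion like the one described, is a petrophysical link such as Archie's law. The exponents and coefficients below are illustrative; in practice they would be calibrated for the site.

```python
def archie_resistivity(theta, porosity, rho_w, a=1.0, m=2.0, n=2.0):
    """Bulk electrical resistivity from volumetric water content via
    Archie's law: rho = a * rho_w * phi**(-m) * S**(-n), S = theta/phi.
    The coefficients a, m, n are illustrative and would be calibrated."""
    s = theta / porosity
    return a * rho_w * porosity ** (-m) * s ** (-n)
```

    In a coupled inversion, HYDRUS-simulated soil moisture would be mapped to resistivity this way, and the misfit against observed ER drives the search over root and retention-curve parameters.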

  5. Evaluating growth of the Porcupine Caribou Herd using a stochastic model

    USGS Publications Warehouse

    Walsh, Noreen E.; Griffith, Brad; McCabe, Thomas R.

    1995-01-01

    Estimates of the relative effects of demographic parameters on population rates of change, and of the level of natural variation in these parameters, are necessary to address potential effects of perturbations on populations. We used a stochastic model, based on survival and reproduction estimates of the Porcupine Caribou (Rangifer tarandus granti) Herd (PCH), during 1983-89 and 1989-92 to obtain distributions of potential population rates of change (r). The distribution of r produced by 1,000 trajectories of our simulation model (1983-89, r̄ = 0.013; 1989-92, r̄ = 0.003) encompassed the rate of increase calculated from an independent series of photo-survey data over the same years (1983-89, r = 0.048; 1989-92, r = -0.035). Changes in adult female survival had the largest effect on r, followed by changes in calf survival. We hypothesized that petroleum development on calving grounds, or changes in calving and post-calving habitats due to global climate change, would affect model input parameters. A decline in annual adult female survival from 0.871 to 0.847, or a decline in annual calf survival from 0.518 to 0.472, would be sufficient to cause a declining population, if all other input estimates remained the same. We then used these lower survival rates, in conjunction with our estimated amount of among-year variation, to determine a range of resulting population trajectories. Stochastic models can be used to better understand dynamics of populations, optimize sampling investment, and evaluate potential effects of various factors on population growth.
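
    The stochastic projection logic can be sketched with a two-class (calf / adult female) model. The survival means are the values quoted in the abstract; the fecundity, among-year variation, and initial age structure are illustrative assumptions chosen so the baseline growth rate is slightly positive, roughly matching the reported r̄.

```python
import math
import random

random.seed(1)

def project_r(adult_surv, calf_surv, fecundity=0.55, sd=0.03,
              years=10, trials=1000):
    """Monte Carlo distribution of r = ln(N_T / N_0) / T for a simplified
    two-class (calf / adult female) projection. Survival means follow the
    abstract; fecundity (times 0.5 for female calves), among-year SD, and
    initial structure are illustrative assumptions."""
    rs = []
    for _ in range(trials):
        calves, adults = 20.0, 80.0
        n0 = calves + adults
        for _ in range(years):
            sa = min(1.0, max(0.0, random.gauss(adult_surv, sd)))
            sc = min(1.0, max(0.0, random.gauss(calf_surv, sd)))
            adults = adults * sa + calves * sc
            calves = adults * fecundity * 0.5  # female calves only
        rs.append(math.log((calves + adults) / n0) / years)
    return rs

rs_base = project_r(0.871, 0.518)  # baseline survival estimates
rs_low = project_r(0.847, 0.472)   # reduced survival scenario
```

    Under these assumptions the baseline vital rates give a slightly growing population while the reduced rates give a declining one, mirroring the qualitative conclusion of the abstract.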

  6. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  7. Preparing computers for affective communication: a psychophysiological concept and preliminary results.

    PubMed

    Whang, Min Cheol; Lim, Joa Sang; Boucsein, Wolfram

    Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to find out what physiological parameters might be useful to differentiate among 4 emotional states, based on 2 dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The 4 emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG), heart rate variability, and electrodermal measures were used to differentiate the 4 emotions. EEG activity separates pleasantness from unpleasantness only in the aroused but not in the relaxed domain, where electrodermal parameters are the differentiating ones. All three classes of parameters contribute to a separation between arousal and relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach may be incorporated into a systemic model of a computer responsive to affective communication from the user.

  8. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

    Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial-knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories.

  9. Terrestrial Sagnac delay constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap, when the beams re-unite. We obtain the exact time gap called Sagnac delay in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
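
    For orientation, the flat-space Sagnac delay for counter-propagating beams around a circular path of radius R on a platform rotating at angular velocity Ω is, to leading order,

```latex
\Delta t_{\mathrm{flat}} = \frac{4\pi R^{2}\,\Omega}{c^{2}},
```

    and the analysis described above expands the exact Kerr-f(R0) result about this value, schematically of the form

```latex
\Delta t \simeq \frac{4\pi R^{2}\,\Omega}{c^{2}}
\left(1 + \alpha_{M}\,\frac{GM}{c^{2}R} + \alpha_{a}\,\frac{a}{R}
      + \alpha_{R_{0}}\,R_{0}R^{2}\right),
```

    where the dimensionless coefficients α are schematic placeholders, not the paper's exact expansion. Requiring the correction terms to stay within the residual measurement uncertainty is what yields the allowed interval for the Ricci curvature R0.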

  10. Temperature responses of individual soil organic matter components

    NASA Astrophysics Data System (ADS)

    Feng, Xiaojuan; Simpson, Myrna J.

    2008-09-01

    Temperature responses of soil organic matter (SOM) remain unclear partly due to its chemical and compositional heterogeneity. In this study, the decomposition of SOM from two grassland soils was investigated in a 1-year laboratory incubation at six different temperatures. SOM was separated into solvent extractable compounds, suberin- and cutin-derived compounds, and lignin-derived monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components have distinct chemical structures and stabilities and their decomposition patterns over the course of the experiment were fitted with a two-pool exponential decay model. The stability of SOM components was also assessed using geochemical parameters and kinetic parameters derived from model fitting. Compared with the solvent extractable compounds, a low percentage of lignin monomers partitioned into the labile SOM pool. Suberin- and cutin-derived compounds were poorly fitted by the decay model, and their recalcitrance was shown by the geochemical degradation parameter (ω-C16/ΣC16), which was observed to stabilize during the incubation. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses and the decomposition of lignin monomers exhibited higher Q10 values than the decomposition of solvent extractable compounds. Our study shows that Q10 values derived from soil respiration measurements may not be reliable indicators of temperature responses of individual SOM components.
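
    The two-pool decay model and the Q10 computation used above have simple closed forms, sketched below; the rates in the example are illustrative, not the fitted values from the study.

```python
import math

def two_pool(t, f_labile, k_labile, k_stable):
    """Fraction of initial carbon remaining under a two-pool exponential
    decay model: f*exp(-k1*t) + (1-f)*exp(-k2*t)."""
    return f_labile * math.exp(-k_labile * t) + \
        (1.0 - f_labile) * math.exp(-k_stable * t)

def q10(k1, t1, k2, t2):
    """Temperature sensitivity from decay rates k1, k2 measured at
    temperatures t1, t2: Q10 = (k2/k1)**(10/(t2-t1))."""
    return (k2 / k1) ** (10.0 / (t2 - t1))
```

    For example, a pool whose decay rate doubles between 15 and 25 degrees C has Q10 = 2; fitting the two-pool model at each incubation temperature gives the rate-temperature relationship from which component-specific Q10 values are derived.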

  11. EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology

    NASA Astrophysics Data System (ADS)

    Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt

    2017-04-01

    The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Especially model structural errors in the description of the dynamics are difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and detect initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the believed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
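
    The closed-eye mechanism can be illustrated with a scalar-observation EnKF on an augmented state. The toy state/parameter names and numbers below are hypothetical; a real application would use the full iterative filter of the study.

```python
import random

random.seed(2)

def enkf_update(ensemble, obs, obs_sd, h_index=0, closed_eye=False,
                n_state=1):
    """One stochastic EnKF analysis step on an augmented state
    [state entries..., parameter entries...] with a scalar observation
    of entry h_index. During the closed-eye period only the first
    n_state entries are corrected: the state is still guided toward the
    measurement while parameter estimates stay frozen."""
    n = len(ensemble)
    dim = len(ensemble[0])
    hx = [m[h_index] for m in ensemble]
    hbar = sum(hx) / n
    var_hh = sum((v - hbar) ** 2 for v in hx) / (n - 1) + obs_sd ** 2
    means = [sum(m[j] for m in ensemble) / n for j in range(dim)]
    limit = n_state if closed_eye else dim
    gain = [sum((m[j] - means[j]) * (v - hbar)
                for m, v in zip(ensemble, hx)) / (n - 1) / var_hh
            for j in range(limit)]
    for m in ensemble:
        innov = obs + random.gauss(0.0, obs_sd) - m[h_index]
        for j in range(limit):
            m[j] += gain[j] * innov
    return ensemble

# Hypothetical ensemble of [water_content, log_conductivity] pairs.
ens = [[0.20 + 0.01 * i, -5.0 + 0.02 * i] for i in range(-5, 6)]
closed = enkf_update([list(m) for m in ens], obs=0.35, obs_sd=0.01,
                     closed_eye=True)
open_all = enkf_update([list(m) for m in ens], obs=0.35, obs_sd=0.01,
                       closed_eye=False)
```

    Switching `closed_eye` on during rain events freezes the soil hydraulic parameters exactly when the local equilibrium assumption is suspect, while the water-content state continues to track the TDR measurements.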

  12. An accessible method for implementing hierarchical models with spatio-temporal abundance data

    USGS Publications Warehouse

    Ross, Beth E.; Hooten, Melvin B.; Koons, David N.

    2012-01-01

    A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.

  13. Nonstationarities in Catchment Response According to Basin and Rainfall Characteristics: Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; Kim, Jin-Guk; Jung, Il-Won

    2015-04-01

    It must be acknowledged that the application of rainfall-runoff models to simulate rainfall-runoff processes is successful in gauged watersheds. However, there still remain some issues that need to be further discussed. In particular, the quantitative representation of nonstationarity in basin response (e.g. concentration time, storage coefficient and roughness), along with ungauged watersheds, needs to be studied. In this regard, this study aims to investigate nonstationarity in basin response so as to potentially provide useful information in simulating runoff processes in ungauged watersheds. For this purpose, the HEC-1 rainfall-runoff model was mainly utilized. In addition, this study combined the HEC-1 model with a Bayesian statistical model to estimate the uncertainty of the parameters, which is called Bayesian HEC-1 (BHEC-1). The proposed rainfall-runoff model is applied to various catchments with various rainfall patterns to understand nonstationarities in catchment response. Further discussion about the nonstationarity in catchment response and possible regionalization of the parameters for ungauged watersheds is provided. KEYWORDS: Nonstationarity, Catchment response, Uncertainty, Bayesian. Acknowledgement: This research was supported by a Grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korea government and the Korea Agency for Infrastructure Technology Advancement (KAIA).

  14. Investigation, sensitivity analysis, and multi-objective optimization of effective parameters on temperature and force in robotic drilling cortical bone.

    PubMed

    Tahmasbi, Vahid; Ghoreishi, Majid; Zolfaghari, Mojtaba

    2017-11-01

    The bone drilling process is very prominent in orthopedic surgeries and in the repair of bone fractures. It is also very common in dentistry and bone sampling operations. Due to the complexity of bone and the sensitivity of the process, bone drilling is one of the most important and sensitive processes in biomedical engineering. Orthopedic surgeries can be improved using robotic systems and mechatronic tools. The most crucial problem during drilling is an unwanted increase in process temperature (higher than 47 °C), which causes thermal osteonecrosis or cell death and local burning of the bone tissue. Moreover, imposing higher forces to the bone may lead to breaking or cracking and consequently cause serious damage. In this study, a mathematical second-order linear regression model as a function of tool drilling speed, feed rate, tool diameter, and their effective interactions is introduced to predict temperature and force during the bone drilling process. This model can determine the maximum speed of surgery that remains within an acceptable temperature range. Moreover, for the first time, using designed experiments, the bone drilling process was modeled, and the drilling speed, feed rate, and tool diameter were optimized. Then, using response surface methodology and applying a multi-objective optimization, drilling force was minimized to sustain an acceptable temperature range without damaging the bone or the surrounding tissue. In addition, for the first time, Sobol statistical sensitivity analysis is used to ascertain the effect of process input parameters on process temperature and force. The results show that among all effective input parameters, tool rotational speed, feed rate, and tool diameter have the highest influence on process temperature and force, in that order. The behavior of each output parameter with variation in each input parameter is further investigated.
Finally, a multi-objective optimization has been performed considering all the aforementioned parameters. This optimization yielded a set of data that can considerably improve orthopedic osteosynthesis outcomes.
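
    A full second-order regression surface of the kind described evaluates as below; the coefficients in the example are purely illustrative stand-ins for the fitted values.

```python
def second_order_model(x, b0, lin, quad, inter):
    """Evaluate a full second-order response-surface model:
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) for i < j.
    Coefficient values would come from the fitted regression; the ones
    used below are purely illustrative."""
    y = b0
    k = len(x)
    for i in range(k):
        y += lin[i] * x[i] + quad[i] * x[i] ** 2
    idx = 0
    for i in range(k):
        for j in range(i + 1, k):
            y += inter[idx] * x[i] * x[j]
            idx += 1
    return y

# Hypothetical coded-unit coefficients for (speed, feed rate, diameter).
temp_c = second_order_model((1.0, 1.0, 1.0), 40.0,
                            (2.0, 1.0, 3.0), (0.5, 0.5, 0.5),
                            (0.1, 0.2, 0.3))
```

    Multi-objective optimization then searches this surface (one for temperature, one for force) for input settings that keep the predicted temperature below the 47 °C damage threshold while minimizing force.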

  15. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. 
Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
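    The core idea of marginalizing nuisance parameters to obtain a posterior for one parameter of interest can be illustrated with a grid-based sketch. The biexponential toy signal, grid ranges, and noise level below are all assumptions for illustration; this is not the actual mcDESPOT model.

```python
import numpy as np

# Toy two-parameter model: signal s(mwf, t2) where mwf is the parameter of
# interest and t2 is a nuisance parameter to be marginalized out.
rng = np.random.default_rng(0)

def signal(mwf, t2, t):
    # Hypothetical biexponential stand-in for a two-pool signal.
    return mwf * np.exp(-t / 20.0) + (1.0 - mwf) * np.exp(-t / t2)

t = np.linspace(1, 200, 32)
true_mwf, true_t2, sigma = 0.2, 80.0, 0.01
data = signal(true_mwf, true_t2, t) + sigma * rng.normal(size=t.size)

# Joint posterior on a grid with a flat prior.
mwf_grid = np.linspace(0.01, 0.5, 100)
t2_grid = np.linspace(40.0, 120.0, 100)
M, T2 = np.meshgrid(mwf_grid, t2_grid, indexing="ij")

# Gaussian log-likelihood summed over time points (log domain for stability).
resid = signal(M[..., None], T2[..., None], t) - data
loglike = -0.5 * np.sum(resid**2, axis=-1) / sigma**2
post = np.exp(loglike - loglike.max())
post /= post.sum()

# Marginal posterior of MWF: sum the joint posterior over the nuisance axis.
mwf_marginal = post.sum(axis=1)
mwf_hat = mwf_grid[np.argmax(mwf_marginal)]
```

    The paper's contribution is precisely in making such marginalizations tractable for the much higher-dimensional mcDESPOT parameter space.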

  16. Simultaneous measurement and integrated analysis of analgesia and respiration after an intravenous morphine infusion.

    PubMed

    Dahan, Albert; Romberg, Raymonda; Teppema, Luc; Sarton, Elise; Bijl, Hans; Olofsen, Erik

    2004-11-01

    To study the influence of morphine on chemical control of breathing relative to the analgesic properties of morphine, the authors quantified morphine-induced analgesia and respiratory depression in a single group of healthy volunteers. Both respiratory and pain measurements were performed over single 24-h time spans. Eight subjects (four men, four women) received a 90-s intravenous morphine infusion; eight others (four men, four women) received a 90-s placebo infusion. At regular time intervals, respiratory variables (breathing at a fixed end-tidal partial pressure of carbon dioxide of 50 mmHg and the isocapnic acute hypoxic response), pain tolerance (derived from a transcutaneous electrical acute pain model), and arterial blood samples were obtained. Data acquisition continued for 24 h. Population pharmacokinetic (sigmoid Emax)-pharmacodynamic models were applied to the respiratory and pain data. The models are characterized by potency parameters, shape parameters (gamma), and blood-effect site equilibration half-lives. All collected data were analyzed simultaneously using the statistical program NONMEM. Placebo had no systematic effect on analgesic or respiratory variables. Morphine potency parameter and blood-effect site equilibration half-life did not differ significantly among the three measured effect parameters (P > 0.01). The integrated NONMEM analysis yielded a potency parameter of 32 +/- 1.4 nm (typical value +/- SE) and a blood-effect site equilibration half-life of 4.4 +/- 0.3 h. Parameter gamma was 1 for hypercapnic and hypoxic breathing but 2.4 +/- 0.7 for analgesia (P < 0.01). Our data indicate that systems involved in morphine-induced analgesia and respiratory depression share important pharmacodynamic characteristics. This suggests similarities in central mu-opioid analgesic and respiratory pathways (e.g., similarities in mu-opioid receptors and G proteins). 
The clinical implication of this study is that after morphine administration, despite lack of good pain relief, moderate to severe respiratory depression remains possible.
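    The sigmoid Emax pharmacodynamic model with effect-site equilibration described above can be sketched numerically. The concentration profile, Emax, and C50 values below are illustrative assumptions; only the equilibration half-life (~4.4 h) and the gamma values (1 vs. 2.4) echo the reported estimates.

```python
import numpy as np

def effect_site(c_blood, t, t_half):
    """First-order effect-site equilibration: dCe/dt = ke0 * (Cb - Ce)."""
    ke0 = np.log(2) / t_half
    ce = np.zeros_like(c_blood)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ce[i] = ce[i - 1] + ke0 * (c_blood[i - 1] - ce[i - 1]) * dt
    return ce

def sigmoid_emax(ce, c50, gamma, emax=1.0):
    """Fractional effect: Emax * Ce^gamma / (C50^gamma + Ce^gamma)."""
    return emax * ce**gamma / (c50**gamma + ce**gamma)

t = np.linspace(0, 24, 2401)           # hours
c_blood = 60.0 * np.exp(-t / 3.0)      # illustrative decaying blood profile (nM)
ce = effect_site(c_blood, t, t_half=4.4)

# gamma = 1 for the respiratory endpoints vs. 2.4 for analgesia (study values).
resp = sigmoid_emax(ce, c50=32.0, gamma=1.0)
analg = sigmoid_emax(ce, c50=32.0, gamma=2.4)
```

    The slow effect-site equilibration is what produces the clinically important hysteresis: effect lags behind blood concentration, so respiratory depression can persist after blood levels fall.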

  17. Inferring epidemiological parameters from phylogenetic information for the HIV-1 epidemic among MSM

    NASA Astrophysics Data System (ADS)

    Quax, Rick; van de Vijver, David A. M. C.; Frentz, Dineke; Sloot, Peter M. A.

    2013-09-01

    The HIV-1 epidemic in Europe is primarily sustained by a dynamic topology of sexual interactions among MSM who have individual immune systems and behavior. This epidemiological process shapes the phylogeny of the virus population. Both epidemic modeling and phylogenetics have long histories; however, it remains difficult to use phylogenetic data to infer epidemiological parameters such as the structure of the sexual network and the per-act infectiousness, because phylogenetic data is necessarily incomplete and ambiguous. Here we show, using detailed numerical experiments, that the cluster-size distribution indeed contains information about epidemiological parameters. We simulate the HIV epidemic among MSM many times using the Monte Carlo method, with all parameter values and their ranges taken from the literature. For each simulation and the corresponding set of parameter values, we calculate the likelihood of reproducing an observed cluster-size distribution. The result is an estimated likelihood distribution of all parameters from the phylogenetic data, in particular the structure of the sexual network, the per-act infectiousness, and the risk-behavior reduction upon diagnosis. These likelihood distributions encode the knowledge provided by the observed cluster-size distribution, which we quantify using information theory. Our work suggests that the growing body of genetic data of patients can be exploited to understand the underlying epidemiological process.
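    The step of scoring simulated parameter sets against an observed cluster-size distribution can be sketched with a simple multinomial log-likelihood. The histograms and the add-one smoothing below are illustrative assumptions, not the paper's exact likelihood.

```python
import numpy as np

# Observed cluster-size histogram: counts of clusters of size 1..6 (made up).
observed = np.array([120, 45, 18, 9, 4, 2])

def cluster_loglike(simulated_counts, observed_counts):
    """Multinomial log-likelihood (up to a constant) of the observed
    cluster-size histogram under frequencies seen in one simulation.
    Add-one smoothing avoids log(0) for empty simulated bins."""
    p = (simulated_counts + 1.0) / (simulated_counts.sum() + len(simulated_counts))
    return float(np.sum(observed_counts * np.log(p)))

# Two hypothetical parameter sets, each summarized by its simulated histogram.
sim_a = np.array([110, 50, 20, 10, 5, 3])   # resembles the observation
sim_b = np.array([20, 30, 40, 50, 30, 28])  # clearly different shape
```

    Repeating this over many Monte Carlo runs yields the likelihood distribution over parameter values that the authors then summarize with information theory.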

  18. Mitigating effects of vaccination on influenza outbreaks given constraints in stockpile size and daily administration capacity

    PubMed Central

    2011-01-01

    Background Influenza viruses are a major cause of morbidity and mortality worldwide. Vaccination remains a powerful tool for preventing or mitigating influenza outbreaks. Yet, vaccine supplies and daily administration capacities are limited, even in developed countries. Understanding how such constraints can alter the mitigating effects of vaccination is a crucial part of influenza preparedness plans. Mathematical models provide tools for government and medical officials to assess the impact of different vaccination strategies and plan accordingly. However, many existing models of vaccination employ several questionable assumptions, including a rate of vaccination proportional to the population at each point in time. Methods We present an SIR-like model that explicitly takes into account vaccine supply and the number of vaccines administered per day and places data-informed limits on these parameters. We refer to this as the non-proportional model of vaccination and compare it to the proportional scheme typically found in the literature. Results The proportional and non-proportional models behave similarly for a few different vaccination scenarios. However, there are parameter regimes involving the vaccination campaign duration and daily supply limit for which the non-proportional model predicts smaller epidemics that peak later, but may last longer, than those of the proportional model. We also use the non-proportional model to predict the mitigating effects of variably timed vaccination campaigns for different levels of vaccination coverage, using specific constraints on daily administration capacity. Conclusions The non-proportional model of vaccination is a theoretical improvement that provides more accurate predictions of the mitigating effects of vaccination on influenza outbreaks than the proportional model. 
In addition, parameters such as vaccine supply and daily administration limit can be easily adjusted to simulate conditions in developed and developing nations with a wide variety of financial and medical resources. Finally, the model can be used by government and medical officials to create customized pandemic preparedness plans based on the supply and administration constraints of specific communities. PMID:21806800
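    The non-proportional idea (a fixed number of doses per day, capped by the remaining stockpile, rather than a rate proportional to the susceptible population) can be sketched with a simple daily-step SIR model. All parameter values below are illustrative, not the paper's calibrated values.

```python
# SIR sketch with non-proportional vaccination: a capped daily dose count
# is removed from the susceptible class. Illustrative parameters only.
def run_sir(beta=0.3, gamma=0.1, n=1_000_000, i0=100,
            doses_per_day=5_000, stockpile=300_000, days=365):
    s, i, r = float(n - i0), float(i0), 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        # Doses limited by daily capacity, remaining supply, and susceptibles.
        vaccinated = min(doses_per_day, stockpile, max(0.0, s - new_inf))
        stockpile -= vaccinated
        s += -new_inf - vaccinated
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak, r

peak_vax, final_vax = run_sir()
peak_novax, final_novax = run_sir(doses_per_day=0)
```

    Varying `doses_per_day`, `stockpile`, and the campaign start day reproduces the kind of scenario comparisons the authors describe for communities with different resources.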

  19. Hidden Markov Item Response Theory Models for Responses and Response Times.

    PubMed

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.
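    The hidden Markov machinery underlying this approach can be sketched with a two-state forward algorithm over item responses. The states, transition matrix, and per-state correct-response probabilities below are illustrative assumptions, not the paper's fitted model (which also incorporates response times).

```python
import numpy as np

# Two latent response processes: state 0 = "solution behavior",
# state 1 = "rapid guessing", each with its own P(correct). Values made up.
trans = np.array([[0.9, 0.1],   # row s: P(next state | current state s)
                  [0.3, 0.7]])
init = np.array([0.8, 0.2])
p_correct = np.array([0.85, 0.5])

def loglike(responses):
    """Log-likelihood of a 0/1 response vector via the scaled forward algorithm."""
    def emit(x):
        return np.where(x == 1, p_correct, 1.0 - p_correct)
    alpha = init * emit(responses[0])
    c = alpha.sum()
    ll = np.log(c)
    alpha = alpha / c
    for x in responses[1:]:
        alpha = (alpha @ trans) * emit(x)
        c = alpha.sum()          # rescale each step to avoid underflow
        ll += np.log(c)
        alpha = alpha / c
    return ll
```

    Fitting such a model per subject is what lets within-subject switches between response processes show up instead of being absorbed into the residual.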

  20. Global tilt and lumbar lordosis index: two parameters correlating with health-related quality of life scores-but how do they truly impact disability?

    PubMed

    Boissière, Louis; Takemoto, Mitsuru; Bourghli, Anouar; Vital, Jean-Marc; Pellisé, Ferran; Alanay, Ahmet; Yilgor, Caglar; Acaroglu, Emre; Perez-Grueso, Francisco Javier; Kleinstück, Frank; Obeid, Ibrahim

    2017-04-01

    Many radiological parameters have been reported to correlate with patient disability, including sagittal vertical axis (SVA), pelvic tilt (PT), and pelvic incidence minus lumbar lordosis (PI-LL). European literature reports other parameters such as lumbar lordosis index (LLI) and the global tilt (GT). Although most parameters correlate with health-related quality of life scores (HRQLs), their impact on disability remains unclear. This study aimed to validate these parameters by investigating their correlation with HRQLs. It also aimed to evaluate the relationship between each of these sagittal parameters and HRQLs to fully understand their impact in adult spinal deformity management. A retrospective review of a multicenter, prospective database was carried out. The database inclusion criteria were adults (>18 years old) presenting any of the following radiographic parameters: scoliosis (Cobb ≥20°), SVA ≥5 cm, thoracic kyphosis ≥60° or PT ≥25°. All patients with complete data at baseline were included. Health-related quality of life scores, demographic variables (DVs), and radiographic parameters were collected at baseline. Differences in HRQLs among groups of each DV were assessed with analyses of variance. Correlations between radiographic variables and HRQLs were assessed using the Spearman rank correlation. Multivariate linear regression models were fitted for each of the HRQLs (Oswestry Disability Index [ODI], Scoliosis Research Society-22 subtotal score, or physical component summaries) with sagittal parameters and covariants as independent variables. A p<.05 value was considered statistically significant. Among a total of 755 included patients (mean age, 52.1 years), 431 were non-surgical candidates and 324 were surgical candidates. Global tilt and LLI significantly correlated with HRQLs (r=0.4 and -0.3, respectively) in univariate analysis. 
Demographic variables such as age, gender, body mass index, past surgery, and surgical or non-surgical candidacy were significant predictors of ODI score. The likelihood ratio tests for the addition of the sagittal parameters showed that SVA, GT, T1 sagittal tilt, PI-LL, and LLI were statistically significant predictors of ODI score even when adjusted for covariates. The increases in R² over the reference model were at most 1.5%, indicating that adding sagittal parameters explained only 1.5% more of the variance of ODI. GT and LLI appear to be independent radiographic parameters impacting ODI variance. Although most of the parameters described in the literature correlate with ODI, these radiographic parameters account for less than 2% of ODI variance, whereas DVs explain 40%. The importance of radiographic parameters lies more in describing and understanding the malalignment mechanisms than in their univariate correlation with HRQLs. Copyright © 2016 Elsevier Inc. All rights reserved.
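    The delta-R² reasoning used here (fit a reference model on demographic covariates, add one radiographic parameter, and measure the additional variance explained) can be sketched as follows. All data below are simulated under assumed effect sizes, chosen so that the radiographic term contributes only a small share of the variance, as in the study.

```python
import numpy as np

# Simulated data: ODI driven mostly by demographics, with a small
# contribution from a stand-in for global tilt (GT). Illustrative only.
rng = np.random.default_rng(1)
n = 300
age = rng.uniform(20, 80, n)
bmi = rng.uniform(18, 35, n)
gt = rng.normal(20, 8, n)
odi = 0.5 * age + 0.8 * bmi + 0.15 * gt + rng.normal(0, 8, n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([age, bmi]), odi)        # demographics only
r2_full = r_squared(np.column_stack([age, bmi, gt]), odi)    # + radiographic term
delta = r2_full - r2_base
```

    A small `delta` despite a significant univariate correlation is exactly the pattern the authors report for the sagittal parameters.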
