Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2003-01-01
Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying the model parameters that are most important to the predictions. Identifying these important parameters can help guide collection of field data about parameter values and associated flow system features, and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty in individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for the two paths evaluated, the most important parameters are a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent two-part manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and appears very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results from a tuning exercise with a top-end global NWP model are presented.
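The two-step loop described in the abstract (draw parameter values from a proposal distribution, then feed the relative merits of those values back into the proposal via a likelihood) can be sketched in miniature. The forecast model, observation noise, Gaussian proposal, and variance floor below are all illustrative assumptions, not the operational EPPES implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forecast(theta):
    # Stand-in "model": the forecast depends on one tunable parameter.
    return np.exp(-theta)

true_theta = 0.7
obs = toy_forecast(true_theta) + rng.normal(0.0, 0.01)  # verifying observation

mu, var = 0.2, 0.5  # Gaussian proposal distribution over the parameter

for cycle in range(50):
    # (i) each ensemble member runs with a different parameter draw
    thetas = rng.normal(mu, np.sqrt(var), size=20)
    forecasts = toy_forecast(thetas)
    # (ii) feed back relative merits via a Gaussian likelihood vs. the observation
    loglik = -0.5 * ((forecasts - obs) / 0.01) ** 2
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    mu = float(np.sum(w * thetas))
    # Variance floor keeps the proposal from collapsing prematurely.
    var = max(float(np.sum(w * (thetas - mu) ** 2)), 0.01)

print(mu)  # the proposal mean drifts toward the true parameter value
```

Because the likelihood evaluation reuses forecasts the ensemble would produce anyway, the parameter update itself adds almost no computation, which is the cost argument the abstract makes.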
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite the increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions of estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis, and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. This study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
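A random walk Metropolis sampler of the kind the study uses can be sketched on a made-up one-parameter problem (a hypothetical "days to heading" dataset with known observation noise, not the study's trial data); the point is that the algorithm yields a full posterior sample, not just a point estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "days to heading" data for one genotype (hypothetical values).
data = rng.normal(60.0, 3.0, size=40)

def log_post(theta):
    # Flat prior; Gaussian likelihood with a known sd of 3 days.
    return -0.5 * np.sum((data - theta) ** 2) / 9.0

theta, lp = 50.0, log_post(50.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 1.0)          # random walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])                  # discard burn-in
print(post.mean(), post.std())                   # posterior mean and spread
```

The posterior standard deviation reported here is exactly the "full posterior distribution" information the abstract argues should not be discarded when propagating parameter uncertainty into predictions.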
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models of physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output and to identify the parameters with the least influence on model output.
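The core move (replace fixed parameters with probability distributions, propagate by Monte Carlo, then rank parameter influence) can be sketched with a toy additive output function; the function and distributions below are stand-in assumptions, not the actual shock-associated noise formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_level(p1, p2):
    # Hypothetical stand-in for a model output level, NOT the actual
    # broadband shock-associated noise model.
    return 10.0 * np.log10(p1) + 2.0 * p2

n = 10_000
# Replace fixed model parameters with probability distributions.
p1 = rng.lognormal(mean=0.0, sigma=0.1, size=n)
p2 = rng.normal(1.0, 0.05, size=n)

levels = spectral_level(p1, p2)
print(levels.mean(), levels.std())   # output uncertainty from parameter uncertainty

# Crude variance-based sensitivity (valid here because the toy model is additive;
# a real global analysis would use e.g. Sobol' indices).
s1 = np.var(10.0 * np.log10(p1)) / np.var(levels)
s2 = np.var(2.0 * p2) / np.var(levels)
```

Comparing s1 and s2 identifies which parameter drives the output variance, and equally which parameter has the least influence and could be left fixed.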
NWP model forecast skill optimization via closure parameter variations
NASA Astrophysics Data System (ADS)
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.
Glöckner, Andreas; Pachur, Thorsten
2012-04-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
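To make the "adjustable parameters" concrete, the CPT building blocks can be written out with the Tversky-Kahneman (1992) median estimates as defaults. The two-outcome evaluation below is a deliberate simplification of full rank-dependent CPT, shown only to illustrate what gets fitted per individual:

```python
# Cumulative prospect theory pieces with adjustable parameters. Defaults are
# the Tversky-Kahneman (1992) median estimates; the two-outcome evaluation is
# a simplification of full rank-dependent CPT, meant only to make the role of
# the adjustable parameters concrete.

def value(x, alpha=0.88, lam=2.25):
    # Concave for gains; convex and steeper for losses (loss aversion).
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    # Inverse-S probability weighting: small probabilities are overweighted.
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def cpt_value(gain, p_gain, loss, p_loss):
    # One gain and one loss outcome: each side is weighted separately.
    return weight(p_gain) * value(gain) + weight(p_loss) * value(loss)

v = cpt_value(100.0, 0.5, -100.0, 0.5)
print(v)  # negative: a symmetric 50/50 gamble is unattractive under loss aversion
```

Fitting alpha, lam, and gamma to each participant's choices, versus freezing them at these defaults, is exactly the adjustable-versus-fixed-parameter comparison the study runs.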
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model's prediction accuracy at a different structural state, e.g., a damaged structure. The effects of prediction error bias on the uncertainty of the predicted values are also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
2017-01-01
Flash boiling is known to be a common phenomenon in gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. A sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range, and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation and flashing) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
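The distinction the abstract draws can be shown in a minimal Monte Carlo sketch, with a hypothetical two-parameter function standing in for a calibrated flow model: parameter uncertainty alone yields a confidence interval on the prediction, and adding random error in the dependent variable widens it into a prediction interval.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_head(T, R):
    # Hypothetical nonlinear model output (e.g., head at a well) as a
    # function of transmissivity T and recharge R; not a real flow model.
    return 100.0 + R / T

n = 20_000
# Draws across expert-supplied extreme ranges for each parameter.
T = rng.uniform(5.0, 15.0, size=n)
R = rng.uniform(50.0, 150.0, size=n)

pred = simulated_head(T, R)
conf_lo, conf_hi = np.quantile(pred, [0.025, 0.975])   # confidence interval

# Prediction interval: also include random error in the dependent variable.
noisy = pred + rng.normal(0.0, 2.0, size=n)
pi_lo, pi_hi = np.quantile(noisy, [0.025, 0.975])

print(conf_hi - conf_lo, pi_hi - pi_lo)
```

The quantile step is the "simple Monte Carlo method" for establishing probability levels; the widening of the second interval mirrors the paper's finding that random errors in the dependent variable can considerably widen prediction intervals.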
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability approach and alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model shows better reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, with the exception of the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. Its flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation-process parameters suffer from large prediction errors because the process is continuous and cumulative. To address this, an adaptive particle swarm neural network forecasting method is proposed, combined with an autoregressive moving average (ARMA) error-correction procedure that compensates the neural network's predictions to improve accuracy. Validation against production data from an alumina plant's evaporation process shows that, compared with the traditional model, the new model's prediction accuracy is greatly improved, and it can be used to predict the dynamic evolution of sodium aluminate solution components during evaporation.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, or even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with minimal effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in the photosynthetic organism D. salina can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identification of regions where model predictions have to be considered with care. Such uncertain regions can be used in a rational experimental design to render initially highly uncertain model predictions certain. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
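The profiling idea, and the step from profile samples to prediction samples, can be sketched on a toy decay model (an assumed stand-in, not the chlorophyll fluorescence model): one parameter is fixed on a grid, the nuisance parameter is re-optimized at each grid point, and each profiled parameter pair is propagated to an unmeasured prediction.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-parameter model y = a * exp(-b * t) standing in for an ODE
# solution; a is profiled out (re-optimized) at each fixed value of b.
t = np.linspace(0.0, 2.0, 15)
y = 2.0 * np.exp(-1.0 * t) + rng.normal(0.0, 0.05, t.size)

def profile_at(b):
    # For fixed b, the optimal a is linear least squares in closed form.
    basis = np.exp(-b * t)
    a = float(basis @ y) / float(basis @ basis)
    return float(np.sum((y - a * basis) ** 2)), a

b_grid = np.linspace(0.2, 2.5, 100)
rss = np.array([profile_at(b)[0] for b in b_grid])
b_hat = b_grid[rss.argmin()]

# Prediction samples along the profile: propagate each (a(b), b) pair to an
# unmeasured quantity, here y at t = 3, mapping parameter uncertainty into
# prediction uncertainty.
preds = np.array([profile_at(b)[1] * np.exp(-b * 3.0) for b in b_grid])
print(b_hat, preds.min(), preds.max())
```

Restricting `preds` to grid points whose residual sum of squares lies below a likelihood-ratio threshold would give the profile-likelihood-based prediction band whose spread the PLS index summarizes.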
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
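The compensation mechanism described above can be seen in a linear toy problem (all numbers illustrative): fixing one parameter at a wrong value lets the adjustable parameters partially absorb the misfit during history matching, while a prediction sensitive to the fixed parameter is left biased.

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear toy: observations y = X @ b_true, prediction z = g @ b.
X = rng.normal(size=(8, 4))           # sensitivities of 8 observations
b_true = np.array([1.0, -0.5, 2.0, 0.8])
y = X @ b_true
g = np.array([0.1, 0.1, 0.1, 5.0])    # prediction mostly sensitive to parameter 4

# Fix parameter 4 at a wrong value and re-estimate the rest by least squares
# (the "history matching" step with a reduced parameterization):
b_fixed = 0.0
A, rhs = X[:, :3], y - X[:, 3] * b_fixed
b_adj, *_ = np.linalg.lstsq(A, rhs, rcond=None)
b_est = np.append(b_adj, b_fixed)

z_true = float(g @ b_true)
z_est = float(g @ b_est)
print(abs(z_est - z_true))  # predictive error incurred by fixing a parameter
```

The adjustable parameters shift away from their true values to soak up the fixed parameter's signal in the observations, which is exactly the second error term the subspace analysis isolates.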
Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants
Yu, Zheng-Yong; Liu, Qiang; Liu, Yunhan
2017-01-01
Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487
Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants.
Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan
2017-08-09
Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi-Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings.
Wen, Jessica; Koo, Soh Myoung; Lape, Nancy
2018-02-01
While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally validated transport data and the wide range in theoretically predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures.
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
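Two of the four measures can be sketched numerically from a model's sensitivity (Jacobian) matrix. The numbers below are illustrative, not DVRFS values, and the scaling follows the common Hill-and-Tiedeman convention:

```python
import numpy as np

# Illustrative Jacobian of simulated observation equivalents with respect
# to three parameters (made-up numbers, not DVRFS output).
J = np.array([[0.8, 0.1, 0.02],
              [0.6, 0.3, 0.01],
              [0.9, 0.2, 0.05],
              [0.7, 0.4, 0.03]])
b = np.array([10.0, 2.0, 0.5])      # parameter values
w = np.ones(J.shape[0])             # observation weights

# Dimensionless scaled sensitivities, then composite scaled sensitivity (CSS):
# how much information the observations collectively carry about each parameter.
dss = J * b * np.sqrt(w)[:, None]
css = np.sqrt((dss ** 2).mean(axis=0))

# Prediction scaled sensitivity (PSS) for one prediction z with gradient dz/db:
# percent change in z per one-percent change in each parameter.
dzdb = np.array([0.05, 0.5, 0.1])   # hypothetical prediction gradient
z = 12.0
pss = 100.0 * dzdb * b / z
print(css, pss)
```

In this toy example the observations are most informative about the first parameter, while the prediction is most sensitive to the second, illustrating why a parameter can matter for calibration yet a different one can matter for the prediction of interest.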
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are considered. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters under conditions of missing data or when regular measurement and monitoring are impossible.
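A one-level Haar split is the simplest wavelet preprocessing a WNN might use; the sketch below (toy signal, numpy only, with no claim about the wavelet family actually used in the study) shows the approximation/detail decomposition whose sub-series, rather than the raw series, would feed the neural network.

```python
import numpy as np

def haar_level1(x):
    """One-level Haar transform: approximation (a) and detail (d)
    coefficients from adjacent sample pairs. In a wavelet-neural
    network, these sub-series feed the ANN instead of the raw series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-frequency trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-frequency detail
    return a, d

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0])  # hypothetical DO readings
approx, detail = haar_level1(sig)
```

The transform is orthonormal, so the original samples (and their energy) are exactly recoverable from the two sub-series.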
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC). This indicates that decreasing the uncertainty of the input parameters, rather than adding multiple phases or input parameters, will make the model more accurate.
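To make the weighted-average translation concrete, here is a hedged numpy sketch comparing a multiphase FOD sum against a single-phase run with mass-weighted L0 and k. The waste streams, tonnages, and parameter values are hypothetical illustrations, not data from the study.

```python
import numpy as np

def fod_methane(mass_by_year, L0, k, horizon):
    """Cumulative methane from a first-order decay model: waste placed in
    year i generates L0 * M_i * (exp(-k*t) - exp(-k*(t+1))) during year i+t."""
    total = 0.0
    for i, m in enumerate(mass_by_year):
        t = np.arange(horizon - i)
        total += L0 * m * np.sum(np.exp(-k * t) - np.exp(-k * (t + 1)))
    return total

# Two hypothetical waste streams: (mass fraction, L0 [m^3/t], k [1/yr])
mass = np.array([1000.0, 1000.0])  # tonnes placed in each of two years
streams = [(0.6, 100.0, 0.05), (0.4, 60.0, 0.15)]

# Multiphase: run the FOD model for each stream separately and sum
multi = sum(fod_methane(f * mass, L0, k, 50) for f, L0, k in streams)

# Single phase: one run with mass-weighted-average parameters
L0_avg = sum(f * L0 for f, L0, _ in streams)
k_avg = sum(f * k for f, _, k in streams)
single = fod_methane(mass, L0_avg, k_avg, 50)
```

For these toy inputs the single-phase weighted-average run lands within a few percent of the multiphase sum, the same order of agreement the abstract reports.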
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). Comparing the average arrival time prediction of each of the 28 ensembles predicting hits with the actual arrival time gives an average absolute error of 10.0 hours (RMSE=11.4 hours), which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrival is not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations.
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
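The arrival-time statistics quoted above reduce to a few simple operations on an ensemble of predicted arrivals. Here is a minimal sketch with an invented 48-member ensemble (the numbers are illustrative, not from any SWRC run):

```python
import numpy as np

def ensemble_arrival_summary(pred_hours, obs_hours):
    """Summarize an ensemble of CME arrival-time predictions (hours past
    some epoch) against the observed arrival: mean prediction, ensemble
    spread, whether the observation falls inside the predicted range,
    and the absolute error of the ensemble-mean prediction."""
    pred = np.asarray(pred_hours, dtype=float)
    return {
        "mean": pred.mean(),
        "spread": pred.max() - pred.min(),
        "hit_in_range": bool(pred.min() <= obs_hours <= pred.max()),
        "abs_error_of_mean": abs(pred.mean() - obs_hours),
    }

# Hypothetical 48-member ensemble that predicts arrival ~8 hours late
rng = np.random.default_rng(0)
ens = 72.0 + rng.normal(8.0, 6.0, size=48)
summary = ensemble_arrival_summary(ens, obs_hours=72.0)
```

Across many events, averaging `abs_error_of_mean` gives the mean absolute error and its quadratic mean gives the RMSE figures the abstract reports.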
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model, which has been adopted in many recent numerical studies for predicting cavitation and flash boiling, is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash boiling) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory
ERIC Educational Resources Information Center
Glockner, Andreas; Pachur, Thorsten
2012-01-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…
Seismic activity prediction using computational intelligence techniques in northern Pakistan
NASA Astrophysics Data System (ADS)
Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat
2017-10-01
An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary combination of seismology and computational intelligence. Eight seismic parameters are computed from past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters for use in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78% for northern Pakistan.
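Information-gain screening of the kind used to pick six of the eight parameters can be sketched as an entropy reduction from a threshold split; the feature values, labels, and threshold below are toy inputs, not the study's seismic parameters.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    if len(labels) == 0:
        return 0.0
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels, threshold):
    """Info gain of splitting a numeric parameter at a threshold:
    entropy of the labels minus the split-weighted child entropies."""
    feature, labels = np.asarray(feature), np.asarray(labels)
    left, right = labels[feature <= threshold], labels[feature > threshold]
    h_split = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_split
```

Parameters whose best split yields little gain carry little predictive information about the event/no-event label and can be dropped before model training.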
A predictive model for biomimetic plate type broadband frequency sensor
NASA Astrophysics Data System (ADS)
Ahmed, Riaz U.; Banerjee, Sourav
2016-04-01
In this work, a predictive model for a bio-inspired broadband frequency sensor is developed. Broadband frequency sensing is essential in many domains of science and technology. One great example of such a sensor is the human cochlea, which senses a frequency band of 20 Hz to 20 kHz. Developing broadband sensors that adopt the physics of the human cochlea has attracted tremendous interest in recent years. Although a few experimental studies have been reported, a true predictive model to design such sensors is missing. A predictive model is essential for the accurate design of selective broadband sensors capable of sensing a very selective band of frequencies. Hence, in this study, we propose a novel predictive model for the cochlea-inspired broadband sensor, aiming to select the frequency band and model parameters predictively. A tapered plate geometry is considered, mimicking the real shape of the basilar membrane in the human cochlea. The predictive model is designed to be flexible enough to be employed in a wide variety of scientific domains. To that end, the model can handle not only homogeneous but also functionally graded model parameters, and it is capable of managing various types of boundary conditions. It has been found that, using homogeneous model parameters, it is possible to sense a specific frequency band from a specific portion (B) of the model length (L). It is also possible to alter the attributes of 'B' using functionally graded model parameters, which confirms the predictive frequency selection ability of the developed model.
Prediction of compressibility parameters of the soils using artificial neural network.
Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan
2016-01-01
The compression index and recompression index are important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The proposed ANN model predicts the compression index successfully; however, the predicted recompression index values are less satisfactory than those of the compression index.
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody when the waterbody meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte-Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically-based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis. We set aside flow data that occurred on days that bacteria samples were collected. We did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
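Generating random parameter sets that honor an estimated parameter covariance, as described above, is commonly done with a Cholesky factor of the covariance matrix. The sketch below uses two invented parameters and an invented covariance, not actual HSPF parameters:

```python
import numpy as np

def sample_correlated_params(mean, cov, n, rng=None):
    """Draw n parameter sets consistent with the estimated parameter
    covariance, reducing the chance of physically implausible
    ('specious') parameter combinations in a Monte Carlo run."""
    if rng is None:
        rng = np.random.default_rng(42)
    L = np.linalg.cholesky(cov)          # cov = L @ L.T
    z = rng.standard_normal((n, len(mean)))
    return mean + z @ L.T

mean = np.array([0.5, 2.0])              # two hypothetical hydrologic parameters
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])           # strong positive correlation
sets = sample_correlated_params(mean, cov, 1000)
```

Each row of `sets` would drive one model run; the sample covariance of the draws reproduces the prescribed correlation, unlike independent per-parameter sampling.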
Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
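The "sloppy spectrum" described above can be made concrete with a small numpy sketch: the eigenvalues of JᵀJ (the Gauss-Newton approximation to the Hessian of a least-squares cost) for a toy Jacobian whose parameter directions differ hugely in scale. The Jacobian is invented for illustration, not taken from any of the models surveyed.

```python
import numpy as np

def sensitivity_spectrum(J):
    """Eigenvalues of J^T J, sorted descending. A 'sloppy' model shows
    eigenvalues spread roughly evenly over many decades."""
    return np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]

# Toy Jacobian: columns (parameter directions) scaled across 4 decades
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 5)) * np.logspace(0, -4, 5)
spec = sensitivity_spectrum(J)
decades = np.log10(spec[0] / spec[-1])
```

A spectrum spanning many decades means a few stiff parameter combinations are tightly constrained by data while most sloppy combinations are essentially free, which is why collective fits can constrain predictions without constraining individual parameters.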
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
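One way to picture the linear-subspace idea is to project a prediction's sensitivity vector onto the null space of the calibration Jacobian: the projected component is the part of the prediction that the calibration observations cannot inform. The sketch below uses random matrices as stand-ins; the shapes and values are illustrative only, not the method's actual implementation.

```python
import numpy as np

# Few observations, many parameters: calibration leaves a large null space
rng = np.random.default_rng(5)
J = rng.standard_normal((10, 30))   # calibration Jacobian (obs x params)
y = rng.standard_normal(30)         # prediction sensitivity to parameters

# Orthonormal basis of the calibration null space via SVD
_, _, Vt = np.linalg.svd(J)
V2 = Vt[10:].T                      # parameter directions invisible to calibration
y_null = V2 @ (V2.T @ y)            # prediction component calibration cannot inform
frac_uninformed = np.linalg.norm(y_null) / np.linalg.norm(y)
```

A prediction with a large uninformed fraction can carry substantial error no matter how well the model fits the observations, which is the sense in which predictive bias can be "invisible to the modeler".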
Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng
2014-12-30
This paper develops a bi-directional prediction model linking the performance of carbon fiber and the productive parameters, based on a support vector machine (SVM) and an improved particle swarm optimization (IPSO) algorithm (SVM-IPSO). The predictive accuracy of the SVM depends mainly on its parameters, so IPSO is exploited to seek the optimal parameters for the SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO, which incorporates information from the global best solution into the search strategy to improve exploitation. We employ IPSO to establish the bi-directional prediction model: in the forward direction, productive parameters are the input and property indexes the output; in the backward direction, property indexes are the input and productive parameters the output, in which case the model becomes a scheme design tool for novel styles of carbon fiber. Results from a set of experimental data show that the proposed model outperforms the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method and the hybrid genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. In other words, the simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in this forecasting problem.
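A standard PSO loop of the kind IPSO builds on is sketched below. This is not the paper's IPSO (the cell-communication-inspired exploitation term is omitted), and the objective is a smooth stand-in for an SVM validation-error surface over hypothetical (log C, log gamma) hyperparameters.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Standard PSO: particles track personal bests and are pulled toward
    the shared global best (IPSO-style variants inject the global best
    into the velocity update more aggressively)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical validation-error surface with optimum at (1, -2)
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2 + 0.1
bounds = np.array([[-3.0, 3.0], [-5.0, 1.0]])
best, best_err = pso_minimize(err, bounds)
```

In the real workflow `err` would be replaced by cross-validated SVM error at the candidate hyperparameters.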
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
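As a rough sketch of the stationary side of this workflow, here is a method-of-moments LP3 fit in Python. The paper estimates the LPIII parameters by Bayesian inference; moments are shown only as a simpler stand-in, and the synthetic peak-discharge record below is invented.

```python
import numpy as np
from scipy import stats

def fit_lp3(peaks):
    """Method-of-moments LP3 fit: mean, standard deviation, and skew of
    log10 of the annual peak discharges."""
    logs = np.log10(np.asarray(peaks, dtype=float))
    return logs.mean(), logs.std(ddof=1), stats.skew(logs, bias=False)

def lp3_quantile(mean, sd, skew, aep):
    """Discharge with annual exceedance probability `aep` under LP3."""
    k = stats.pearson3.ppf(1.0 - aep, skew)  # frequency factor
    return 10 ** (mean + k * sd)

rng = np.random.default_rng(7)
peaks = 10 ** rng.normal(2.0, 0.25, size=60)   # synthetic annual maxima
m, s, g = fit_lp3(peaks)
q100 = lp3_quantile(m, s, g, aep=0.01)          # "100-year" flood estimate
```

In the NS variant, `m`, `s`, or `g` would be functions of time (or of a covariate), and in the uST variant their end-of-fitting-period values would be frozen for prediction.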
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed them to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
NASA Astrophysics Data System (ADS)
Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra
2018-05-01
A novel constitutive model has been developed for predicting flow responses of super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). Further, the predictability of this new model has been compared with the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is ill-suited for flow prediction, exhibiting a very high (~36%) average absolute error (δ) and low (~0.92) correlation coefficient (R). On the contrary, the M-ZA model demonstrates relatively lower δ (~13%) and higher R (~0.96) for flow prediction. The incorporation of couplings of processing parameters in the M-ZA model leads to better prediction than the JC model. However, the flow analyses of the studied alloy reveal additional synergistic influences of strain and strain rate, as well as of strain, temperature, and strain rate, beyond those considered in the M-ZA model. Hence, a new phenomenological model has been formulated incorporating all the individual and synergistic effects of the processing parameters and a 'strain-shifting' parameter. The proposed model predicts the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99) of prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juxiu Tong; Bill X. Hu; Hai Huang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters such as kinetic parameters is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reaction processes and limited available data. To obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the modeling prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF can efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
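A single EnKF analysis step for a scalar rate parameter can be sketched as below. The forward model here is a deliberately trivial linear proxy (concentration = 2 × rate), invented for illustration; the actual study assimilates urea hydrolysis transport simulations, and its constrained variant additionally clips updates to physically meaningful ranges.

```python
import numpy as np

def enkf_update(params, preds, obs, obs_err, rng=None):
    """One EnKF analysis step for parameter estimation: each ensemble
    member's parameter is nudged toward consistency with a (perturbed)
    observation, using the ensemble parameter-prediction covariance."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_anom = params - params.mean()
    y_anom = preds - preds.mean()
    gain = np.mean(p_anom * y_anom) / (np.mean(y_anom * y_anom) + obs_err ** 2)
    perturbed = obs + rng.normal(0.0, obs_err, size=len(params))
    return params + gain * (perturbed - preds)

# Hypothetical linear proxy for the forward model: concentration = 2 * rate
true_rate = 0.5
rng = np.random.default_rng(3)
params = rng.normal(1.0, 0.4, 200)   # prior ensemble of rate parameters
preds = 2.0 * params                  # forward model applied to each member
obs = 2.0 * true_rate                 # observed concentration
updated = enkf_update(params, preds, obs, obs_err=0.05, rng=rng)
```

Assimilating further observations repeats this step, which is why the abstract finds that more assimilated data yields more accurate rate parameters and predictions.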
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
NASA Astrophysics Data System (ADS)
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that instability of hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.
2013-12-01
Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network.
Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.
2011-03-01
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when the constraining flux record is shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
Edge Modeling by Two Blur Parameters in Varying Contrasts.
Seo, Suyoung
2018-06-01
This paper presents a method for modeling edge profiles with two blur parameters, and for estimating and predicting those parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force search for the parameters that produce the global minimum error. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side and light-side edge brightness following a certain global trend, and this holds across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model in terms of the root mean squared error of fitting each model to the observed edge profiles. The comparison results suggest that the proposed edge model outperforms the one-blur-parameter model in most cases where edges have varying brightness combinations.
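The brute-force estimation step might look like the following sketch. The error-function edge profile with side-dependent width and the grid values are assumptions for illustration, not the paper's exact model:

```python
import math

def edge_model(x, dark, light, s_dark, s_light):
    """Assumed two-blur edge profile: a step from `dark` to `light` smoothed
    by an error function whose width differs on each side of the edge."""
    s = s_dark if x < 0 else s_light
    return dark + (light - dark) * 0.5 * (1.0 + math.erf(x / (s * math.sqrt(2))))

def fit_brute_force(xs, ys, dark, light, grid):
    """Exhaustive search over (s_dark, s_light) pairs for the global minimum
    of the sum of squared errors, as in the abstract's brute-force method."""
    best = None
    for s1 in grid:
        for s2 in grid:
            sse = sum((edge_model(x, dark, light, s1, s2) - y) ** 2
                      for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, s1, s2)
    return best[1], best[2]

# Synthetic edge sampled from the model with known blurs, then recovered.
xs = [i * 0.25 for i in range(-20, 21)]
ys = [edge_model(x, 10.0, 200.0, 0.8, 1.6) for x in xs]
grid = [0.2 * k for k in range(1, 16)]          # candidate blurs 0.2 .. 3.0
s_dark, s_light = fit_brute_force(xs, ys, 10.0, 200.0, grid)
```

With noiseless synthetic data and true values on the grid, the search recovers both blur parameters exactly; on real profiles the minimum-SSE pair is an estimate.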
Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh
2016-01-01
The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: for most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen to be derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be both accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
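A line-size model of the kind introduced in the work reduces, in its simplest form, to a regression of wait time on queue length. The sketch below uses synthetic data with invented coefficients, not the hospital's actual records:

```python
import random
import statistics

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares for wait = b0 + b1 * line_size."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Synthetic records: assume each patient ahead adds ~6 min of wait on average.
line_size = [random.randint(0, 12) for _ in range(300)]
wait_min = [3.0 + 6.0 * n + random.gauss(0.0, 4.0) for n in line_size]
b0, b1 = fit_line(line_size, wait_min)
expected_wait = b0 + b1 * 5  # predicted wait with five patients ahead
```

This captures why such models are computationally cheap: a front-desk display only needs the current queue length and two fitted coefficients.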
The Rangeland Hydrology and Erosion Model: A Dynamic Approach for Predicting Soil Loss on Rangelands
NASA Astrophysics Data System (ADS)
Hernandez, Mariano; Nearing, Mark A.; Al-Hamdan, Osama Z.; Pierson, Frederick B.; Armendariz, Gerardo; Weltz, Mark A.; Spaeth, Kenneth E.; Williams, C. Jason; Nouwakpo, Sayjro K.; Goodrich, David C.; Unkrich, Carl L.; Nichols, Mary H.; Holifield Collins, Chandra D.
2017-11-01
In this study, we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed against data collected from 23 runoff and sediment events in a shrub-dominated semiarid watershed in Arizona, USA. To evaluate the model, two sets of primary model parameters were determined using the RHEM V2.3 and RHEM V1.0 parameter estimation equations. Testing of the parameters indicated that the RHEM V2.3 parameter estimation equations provided a 76% improvement over the RHEM V1.0 equations. Next, the RHEM V2.3 model was calibrated to measurements from the watershed. The parameters estimated by the new equations were within the lowest and highest values of the calibrated parameter set. These results suggest that the new parameter estimation equations can be applied for this environment to predict sediment yield at the hillslope scale. Furthermore, we applied the RHEM V2.3 to demonstrate the response of the model as a function of foliar cover and ground cover for 124 data points across Arizona and New Mexico. The dependence of average sediment yield on surface ground cover was moderately stronger than that on foliar cover. These results demonstrate that RHEM V2.3 predicts runoff volume, peak runoff, and sediment yield with sufficient accuracy for broad application to assess and manage rangeland systems.
NASA Astrophysics Data System (ADS)
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
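The variance decomposition described above can be illustrated with a two-way ANOVA on a toy table of simulations; the effect sizes here are invented so that the forcing contribution dominates:

```python
import random
import statistics

random.seed(2)

def anova_decompose(table):
    """Two-way ANOVA with one value per cell: split total variability of the
    simulations into forcing, parameter, and interaction sums of squares."""
    I, J = len(table), len(table[0])
    grand = statistics.fmean(v for row in table for v in row)
    row_m = [statistics.fmean(row) for row in table]            # per forcing
    col_m = [statistics.fmean(table[i][j] for i in range(I))    # per param set
             for j in range(J)]
    ss_forcing = J * sum((m - grand) ** 2 for m in row_m)
    ss_params = I * sum((m - grand) ** 2 for m in col_m)
    ss_inter = sum((table[i][j] - row_m[i] - col_m[j] + grand) ** 2
                   for i in range(I) for j in range(J))
    return ss_forcing, ss_params, ss_inter

# Toy streamflow simulations: one value per (rainfall member, parameter set).
forcing = [random.gauss(0.0, 3.0) for _ in range(20)]   # rainfall ensemble
params = [random.gauss(0.0, 1.0) for _ in range(10)]    # behavioural sets
flows = [[10.0 + f + p + random.gauss(0.0, 0.2) for p in params]
         for f in forcing]
ss_f, ss_p, ss_i = anova_decompose(flows)
```

Dividing each sum of squares by their total gives the fractional contribution of forcing data, parameter identification, and their interaction, which is the form of attribution the study reports.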
[Development of an analyzing system for soil parameters based on NIR spectroscopy].
Zheng, Li-Hua; Li, Min-Zan; Sun, Hong
2009-10-01
A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is a soil sample object with a particular type, specific physical properties, and spectral characteristics. By extracting the effective information from the modeling spectral data of a soil object, a mapping model was established between the soil parameters and the spectral data, and the mapping model parameters could be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects with the same soil type and similar physical properties. After the target soil sample object is passed into the prediction model and processed by the system, an accurate forecast of the target sample's content is obtained. The system includes modules for file operations, spectral pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measurement equipment. Parameter and spectral data files (*.xls) of known soil samples can be input into the system. Because the data pretreatment can be selected according to the specific conditions, the predicted content is displayed in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters from the model database through the module interface, and the data of the tested samples are then passed into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
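A minimal object-oriented sketch of the described design; the class names, attributes, and the linear prediction model are hypothetical, since the abstract does not publish the system's interfaces:

```python
class SoilSample:
    """Sketch of the SOIL class idea: a sample carries its type, physical
    properties, and spectrum; prediction picks the stored model whose soil
    type matches the sample."""
    def __init__(self, soil_type, properties, spectrum):
        self.soil_type = soil_type
        self.properties = properties      # e.g. {"moisture": 0.12}
        self.spectrum = spectrum          # pretreated reflectance values

class ModelDatabase:
    """Stores fitted (here: linear) calibration models keyed by soil type
    and target parameter, mimicking the described model database."""
    def __init__(self):
        self._models = {}                 # (soil_type, parameter) -> model

    def store(self, soil_type, parameter, coefs, bias):
        self._models[(soil_type, parameter)] = (coefs, bias)

    def predict(self, sample, parameter):
        coefs, bias = self._models[(sample.soil_type, parameter)]
        return bias + sum(c * x for c, x in zip(coefs, sample.spectrum))

# Calibrate once, then forecast the content of a new sample of the same type.
db = ModelDatabase()
db.store("loam", "organic_matter", coefs=[0.8, -0.3, 0.1], bias=1.5)
sample = SoilSample("loam", {"moisture": 0.12}, [2.0, 1.0, 0.5])
om = db.predict(sample, "organic_matter")
```

The point of the OO design is visible even at this scale: the sample object carries everything needed to select and apply the right stored model.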
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI); here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive.
We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
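First-order sensitivity indices of this kind can be estimated in several ways. The sketch below uses a simple binning estimator rather than the polynomial chaos / Monte Carlo machinery of the study, with an invented two-parameter toy QoI standing in for the ice-sheet model:

```python
import random
import statistics

random.seed(3)

def first_order_index(xs, ys, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning X_i and averaging Y
    within each bin (a crude alternative to a Saltelli-type estimator)."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(bins - 1, int((x - lo) / width))].append(y)
    cond_means = [statistics.fmean(g) for g in groups if g]
    return statistics.pvariance(cond_means) / statistics.pvariance(ys)

# Toy grounded-ice QoI: strongly sensitive to basal friction, weakly to
# basal topography (a made-up linear response, not an ice-sheet model).
n = 20000
friction = [random.uniform(0.0, 1.0) for _ in range(n)]
topography = [random.uniform(0.0, 1.0) for _ in range(n)]
volume = [5.0 * f + 0.5 * t + random.gauss(0.0, 0.1)
          for f, t in zip(friction, topography)]
s_friction = first_order_index(friction, volume)
s_topography = first_order_index(topography, volume)
```

A large index flags a parameter whose uncertainty is worth reducing through data collection, which is exactly the targeting argument the abstract makes.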
Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi
2015-11-15
Two predictive models are presented for the adsorption affinities and diffusion coefficients of disperse dyes in a polylactic acid matrix. A quantitative structure-sorption behavior relationship would not only provide insights into the sorption process, but also enable rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and the compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for the diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of the pre-exponential factor vs. activation energy of diffusion. The only required inputs for both models are the temperature and the solvent accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes, and offer fairly good predictive ability. The linkage between the structural parameters of disperse dyes and their sorption behaviors might be generalized and extended to other similar polymer-penetrant systems. Copyright © 2015 Elsevier Inc. All rights reserved.
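Chaining the two linear relationships for each property gives a compact predictive model. The coefficients below are illustrative placeholders chosen only to produce physically sensible trends, not the fitted values from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def predict_affinity(sasa, T, h=(-15000.0, -80.0), s=(-30.0, 0.0002)):
    """Chain the two fitted lines: dH = h0 + h1*SASA (J/mol), then
    dS = s0 + s1*dH (compensation effect), then affinity = -(dH - T*dS).
    All coefficients are hypothetical placeholders."""
    dH = h[0] + h[1] * sasa       # more exothermic for larger dyes
    dS = s[0] + s[1] * dH         # entropy tracks enthalpy
    return -(dH - T * dS)         # adsorption affinity, J/mol

def predict_diffusion(sasa, T, e=(40000.0, 60.0), d=(-20.0, 0.0001)):
    """Ea = e0 + e1*SASA and ln(D0) = d0 + d1*Ea, then Arrhenius:
    D = D0 * exp(-Ea / (R*T)).  Coefficients again are placeholders."""
    Ea = e[0] + e[1] * sasa
    lnD0 = d[0] + d[1] * Ea
    return math.exp(lnD0 - Ea / (R * T))

# Only temperature and solvent accessible surface area (SASA) are required.
aff_small = predict_affinity(400.0, 383.15)   # hypothetical dye, SASA 400 A^2
aff_large = predict_affinity(500.0, 383.15)
dif_small = predict_diffusion(400.0, 383.15)
dif_large = predict_diffusion(500.0, 383.15)
```

With these placeholder slopes, a larger dye adsorbs more strongly but diffuses more slowly, and diffusion speeds up with temperature, matching the qualitative behavior the model structure is meant to capture.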
Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi
2015-06-01
Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics in C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models with data, to evaluate the improvements in performance of those calibrated models in predicting contemporary carbon stocks, and to compare the SOC responses to climate change, and their uncertainties, between microbial and conventional models. Microbial models with calibrated parameters explained 51% of variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses, compared with CMIP5 model projections which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.
A4 flavour model for Dirac neutrinos: Type I and inverse seesaw
NASA Astrophysics Data System (ADS)
Borah, Debasish; Karmakar, Biswajit
2018-05-01
We propose two different seesaw models, namely type I and inverse seesaw, to realise light Dirac neutrinos within the framework of A4 discrete flavour symmetry. The additional fields and their transformations under the flavour symmetries are chosen in such a way as to naturally predict the hierarchies of different elements of the seesaw mass matrices in these two types of seesaw mechanisms. For generic choices of flavon alignments, both models predict normal hierarchical light neutrino masses with the atmospheric mixing angle in the lower octant. Apart from predicting interesting correlations between different neutrino parameters as well as between neutrino and model parameters, the model also predicts the leptonic Dirac CP phase to lie in the specific range −π/3 to π/3. While the type I seesaw model predicts smaller values of absolute neutrino mass, the inverse seesaw predictions for the absolute neutrino masses can saturate the cosmological upper bound on the sum of absolute neutrino masses for certain choices of model parameters.
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 pre-breakdown parameters are extracted, including dynamic environmental conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, a random forests (RF) model is adopted to identify the key variables. Modeling with the seven selected parameters, the refined BN model can predict breakdown with adequate accuracy.
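As a stand-in for the random-forest importance ranking used to select key variables, the sketch below scores each candidate predictor by the impurity reduction of its best single split; the traffic data are synthetic and the variable names invented:

```python
import random

random.seed(4)

def gini(labels):
    """Gini impurity of a 0/1 label list."""
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def stump_importance(xs, ys):
    """Impurity reduction of the best single threshold split on one feature,
    a lightweight stand-in for random-forest variable importance."""
    pairs = sorted(zip(xs, ys))
    n, base, best = len(pairs), gini(ys), 0.0
    for k in range(1, n):
        left = [y for _, y in pairs[:k]]
        right = [y for _, y in pairs[k:]]
        drop = base - (len(left) * gini(left) + len(right) * gini(right)) / n
        best = max(best, drop)
    return best

# Synthetic on-ramp data: breakdowns are driven by occupancy; the second
# variable is pure noise, so it should rank far lower.
n = 400
occupancy = [random.uniform(0.0, 100.0) for _ in range(n)]
irrelevant = [random.uniform(0.0, 1.0) for _ in range(n)]
breakdown = [1 if (o > 60.0 and random.random() < 0.9) or random.random() < 0.1
             else 0 for o in occupancy]
imp_occ = stump_importance(occupancy, breakdown)
imp_irrelevant = stump_importance(irrelevant, breakdown)
```

Ranking all 23 candidate parameters this way and keeping the top scorers mirrors the paper's use of RF to trim the BN model's inputs to seven.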
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
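The one-parameter, two-observation case can be reproduced directly. The sketch below computes the parameter variance with the full error covariance and with correlations omitted, showing that the difference can go either way depending on the sign of the sensitivity ratio; the numbers are illustrative, not from the paper:

```python
def param_variance(x1, x2, s1, s2, rho):
    """Variance of the single parameter b in y_i = x_i*b + e_i (i = 1, 2),
    with error covariance C = [[s1^2, rho*s1*s2], [rho*s1*s2, s2^2]].

    Returns (with_corr, without_corr): the generalized least squares
    variance using the full C, and the variance computed with correlations
    omitted from both the weights and the variance calculation."""
    det = (s1 * s2) ** 2 * (1.0 - rho ** 2)
    w11, w22 = s2 ** 2 / det, s1 ** 2 / det      # elements of C^-1
    w12 = -rho * s1 * s2 / det
    with_corr = 1.0 / (x1 * x1 * w11 + 2.0 * x1 * x2 * w12 + x2 * x2 * w22)
    without_corr = 1.0 / (x1 ** 2 / s1 ** 2 + x2 ** 2 / s2 ** 2)
    return with_corr, without_corr

# For large rho, the sign of the sensitivity ratio decides the direction:
v_same = param_variance(1.0, 1.0, 1.0, 1.0, 0.9)    # omission understates
v_opp = param_variance(1.0, -1.0, 1.0, 1.0, 0.9)    # omission overstates
```

For small rho the two variances nearly coincide, consistent with the abstract's finding that the difference is always small in that regime.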
NASA Astrophysics Data System (ADS)
Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra
2016-07-01
Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.
Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation
NASA Astrophysics Data System (ADS)
Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.
2002-05-01
This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to selecting parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) critical relative humidity for cloud formation (RHMIN), and 2) boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.
A generalized procedure for the prediction of multicomponent adsorption equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas
2015-04-07
Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters to be obtained increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.
2014-01-01
This paper examined the efficiency of multivariate linear regression (MLR) and artificial neural network (ANN) models in predicting two major water quality parameters in a wastewater treatment plant. Biochemical oxygen demand (BOD) and chemical oxygen demand (COD), indirect indicators of organic matter, are representative parameters of sewage water quality. Performance of the ANN models was evaluated using the coefficient of correlation (r), root mean square error (RMSE), and bias values. The values of BOD and COD computed by the ANN models and by regression analysis were in close agreement with their respective measured values. Results showed that the ANN model performed better than the MLR model. Comparative indices of the optimized ANN with input values of temperature (T), pH, total suspended solids (TSS), and total solids (TS) were RMSE = 25.1 mg/L and r = 0.83 for prediction of BOD, and RMSE = 49.4 mg/L and r = 0.81 for prediction of COD. It was found that the ANN model could be employed successfully to estimate the BOD and COD at the inlet of wastewater biochemical treatment plants. Moreover, sensitivity analysis showed that pH had a greater effect on BOD and COD prediction than the other parameters. Both models predicted BOD better than COD. PMID:24456676
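The two reported performance metrics are straightforward to compute; the measured and modeled values below are invented for illustration, not taken from the plant's records:

```python
import math
import statistics

def rmse(obs, pred):
    """Root mean square error between measured and modeled values."""
    return math.sqrt(statistics.fmean((o - p) ** 2 for o, p in zip(obs, pred)))

def pearson_r(obs, pred):
    """Coefficient of correlation r between measured and modeled values."""
    mo, mp = statistics.fmean(obs), statistics.fmean(pred)
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs) *
                    sum((p - mp) ** 2 for p in pred))
    return num / den

# Hypothetical BOD values (mg/L): measured at the inlet vs. model output.
measured = [110, 150, 95, 180, 130, 160]
modeled = [118, 140, 101, 171, 137, 150]
error = rmse(measured, modeled)
corr = pearson_r(measured, modeled)
```

Comparing models on both r and RMSE, as the paper does, guards against a model that tracks the trend well (high r) while being systematically offset (high RMSE).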
Inverse modeling with RZWQM2 to predict water quality
Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.
2011-01-01
This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. 
Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
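As a rough illustration of the weighted least-squares calibration and linear confidence intervals that PEST automates, the sketch below fits a toy two-parameter model by Gauss-Newton iteration. The exponential model, data, and tolerances are hypothetical stand-ins, not RZWQM2 quantities:

```python
import numpy as np

def weighted_least_squares(model, t, y_obs, w, p0, n_iter=50):
    """Gauss-Newton minimization of sum_i w_i * (y_obs_i - model(t_i, p))^2.
    Returns parameter estimates and linear 95% confidence half-widths."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y_obs - model(t, p)
        # finite-difference Jacobian of the model w.r.t. the parameters
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (model(t, p + dp) - model(t, p)) / dp[j]
        W = np.diag(w)
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + step
        if np.max(np.abs(step)) < 1e-12:
            break
    # linear (first-order) approximation of the parameter covariance
    dof = t.size - p.size
    s2 = float(w @ (y_obs - model(t, p)) ** 2) / dof
    cov = s2 * np.linalg.inv(J.T @ np.diag(w) @ J)
    half = 1.96 * np.sqrt(np.diag(cov))
    return p, half

model = lambda t, p: p[0] * np.exp(-p[1] * t)   # toy decay model
t = np.linspace(0.0, 5.0, 30)
rng = np.random.default_rng(0)
y = model(t, np.array([10.0, 0.7])) + rng.normal(0.0, 0.05, t.size)
p_hat, ci = weighted_least_squares(model, t, y, np.ones(t.size), [8.0, 0.6])
```

The half-widths are the linear confidence intervals that an abstract like this reports; nonlinear (predictive-analysis) intervals require the constrained optimization described in the text.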
Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer
2008-01-01
Fractures of the proximal femur are among the principal causes of mortality in elderly persons. Traditional methods for determining femoral fracture risk rely on measurements of bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. Using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.
Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang
2016-01-01
Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
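The probability-weighted average of initial parameter readings described above can be sketched in a few lines; the contact-resistance values and weights below are invented for illustration:

```python
def probability_weighted_average(values, probs):
    """Weighted mean of repeated initial-parameter readings, with the
    weights interpreted as occurrence probabilities."""
    total = sum(probs)
    return sum(v * p for v, p in zip(values, probs)) / total

# hypothetical initial contact-resistance readings (milliohms) and frequencies
readings = [50.2, 50.5, 51.0, 52.4]
freqs = [0.40, 0.35, 0.20, 0.05]
r0 = probability_weighted_average(readings, freqs)   # about 50.58 milliohms
```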
Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.
2004-01-01
This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or errors in the model structure. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input is represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology was applied to an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty has a considerable effect on the model predictions and parameter distributions. Additionally, our approach provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. We conclude that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
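As a minimal stand-in for the DREAM sampler (which uses differential-evolution proposals across multiple chains), the single-chain random-walk Metropolis sketch below infers the posterior of one recharge multiplier from synthetic observations; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
base = np.array([2.0, 3.0, 5.0])            # hypothetical base recharge rates
m_true, sigma = 0.8, 0.1
y = m_true * base + rng.normal(0.0, sigma, base.size)   # synthetic observations

def log_post(m):
    if m <= 0.0:                            # uniform prior on m > 0
        return -np.inf
    return -0.5 * np.sum((y - m * base) ** 2) / sigma ** 2

m, lp, chain = 1.0, log_post(1.0), []
for _ in range(20000):
    m_prop = m + rng.normal(0.0, 0.05)      # random-walk proposal
    lp_prop = log_post(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = m_prop, lp_prop
    chain.append(m)
post = np.array(chain[5000:])               # discard burn-in
```

The multiplier trick is exactly as in the abstract: one scalar per input field keeps the dimensionality of the sampled space low.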
AAA gunner model based on observer theory [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller, and a remnant element. An important feature of the model is that its structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
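A Luenberger observer of the kind underlying such gunner models can be sketched for a toy two-state tracking plant; the dynamics and gain matrix below are illustrative choices, not the paper's model:

```python
import numpy as np

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])   # discrete position/velocity plant
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])              # only position is measured
L = np.array([[0.5], [1.0]])            # hypothetical observer gain (A - L C stable)

x = np.array([[1.0], [0.0]])            # true state, unknown to the observer
xh = np.zeros((2, 1))                   # observer's estimate
for k in range(400):
    u = np.array([[np.sin(0.1 * k)]])   # arbitrary tracking command
    y = C @ x                           # measurement
    xh = A @ xh + B @ u + L @ (y - C @ xh)   # observer update
    x = A @ x + B @ u                   # plant update
err = float(np.linalg.norm(x - xh))
```

Since the estimation error obeys e[k+1] = (A - L C) e[k], any gain placing the eigenvalues of A - L C inside the unit circle drives err to zero.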
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable because it provides an estimate of the spread, or uncertainty, in CME arrival time predictions caused by uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 of the 12 ensemble runs containing hits. Averaging the arrival time prediction for each of the twelve ensembles predicting hits and comparing with the actual arrival times gives an average absolute error of 8.20 hours, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread.
When the observed arrival is not within the predicted range, the tested CME input parameters can still be ruled out as the source of the prediction error. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME.
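The ensemble idea, stripped to its essentials, can be sketched with a toy constant-speed propagation model in place of WSA-ENLIL; the speed distribution and observed arrival below are invented:

```python
import random
import statistics

random.seed(42)

def arrival_time_hours(speed_kms, distance_km=1.5e8):
    # toy constant-speed Sun-to-Earth transit; ENLIL solves the full MHD problem
    return distance_km / speed_kms / 3600.0

# n = 48 input sets: CME speed measured as 900 +/- 100 km/s (hypothetical)
ensemble = [arrival_time_hours(random.gauss(900.0, 100.0)) for _ in range(48)]
lo, hi = min(ensemble), max(ensemble)
mean_arrival = statistics.mean(ensemble)
observed = 48.0                         # hypothetical observed arrival (hours)
abs_error = abs(mean_arrival - observed)
hit_in_range = lo <= observed <= hi     # is the observation inside the spread?
```

The spread [lo, hi] is the "range of ensemble arrival time predictions" against which observed arrivals are scored in the abstract.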
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
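The core of null-space Monte Carlo, perturbing a calibrated parameter set along directions the observations cannot constrain, can be sketched for a toy underdetermined linear model standing in for the groundwater model:

```python
import numpy as np

rng = np.random.default_rng(0)
# underdetermined linear "model": 3 observations, 6 parameters
J = rng.normal(size=(3, 6))             # sensitivity (Jacobian) matrix
p_cal = rng.normal(size=6)              # a calibrated parameter set
y_cal = J @ p_cal                       # the observations it reproduces

# SVD gives a basis for the null space: directions the data cannot see
U, s, Vt = np.linalg.svd(J)
null_basis = Vt[3:]                     # 3 rows spanning the null space

pred = lambda p: p.sum()                # hypothetical prediction of interest
samples = []
for _ in range(200):
    p = p_cal + null_basis.T @ rng.normal(size=3)
    assert np.allclose(J @ p, y_cal)    # perturbed set is still calibrated
    samples.append(pred(p))
spread = max(samples) - min(samples)    # predictive uncertainty from the null space
```

In the nonlinear setting of the paper each perturbed set also gets a cheap recalibration step; the linear case above shows why null-space moves leave the fit untouched while the prediction varies.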
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
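The max-min margin estimation described above can be sketched for a one-parameter toy model with two validation data sets; the data and error limit are hypothetical, and a grid search replaces the paper's optimizer:

```python
import numpy as np

# two validation data sets for the toy model y = p * t
t1, y1 = np.array([1.0, 2.0, 3.0]), np.array([2.1, 3.9, 6.2])
t2, y2 = np.array([1.0, 2.0]), np.array([1.8, 4.3])
limit = 0.5                              # admissible absolute prediction error

def margins(p):
    """Requirement-compliance margins; positive means the requirement holds."""
    m1 = limit - np.max(np.abs(y1 - p * t1))
    m2 = limit - np.max(np.abs(y2 - p * t2))
    return m1, m2

# estimate: the parameter whose *smallest* margin is as large as possible
grid = np.linspace(1.5, 2.5, 2001)
best = max(grid, key=lambda p: min(margins(p)))
# the set of parameters complying with all requirements (bounds the uncertainty)
feasible = [p for p in grid if min(margins(p)) >= 0]
```

Here the feasible set plays the role of the paper's compliant parameter set, and its extent characterizes the uncertainty of the max-min estimate.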
Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan
2017-05-08
As one of fracture critical components of an aircraft engine, accurate life prediction of a turbine blade to disk attachment is significant for ensuring the engine structural integrity and reliability. Fatigue failure of a turbine blade is often caused under multiaxial cyclic loadings at high temperatures. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants will be needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data of GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both of the proposed damage parameters and Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with its field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but effective for both the FS and SWT models.
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.
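A one-at-a-time sensitivity screen of the kind reported here can be sketched with a crude stand-in for the dispersion model; the concentration function, baseline values, and parameter ranges below are all invented for illustration:

```python
import math

def toy_concentration(emission_rate, source_height, wind_speed):
    """Crude ground-level concentration proxy; a stand-in for a full
    dispersion model such as ADMS, not its actual formulation."""
    return emission_rate / wind_speed * math.exp(-0.5 * source_height**2 / 100.0)

baseline = dict(emission_rate=1e4, source_height=2.0, wind_speed=5.0)
ranges = {"emission_rate": (5e3, 2e4),
          "source_height": (1.0, 4.0),
          "wind_speed": (2.0, 10.0)}

# one-at-a-time sensitivity: relative output swing over each input range
base_out = toy_concentration(**baseline)
sensitivity = {}
for name, (lo, hi) in ranges.items():
    outs = [toy_concentration(**{**baseline, name: v}) for v in (lo, hi)]
    sensitivity[name] = (max(outs) - min(outs)) / base_out
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

The ranking is what prioritizes parameter adjustment during calibration: effort goes first to the inputs at the top of `ranked`.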
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Parameter Selection Methods in Inverse Problem Formulation
2010-11-03
The parameter selection methods are illustrated with two examples: a) a recently developed in-host model for HIV dynamics, which has been successfully validated with clinical data and used for prediction [4, 8]; and b) a global model for the reaction of the cardiovascular system to an ergometric workload.
Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking
Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.
2014-01-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction.
For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438
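Muscle synergy extraction is commonly done by non-negative matrix factorization of the EMG matrix; the sketch below applies Lee-Seung multiplicative updates to synthetic "EMG" built from two known synergies (all signals invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "EMG": 8 muscles driven by 2 underlying synergy signals
t = np.linspace(0.0, 1.0, 200)
H_true = np.vstack([np.abs(np.sin(2 * np.pi * t)),
                    np.abs(np.cos(2 * np.pi * t))])   # 2 x 200 synergy controls
W_true = rng.uniform(0.2, 1.0, (8, 2))                # muscle weightings
emg = W_true @ H_true + 0.01 * rng.uniform(size=(8, 200))

def nmf(V, k, n_iter=500):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F."""
    r = np.random.default_rng(0)
    W = r.uniform(0.1, 1.0, (V.shape[0], k))
    H = r.uniform(0.1, 1.0, (k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

W, H = nmf(emg, k=2)
recon_err = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

The k rows of H are the synergy controls; using them in place of 44 independent excitations is the dimensionality reduction the study exploits.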
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
Performance of ANFIS versus MLP-NN dissolved oxygen prediction models in water quality monitoring.
Najah, A; El-Shafie, A; Karim, O A; El-Shafie, Amr H
2014-02-01
We discuss the accuracy and performance of the adaptive neuro-fuzzy inference system (ANFIS) in training and prediction of dissolved oxygen (DO) concentrations. The model was used to analyze historical data generated through continuous monitoring of water quality parameters at several stations on the Johor River to predict DO concentrations. Four water quality parameters were selected for ANFIS modeling, including temperature, pH, nitrate (NO3) concentration, and ammoniacal nitrogen concentration (NH3-NL). Sensitivity analysis was performed to evaluate the effects of the input parameters. The inputs with the greatest effect were those related to oxygen content (NO3) or oxygen demand (NH3-NL). Temperature was the parameter with the least effect, whereas pH provided the lowest contribution to the proposed model. To evaluate the performance of the model, three statistical indices were used: the coefficient of determination (R²), the mean absolute prediction error, and the correlation coefficient. The performance of the ANFIS model was compared with an artificial neural network model. The ANFIS model was capable of providing greater accuracy, particularly in the case of extreme events.
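The three statistical indices used for model evaluation follow directly from their definitions; the dissolved-oxygen values below are invented for illustration:

```python
import math

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def mean_abs_error(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def correlation(obs, pred):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

do_obs = [6.1, 5.8, 7.2, 6.5, 5.0]     # hypothetical DO readings (mg/L)
do_pred = [6.0, 5.9, 7.0, 6.6, 5.2]    # hypothetical model output
```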
A comparison of different functions for predicted protein model quality assessment.
Li, Juan; Fang, Huisheng
2016-07-01
In protein structure prediction, a considerable number of models are usually produced by either the Template-Based Method (TBM) or the ab initio prediction. The purpose of this study is to find the critical parameter in assessing the quality of the predicted models. A non-redundant template library was developed and 138 target sequences were modeled. The target sequences were all distant from the proteins in the template library and were aligned with template library proteins on the basis of the transformation matrix. The quality of each model was first assessed with QMEAN and its six parameters, which are C_β interaction energy (C_beta), all-atom pairwise energy (PE), solvation energy (SE), torsion angle energy (TAE), secondary structure agreement (SSA), and solvent accessibility agreement (SAE). Finally, the alignment score (score) was also used to assess the quality of model. Hence, a total of eight parameters (i.e., QMEAN, C_beta, PE, SE, TAE, SSA, SAE, score) were independently used to assess the quality of each model. The results indicate that SSA is the best parameter to estimate the quality of the model.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of catchments, DHM is still subject to uncertainties inherent in model structure, parameters, and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method, incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm), can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach that incorporates the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes.
The hindcasting experiments are implemented to evaluate impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach could be stable with limited ensemble members and viable for practical uses.
Predicting loop–helix tertiary structural contacts in RNA pseudoknots
Cao, Song; Giedroc, David P.; Chen, Shi-Jie
2010-01-01
Tertiary interactions between loops and helical stems play critical roles in the biological function of many RNA pseudoknots. However, quantitative predictions for RNA tertiary interactions remain elusive. Here we report a statistical mechanical model for the prediction of noncanonical loop–stem base-pairing interactions in RNA pseudoknots. Central to the model is the evaluation of the conformational entropy for the pseudoknotted folds with defined loop–stem tertiary structural contacts. We develop an RNA virtual bond-based conformational model (Vfold model), which permits a rigorous computation of the conformational entropy for a given fold that contains loop–stem tertiary contacts. With the entropy parameters predicted from the Vfold model and the energy parameters for the tertiary contacts as inserted parameters, we can then predict the RNA folding thermodynamics, from which we can extract the tertiary contact thermodynamic parameters from theory–experimental comparisons. These comparisons reveal a contact enthalpy (ΔH) of −14 kcal/mol and a contact entropy (ΔS) of −38 cal/mol/K for a protonated C+•(G–C) base triple at pH 7.0, and (ΔH = −7 kcal/mol, ΔS = −19 cal/mol/K) for an unprotonated base triple. Tests of the model for a series of pseudoknots show good theory–experiment agreement. Based on the extracted energy parameters for the tertiary structural contacts, the model enables predictions for the structure, stability, and folding pathways for RNA pseudoknots with known or postulated loop–stem tertiary contacts from the nucleotide sequence alone. PMID:20100813
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
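The idea of choosing experiments that probe complementary aspects of a network can be illustrated with a toy greedy D-optimal design: repeatedly pick the candidate experiment whose (linearized) sensitivity row most increases the determinant of the Fisher information, and hence most shrinks joint parameter uncertainty. This is a generic sketch, not the authors' algorithm; the two-parameter model, candidate sensitivity rows and prior weight are made up for illustration.

```python
def det2(m):
    # determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def fisher(sens_rows, prior=0.01, sigma=1.0):
    # Fisher information F = prior*I + J^T J / sigma^2 for a 2-parameter model;
    # the small prior keeps F invertible before any experiment is chosen.
    f = [[prior, 0.0], [0.0, prior]]
    for s in sens_rows:
        for i in range(2):
            for j in range(2):
                f[i][j] += s[i] * s[j] / sigma**2
    return f

def greedy_d_optimal(candidates, n_pick):
    # Greedily add the candidate experiment (a sensitivity row d(output)/d(theta))
    # that maximizes det(F), i.e. minimizes the posterior uncertainty volume.
    chosen, remaining = [], list(candidates)
    for _ in range(n_pick):
        best = max(remaining, key=lambda s: det2(fisher(chosen + [s])))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical candidate experiments, one sensitivity row each
cands = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.5, 0.5)]
picked = greedy_d_optimal(cands, 2)
```

Note that the second pick is not the individually most informative remaining row but the one complementary to the first, which is exactly the behaviour the abstract describes.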
Application of a time-magnitude prediction model for earthquakes
NASA Astrophysics Data System (ADS)
An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He
2007-06-01
In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process of strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computed values of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China and at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.
Hsu, Ling-Yuan; Chen, Tsung-Lin
2012-11-13
This paper presents a vehicle dynamics prediction system, which consists of a sensor fusion system and a vehicle parameter identification system. The sensor fusion system can obtain the six degree-of-freedom vehicle dynamics and two road angles without using a vehicle model. The vehicle parameter identification system uses the vehicle dynamics from the sensor fusion system to identify ten vehicle parameters in real time, including vehicle mass, moment of inertia, and road friction coefficients. With these two systems, future vehicle dynamics are predicted by using a vehicle dynamics model, obtained from the parameter identification system, to propagate the current vehicle state values, obtained from the sensor fusion system, forward in time. Compared with most existing work in this field, the proposed approach improves prediction accuracy both by incorporating more vehicle dynamics into the prediction system and by using on-line identification to minimize vehicle modeling errors. Simulation results show that the proposed method successfully predicts the vehicle dynamics in a left-hand turn event and a rollover event. The prediction inaccuracy is 0.51% in the left-hand turn event and 27.3% in the rollover event.
NASA Astrophysics Data System (ADS)
Zhu, Shun-Peng; Huang, Hong-Zhong; Li, Haiqing; Sun, Rui; Zuo, Ming J.
2011-06-01
Based on ductility exhaustion theory and the generalized energy-based damage parameter, a new viscosity-based life prediction model is introduced to account for mean strain/stress effects in the low cycle fatigue regime. The loading waveform parameters and cyclic hardening effects are also incorporated within this model. It is assumed that damage accrues by means of viscous flow, and that ductility consumption is related only to plastic strain and creep strain under high temperature low cycle fatigue conditions. In the developed model, dynamic viscosity is used to describe the flow behavior. The model provides a better prediction of Superalloy GH4133's fatigue behavior than Goswami's ductility model and the generalized damage parameter. Moreover, under non-zero mean strain conditions, the proposed model provides more accurate predictions of Superalloy GH4133's fatigue behavior than it does with zero mean strains.
SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesteruk, M; Riesterer, O; Veit-Haibach, P
2016-06-15
Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) were used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for three perfusion maps: blood volume, blood flow and mean transit time. The variability of the texture parameters was tested with respect to non-standardizable perfusion computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). The parameter with the highest ICC in each correlated group of parameters (inter-parameter Spearman correlations) was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer with respect to potentially non-standardizable factors, after correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p=0.03) with a concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p<0.1). The new model showed a higher concordance index than the tumor volume model (CI=0.68). Conclusion: The preselection of variables in the robustness study allowed building a predictive radiomics-based model of tumor control in HNC despite a small patient cohort. This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich and by a research grant from Merck (Schweiz) AG.
An approach to adjustment of relativistic mean field model parameters
NASA Astrophysics Data System (ADS)
Bayram, Tuncay; Akkoyun, Serkan
2017-09-01
The Relativistic Mean Field (RMF) model, with a small number of adjusted parameters, is a powerful tool for correct predictions of various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters using experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed to improve the RMF model parameters. In particular, the ability of the ANN method to capture the relations between the RMF model parameters and its predictions for the binding energies (BEs) of 58Ni and 208Pb was found to be in agreement with literature values.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit to the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples from the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. Different levels of modeling uncertainty and complexity are represented through three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
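Treatment 3) can be made concrete with a minimal sketch (not the paper's Transitional MCMC): a random-walk Metropolis sampler that jointly updates one model parameter and the uncertain prediction-error standard deviation. The linear model, flat priors and proposal widths are illustrative assumptions, not taken from the study.

```python
import math, random

def log_likelihood(k, log_sigma, xs, ys):
    # Gaussian likelihood with an uncertain prediction-error std,
    # sampled in log space so sigma stays positive.
    sigma = math.exp(log_sigma)
    ll = 0.0
    for x, y in zip(xs, ys):
        r = y - k * x
        ll += -0.5 * (r / sigma) ** 2 - math.log(sigma)
    return ll

def metropolis(xs, ys, n_iter=5000, seed=1):
    rng = random.Random(seed)
    k, log_sigma = 1.0, 0.0            # starting point
    ll = log_likelihood(k, log_sigma, xs, ys)
    samples = []
    for _ in range(n_iter):
        k_new = k + rng.gauss(0, 0.1)
        ls_new = log_sigma + rng.gauss(0, 0.1)
        ll_new = log_likelihood(k_new, ls_new, xs, ys)
        if math.log(rng.random()) < ll_new - ll:   # flat priors assumed
            k, log_sigma, ll = k_new, ls_new, ll_new
        samples.append((k, math.exp(log_sigma)))
    return samples

# Synthetic data: true stiffness-like parameter k = 2, noise std = 0.5
rng = random.Random(0)
xs = [i * 0.1 for i in range(1, 51)]
ys = [2.0 * x + rng.gauss(0, 0.5) for x in xs]
samples = metropolis(xs, ys)
burn = samples[len(samples) // 2 :]                # discard burn-in
k_mean = sum(s[0] for s in burn) / len(burn)
```

With enough data the marginal for k concentrates near the true value while the sampled error std absorbs the observation noise, which is the robustness property the comparison in the abstract examines.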
NASA Astrophysics Data System (ADS)
Patnaik, S.; Biswal, B.; Sharma, V. C.
2017-12-01
River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the `universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for calibration. Flow prediction in ungauged basins thus requires a dynamic `zero-parameter' hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters; a regionalization-based zero-parameter dynamic hydrological model, however, is not `universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs, with the help of a decay function and the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments; the median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements are needed before the model can be used for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results here seem to suggest that the dynamic zero-parameter Budyko model is `universal'. In other words, natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs; we thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
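The Nash-Sutcliffe efficiency used to score the model is simple to compute: it compares the model's squared errors against those of the mean-flow benchmark. A minimal sketch with made-up flow values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit,
    0 means no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical daily flows (e.g. m^3/s), for illustration only
obs = [2.1, 3.5, 4.0, 2.8, 3.1]
sim = [2.0, 3.2, 4.4, 3.0, 2.9]
score = nse(obs, sim)
```

A median NSE of 0.4, as reported above, means the model explains substantially more variance than the mean benchmark but is still far from the near-1 values expected of a calibrated model.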
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite using only spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test against independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data alone. The results demonstrate that the value of local data depends strongly on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
NASA Astrophysics Data System (ADS)
Gronewold, A.; Alameddine, I.; Anderson, R. M.
2009-12-01
Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
Wieske, Luuk; Witteveen, Esther; Verhamme, Camiel; Dettling-Ihnenfeldt, Daniela S; van der Schaaf, Marike; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2014-01-01
An early diagnosis of Intensive Care Unit-acquired weakness (ICU-AW) using muscle strength assessment is not possible in most critically ill patients. We hypothesized that development of ICU-AW can be predicted reliably two days after ICU admission, using patient characteristics, early available clinical parameters, laboratory results and use of medication as predictors. Newly admitted ICU patients mechanically ventilated for ≥2 days were included in this prospective observational cohort study. Manual muscle strength was measured according to the Medical Research Council (MRC) scale when patients were awake and attentive. ICU-AW was defined as an average MRC score <4. A prediction model was developed by selecting predictors from an a priori defined set of candidate predictors based on known risk factors. The discriminative performance of the prediction model was evaluated, validated internally, and compared to the APACHE IV and SOFA scores. Of 212 included patients, 103 developed ICU-AW. Highest lactate level, treatment with any aminoglycoside in the first two days after admission, and age were selected as predictors. The area under the receiver operating characteristic curve of the prediction model was 0.71 after internal validation. The new prediction model improved discrimination compared to the APACHE IV and SOFA scores. The new early prediction model for ICU-AW, using a set of 3 easily available parameters, has fair discriminative performance. This model needs external validation.
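The area under the ROC curve reported above (0.71) can be read as the probability that a randomly chosen patient who developed ICU-AW receives a higher predicted risk than one who did not. A minimal sketch of that pairwise computation, with hypothetical labels and scores:

```python
def auc(labels, scores):
    """AUC as P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical data: 1 = developed ICU-AW, scores = predicted risk
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts the reported 0.71 in the "fair discrimination" range the authors describe.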
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions as predicted by numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model, and the statistical indices point to them as the best alternatives for mimicking the RWU predictions of the physical model.
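The Feddes-type transpiration reduction function referred to above is piecewise linear in the soil pressure head. A minimal sketch with illustrative threshold heads (the actual values are crop-specific and are not taken from this study):

```python
def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-5.0, h4=-80.0):
    """Feddes water-stress reduction factor alpha(h) for pressure head h (m).

    Uptake is zero near saturation (h >= h1) and beyond wilting (h <= h4),
    optimal for h2 >= h >= h3, and linear in the transition ranges.
    The threshold heads used here are illustrative assumptions.
    """
    if h >= h1 or h <= h4:
        return 0.0
    if h > h2:                       # too wet: oxygen stress
        return (h1 - h) / (h1 - h2)
    if h > h3:                       # optimal range
        return 1.0
    return (h - h4) / (h3 - h4)      # too dry: water stress

# Actual uptake at depth z is then S(z) = feddes_alpha(h(z)) * S_potential(z).
```

The empirical models compared in the study differ precisely in how this reduction factor is defined and how the potential uptake is distributed over depth.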
McBride, Devin W.; Rodgers, Victor G. J.
2013-01-01
The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters and a mathematical expression is developed to directly predict the activity coefficients of free-solvent, for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent prediction of the osmotic pressure of concentrated and crowded globular proteins in aqueous solutions up to near-saturation concentrations. Thus, this model uses only the independently determined, physically realizable quantities: mole fraction, solvent accessible surface area, and ion binding, in its prediction. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to the calculated activity coefficients of free-solvent based on osmotic pressure data. It is observed that the predicted activity coefficients are increasingly dependent on the solute-solvent parameters as the protein concentration increases to near-saturation concentrations. PMID:24324733
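For orientation, the connection between osmotic pressure and the solvent activity coefficient used for the comparison above follows the standard thermodynamic relation ln a_w = -ΠV̄_w/(RT) (this is the textbook relation, not the paper's free-solvent model itself). A minimal sketch:

```python
import math

R = 8.314        # gas constant, J/(mol*K)
V_W = 1.8e-5     # molar volume of water, m^3/mol (approximate)

def solvent_activity(osmotic_pressure_pa, temp_k=298.15):
    """Solvent (water) activity from osmotic pressure: ln a_w = -pi*V_w/(R*T)."""
    return math.exp(-osmotic_pressure_pa * V_W / (R * temp_k))

def solvent_activity_coefficient(osmotic_pressure_pa, x_solvent, temp_k=298.15):
    # gamma_w = a_w / x_w: deviation of the free solvent from ideality
    return solvent_activity(osmotic_pressure_pa, temp_k) / x_solvent
```

This is how "calculated activity coefficients of free-solvent based on osmotic pressure data" can be obtained for comparison with a predictive model; for the predictions themselves the paper's model additionally uses solvent accessible surface area and ion binding.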
Zhou, Weichen; Ma, Yanyun; Zhang, Jun; Hu, Jingyi; Zhang, Menghan; Wang, Yi; Li, Yi; Wu, Lijun; Pan, Yida; Zhang, Yitong; Zhang, Xiaonan; Zhang, Xinxin; Zhang, Zhanqing; Zhang, Jiming; Li, Hai; Lu, Lungen; Jin, Li; Wang, Jiucun; Yuan, Zhenghong; Liu, Jie
2017-11-01
Liver biopsy is the gold standard for assessing pathological features (e.g. inflammation grades) in hepatitis B virus-infected patients, although it is invasive and traumatic; meanwhile, several gene profiles of chronic hepatitis B (CHB) have been separately described in relatively small hepatitis B virus (HBV)-infected samples. We aimed to analyse correlations among inflammation grades, gene expression and clinical parameters (serum alanine aminotransferase, aspartate aminotransferase and HBV-DNA) in large-scale CHB samples, and to predict inflammation grades using clinical parameters and/or gene expression. We analysed gene expression with three clinical parameters in 122 CHB samples using an improved regression model. Principal component analysis and machine-learning methods including Random Forest, K-nearest neighbour and support vector machine were used for analysis and subsequent diagnostic models. Six normal samples were used to validate the predictive model. Significant genes related to clinical parameters were found to be enriched in the immune system, interferon-stimulated genes, regulation of cytokine production and anti-apoptosis, among others. A panel of these genes together with clinical parameters can effectively predict binary classifications of inflammation grade (area under the ROC curve [AUC]: 0.88, 95% confidence interval [CI]: 0.77-0.93), as validated by the normal samples. A panel with only clinical parameters was also valuable (AUC: 0.78, 95% CI: 0.65-0.86), indicating that a liquid biopsy method for detecting the pathology of CHB is possible. This is the first study to systematically elucidate the relationships among gene expression, clinical parameters and pathological inflammation grades in CHB, and to build models predicting inflammation grades from gene expression and/or clinical parameters. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.
2015-01-01
Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study three critical process parameters, spindle speed, plunge force, and welding speed, are considered key factors in determining the ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), the type of MFs, and the combination of four input variables (spindle speed, plunge force, welding speed, and EFI) on a MATLAB platform, where EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from the FSW process parameters. Comparing the ANFIS and ANN predictions, it was found that the optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of the UTS of FSW joints.
A comparative study of kinetic and connectionist modeling for shelf-life prediction of Basundi mix.
Ruhil, A P; Singh, R R B; Jain, D K; Patel, A A; Patil, G R
2011-04-01
A ready-to-reconstitute formulation of Basundi, a popular Indian dairy dessert, was subjected to storage at various temperatures (10, 25 and 40 °C), and deteriorative changes in the Basundi mix were monitored using quality indices such as pH, hydroxymethylfurfural (HMF), bulk density (BD) and insolubility index (II). The multiple regression equations and the Arrhenius functions that describe the temperature dependence of the four physico-chemical parameters were integrated to develop mathematical models for predicting the sensory quality of Basundi mix. A connectionist model, a multilayer feed-forward neural network with a back-propagation algorithm, was also developed for predicting the storage life of the product using the artificial neural network (ANN) toolbox of MATLAB. The quality indices served as the input parameters, whereas the output parameters were the sensorily evaluated flavour and total sensory score. A total of 140 observations were used, and prediction performance was judged on the basis of percent root mean square error. The results obtained from the two approaches were compared. Relatively lower magnitudes of percent root mean square error for both sensory parameters indicated that the connectionist models fitted better than the kinetic models for predicting storage life.
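The kinetic half of such a study rests on Arrhenius temperature dependence of a deterioration rate. The sketch below shows the core arithmetic with assumed constants (A, Ea and the first-order decay form are illustrative, not the fitted Basundi-mix values).

```python
import math

# Hedged sketch of Arrhenius-based shelf-life kinetics: a first-order
# quality index decaying at a temperature-dependent rate
#   k(T) = A * exp(-Ea / (R * T)).
# A and Ea are assumed numbers, not the study's fitted constants.
R = 8.314          # gas constant, J/(mol K)
A = 5.0e8          # pre-exponential factor, 1/day (assumed)
Ea = 6.0e4         # activation energy, J/mol (assumed)

def rate(T_celsius):
    """Arrhenius rate constant at the given storage temperature."""
    T = T_celsius + 273.15
    return A * math.exp(-Ea / (R * T))

def quality(q0, T_celsius, days):
    """First-order decay q(t) = q0 * exp(-k t)."""
    return q0 * math.exp(-rate(T_celsius) * days)
```

Usage: `quality(100.0, 40.0, 30)` gives the predicted quality index after 30 days at 40 °C; the three storage temperatures in the abstract would pin down A and Ea by regression of ln k against 1/T.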
NASA Astrophysics Data System (ADS)
Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael
2014-05-01
Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as for watershed management, but it is largely unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, a process-based, semi-distributed hydrological water quality model, was applied in two mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both dynamics and balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration improved model performance at internal sites and decreased parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated against continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, than calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration.
However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global search and Bayesian inference schemes.
Thermodynamic characterization of tandem mismatches found in naturally occurring RNA
Christiansen, Martha E.; Znosko, Brent M.
2009-01-01
Although all sequence symmetric tandem mismatches and some sequence asymmetric tandem mismatches have been thermodynamically characterized and a model has been proposed to predict the stability of previously unmeasured sequence asymmetric tandem mismatches [Christiansen,M.E. and Znosko,B.M. (2008) Biochemistry, 47, 4329–4336], experimental thermodynamic data for frequently occurring tandem mismatches is lacking. Since experimental data is preferred over a predictive model, the thermodynamic parameters for 25 frequently occurring tandem mismatches were determined. These new experimental values, on average, are 1.0 kcal/mol different from the values predicted for these mismatches using the previous model. The data for the sequence asymmetric tandem mismatches reported here were then combined with the data for 72 sequence asymmetric tandem mismatches that were published previously, and the parameters used to predict the thermodynamics of previously unmeasured sequence asymmetric tandem mismatches were updated. The average absolute difference between the measured values and the values predicted using these updated parameters is 0.5 kcal/mol. This updated model improves the prediction for tandem mismatches that were predicted rather poorly by the previous model. This new experimental data and updated predictive model allow for more accurate calculations of the free energy of RNA duplexes containing tandem mismatches, and, furthermore, should allow for improved prediction of secondary structure from sequence. PMID:19509311
Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.
2016-01-01
Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). 
Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calcaterra, J.R.; Johnson, W.S.; Neu, R.W.
1997-12-31
Several methodologies have been developed to predict the lives of titanium matrix composites (TMCs) subjected to thermomechanical fatigue (TMF). This paper reviews and compares five life prediction models. Three of the models, developed at NASA-LaRC and Wright Laboratories, are based on a single parameter, the fiber stress in the load-carrying, or 0°, direction. The two other models, both developed at Wright Laboratories, are multi-parameter models. These can account for long-term damage, which is beyond the scope of the single-parameter models, but this benefit is offset by the additional complexity of the methodologies. Each of the methodologies was used to model data generated at NASA-LeRC, Wright Laboratories, and Georgia Tech for the SCS-6/Timetal 21-S material system. VISCOPLY, a micromechanical stress analysis code, was used to determine the constituent stress state for each test and was used with each model to maintain consistency. The predictive capabilities of the models are compared, and the ability of each model to accurately predict the responses of tests dominated by differing damage mechanisms is addressed.
Model identification using stochastic differential equation grey-box models in diabetes.
Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik
2013-03-01
The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
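The SDE idea above can be made concrete with an Euler-Maruyama simulation. The toy model below (a mean-reverting "glucose" process with assumed rate, baseline, and noise level) is not the paper's T1DM model; it only illustrates the diffusion term that carries the system noise an SDE grey-box model separates from measurement noise.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of an SDE:
#   dG = -a * (G - Gb) dt + sigma dW
# a toy mean-reverting glucose process (assumed parameters), not the
# identifiable T1DM model of the paper.
rng = np.random.default_rng(42)
a, Gb, sigma = 0.1, 5.0, 0.3          # assumed rate, baseline, noise level
dt, n_steps, n_paths = 0.1, 600, 200

G = np.full(n_paths, 9.0)             # initial glucose, mmol/L (assumed)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    G = G + (-a * (G - Gb)) * dt + sigma * dW

mean_G = float(G.mean())              # ensemble mean after relaxation
```

After many relaxation times the ensemble mean settles near the baseline Gb while the diffusion term keeps a stationary spread of roughly sigma/sqrt(2a); in a grey-box fit, sigma is estimated jointly with the drift parameters.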
Handling the unknown soil hydraulic parameters in data assimilation for unsaturated flow problems
NASA Astrophysics Data System (ADS)
Lange, Natascha; Erdal, Daniel; Neuweiler, Insa
2017-04-01
Model predictions of flow in the unsaturated zone require the soil hydraulic parameters. However, these parameters cannot be determined easily in applications, in particular if observations are indirect and cover only a small range of possible states. Correlation of parameters, or their correlation in the range of states that are observed, is a problem, as different parameter combinations may reproduce approximately the same measured water content. In field campaigns this problem can be mitigated by adding more measurement devices. Often, observation networks are designed to feed models for long-term prediction purposes (e.g. for weather forecasting). A popular way of making predictions with this kind of observation is data assimilation, for example with the ensemble Kalman filter (Evensen, 1994). These methods can be used for parameter estimation if the unknown parameters are included in the state vector and updated along with the model states. Given the difficulties related to estimation of the soil hydraulic parameters in general, it is questionable, though, whether these methods can really be used for parameter estimation under natural conditions. Therefore, we investigate the ability of the ensemble Kalman filter to estimate the soil hydraulic parameters. We use synthetic identical-twin experiments to guarantee full knowledge of the model and the true parameters. We use the van Genuchten model to describe the soil water retention and relative permeability functions. This model is unfortunately prone to the above-mentioned pseudo-correlations of parameters. Therefore, we also test the simpler Russo-Gardner model, which is less affected by that problem, in our experiments. The total number of unknown parameters is varied by considering different layers of soil. In addition, we study the influence of the parameter updates on the water content predictions. We test different iterative filter approaches and compare different observation strategies for parameter identification.
Considering heterogeneous soils, we discuss the representativeness of different observation types to be used for the assimilation. Reference: Evensen, G. (1994). Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5), 10143-10162.
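The state-augmentation trick described above can be sketched in a few lines. The identical-twin toy below estimates a single drift parameter k of a linear model by appending it to the state vector and running a perturbed-observation ensemble Kalman filter; the linear dynamics stand in for the Richards-equation physics, and all numbers are assumptions.

```python
import numpy as np

# Identical-twin sketch of EnKF parameter estimation: the unknown drift
# parameter k is appended to the state vector [x, k] and updated jointly
# with the state.  Linear toy dynamics x' = x + k*dt stand in for the
# unsaturated-flow model; all numbers are illustrative.
rng = np.random.default_rng(1)
k_true, dt, obs_std = 0.8, 0.1, 0.05
n_ens, n_steps = 100, 50

x_truth = 0.0
ens = np.empty((n_ens, 2))
ens[:, 0] = 0.0                               # state x
ens[:, 1] = rng.normal(0.0, 1.0, n_ens)       # prior ensemble for k
prior_spread = float(ens[:, 1].std())

for _ in range(n_steps):
    x_truth += k_true * dt
    obs = x_truth + rng.normal(0.0, obs_std)  # noisy observation of x
    # forecast: each member propagates with its own parameter
    ens[:, 0] += ens[:, 1] * dt
    # analysis: Kalman gain from the ensemble covariance
    P = np.cov(ens.T)
    H = np.array([1.0, 0.0])                  # we observe x only
    K = P @ H / (H @ P @ H + obs_std ** 2)
    innov = obs + rng.normal(0.0, obs_std, n_ens) - ens[:, 0]  # perturbed obs
    ens += np.outer(innov, K)

k_est = float(ens[:, 1].mean())
post_spread = float(ens[:, 1].std())
```

Because the forecast couples x and k in the ensemble covariance, observing only x still pulls k toward its true value while shrinking its spread; the pseudo-correlations discussed in the abstract appear when two parameters produce nearly identical observable states.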
Generalized Polynomial Chaos Based Uncertainty Quantification for Planning MRgLITT Procedures
Fahrenholtz, S.; Stafford, R. J.; Maier, F.; Hazle, J. D.; Fuentes, D.
2014-01-01
Purpose A generalized polynomial chaos (gPC) method is used to incorporate constitutive parameter uncertainties within the Pennes representation of bioheat transfer phenomena. The stochastic temperature predictions of the mathematical model are critically evaluated against MR thermometry data for planning MR-guided Laser Induced Thermal Therapies (MRgLITT). Methods Pennes bioheat transfer model coupled with a diffusion theory approximation of laser tissue interaction was implemented as the underlying deterministic kernel. A probabilistic sensitivity study was used to identify parameters that provide the most variance in temperature output. Confidence intervals of the temperature predictions are compared to MR temperature imaging (MRTI) obtained during phantom and in vivo canine (n=4) MRgLITT experiments. The gPC predictions were quantitatively compared to MRTI data using probabilistic linear and temporal profiles as well as 2-D 60 °C isotherms. Results Within the range of physically meaningful constitutive values relevant to the ablative temperature regime of MRgLITT, the sensitivity study indicated that the optical parameters, particularly the anisotropy factor, created the most variance in the stochastic model's output temperature prediction. Further, within the statistical sense considered, a nonlinear model of the temperature and damage dependent perfusion, absorption, and scattering is captured within the confidence intervals of the linear gPC method. Multivariate stochastic model predictions using parameters with the dominant sensitivities show good agreement with experimental MRTI data. Conclusions Given parameter uncertainties and mathematical modeling approximations of the Pennes bioheat model, the statistical framework demonstrates conservative estimates of the therapeutic heating and has potential for use as a computational prediction tool for thermal therapy planning. PMID:23692295
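The projection step behind a gPC expansion can be shown compactly. In the sketch below a Gaussian-uncertain "conductivity" is propagated through a toy temperature map (a stand-in for the Pennes bioheat solver, with all numbers assumed) by projecting onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, and the gPC mean and variance are checked against brute-force Monte Carlo.

```python
import math
import numpy as np
import numpy.polynomial.hermite_e as He

# Hedged gPC sketch: k = mu_k + sd_k * xi with xi ~ N(0,1) is propagated
# through a toy temperature map T(k); coefficients come from projection
# onto probabilists' Hermite polynomials, c_j = E[T He_j(xi)] / j!.
def model(k):                     # toy steady temperature vs conductivity
    return 37.0 + 20.0 / (1.0 + k)

mu_k, sd_k, order = 0.5, 0.05, 4  # assumed mean, spread, gPC order
nodes, weights = He.hermegauss(12)
weights = weights / np.sqrt(2.0 * np.pi)   # normalise to the N(0,1) measure
samples = model(mu_k + sd_k * nodes)

coeffs = [(weights * samples * He.hermeval(nodes, [0.0] * j + [1.0])).sum()
          / math.factorial(j) for j in range(order + 1)]

gpc_mean = coeffs[0]                                   # E[T]
gpc_var = sum(math.factorial(j) * coeffs[j] ** 2       # Var[T], E[He_j^2]=j!
              for j in range(1, order + 1))

# brute-force Monte Carlo check of the same statistics
mc = model(np.random.default_rng(2).normal(mu_k, sd_k, 200_000))
```

For a smooth map like this, a low-order expansion reproduces the Monte Carlo mean and variance at a tiny fraction of the model evaluations, which is exactly why gPC is attractive for MRgLITT planning.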
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Estimates of the ionization association and dissociation constant (pKa) are vital to modeling the pharmacokinetic behavior of chemicals in vivo. Methodologies for the prediction of compound sequestration in specific tissues using partition coefficients require a parameter that ch...
Application of GA-SVM method with parameter optimization for landslide development prediction
NASA Astrophysics Data System (ADS)
Li, X. Z.; Kong, J. M.
2013-10-01
Prediction of the landslide development process is a perennial topic in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance; however, the performance strongly depends on the right selection of the SVM model parameters (C and γ). In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of southwest China as a case study. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that all four models have high prediction accuracy, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The accuracy of the multi-factor GA-SVM model is the highest, with the smallest RMSE (0.0009) and the largest RI (0.9992).
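The genetic-algorithm tuning loop can be sketched without an SVM library by swapping in closed-form kernel ridge regression (an assumed stand-in that shares the SVM's RBF-kernel hyperparameter): the GA evolves the (regularisation, kernel width) pair against a validation error on a synthetic displacement series.

```python
import numpy as np

# Hedged GA-tuning sketch: a tiny genetic algorithm evolves the
# (lambda, gamma) pair of an RBF kernel ridge regressor -- a closed-form
# stand-in for the SVM of the abstract -- on a synthetic displacement
# series.  All data and GA settings are illustrative.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 4.0, 80)
y = 0.5 * t ** 2 + 0.3 * np.sin(3 * t) + rng.normal(0, 0.05, t.size)
X = t[:, None]
tr, va = np.arange(0, 80, 2), np.arange(1, 80, 2)   # interleaved split

def rbf(A, B, gamma):
    return np.exp(-gamma * (A - B.T) ** 2)

def val_rmse(lam, gamma):
    K = rbf(X[tr], X[tr], gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])
    pred = rbf(X[va], X[tr], gamma) @ alpha
    return np.sqrt(np.mean((pred - y[va]) ** 2))

# genome = (log10 lambda, log10 gamma); fitness = validation RMSE
pop = rng.uniform([-6, -2], [0, 2], size=(20, 2))
history = []
for gen in range(15):
    fit = np.array([val_rmse(10 ** p[0], 10 ** p[1]) for p in pop])
    history.append(float(fit.min()))
    elite = pop[np.argsort(fit)[:10]]                # elitist selection
    pa = elite[rng.integers(0, 10, 10)]
    pb = elite[rng.integers(0, 10, 10)]
    children = (pa + pb) / 2 + rng.normal(0, 0.3, (10, 2))  # crossover+mutation
    pop = np.vstack([elite, children])

best_rmse = min(history)
```

With elitism the best fitness is non-increasing across generations; in the actual GA-SVM the same loop would wrap an SVM solver and the fitness would be cross-validated prediction error on the monitored displacement rates.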
Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R
2009-08-01
Falls are among the predominant causes of morbidity and mortality in elderly persons and occur frequently in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and with added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the recorded data. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification-tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%; prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as for parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis.
Adding sensory data improves the specificity of our test markedly.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study, published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurements performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence intervals and distributions of the model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. From the computational biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model prediction, calculated on the basis of the model parameter uncertainty, were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; the same was observed for other organs and tissues. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model.
It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
The significance of parameter uncertainties for the prediction of offshore pile driving noise.
Lippert, Tristan; von Estorff, Otto
2014-11-01
Due to the construction of offshore wind farms and their potential effect on marine wildlife, the numerical prediction of pile driving noise over long ranges has recently gained importance. In this contribution, a coupled finite element/wavenumber integration model for noise prediction is presented and validated by measurements. The ocean environment, especially the sea bottom, can only be characterized with limited accuracy in terms of input parameters for the numerical model at hand. Therefore the effect of these parameter uncertainties on the prediction of sound pressure levels (SPLs) in the water column is investigated by a probabilistic approach. In fact, a variation of the bottom material parameters by means of Monte Carlo simulations shows significant effects on the predicted SPLs. A sensitivity analysis of the model with respect to the single quantities is performed, as well as a global variation. Based on the latter, the probability distribution of the SPLs at an exemplary receiver position is evaluated and compared to measurements. The aim of this procedure is to develop a model that reliably predicts an interval for the SPLs by quantifying the degree of uncertainty of the SPLs with the MC simulations.
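The probabilistic step can be illustrated with a drastically simplified propagation proxy: instead of the coupled FE/wavenumber-integration model, the sketch below evaluates SPL(r) = SL - 20 log10(r) - alpha*r under Monte Carlo variation of an assumed bottom-dependent attenuation alpha, yielding a predicted SPL interval at one receiver.

```python
import numpy as np

# Hedged Monte Carlo sketch: a spreading-plus-attenuation proxy
#   SPL(r) = SL - 20*log10(r) - alpha*r
# stands in for the coupled FE/wavenumber-integration model; the
# uncertain bottom properties are lumped into alpha (assumed lognormal).
rng = np.random.default_rng(3)
SL = 210.0                 # source level, dB re 1 uPa at 1 m (assumed)
r = 1000.0                 # receiver range, m
alpha = rng.lognormal(mean=np.log(5e-3), sigma=0.4, size=10_000)  # dB/m
spl = SL - 20 * np.log10(r) - alpha * r

# 90% prediction interval and median SPL at the receiver
lo, med, hi = np.percentile(spl, [5, 50, 95])
```

The quantiles give exactly the kind of SPL interval the abstract aims for; in the full model each Monte Carlo draw would instead perturb the bottom sound speed, density, and attenuation fed to the propagation code.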
Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.
2015-01-01
Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
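The Morris method itself is compact enough to sketch. Below, r random one-at-a-time trajectories produce elementary effects per parameter of a toy three-parameter model (an assumed stand-in for the RFA solver, with parameter 0 dominant by construction), and the mean absolute effect mu* ranks importance.

```python
import numpy as np

# Morris elementary-effects screening on a toy model (a stand-in for the
# RFA simulator).  Each trajectory perturbs one parameter at a time by
# delta; mu* = mean(|elementary effect|) ranks parameter importance.
rng = np.random.default_rng(11)

def model(x):                         # toy model on [0, 1]^3; x0 dominates
    return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]

k, r, delta = 3, 40, 0.25
effects = np.zeros((r, k))
for i in range(r):
    x = rng.uniform(0.0, 1.0 - delta, k)   # keep x + delta inside [0, 1]
    base = model(x)
    for j in rng.permutation(k):           # random one-at-a-time order
        x_new = x.copy()
        x_new[j] += delta
        effects[i, j] = (model(x_new) - base) / delta
        x, base = x_new, model(x_new)

mu_star = np.abs(effects).mean(axis=0)     # Morris mu* importance measure
```

Because each trajectory costs only k+1 model runs, the method suits expensive simulators exactly as the abstract argues; the spread of the effects (sigma) additionally flags nonlinearity or interactions, as for parameter 1 here.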
Physical and numerical studies of a fracture system model
NASA Astrophysics Data System (ADS)
Piggott, Andrew R.; Elsworth, Derek
1989-03-01
Physical and numerical studies of transient flow in a model of discretely fractured rock are presented. The physical model is a thermal analogue to fractured media flow consisting of idealized disc-shaped fractures. The numerical model is used to predict the behavior of the physical model. The use of different insulating materials to encase the physical model allows the effects of differing leakage magnitudes to be examined. A procedure for determining appropriate leakage parameters is documented. These parameters are used in forward analysis to predict the thermal response of the physical model. Knowledge of the leakage parameters and of the temporal variation of boundary conditions are shown to be essential to an accurate prediction. Favorable agreement is illustrated between numerical and physical results. The physical model provides a data source for the benchmarking of alternative numerical algorithms.
NASA Astrophysics Data System (ADS)
Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.
2017-07-01
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 105 km2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region.
We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
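The collinearity problem that motivates PLS can be demonstrated with a minimal PLS1 (NIPALS-style) implementation. The synthetic data below, in which ten correlated covariates are driven by two latent factors that also drive a log-FIB-like response, are an assumption for illustration, not the Great Lakes data.

```python
import numpy as np

# Illustrative PLS1 sketch (NIPALS-style deflation): ten collinear
# covariates driven by two latent factors, a response driven by the same
# factors.  Synthetic data -- not the Great Lakes beach observations.
rng = np.random.default_rng(5)
n, p, n_comp = 120, 10, 3
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([1.5, -0.8]) + 0.1 * rng.normal(size=n)

Xm, ym = X.mean(0), y.mean()
Xc, yc = X - Xm, y - ym
W, P_, q = [], [], []
Xk, yk = Xc.copy(), yc.copy()
for _ in range(n_comp):
    w = Xk.T @ yk                         # covariance-maximising weight
    w /= np.linalg.norm(w)
    t = Xk @ w                            # score vector
    tt = t @ t
    p_vec = Xk.T @ t / tt                 # X loading
    q_k = yk @ t / tt                     # y loading
    Xk -= np.outer(t, p_vec)              # deflate X and y
    yk -= q_k * t
    W.append(w); P_.append(p_vec); q.append(q_k)

W, P_, q = np.array(W).T, np.array(P_).T, np.array(q)
B = W @ np.linalg.solve(P_.T @ W, q)      # regression coefficients
pred = Xc @ B + ym
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The number of components plays the role of the tuning parameter mentioned in the abstract; cross-validating it (and the exceedance decision threshold) replaces the manual covariate selection that ordinary regression requires.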
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of the mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and approximately predicted by the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
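The geometrical-optics reasoning above can be sketched numerically. In the large-grain limit the extinction efficiency tends to 2 (the extinction paradox), and a crude absorption estimate follows from Beer-Lambert attenuation over the mean chord of a sphere (4r/3). This is an illustrative sketch under those textbook assumptions, not the paper's GODM; the radius and bulk absorption coefficient are made-up values.

```python
import numpy as np

def q_abs_geometric(radius_um, alpha_per_um):
    """Absorption efficiency of a large, weakly absorbing sphere:
    fraction of geometrically incident light absorbed over the mean chord 4r/3."""
    return 1.0 - np.exp(-alpha_per_um * 4.0 * radius_um / 3.0)

r = 1.0        # grain radius in micrometres (illustrative)
alpha = 1e-3   # hypothetical bulk absorption coefficient per micrometre

q_abs = q_abs_geometric(r, alpha)
q_sct = 2.0 - q_abs   # large-particle limit: Q_ext -> 2, so Q_sct = Q_ext - Q_abs
```

For a weakly absorbing grain this reproduces the qualitative GODM-style result that scattering dominates and Q_sct stays close to 2.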
An evaluation of the predictive capabilities of CTRW and MRMT
NASA Astrophysics Data System (ADS)
Fiori, Aldo; Zarlenga, Antonio; Gotovac, Hrvoje; Jankovic, Igor; Cvetkovic, Vladimir; Dagan, Gedeon
2016-04-01
The prediction capability of two approximate models of non-Fickian transport in highly heterogeneous aquifers is checked by comparison with accurate numerical simulations, for mean uniform flow of velocity U. The two models considered are the MRMT (Multi-Rate Mass Transfer) and CTRW (Continuous Time Random Walk) models. Both circumvent the need to solve the flow and transport equations by using proxy models, which provide the breakthrough curve (BTC) μ(x,t) depending on a vector a of 5 unknown parameters. Although underlain by different conceptualisations, the two models have a similar mathematical structure. The proponents of the models suggest using field transport experiments at a small scale to calibrate a, toward predicting transport at larger scale. The strategy was tested with the aid of accurate numerical simulations in two and three dimensions from the literature. First, the 5 parameter values were calibrated by using the simulated μ at a control plane close to the injection one, and these same parameters were subsequently used for predicting μ at 10 further control planes. It is found that the two methods perform equally well, though the parameter identification is nonunique, with a large set of parameters providing similar fits. Also, errors in the determination of the mean Eulerian velocity may lead to significant shifts of the predicted BTC. It is found that the simulated BTCs satisfy Markovianity: they can be found as n-fold convolutions of a "kernel", in line with the models' main assumption.
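The Markovianity property mentioned above can be illustrated numerically: if the BTC at the first control plane acts as a travel-time kernel, the BTC at plane n is its n-fold convolution. A minimal sketch with a hypothetical exponential kernel (unit mean travel time per segment, not the paper's simulated data):

```python
import numpy as np

def nfold_btc(kernel, n, dt):
    """BTC at control plane n as the n-fold convolution of a single-segment kernel."""
    btc = kernel.copy()
    for _ in range(n - 1):
        btc = np.convolve(btc, kernel) * dt   # discrete approximation of the integral
    return btc

# Hypothetical exponential travel-time kernel with unit mean (illustrative only)
dt = 0.01
t = np.arange(0.0, 20.0, dt)
kernel = np.exp(-t)

btc3 = nfold_btc(kernel, 3, dt)
t3 = np.arange(len(btc3)) * dt
mean3 = (t3 * btc3).sum() / btc3.sum()   # mean arrival time at plane 3
```

For an exponential kernel the n-fold convolution is a Gamma(n) density, so the mean arrival time at plane 3 is three segment means, consistent with the Markovian picture.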
Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent
2014-01-01
Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (V_sat and P_base, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting V_sat and P_base with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based V_sat and P_base estimates were able to predict heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat at a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
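The fitting step described above can be sketched with a minimal expectation-maximization (EM) loop. The sample below fits a two-component 1D Gaussian mixture to made-up "flux profile" samples (two overlapping spots along one receiver axis); the real method works on 2D flux maps, and all numbers here are illustrative.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Minimal EM fit of a k-component 1D Gaussian mixture (illustration only)."""
    mu = np.percentile(x, np.linspace(25, 75, k))  # spread initial means over the data
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and spreads
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

# Hypothetical flux samples: two overlapping Gaussian spots on the receiver axis
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.3, 4000), rng.normal(1.5, 0.5, 6000)])
w, mu, sig = fit_gmm_1d(x)
```

EM recovers the two spot centers, widths, and relative intensities, which is exactly the parameter set one would store in the database the abstract mentions.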
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km² area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm³/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not.
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
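The null-space/solution-space split that drives this analysis can be sketched with a singular value decomposition of a sensitivity (Jacobian) matrix: directions of parameter space with nonzero singular values are informed by the data, the rest are not, and only the solution-space part of a prediction's sensitivity vector is constrained. The matrix and vectors below are made up for illustration.

```python
import numpy as np

# Hypothetical Jacobian of 3 observations w.r.t. 5 parameters; with fewer
# observations than parameters, two parameter directions lie in the null space.
J = np.array([
    [1.0, 0.5, 0.0, 0.2, 0.0],
    [0.0, 1.0, 0.3, 0.0, 0.1],
    [0.5, 0.0, 1.0, 0.1, 0.0],
])
_, s, Vt = np.linalg.svd(J)
rank = int((s > 1e-10).sum())
V_sol = Vt[:rank].T    # solution-space directions (informed by the data)
V_null = Vt[rank:].T   # null-space directions (not informed by the data)

# Sensitivity of one prediction to the parameters (made-up values): only the
# solution-space component of y is constrained by conditioning on observations.
y = np.array([0.4, 0.1, 0.0, 0.3, 0.2])
y_sol = V_sol @ (V_sol.T @ y)
y_null = V_null @ (V_null.T @ y)
null_fraction = np.linalg.norm(y_null) ** 2 / np.linalg.norm(y) ** 2
```

A large `null_fraction` flags a prediction whose uncertainty cannot be reduced by the existing data, which is precisely the situation the additional-data analysis in the abstract is designed to detect.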
Basal glycogenolysis in mouse skeletal muscle: in vitro model predicts in vivo fluxes
NASA Technical Reports Server (NTRS)
Lambeth, Melissa J.; Kushmerick, Martin J.; Marcinek, David J.; Conley, Kevin E.
2002-01-01
A previously published mammalian kinetic model of skeletal muscle glycogenolysis, consisting of literature in vitro parameters, was modified by substituting mouse-specific Vmax values. The model demonstrates that glycogen breakdown to lactate is under ATPase control. Our criterion for testing whether in vitro parameters could reproduce in vivo dynamics was the ability of the model to fit phosphocreatine (PCr) and inorganic phosphate (Pi) dynamic NMR data from ischemic basal mouse hindlimbs and predict biochemically assayed lactate concentrations. Fitting was accomplished by optimizing four parameters: the ATPase rate coefficient, the fraction of activated glycogen phosphorylase, and the equilibrium constants of creatine kinase and adenylate kinase (due to the absence of pH in the model). The optimized parameter values were physiologically reasonable, the resultant model fit the [PCr] and [Pi] timecourses well, and the model predicted the final measured lactate concentration. This result demonstrates that additional features of in vivo enzyme binding are not necessary for a quantitative description of glycogenolytic dynamics.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
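The decomposition behind MSEP_uncertain(X) is the standard identity "mean squared error = squared bias + variance", which holds exactly for sample statistics. A tiny simulation with a made-up ensemble of model variants (numbers are illustrative, not from any crop model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble for one prediction situation: uncertain structure,
# inputs and parameters produce a spread of predictions around a known truth.
truth = 10.0
predictions = 9.2 + rng.normal(0.0, 1.5, 5000)   # made-up variant predictions

bias_sq = (predictions.mean() - truth) ** 2      # squared-bias term (from hindcasts)
model_var = predictions.var()                    # model-variance term (from simulation)
msep_uncertain = ((predictions - truth) ** 2).mean()
```

Here the two terms sum exactly to the mean squared prediction error, mirroring how the abstract proposes estimating each term separately and combining them.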
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Kaori; Nagatsuma, Tsutomu; Reeves, Geoffrey D.
The Van Allen radiation belts surrounding the Earth are filled with MeV-energy electrons. This region poses ionizing radiation risks for spacecraft that operate within it, including those in geostationary orbit (GEO) and medium Earth orbit. In order to provide alerts of electron flux enhancements, 16 prediction models of the electron log-flux variation throughout the equatorial outer radiation belt as a function of the McIlwain L parameter were developed using the multivariate autoregressive model and Kalman filter. Measurements of omnidirectional 2.3 MeV electron flux from the Van Allen Probes mission as well as >2 MeV electrons from the GOES 15 spacecraft were used as the predictors. Furthermore, we selected model explanatory parameters from solar wind parameters, the electron log-flux at GEO, and geomagnetic indices. For the innermost region of the outer radiation belt, the electron flux is best predicted by using the Dst index as the sole input parameter. For the central to outermost regions, at L ≥ 4.8 and L ≥ 5.6, the electron flux is predicted most accurately by also including the solar wind velocity and then the dynamic pressure, respectively. The Dst index is the best overall single parameter for predicting at 3 ≤ L ≤ 6, while for the GEO flux prediction, the Kp index is better than Dst. Finally, a test calculation demonstrates that the model successfully predicts the timing and location of the flux maximum as much as 2 days in advance and that the electron flux decreases faster with time at higher L values, both model features consistent with the actually observed behavior.
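The autoregressive-plus-Kalman-filter machinery can be illustrated in its simplest form: a scalar AR(1) process standing in for the log-flux at one L shell, tracked by a scalar Kalman filter. The paper's models are multivariate with external drivers; everything below (coefficients, variances) is a made-up sketch of the filtering idea only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar AR(1) stand-in for log-flux dynamics, with noisy observations
phi, q_var, r_var = 0.95, 0.1, 0.5   # AR coefficient, process and observation variances
n = 500
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q_var))
y = x + rng.normal(0.0, np.sqrt(r_var), n)   # noisy "flux observations"

xf, P, estimates = 0.0, 1.0, []
for k in range(n):
    xf, P = phi * xf, phi ** 2 * P + q_var   # predict step
    K = P / (P + r_var)                      # Kalman gain
    xf = xf + K * (y[k] - xf)                # update with the observation
    P = (1.0 - K) * P
    estimates.append(xf)
estimates = np.asarray(estimates)

mse_raw = float(np.mean((y - x) ** 2))       # error of using raw observations
mse_kf = float(np.mean((estimates - x) ** 2))  # error of the filtered estimate
```

The filtered state tracks the true process much more closely than the raw observations, which is the benefit the forecasting models above exploit.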
Robust human body model injury prediction in simulated side impact crashes.
Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D
2016-01-01
This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total Human Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age correlated strongly with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 best crush-profile-matching simulations and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
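A Latin hypercube design like the one described above places exactly one sample in each equal-probability stratum of every parameter's range. A minimal sketch with hypothetical crash-parameter bounds (the ranges below are invented, not the study's):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """One sample per equal-probability stratum in every dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))      # stratum indices per dimension
    # shuffle the strata independently per dimension, then jitter within each stratum
    u = (rng.permuted(strata, axis=1).T + rng.uniform(size=(n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

# Hypothetical ranges: speed (km/h), impact location (mm), impact angle (deg),
# seat position (mm), occupant age (years)
bounds = [(30, 70), (-200, 200), (45, 135), (-50, 50), (20, 80)]
X = latin_hypercube(120, bounds)   # 120 crash simulations, as in the study's count
```

Compared with plain random sampling, this guarantees that each parameter's full range is covered even with only 120 runs.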
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of the Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
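The perturbed-observation ESMDA update has a compact form: over N_a iterations with inflation factors summing (in inverse) to one, each ensemble member is shifted by a Kalman-like gain applied to its perturbed data misfit. The sketch below applies it to a linear toy forward model; the matrix, noise level, and ensemble size are all illustrative, not the Yanqi-basin setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear toy forward model d = G m, observed with Gaussian noise
G = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [1.5, 0.5]])
m_true = np.array([1.0, -0.5])
obs_std = 0.1
d_obs = G @ m_true + rng.normal(0.0, obs_std, 3)

n_ens, n_assim = 200, 4
alpha = float(n_assim)                    # equal inflation: sum of 1/alpha is 1
M = rng.normal(0.0, 1.0, (2, n_ens))      # prior ensemble, N(0, I)
C_e = obs_std ** 2 * np.eye(3)

for _ in range(n_assim):
    D = G @ M                                        # predicted data per member
    dm = M - M.mean(axis=1, keepdims=True)
    dd = D - D.mean(axis=1, keepdims=True)
    C_md = dm @ dd.T / (n_ens - 1)                   # parameter-data covariance
    C_dd = dd @ dd.T / (n_ens - 1)                   # data covariance
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)     # Kalman-like gain
    pert = rng.normal(0.0, np.sqrt(alpha) * obs_std, (3, n_ens))
    M = M + K @ (d_obs[:, None] + pert - D)          # perturbed-observation update

m_post = M.mean(axis=1)
```

After the four assimilation steps the ensemble mean sits near the true parameters and the ensemble spread has collapsed toward the posterior uncertainty, mirroring the parameter-uncertainty reduction reported in the abstract.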
Inverse modeling with RZWQM2 to predict water quality
USDA-ARS?s Scientific Manuscript database
Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
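The core idea of weighting the objective function by input uncertainty can be sketched on a linear toy calibration problem: observations affected by uncertain pumping get large noise and therefore small weight. This is generic weighted least squares under invented numbers, not the paper's IUWLS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear calibration h = X @ beta + noise; half the observations carry much
# larger noise (standing in for periods with uncertain pumping data).
n = 400
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
beta_true = np.array([2.0, -1.0])
noise_sd = np.where(np.arange(n) < n // 2, 0.05, 1.0)   # second half: uncertain inputs
h = X @ beta_true + rng.normal(0.0, noise_sd)

# OLS treats all observations equally; weighted LS downweights the uncertain ones
beta_ols = np.linalg.lstsq(X, h, rcond=None)[0]
W = 1.0 / noise_sd ** 2
beta_wls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * h))
```

The weighted estimate is driven almost entirely by the reliable observations and lands much closer to the true parameters, which is the mechanism IUWLS formalizes with pumping-uncertainty-dependent, iteratively adjusted weights.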
Prediction of Flutter Boundary Using Flutter Margin for The Discrete-Time System
NASA Astrophysics Data System (ADS)
Dwi Saputra, Angga; Wibawa Purabaya, R.
2018-04-01
Flutter testing in a wind tunnel is generally conducted at subcritical speeds to avoid damage. Hence, the flutter speed has to be predicted from the behavior of some of its stability criteria, estimated as functions of the dynamic pressure or flight speed. A reliable flutter prediction method for estimating the flutter boundary is therefore quite important. This paper summarizes the flutter testing of a cantilever wing model in a wind tunnel. The model has two degrees of freedom: bending and torsion modes. The flutter test was conducted in a subsonic wind tunnel. The dynamic responses were measured by two accelerometers mounted on the leading edge and the center of the wing tip. The measurement was repeated while the wind speed was increased. The dynamic responses were used to determine the flutter margin parameter for the discrete-time system. The flutter boundary of the model was estimated by extrapolating the flutter margin parameter against the dynamic pressure. The flutter margin parameter for the discrete-time system performs better for flutter prediction than the modal parameters. For a model with two degrees of freedom experiencing classical flutter, the flutter margin parameter for the discrete-time system gives a satisfying prediction of the flutter boundary in a subsonic wind tunnel test.
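The extrapolation step reduces to fitting the flutter margin against dynamic pressure at subcritical test points and finding where the fitted trend crosses zero. The numbers below are invented test points with a roughly linear trend; real flutter-margin curves may need a higher-order fit.

```python
import numpy as np

# Hypothetical subcritical sweep: flutter margin F estimated at dynamic pressures q
q = np.array([0.5, 0.8, 1.1, 1.4, 1.7])   # dynamic pressure (kPa), made-up values
F = np.array([4.1, 3.2, 2.5, 1.6, 0.9])   # estimated flutter margin, made-up values

coeffs = np.polyfit(q, F, 1)              # linear trend is adequate for this sketch
q_flutter = -coeffs[1] / coeffs[0]        # flutter boundary: root of a*q + b = 0
```

The predicted boundary lies beyond the highest tested dynamic pressure, which is the point of the method: the test never has to reach the critical condition.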
NASA Astrophysics Data System (ADS)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. 
Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
Modeling polyvinyl chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the input-layer parameters of the neural network. The measured values of the contact angle were used as the output-layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural network. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
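The three-inputs-to-one-output mapping described above can be sketched as a one-hidden-layer network trained by plain gradient descent. The data below are made-up normalized values standing in for (voltage, gas gap, treatment time) → contact angle; the paper's architecture and training method are not specified here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Made-up normalized training data for the 3-input, 1-output mapping
X = rng.uniform(-1.0, 1.0, (64, 3))
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] ** 2)[:, None]

W1, b1 = rng.normal(0.0, 0.5, (3, 8)), np.zeros(8)   # hidden layer: 8 tanh units
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)   # linear output: contact angle
lr, losses = 0.1, []
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # backpropagate the mean-squared-error gradient
    dH = (err @ W2.T) * (1.0 - H ** 2)
    W2 -= lr * (H.T @ err) / len(X)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X)
    b1 -= lr * dH.mean(axis=0)
```

The training loss falls steadily, illustrating how such a small network can absorb a smooth nonlinear process-parameter relationship from a modest designed experiment.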
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters
Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...
Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...
2015-07-01
In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.
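Bayesian calibration with a statistical model-error term and a posterior predictive check can be sketched with a one-parameter toy model and a random-walk Metropolis sampler. Everything below (the exponential "model", the noise level, the prior) is invented for illustration and is not the DALEC setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def g(theta, t):
    """Toy forward model standing in for the process model."""
    return theta * np.exp(-t)

# Synthetic observations; sigma lumps observation noise and model error (assumed known)
t = np.linspace(0.0, 3.0, 30)
theta_true = 2.0
y = g(theta_true, t) + rng.normal(0.0, 0.1, t.size)
sigma = 0.15

def log_post(theta):
    if theta <= 0.0:                    # flat prior on theta > 0
        return -np.inf
    r = y - g(theta, t)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior
chain = [1.0]
lp = log_post(chain[0])
for _ in range(5000):
    prop = chain[-1] + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        chain.append(prop); lp = lp_prop
    else:
        chain.append(chain[-1])
samples = np.array(chain[1000:])        # discard burn-in

# Posterior predictive check: replicate data simulated from posterior draws
rep = g(samples[::50, None], t) + rng.normal(0.0, sigma, (samples[::50].size, t.size))
```

Comparing the replicate data `rep` against the observed `y` (e.g., via summary statistics) is the posterior predictive check used to assess predictive skill.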
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO₂) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO₂ enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO₂ strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO₂ study were allowed to have different mortality parameters than the other field plots in the region.
We present predictions of stem biomass productivity under elevated CO 2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.« less
Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.; ...
2017-07-26
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization, irrigation, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structure that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region.
We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters, and (2) developed a tool for generating future predictions of forest productivity for natural resource managers that leverages a rich dataset of integrated ecosystem observations across a region.
Physical and mathematical modelling of ladle metallurgy operations. [steelmaking
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
Experimental measurements are reported of the velocity fields and turbulence parameters in a water model of an argon-stirred ladle. These velocity measurements are complemented by direct heat transfer measurements, obtained by studying the rate at which ice rods immersed in the system melt at various locations. The theoretical work involved the use of the turbulent Navier-Stokes equations in conjunction with the κ-ε model to predict the local velocity fields and the maps of the turbulence parameters. Theoretical predictions were in reasonably good agreement with the experimentally measured velocity fields; the agreement between the predicted and measured turbulence parameters was less perfect, but still satisfactory. The implications of these findings for the modelling of ladle metallurgical operations are discussed.
NASA Technical Reports Server (NTRS)
Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.
2013-01-01
The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
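The rank-histogram diagnostic used above to judge ensemble consistency is straightforward to compute. A minimal sketch with synthetic data (a toy NumPy ensemble, not the GEOS-5 system): an under-dispersive ensemble produces the U-shape described in the abstract, a well-calibrated one a flat histogram.

```python
import numpy as np

def rank_histogram(ensemble, observations):
    """Count, for each observation, how many ensemble members fall below it.

    ensemble: array of shape (n_members, n_times)
    observations: array of shape (n_times,)
    Returns counts over the n_members + 1 possible ranks. A U-shape indicates
    an under-dispersive ensemble (too little spread); flat indicates consistency.
    """
    n_members, _ = ensemble.shape
    ranks = np.sum(ensemble < observations[None, :], axis=0)
    return np.bincount(ranks, minlength=n_members + 1)

# Illustration: an ensemble with too little spread yields a U-shaped histogram.
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, size=5000)
tight_ensemble = rng.normal(0.0, 0.3, size=(9, 5000))  # under-dispersive
hist = rank_histogram(tight_ensemble, obs)
```

Here the outer bins of `hist` dominate, mirroring the lack of forecast variability the study reports when only prognostic perturbations are used.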
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Zhang, Yingying; Wang, Juncheng; Vorontsov, A M; Hou, Guangli; Nikanorova, M N; Wang, Hongliang
2014-01-01
The international marine ecological safety monitoring demonstration station in the Yellow Sea was developed as a collaborative project between China and Russia. It is a nonprofit technical workstation designed as a facility for marine scientific research for public welfare. By undertaking long-term monitoring of the marine environment and automatic data collection, this station will provide valuable information for marine ecological protection and for disaster prevention and reduction. The results of some initial research by scientists at the station into predictive modeling of marine ecological environments and early warning are described in this paper. Marine ecological processes are influenced by many factors, including hydrological and meteorological conditions, biological factors, and human activities. Consequently, it is very difficult to incorporate all these influences and their interactions in a deterministic or analytical model. A prediction model integrating a time series prediction approach with neural network nonlinear modeling is proposed for marine ecological parameters. The model explores the natural fluctuations in marine ecological parameters by learning automatically from the latest observed data and then predicting future values of the parameter. The model is updated in a "rolling" fashion with new observed data from the monitoring station. Prediction experiments showed that the neural network prediction model based on time series data is effective for marine ecological prediction and can be used in the development of early warning systems.
Gupta, Jasmine; Nunes, Cletus; Vyas, Shyam; Jonnalagadda, Sriramakamal
2011-03-10
The objectives of this study were (i) to develop a computational model based on molecular dynamics techniques to predict the miscibility of indomethacin in carriers (polyethylene oxide, glucose, and sucrose) and (ii) to experimentally verify the in silico predictions by characterizing the drug-carrier mixtures using thermoanalytical techniques. Molecular dynamics (MD) simulations were performed using the COMPASS force field, and the cohesive energy density and the solubility parameters were determined for the model compounds. The magnitude of the difference in the solubility parameters of drug and carrier is indicative of their miscibility. The MD simulations predicted indomethacin to be miscible with polyethylene oxide, borderline miscible with sucrose, and immiscible with glucose. The solubility parameter values obtained using the MD simulations were in reasonable agreement with those calculated using group contribution methods. Differential scanning calorimetry showed melting point depression of polyethylene oxide with increasing levels of indomethacin accompanied by peak broadening, confirming miscibility. In contrast, thermal analysis of blends of indomethacin with sucrose and glucose verified general immiscibility. The findings demonstrate that molecular modeling is a powerful technique for determining the solubility parameters and predicting miscibility of pharmaceutical compounds. © 2011 American Chemical Society
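The miscibility screen described above reduces to comparing Hildebrand solubility parameters, δ = √CED. A toy sketch (the CED value and the 7/10 MPa^0.5 miscibility thresholds are a common rule of thumb used here for illustration, not the paper's COMPASS results):

```python
import math

def solubility_parameter(ced_j_per_m3):
    """Hildebrand solubility parameter delta = sqrt(CED).
    CED is in J/m^3 (= Pa); dividing sqrt(Pa) by 1000 gives MPa^0.5."""
    return math.sqrt(ced_j_per_m3) / 1000.0

def miscibility_class(delta_drug, delta_carrier):
    """Rule of thumb: |delta difference| < 7 MPa^0.5 suggests miscibility,
    > 10 MPa^0.5 suggests immiscibility; in between is borderline."""
    diff = abs(delta_drug - delta_carrier)
    if diff < 7.0:
        return "miscible"
    if diff > 10.0:
        return "immiscible"
    return "borderline"

# e.g. a CED of 4.84e8 J/m^3 corresponds to delta = 22 MPa^0.5
delta = solubility_parameter(4.84e8)
```

The same comparison logic underlies the paper's ranking of polyethylene oxide, sucrose, and glucose as carriers.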
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used in system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon
2017-06-01
The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The Partial Derivatives (PaD) technique and Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters, e.g., it was revealed that the trend of Katt with average pore water velocity was positive. The resulting correlations, despite being developed based on a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated well against subsets of the experimental data and was compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strengths.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from choice predictions yet reflect second-order perspective taking.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data.
Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
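The surrogate-accelerated MCMC loop described above can be sketched generically. The quadratic "surrogate", the Gaussian likelihood, and the flat box prior below are placeholders standing in for the paper's treed-linear-model and RANS machinery; the point is that each Metropolis proposal costs one cheap surrogate call rather than a RANS run.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder surrogate: a cheap function standing in for the expensive simulator.
def surrogate(theta):
    return 2.0 * theta[0] + theta[1] ** 2

y_obs, sigma = 3.0, 0.5  # synthetic observation and noise level

def log_post(theta):
    # Flat prior on a "well-behaved" box; Gaussian likelihood via the surrogate.
    if np.any(np.abs(theta) > 5.0):
        return -np.inf
    return -0.5 * ((surrogate(theta) - y_obs) / sigma) ** 2

# Random-walk Metropolis sampling of the parameter posterior.
theta = np.zeros(2)
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
posterior = np.array(samples[5000:])  # discard burn-in
```

The retained samples concentrate where the surrogate matches the observation, which is the PDF-valued parameter estimate the abstract describes.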
Predicting in ungauged basins using a parsimonious rainfall-runoff model
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Olav Peerebom, Ivar; Nilsson, Anna
2015-04-01
Prediction in ungauged basins is a demanding, but necessary, test for hydrological model structures. Ideally, the relationship between model parameters and catchment characteristics (CCs) should be hydrologically justifiable. Many studies, however, report failure to obtain significant correlations between model parameters and CCs. Under the hypothesis that the lack of correlations stems from non-identifiability of model parameters caused by overparameterization, the relatively new, parameter-parsimonious DDD (Distance Distribution Dynamics) model was tested for predictions in ungauged basins in Norway. In DDD, the capacity of the subsurface water reservoir M is the only parameter to be calibrated, whereas the runoff dynamics is completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances between points in the catchment and the nearest river reach, and of distances along the river network, give, together with the celerities, distributions of travel times and, consequently, unit hydrographs. DDD has 6 fewer parameters to calibrate in the runoff module than, for example, the well-known Swedish HBV model. In this study, multiple regression equations relating CCs and model parameters were trained on 84 calibrated catchments located all over Norway, and all model parameters showed significant correlations with catchment characteristics. The significant correlation coefficients (with p-value < 0.05) ranged from 0.22 to 0.55. The suitability of DDD for predictions in ungauged basins was tested on 17 catchments not used to estimate the multiple regression equations. For 10 of the 17 catchments, deviations in the Nash-Sutcliffe Efficiency (NSE) criterion between the calibrated and regionalised model were less than 0.1.
The median NSE of the regionalised DDD for the 17 catchments was 0.66 and 0.72 for two different time series. Deviations in NSE between calibrated and regionalised models are well explained by the deviations between calibrated and regressed parameters describing spatial snow distribution and snowmelt, respectively. This latter result indicates the topic for further improvements in the model structure of DDD.
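The Nash-Sutcliffe Efficiency used as the evaluation criterion above has a one-line definition; a minimal sketch (with a made-up flow series, not the Norwegian data):

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
perfect = nse(obs, obs)                    # 1.0 by construction
baseline = nse(np.full(5, obs.mean()), obs)  # 0.0: mean-only "model"
```

Deviations below 0.1 between calibrated and regionalised NSE, as reported above, thus indicate that regionalisation loses little explanatory power.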
Estimation of the viscosities of liquid binary alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Su, Xiang-Yu
2018-01-01
As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials science as a key parameter for quantitatively understanding fluid transport processes and reaction kinetics in metallurgical process design. Experimental and theoretical studies of liquid metals are problematic. Today, many empirical and semi-empirical models are available with which to evaluate the viscosity of liquid metals and alloys. However, the mixing-energy parameter in these models is not easily determined, and most predictive models apply only poorly in practice. In the present study, a new thermodynamic parameter ΔG is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electronegativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with the experimental values. In addition, if the component radius difference is greater than 0.03 nm at a given temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter ΔG also facilitates the study of other physical properties of liquid metals.
Evaluation of the precipitation-runoff modeling system, Beaver Creek basin, Kentucky
Bower, D.E.
1985-01-01
The Precipitation-Runoff Modeling System (PRMS) was evaluated with data from Cane Branch and Helton Branch in the Beaver Creek basin of Kentucky. Because of previous studies, 10.6 years of record were available to establish a data base for the basin, including 60 storms for Cane Branch and 50 storms for Helton Branch. The model was calibrated initially using data from the 1956-58 water years. Runoff predicted by the model was 94.7% of the observed runoff at Cane Branch (mined area) and 96.9% at Helton Branch (unmined area). After the model and data base were modified, the model was refitted to the 1956-58 data for Helton Branch. It then predicted 98.6% of the runoff for the 10.6-year period. The model parameters from Helton Branch were then used to simulate the Cane Branch runoff and discharge. The model predicted 102.6% of the observed runoff at Cane Branch for the 10.6 years. The simulations produced reasonable storm volumes and peak discharges. Sensitivity analysis of model parameters indicated that the parameters associated with soil moisture are the most sensitive. The model was used to predict sediment concentration and daily sediment load for selected storm periods. The sediment computations indicated the model can be used to predict sediment concentrations during storm events. (USGS)
One-Dimensional Simulations for Spall in Metals with Intra- and Inter-grain failure models
NASA Astrophysics Data System (ADS)
Ferri, Brian; Dwivedi, Sunil; McDowell, David
2017-06-01
The objective of the present work is to model spall failure in metals with the coupled effect of intra-grain and inter-grain failure mechanisms. The two mechanisms are modeled by a void nucleation, growth, and coalescence (VNGC) model and a contact-cohesive model, respectively. Both models were implemented in a 1-D code to simulate spall in 6061-T6 aluminum at two impact velocities. The parameters of the VNGC model without inter-grain failure and the parameters of the cohesive model without intra-grain failure were first determined to obtain pull-back velocity profiles in agreement with experimental data. At the same impact velocities, the same sets of parameters did not predict the velocity profiles when both mechanisms were simultaneously activated. A sensitivity study was performed to predict spall under combined mechanisms by varying the critical stress in the VNGC model and the maximum traction in the cohesive model. The study provided possible sets of the two parameters leading to spall. Results will be presented comparing the predicted velocity profile with experimental data using one such set of parameters for the combined intra-grain and inter-grain failures during spall. Work supported by grant HDTRA1-12-1-0004 and by the School of Mechanical Engineering GTA.
NASA Astrophysics Data System (ADS)
Liu, Jia; Li, Jing; Zhang, Zhong-ping
2013-04-01
In this article, a fatigue damage parameter is proposed to assess the multiaxial fatigue lives of ductile metals based on the critical plane concept: fatigue crack initiation is controlled by the maximum shear strain, while the normal strain and stress are the other important influences on the fatigue damage process. This fatigue damage parameter introduces a stress-correlated factor, which describes the degree of non-proportional cyclic hardening. In addition, a three-parameter multiaxial fatigue criterion is used to correlate the fatigue lifetime of metallic materials with the proposed damage parameter. Under uniaxial loading, this three-parameter model reduces to the recently developed Zhang's model for predicting the uniaxial fatigue crack initiation life. The accuracy and reliability of the three-parameter model are checked against experimental data found in the literature for six different ductile metals tested under various strain paths with zero/non-zero mean stress.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of the methodology for specifying modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
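A multi-parametric ensemble of the kind described above can be illustrated with a generic bootstrap particle filter (not mHM's lagged scheme): each particle carries its own state and its own parameter value, so assimilating observations narrows both. The one-state "linear reservoir" below is a toy stand-in for the hydrologic model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear reservoir standing in for the hydrologic model: one state, one parameter.
def step(storage, k, rain):
    return storage + rain - k * storage  # outflow = k * storage

n_particles, k_true, obs_sigma = 500, 0.3, 0.1
rains = rng.uniform(0.0, 2.0, size=50)

# Synthetic truth and noisy observations of the state.
s, obs = 1.0, []
for r in rains:
    s = step(s, k_true, r)
    obs.append(s + rng.normal(0.0, obs_sigma))

# Multi-parametric ensemble: every particle has its own state AND parameter,
# so parametric uncertainty shrinks alongside state uncertainty.
states = rng.uniform(0.5, 1.5, n_particles)
params = rng.uniform(0.1, 0.6, n_particles)
for r, y in zip(rains, obs):
    states = step(states, params, r)
    w = np.exp(-0.5 * ((states - y) / obs_sigma) ** 2)
    idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # bootstrap resampling
    states, params = states[idx], params[idx]

k_est = params.mean()  # recovered recession parameter, close to k_true
```

After assimilating the 50 observations, the surviving particles cluster around the true parameter, the effect the study exploits to stabilize DA with few ensemble members.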
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Bianchi, Marco
2018-03-01
Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can, however, be questioned because of the difficulty of linking model parameters to the aquifer's physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); and (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated with the LOG scale parameter (i.e. with the skewness of the distribution).
The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to reproduce the ensemble of BTCs at late time, while the LOG model provides results consistent with the PLCO model, though without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally poorly) for the empirical fitting of experimental BTC tails because of the effects of subsampling, for predictive purposes this is not true. Careful selection of the proper heavily tailed model and corresponding parameters is required to ensure physically based transport predictions.
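The PL-versus-PLCO comparison above can be reproduced with a simple least-squares fit on log-transformed late-time concentrations, since both models are linear in log space. The synthetic tail below is a placeholder for real tracer data; note how the pure power law inflates its exponent to mimic the cutoff, while the cutoff model recovers α = 1 and λ exactly.

```python
import numpy as np

t = np.linspace(10.0, 500.0, 200)            # late-time window of a BTC
lam_true = 0.01
c = 5.0 * t ** -1.0 * np.exp(-lam_true * t)  # synthetic PLCO-type tail
log_c = np.log(c)

# Pure power law (PL): log c = log A - alpha * log t
X_pl = np.column_stack([np.ones_like(t), -np.log(t)])
coef_pl, *_ = np.linalg.lstsq(X_pl, log_c, rcond=None)
alpha_pl = coef_pl[1]  # > 1: the missing cutoff is absorbed into the slope

# Power law with cutoff (PLCO): log c = log A - alpha * log t - lam * t
X_co = np.column_stack([np.ones_like(t), -np.log(t), -t])
coef_co, *_ = np.linalg.lstsq(X_co, log_c, rcond=None)
alpha_co, lam = coef_co[1], coef_co[2]  # recovers alpha = 1 and lam = 0.01
```

This is the behavior the abstract reports: αPL drifts with the degree of tailing, whereas αCO stays pinned near 1 and λ carries the tailing information.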
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular basis for hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. Its efficacy depends on the embedding parameters, i.e., the embedding dimension, time lag, and number of nearest neighbors, so optimal estimation of these parameters is critical to the application of the local model. Conventionally, however, these embedding parameters are estimated separately, using Average Mutual Information (AMI) and False Nearest Neighbors (FNN). This may lead to locally optimal parameter values and thus limit prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA shows an additional advantage in computational efficiency. The proposed scheme can also be applied to other fields, such as prediction of hydro-climatic time series, error correction, etc.
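Given an embedding dimension m, time lag τ, and neighbor count k, the phase-space reconstruction and local prediction above take only a few lines. This sketch shows the embedding and nearest-neighbor prediction steps on a synthetic periodic series; the SA/GA search over (m, τ, k) that the paper contributes is omitted.

```python
import numpy as np

def embed(series, m, tau):
    """Time-delay embedding: row j is [x_j, x_{j+tau}, ..., x_{j+(m-1)tau}]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

def local_model_predict(series, m, tau, k):
    """Predict the next value by averaging the successors of the k nearest
    phase-space neighbors of the most recent embedded vector."""
    X = embed(series, m, tau)
    history, query = X[:-1], X[-1]
    dists = np.linalg.norm(history - query, axis=1)
    idx = np.argsort(dists)[:k]
    successor_positions = idx + (m - 1) * tau + 1
    return series[successor_positions].mean()

# Synthetic periodic "streamflow": the local model should track the cycle.
x = np.sin(np.linspace(0.0, 40.0 * np.pi, 2000))
pred = local_model_predict(x[:-1], m=3, tau=5, k=4)  # close to x[-1]
```

Wrapping `local_model_predict` in an objective over (m, τ, k) and handing it to a simulated-annealing or genetic optimizer gives the global search the paper proposes.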
NASA Astrophysics Data System (ADS)
Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John
2001-01-01
For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo simulation, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3-Approximation and Grosjean theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification means quantifying and reducing uncertainties in parameters, models, and measurements, and propagating those uncertainties through the model so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct optimal controls for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. Unidentifiable parameters can be partially removed by parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a component of nuclear reactor models.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
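The adaptive samplers named above (DRAM, DREAM) elaborate on the basic random-walk Metropolis step. As a hedged illustration of that core step only (no delayed rejection, no adaptation), consider sampling the mean of Gaussian data under a flat prior; the function names are illustrative, not from the dissertation.

```python
import math
import random

def log_post(theta, data, sigma=1.0):
    """Log posterior for the mean of Gaussian data with known sigma and a flat prior."""
    return -sum((d - theta) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis: the non-adaptive core that DRAM and DREAM build on."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_post(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

For this conjugate toy problem the posterior is available in closed form (mean equal to the sample mean), which is exactly the kind of direct Bayes evaluation the dissertation uses to verify sampler output.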
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
Bauerle, William L.; Bowden, Joseph D.
2011-01-01
A spatially explicit mechanistic model, MAESTRA, was used to separate key parameters affecting transpiration to provide insights into the most influential parameters for accurate predictions of within-crown and within-canopy transpiration. Once validated among Acer rubrum L. genotypes, model responses to different parameterization scenarios were scaled up to stand transpiration (expressed per unit leaf area) to assess how transpiration might be affected by the spatial distribution of foliage properties. For example, when physiological differences were accounted for, differences in leaf width among A. rubrum L. genotypes resulted in a 25% difference in transpiration. An in silico within-canopy sensitivity analysis was conducted over the range of genotype parameter variation observed and under different climate forcing conditions. The analysis revealed that seven of 16 leaf traits had a ≥5% impact on transpiration predictions. Under sparse foliage conditions, comparisons of the present findings with previous studies were in agreement that parameters such as the maximum Rubisco-limited rate of photosynthesis can explain ∼20% of the variability in predicted transpiration. However, the spatial analysis shows how such parameters can decrease or change in importance below the uppermost canopy layer. Alternatively, model sensitivity to leaf width and minimum stomatal conductance was continuous along a vertical canopy depth profile. Foremost, transpiration sensitivity to an observed range of morphological and physiological parameters is examined and the spatial sensitivity of transpiration model predictions to vertical variations in microclimate and foliage density is identified to reduce the uncertainty of current transpiration predictions. PMID:21617246
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for this is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method for deriving estimation models that predict hemodynamic parameters (pump chamber filling and afterload) for both the left and right circulations. The estimation models are linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Modeling of venturi scrubber efficiency
NASA Astrophysics Data System (ADS)
Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.
The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated with available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2ℓ m-3. Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.
Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors
NASA Astrophysics Data System (ADS)
Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias
The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for a parameter-based socio-cultural model that determines the non-verbal expressive parameters specifying the shape of an agent's non-verbal behaviors in human-agent interaction (HAI). As a first step, a comparative corpus analysis is performed for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state of the art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
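The projection step described above, forecasting each parameter value's future performance from its measured past performance, can be sketched with ordinary least-squares linear regression, the method the study found marginally best. The greedy selection rule below is a simplification of our own; actual APC schemes typically select parameter values probabilistically.

```python
def linreg_forecast(ys):
    """Least-squares line through (0, y0), (1, y1), ...; returns its value at the next step."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    sxy = sum((i - mx) * (y - my) for i, y in enumerate(ys))
    b = sxy / sxx if sxx else 0.0   # slope; 0 when there is only one observation
    a = my - b * mx                 # intercept
    return a + b * n

def select_value(history):
    """Greedy APC step: pick the parameter value with the best forecast performance.

    history maps each candidate parameter value to its list of past performance scores.
    """
    return max(history, key=lambda v: linreg_forecast(history[v]))
```

For a rising performance series [0.2, 0.3, 0.4] the fitted line forecasts 0.5 at the next iteration, so a value whose performance is improving beats one whose performance is decaying from a higher start.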
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prognostics for analog circuits, and the few existing methods do not tie feature extraction and calculation to circuit analysis, so the fault indicator (FI) is often computed without a clear rationale, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex-field model. Then, using a parameter scanning model in the complex field, it analyzes the relationship between parameter variation and degradation of single components in order to obtain a more reasonable FI feature set. From this feature set, it establishes a novel model of the degradation trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Because the FI feature set is computed more rationally, prediction accuracy is improved to some extent. These conclusions are verified by experiments.
External Evaluation of Two Fluconazole Infant Population Pharmacokinetic Models
Hwang, Michael F.; Beechinor, Ryan J.; Wade, Kelly C.; Benjamin, Daniel K.; Smith, P. Brian; Hornik, Christoph P.; Capparelli, Edmund V.; Duara, Shahnaz; Kennedy, Kathleen A.; Cohen-Wolkowiez, Michael
2017-01-01
ABSTRACT Fluconazole is an antifungal agent used for the treatment of invasive candidiasis, a leading cause of morbidity and mortality in premature infants. Population pharmacokinetic (PK) models of fluconazole in infants have been previously published by Wade et al. (Antimicrob Agents Chemother 52:4043–4049, 2008, https://doi.org/10.1128/AAC.00569-08) and Momper et al. (Antimicrob Agents Chemother 60:5539–5545, 2016, https://doi.org/10.1128/AAC.00963-16). Here we report the results of the first external evaluation of the predictive performance of both models. We used patient-level data from both studies to externally evaluate both PK models. The predictive performance of each model was evaluated using the model prediction error (PE), mean prediction error (MPE), mean absolute prediction error (MAPE), prediction-corrected visual predictive check (pcVPC), and normalized prediction distribution errors (NPDE). The values of the parameters of each model were reestimated using both the external and merged data sets. When evaluated with the external data set, the model proposed by Wade et al. showed lower median PE, MPE, and MAPE (0.429 μg/ml, 41.9%, and 57.6%, respectively) than the model proposed by Momper et al. (2.45 μg/ml, 188%, and 195%, respectively). The values of the majority of reestimated parameters were within 20% of their respective original parameter values for all model evaluations. Our analysis determined that though both models are robust, the model proposed by Wade et al. had greater accuracy and precision than the model proposed by Momper et al., likely because it was derived from a patient population with a wider age range. This study highlights the importance of the external evaluation of infant population PK models. PMID:28893774
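The scalar metrics used in the evaluation above can be sketched as follows, under the common convention that errors are normalized by the observed concentration (the paper's exact definitions may differ):

```python
def prediction_errors(observed, predicted):
    """Per-sample prediction error (PE) plus mean and mean-absolute prediction
    error (MPE, MAPE), the latter two as percentages of the observed values."""
    pe = [p - o for o, p in zip(observed, predicted)]
    n = len(pe)
    mpe = 100.0 * sum(e / o for e, o in zip(pe, observed)) / n
    mape = 100.0 * sum(abs(e) / o for e, o in zip(pe, observed)) / n
    return pe, mpe, mape
```

MPE lets over- and under-predictions cancel (a bias measure), while MAPE does not (a precision measure), which is why the paper reports both.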
Nonlinear ARMA models for the D(st) index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
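The filter-to-oscillator conversion can be illustrated for the simplest case, an AR(2) filter, whose characteristic roots map directly to a damped oscillator's decay rate and frequency. This is a schematic of the idea under our own naming, not the paper's full nonlinear ARMA treatment.

```python
import cmath
import math

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Map AR(2) coefficients x_t = a1*x_{t-1} + a2*x_{t-2} + b*u_t to the decay
    rate and frequency of a damped oscillator via the roots of z^2 - a1*z - a2 = 0."""
    root = (a1 + cmath.sqrt(a1 * a1 + 4.0 * a2)) / 2.0
    gamma = -math.log(abs(root)) / dt     # decay rate (positive means damped)
    omega = abs(cmath.phase(root)) / dt   # oscillation frequency
    return gamma, omega
```

Round-tripping checks the mapping: a root exp((-γ + iω)Δt) corresponds to a1 = 2 e^(-γΔt) cos(ωΔt) and a2 = -e^(-2γΔt), and the function recovers γ and ω from those coefficients.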
QCD nature of dark energy at finite temperature: Cosmological implications
NASA Astrophysics Data System (ADS)
Azizi, K.; Katırcı, N.
2016-05-01
The Veneziano ghost field has been proposed as an alternative source of dark energy whose energy density is consistent with cosmological observations. In this model, the energy density of the QCD ghost field is expressed in terms of QCD degrees of freedom at zero temperature. We extend this model to finite temperature in order to trace its predictions from the late-time to the early universe. We depict the variations with temperature, from zero up to a critical temperature, of the QCD parameters entering the calculations, the dark energy density, the equation of state, and the Hubble and deceleration parameters. We compare our results with observations and with theoretical predictions for different eras. We find that this model consistently describes the universe from quark condensation up to the present, and its predictions are not in tension with those of standard cosmology. The EoS parameter of dark energy is dynamical, evolving from -1/3 in the presence of radiation to -1 at late times. The finite-temperature ghost dark energy predictions for the Hubble parameter fit well with those of ΛCDM and with observations at late times.
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of photovoltaic (PV) systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast for the prediction date, a time series of output power on a similar day is built at 15-minute intervals. Second, this time series is decomposed by EMD into a series of components at different scales, comprising intrinsic mode functions IMFn and a trend component Res. An SVM prediction model is established for each IMF component and for the trend component, with the SVM model parameters optimized by the artificial bee colony algorithm. Finally, the prediction results of the individual models are recombined to obtain the predicted output power of the grid-connected PV system. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization.
PMID:28912803
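The decompose-predict-recombine workflow is the structural core of the method. The sketch below mirrors only that structure, substituting a moving-average trend split for EMD and a least-squares AR(1) forecast for the ABC-tuned SVM; all names are illustrative.

```python
def moving_average(x, w):
    """Centered moving average, used here as a stand-in trend extractor
    (the paper uses EMD; this sketch only mirrors the workflow)."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) / len(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def ar1_forecast(x):
    """One-step forecast with x_t ~ a*x_{t-1}, a fitted by least squares."""
    num = sum(x[i] * x[i - 1] for i in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    a = num / den if den else 0.0
    return a * x[-1]

def decompose_predict_sum(x, w=5):
    """Split the series into trend + residual, forecast each component
    separately, then recombine the component forecasts."""
    trend = moving_average(x, w)
    resid = [v - t for v, t in zip(x, trend)]
    return ar1_forecast(trend) + ar1_forecast(resid)
```

The premise, shared with the EMD-SVM model, is that each component is smoother and easier to predict than the raw series, so the recombined forecast beats a single model fitted to the raw data.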
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method for determining J-C material parameters are proposed to predict more accurately the stress-strain curves of tensile tests at elevated temperatures. A MATLAB tool is used to determine material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. These hardening law parameters are then used to determine the material parameters of the modified J-C model, which gives better predictions than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine, using an explicit finite element code, and compared with measurements. The temperature decrease of all elements due to air cooling was then calculated with the modified J-C model and coded in a VUMAT subroutine for tensile test simulation of the cooling process; the simulation results agreed well with the corresponding experiments. A second investigation applied the approach to springback prediction in V-bending of magnesium alloy sheets at elevated temperatures. Here, the proposed J-C model was combined with a modified hardening law accounting for the unusual plastic behaviour of magnesium alloy sheet, and the FEM springback predictions compared well with the corresponding experiments.
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.
Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele
2018-01-01
Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
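A minimal particle swarm optimizer of the kind used for the calibration can be sketched as follows; the parameter values (inertia w, acceleration constants c1, c2) are typical textbook defaults, not those of the study, and the objective `f` stands in for the SWAT calibration error.

```python
import random

def pso(f, bounds, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward its own
    best position and the swarm's best (the 'flocking' behavior described above)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # keep the particle inside the calibration bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the calibration setting, each position vector would hold candidate SWAT parameter values and `f` would return a misfit between simulated and observed streamflow or sediment concentration.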
Prediction of mortality rates using a model with stochastic parameters
NASA Astrophysics Data System (ADS)
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: in addition to covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
NASA Astrophysics Data System (ADS)
Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza
2015-08-01
To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical analysis, three moisture transfer models, namely the Dincer and Dost model, the Bi-G correlation approach, and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predictions were then compared with the experimental data, and the highest prediction accuracy was achieved by the Dincer and Dost model.
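The conventional solution of Fick's second law mentioned above, for a sphere with uniform initial moisture content and constant surface conditions, gives the dimensionless moisture ratio as the series MR = (6/π²) Σ n⁻² exp(−n²π² D_eff t / r²). A direct implementation (variable names are ours, and the spherical geometry is an assumption for the whole-lemon case):

```python
import math

def moisture_ratio_sphere(deff, radius, t, terms=100):
    """Dimensionless moisture ratio MR(t) from the series solution of Fick's
    second law for a sphere with uniform initial moisture content.

    deff: effective moisture diffusivity (m^2/s), radius: sphere radius (m),
    t: drying time (s)."""
    fo = deff * t / radius ** 2          # mass-transfer Fourier number
    return (6.0 / math.pi ** 2) * sum(
        math.exp(-(n * math.pi) ** 2 * fo) / n ** 2 for n in range(1, terms + 1))
```

MR starts at 1 (since Σ n⁻² = π²/6) and decays toward 0; fitting this curve to measured dimensionless moisture content is how D_eff is commonly extracted from drying data.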
Active Control of Interface Shape During the Crystal Growth of Lead Bromide
NASA Technical Reports Server (NTRS)
Duval, W. M. B.; Batur, C.; Singh, N. B.
2003-01-01
A thermal model for predicting and designing the furnace temperature profile was developed and used for the crystal growth of lead bromide. The model gives the ampoule temperature as a function of the furnace temperature, thermal conductivity, heat transfer coefficients, and ampoule dimensions as variable parameters. Crystal interface curvature was derived from the model and it was compared with the predicted curvature for a particular furnace temperature and growth parameters. Large crystals of lead bromide were grown and it was observed that interface shape was in agreement with the shape predicted by this model.
METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...
SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Iyengar, P
2016-06-15
Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT, and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62% and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67% and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50% and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining all features.
Predicting network modules of cell cycle regulators using relative protein abundance statistics.
Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J
2017-02-28
Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data are not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs.
Predictive performance is assessed by the areas under receiver operating characteristics curves (AUC). Our models generate an AUC range of 0.83-0.87 as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
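The AUC figures quoted above can be computed directly from ranked prediction scores. A minimal sketch of the rank-based (Mann-Whitney) estimator follows; the function and variable names are illustrative, not taken from the paper:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # A pair counts 1 if the positive outranks the negative, 0.5 on ties.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives 1.0; a fully inverted one gives 0.0.
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

An uninformative (randomized) model hovers near 0.5 under this estimator, which is why the paper contrasts its 0.83-0.87 range against randomized models at about 0.50.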
Charge-coupled-device X-ray detector performance model
NASA Technical Reports Server (NTRS)
Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.
1987-01-01
A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.
NASA Astrophysics Data System (ADS)
Ahmed, Riaz; Banerjee, Sourav
2018-02-01
In this article, an extremely versatile predictive model for a newly developed Basilar meta-Membrane (BM2) sensor is reported, with variable engineering parameters that contribute to its frequency selection capabilities. The predictive model advances over existing methods by incorporating versatile and nonhomogeneous (e.g. functionally graded) model parameters, which not only open up possibilities for creating complex combinations of broadband frequency sensors but also explain unique, previously unexplained physical phenomena that prevail in BM2 sensors, such as tailgating waves. In recent years, a few notable attempts were made to fabricate artificial basilar membranes mimicking the mechanics of the human cochlea within a very short range of frequencies, and a few models were proposed to explain the operation of these sensors. Here, we fundamentally question this "fabrication to explanation" approach and instead propose a model-driven predictive design process for designing any BM2 as a broadband sensor. Inspired by the physics of the basilar membrane, a frequency-domain predictive model is proposed in which both the material and geometrical parameters can be arbitrarily varied. Broadband frequency sensing is applicable in many fields of science, engineering and technology, including sensors for chemical, biological and acoustic applications. With the proposed model, which is three times faster than its FEM counterpart, it is possible to alter the attributes of a selected length of the designed sensor using complex combinations of model parameters, based on target frequency applications. Finally, the tailgating wave peaks in artificial basilar membranes that prevail in previously reported experimental studies are also explained using the proposed model.
Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin
2010-12-01
The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.
NASA Astrophysics Data System (ADS)
Alipour, M. H.; Kibler, Kelly M.
2018-02-01
A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study is intended to quantify the impact of uncertainty about the precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product of the observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of the precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) were known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed.
In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This situation is due to the fact that the actual value taken by FP5 does not have much influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi were known, even if no information were available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remains unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
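The first order and total effects described above can be estimated by Monte Carlo "pick-freeze" sampling. A minimal sketch on a toy additive model follows; the model, factor weights and sample size are invented for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the runoff model: output dominated by the first factor.
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(size=(n, d))          # two independent sample matrices
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # resample only factor i
    fABi = model(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)        # first-order effect
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total effect
print([round(s, 2) for s in S1], [round(t, 2) for t in ST])
```

For this additive toy model the analytic indices are 0.8 and 0.2, and first order and total effects coincide because there are no interactions; in the snowmelt model the gap between them reflects parameter interactions.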
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Kulkarni, Chetan S.
2016-01-01
As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model which works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS was used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly: that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account.
The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, compared with the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows similar performance for both the BJI+GA and SLS error models.
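DREAM-ZS itself is a multi-chain adaptive sampler, but the core idea of jointly inferring a model parameter and an error-model parameter by MCMC can be illustrated with a plain Metropolis random walk. Everything below (the synthetic data, flat priors and step size) is an invented toy, not the TETIS setup:

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(5.0, 2.0, size=200)     # synthetic "observed flows"

def log_post(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Gaussian error model with unknown variance; flat priors on mu, log_sigma.
    return -len(obs) * log_sigma - 0.5 * np.sum((obs - mu) ** 2) / sigma**2

theta = np.array([0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.1, size=2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5_000:])                   # discard burn-in
print(round(chain[:, 0].mean(), 2), round(np.exp(chain[:, 1]).mean(), 2))
```

The posterior means recover the generating values (near 5 and 2), and the chain's spread is exactly the joint parameter uncertainty that BJI propagates into the streamflow predictive distribution.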
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
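The payoff of tuning a filter's stochastic noise models can be seen even in a scalar toy problem. The sketch below is illustrative only; it carries nothing of the actual Fermi filter configuration, and the drift and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Truth: a slowly drifting state; noisy measurements stand in for GPS
# point solutions.
truth = 10.0 + np.cumsum(rng.normal(0.0, 0.1, 500))
meas = truth + rng.normal(0.0, 1.0, 500)

def kalman(z, q, r):
    """Scalar random-walk Kalman filter; q = process noise, r = measurement noise."""
    x, p, est = z[0], 1.0, []
    for zk in z:
        p += q                  # predict: state uncertainty grows by q
        k = p / (p + r)         # Kalman gain
        x += k * (zk - x)       # measurement update
        p *= 1.0 - k
        est.append(x)
    return np.array(est)

# A q matched to the true drift tracks the state; a near-zero q lags badly.
err_tuned = np.mean((kalman(meas, 0.01, 1.0) - truth) ** 2)
err_stiff = np.mean((kalman(meas, 1e-6, 1.0) - truth) ** 2)
print(round(err_tuned, 3), round(err_stiff, 3))
```

Underestimating process noise makes the filter overconfident and slow to follow real dynamics, which is the scalar analogue of mis-modeling ballistic coefficient and density variations in the Fermi case.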
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2015-11-15
Risk management for wastewater treatment and reuse has led to growing interest in understanding and optimising pathogen reduction during biological treatment processes. However, modelling pathogen reduction is often limited by poor characterization of the relationships between variables and incomplete knowledge of removal mechanisms. The aim of this paper was to assess the applicability of Bayesian belief network models to represent associations between pathogen reduction, operating conditions and monitoring parameters, and to predict activated sludge (AS) performance. Naïve Bayes and semi-naïve Bayes networks were constructed from an AS dataset including operating and monitoring parameters, and removal efficiencies for two pathogens (native Giardia lamblia and seeded Cryptosporidium parvum) and five native microbial indicators (F-RNA bacteriophage, Clostridium perfringens, Escherichia coli, coliforms and enterococci). First, we defined the Bayesian network structures for the two pathogen log10 reduction value (LRV) class nodes, discretized into two states (< and ≥ 1 LRV), using two different learning algorithms. Eight metrics, such as Prediction Accuracy (PA) and Area Under the receiver operating Curve (AUC), provided a comparison of model prediction performance, certainty and goodness of fit. This comparison was used to select the optimum models. The optimum Tree Augmented naïve models predicted removal efficiency with high AUC when all system parameters were used simultaneously (AUCs for C. parvum and G. lamblia LRVs of 0.95 and 0.87, respectively). However, metrics for individual system parameters showed only the C. parvum model was reliable. By contrast, individual parameters for G. lamblia LRV prediction typically obtained low AUC scores (AUC < 0.81). Useful predictors for C. parvum LRV included solids retention time, turbidity and total coliform LRV.
The methodology developed appears applicable for predicting pathogen removal efficiency in water treatment systems generally. Copyright © 2015 Elsevier Ltd. All rights reserved.
Identifiability, reducibility, and adaptability in allosteric macromolecules.
Bohner, Gergő; Venkataraman, Gaurav
2017-05-01
The ability of macromolecules to transduce stimulus information at one site into conformational changes at a distant site, termed "allostery," is vital for cellular signaling. Here, we propose a link between the sensitivity of allosteric macromolecules to their underlying biophysical parameters, the interrelationships between these parameters, and macromolecular adaptability. We demonstrate that the parameters of a canonical model of the mSlo large-conductance Ca2+-activated K+ (BK) ion channel are non-identifiable with respect to the equilibrium open probability-voltage relationship, a common functional assay. We construct a reduced model with emergent parameters that are identifiable and expressed as combinations of the original mechanistic parameters. These emergent parameters indicate which coordinated changes in mechanistic parameters can leave assay output unchanged. We predict that these coordinated changes are used by allosteric macromolecules to adapt, and we demonstrate how this prediction can be tested experimentally. We show that these predicted parameter compensations are used in the first reported allosteric phenomenon: the Bohr effect, by which hemoglobin adapts to varying pH. © 2017 Bohner and Venkataraman.
NASA Astrophysics Data System (ADS)
Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja
2018-04-01
The present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for the design of experiments, with cutting speed, feed rate and depth of cut as design variables. A prediction model for surface roughness is developed using response surface methodology (RSM) with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find the significant terms in the model; insignificant terms are removed after a statistical test using the backward elimination approach. The effect of each control variable on surface roughness is also studied. A predicted coefficient of determination (R²pred) of 99.4% shows that the model correctly explains the experimental results and behaves well even when factors are adjusted, added or eliminated. The model is validated with five fresh experiments and the measured force and acceleration values; the average absolute error between the RSM model and the experimentally measured surface roughness is found to be 10.2%. Additionally, an artificial neural network (ANN) model is developed for prediction of surface roughness, and the prediction results of the modified regression model are compared with the ANN. Both the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. From the results obtained, it is found that including cutting force and vibration for prediction of surface roughness gives better prediction than considering only cutting parameters, and that the ANN gives better prediction than the RSM model.
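Fitting a response surface of this kind reduces to linear least squares on polynomial terms of the cutting parameters. A minimal sketch on synthetic turning data follows; the coefficients, ranges and noise level are invented, and the paper's actual model also includes force and vibration terms:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic turning data: cutting speed v and feed f drive roughness Ra
# (the "true" surface below is assumed purely for illustration).
v = rng.uniform(50.0, 150.0, 40)
f = rng.uniform(0.05, 0.30, 40)
ra = 0.5 + 0.002 * v + 8.0 * f + 20.0 * f**2 + rng.normal(0.0, 0.02, 40)

# Second-order response surface: Ra ~ b0 + b1*v + b2*f + b3*f^2
X = np.column_stack([np.ones_like(v), v, f, f**2])
beta, *_ = np.linalg.lstsq(X, ra, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((ra - pred) ** 2) / np.sum((ra - ra.mean()) ** 2)
print(np.round(beta, 3), round(r2, 3))
```

Backward elimination, as used in the paper, would then drop any column of X whose coefficient fails a significance test and refit on the reduced design matrix.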
Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S
2018-02-06
Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction requires precise model parameterization. When the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used so that the parameters of the model can be estimated for retrospective cases and predicted for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated in predicting the pre-phase treatment outcome in ALL, which was correctly achieved for a significant percentage of the patient cases tested (approx. 70%). Moreover, the system is capable of declining to classify cases for which the results are not trustworthy enough. In that case, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and of VPH models in general.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
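The forward selection procedure used above can be sketched as a greedy search that repeatedly adds whichever candidate variable most improves the fit, stopping when the improvement becomes marginal. The synthetic data (only two of six features are truly informative) and the 0.01 stopping threshold below are our own illustration, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: only features 0 and 2 actually drive the target.
X = rng.normal(size=(300, 6))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0.0, 0.5, 300)

def fit_r2(cols):
    """R^2 of an ordinary least-squares fit using the given feature columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

selected, best_r2 = [], 0.0
while len(selected) < X.shape[1]:
    gains = {c: fit_r2(selected + [c])
             for c in range(X.shape[1]) if c not in selected}
    c, r2 = max(gains.items(), key=lambda kv: kv[1])
    if r2 - best_r2 < 0.01:      # stop when the improvement is marginal
        break
    selected.append(c)
    best_r2 = r2
print(selected, round(best_r2, 3))
```

The search recovers exactly the two informative features and ignores the noise columns, which is the behavior that yields a parsimonious input set like the 7-variable model in the study.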
Seizure prediction in hippocampal and neocortical epilepsy using a model-based approach
Aarabi, Ardalan; He, Bin
2014-01-01
Objectives: The aim of this study is to develop a model-based seizure prediction method. Methods: A neural mass model was used to simulate the macro-scale dynamics of intracranial EEG data. The model was composed of pyramidal cells and excitatory and inhibitory interneurons described through state equations. Twelve model parameters were estimated by fitting the model to the power spectral density of intracranial EEG signals and then integrated based on information obtained by investigating changes in the parameters prior to seizures. Twenty-one patients with medically intractable hippocampal and neocortical focal epilepsy were studied. Results: Tuned to obtain maximum sensitivity, an average sensitivity of 87.07% and 92.6% with an average false prediction rate of 0.2 and 0.15/h was achieved using maximum seizure occurrence periods of 30 and 50 min and a minimum seizure prediction horizon of 10 s, respectively. Under maximum specificity conditions, the system sensitivity decreased to 82.9% and 90.05% and the false prediction rates were reduced to 0.16 and 0.12/h using maximum seizure occurrence periods of 30 and 50 min, respectively. Conclusions: The spatio-temporal changes in the parameters demonstrated patient-specific preictal signatures that could be used for seizure prediction. Significance: The present findings suggest that the model-based approach may aid prediction of seizures. PMID:24374087
A multibody knee model with discrete cartilage prediction of tibio-femoral contact mechanics.
Guess, Trent M; Liu, Hongzeng; Bhashyam, Sampath; Thiagarajan, Ganesh
2013-01-01
Combining musculoskeletal simulations with anatomical joint models capable of predicting cartilage contact mechanics would provide a valuable tool for studying the relationships between muscle force and cartilage loading. As a step towards producing multibody musculoskeletal models that include representation of cartilage tissue mechanics, this research developed a subject-specific multibody knee model that represented the tibia plateau cartilage as discrete rigid bodies that interacted with the femur through deformable contacts. Parameters for the compliant contact law were derived using three methods: (1) simplified Hertzian contact theory, (2) simplified elastic foundation contact theory and (3) parameter optimisation from a finite element (FE) solution. The contact parameters and contact friction were evaluated during a simulated walk in a virtual dynamic knee simulator, and the resulting kinematics were compared with measured in vitro kinematics. The effects on predicted contact pressures and cartilage-bone interface shear forces during the simulated walk were also evaluated. The compliant contact stiffness parameters had a statistically significant effect on predicted contact pressures as well as all tibio-femoral motions except flexion-extension. The contact friction was not statistically significant to contact pressures, but was statistically significant to medial-lateral translation and all rotations except flexion-extension. The magnitude of kinematic differences between model formulations was relatively small, but contact pressure predictions were sensitive to model formulation. The developed multibody knee model was computationally efficient and had a computation time 283 times faster than a FE simulation using the same geometries and boundary conditions.
USDA-ARS's Scientific Manuscript database
Classic rainfall-runoff models usually use historical data to estimate model parameters and mean values of parameters are considered for predictions. However, due to climate changes and human effects, the parameters of model change temporally. To overcome this problem, Normalized Difference Vegetati...
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-01-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
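The validation metric quoted above (area under the ROC curve) can be computed directly from classifier scores; a minimal rank-based sketch (not the authors' code), with toy labels marking crashed (1) versus completed (0) runs:

```python
# AUC via the Mann-Whitney statistic: the probability that a randomly
# chosen failed run receives a higher failure score than a completed run.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: scores that separate crashes from completed runs fairly well.
labels = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.4, 0.35, 0.8, 0.9]
auc = roc_auc(labels, scores)  # 8 of 9 positive-negative pairs ranked correctly
```

An AUC of 1.0 means perfect ranking; the paper's committee of SVM classifiers reached AUC > 0.96 on an independent validation ensemble.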
Power maximization of a point absorber wave energy converter using improved model predictive control
NASA Astrophysics Data System (ADS)
Milani, Farideh; Moghaddam, Reihaneh Kardehi
2017-08-01
This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to the physical constraints of the system, model predictive control is applied. The behavior of irregular waves is predicted with a Kalman filter. Owing to the great influence of the controller parameters on the absorbed power, these parameters are optimized by an imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force, which is predicted by the Kalman filter.
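The excitation-force prediction step can be illustrated with a minimal scalar Kalman filter; the random-walk state model and noise variances below are illustrative assumptions, not the paper's wave model:

```python
# One predict/update cycle of a scalar Kalman filter tracking an
# unmeasured signal through noisy observations z = x + v.
def kalman_step(x, P, z, q=0.01, r=0.1):
    x_pred, P_pred = x, P + q          # predict: random walk x_k = x_{k-1} + w
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # corrected estimate
    P_new = (1 - K) * P_pred           # corrected error variance
    return x_new, P_new

x, P = 0.0, 1.0                        # diffuse initial guess
for z in [1.0, 1.1, 0.9, 1.0]:         # noisy force measurements
    x, P = kalman_step(x, P, z)
```

In the converter application, the filtered estimate feeds the model predictive controller one step ahead of the measurements.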
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.
2015-11-01
This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of running more than 10 times faster than the hydrologic model without compromising the predictive accuracy.
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
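The SVD-based predictors mentioned above rest on truncated low-rank reconstruction of a ratings matrix; a toy sketch (the matrix and rank are invented for illustration):

```python
import numpy as np

# A small "ratings" matrix: two similar raters and one dissimilar one.
R = np.array([[5., 4., 1.],
              [4., 5., 1.],
              [1., 1., 5.]])
U, s, Vt = np.linalg.svd(R)
k = 2                                          # keep the two dominant factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-2 reconstruction
```

The Frobenius error of the rank-k reconstruction equals the discarded singular values' magnitude, which is one lens on the trade-off the abstract raises between fit to current data and generalization to future data.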
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environments is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
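The surface-area weighting idea can be sketched directly; the fuel-class numbers are invented for illustration, not Rothermel's published fuel models:

```python
# Characteristic fuel-bed value as a mean over size classes, each class
# weighted by its total surface area (load x surface-area-to-volume ratio).
def sa_weighted(values, loads, savs):
    areas = [w * s for w, s in zip(loads, savs)]
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

# Two size classes: fine fuels dominate the weighting through their high SAV.
moisture = sa_weighted(values=[0.06, 0.12], loads=[0.3, 0.9], savs=[3500, 100])
```

The weighting lets a single characteristic value stand in for a mixture of fuel sizes in the spread-rate equations.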
R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner
2011-01-01
Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...
Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto
2012-01-01
Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3–4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c(t)^(k + st)] coefficients c, k and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = {{actual − predicted}/predicted} × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R2: 99%], with SP’s and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R2: 97%], as was predicted s to estimated c [R2: 95%]. The latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to estimated c’s [R2: 0%], and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7 – 10.9% and 20.0 – 24.3% for one and three dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester.
Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4–6 fold larger than those used previously], thus permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representatives of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
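The trajectory-and-deviation machinery of Individualized Growth Assessment can be sketched numerically. The Rossavik model is written here as P = c·t^(k+st), matching the coefficients c, k, s named in the abstract; the coefficient values below are arbitrary illustrations, not fitted ones:

```python
# Predicted growth trajectory from a Rossavik-form model, and the Percent
# Deviation statistic comparing an actual measurement against it.
def rossavik(t, c, k, s):
    return c * t ** (k + s * t)

def percent_dev(actual, predicted):
    return (actual - predicted) / predicted * 100.0

pred = rossavik(30.0, c=0.5, k=1.8, s=-0.01)           # expected size at 30 weeks
dev = percent_dev(actual=pred * 1.05, predicted=pred)  # a 5% overshoot
```

Age-specific reference standards then classify whether a given % Dev falls within normal third-trimester variability.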
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. 
The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
NASA Astrophysics Data System (ADS)
Cisneros, Sophia
2013-04-01
We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photon's frequencies, resulting from the curved spacetime, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disk galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that Dark Matter does, so that a simple investigation of the relation between luminous and dark matter might be made, via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model prediction to both the NFW model and MOND fits when available.
Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu
2014-01-01
Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performances of these fuel models have not been tested against historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. The two most sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined fuel models (uncalibrated fuel models). FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capabilities of the fuel models to predict the actual fire with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% by using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164
Using geometry to improve model fitting and experiment design for glacial isostasy
NASA Astrophysics Data System (ADS)
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
Gliozzi, T M; Turri, F; Manes, S; Cassinelli, C; Pizzi, F
2017-11-01
Within recent years, there has been growing interest in the prediction of bull fertility through in vitro assessment of semen quality. A model for fertility prediction based on early evaluation of semen quality parameters, to exclude sires with potentially low fertility from breeding programs, would therefore be useful. The aim of the present study was to identify the most suitable parameters that would provide reliable prediction of fertility. Frozen semen from 18 Italian Holstein-Friesian proven bulls was analyzed using computer-assisted semen analysis (CASA) (motility and kinetic parameters) and flow cytometry (FCM) (viability, acrosomal integrity, mitochondrial function, lipid peroxidation, plasma membrane stability and DNA integrity). Bulls were divided into two groups (low and high fertility) based on the estimated relative conception rate (ERCR). Significant differences were found between fertility groups for total motility, active cells, straightness, linearity, viability and percentage of DNA fragmented sperm. Correlations were observed between ERCR and some kinetic parameters, and membrane instability and some DNA integrity indicators. In order to define a model with high relation between semen quality parameters and ERCR, backward stepwise multiple regression analysis was applied. Thus, we obtained a prediction model that explained almost half (R2 = 0.47, P<0.05) of the variation in the conception rate and included nine variables: five kinetic parameters measured by CASA (total motility, active cells, beat cross frequency, curvilinear velocity and amplitude of lateral head displacement) and four parameters related to DNA integrity evaluated by FCM (degree of chromatin structure abnormality Alpha-T, extent of chromatin structure abnormality (Alpha-T standard deviation), percentage of DNA fragmented sperm and percentage of sperm with high green fluorescence representative of immature cells).
A significant relationship (R2 = 0.84, P<0.05) was observed between real and predicted fertility. Once the accuracy of fertility prediction has been confirmed, the model developed in the present study could be used by artificial insemination centers for bull selection or for elimination of poor fertility ejaculates.
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
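The core of the Bayesian forecasting procedure, shrinking a population prior toward a subject-specific estimate as measurements arrive, can be sketched with a conjugate normal-normal update; this stand-in omits the two-process model itself, and all numbers are illustrative:

```python
# Sequential update of a normal prior over one trait parameter; the
# posterior variance (and hence the 95% interval) shrinks with each datum.
def update(mu, var, z, noise_var):
    k = var / (var + noise_var)        # how much to trust the new datum
    return mu + k * (z - mu), (1 - k) * var

mu, var = 0.0, 4.0                     # population prior for the trait
widths = []
for z in [1.2, 0.8, 1.1, 0.9]:         # successive performance measurements
    mu, var = update(mu, var, z, noise_var=1.0)
    widths.append(1.96 * var ** 0.5)   # half-width of the 95% interval
```

The same mechanism underlies the paper's observation that prediction intervals narrow as more of an individual's data become available.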
Gutierrez-Magness, Angelica L.
2006-01-01
Rapid population increases, agriculture, and industrial practices have been identified as important sources of excessive nutrients and sediments in the Delaware Inland Bays watershed. The amount and effect of excessive nutrients and sediments in the Inland Bays watershed have been well documented by the Delaware Geological Survey, the Delaware Department of Natural Resources and Environmental Control, the U.S. Environmental Protection Agency's National Estuary Program, the Delaware Center for Inland Bays, the University of Delaware, and other agencies. This documentation and data previously were used to develop a hydrologic and water-quality model of the Delaware Inland Bays watershed to simulate nutrients and sediment concentrations and loads, and to calibrate the model by comparing concentrations and streamflow data at six stations in the watershed over a limited period of time (October 1998 through April 2000). Although the model predictions of nutrient and sediment concentrations for the calibrated segments were fairly accurate, the predictions for the 28 ungaged segments located near tidal areas, where stream data were not available, were above the range of values measured in the area. The cooperative study established in 2000 by the Delaware Department of Natural Resources and Environmental Control, the Delaware Geological Survey, and the U.S. Geological Survey was extended to evaluate the model predictions in ungaged segments and to ensure that the model, developed as a planning and management tool, could accurately predict nutrient and sediment concentrations within the measured range of values in the area. The evaluation of the predictions was limited to the period of calibration (1999) of the 2003 model. To develop estimates on ungaged watersheds, parameter values from calibrated segments are transferred to the ungaged segments; however, accurate predictions are unlikely where parameter transference is subject to error. 
The unexpected nutrient and sediment concentrations simulated with the 2003 model were likely the result of inappropriate criteria for the transference of parameter values. From a model-simulation perspective, it is a common practice to transfer parameter values based on the similarity of soils or the similarity of land-use proportions between segments. For the Inland Bays model, the similarity of soils between segments was used as the basis to transfer parameter values. An alternative approach, which is documented in this report, is based on the similarity of the spatial distribution of the land use between segments and the similarity of land-use proportions, as these can be important factors for the transference of parameter values in lumped models. Previous work determined that the difference in the variation of runoff due to various spatial distributions of land use within a watershed can cause substantial loss of accuracy in the model predictions. The incorporation of the spatial distribution of land use to transfer parameter values from calibrated to uncalibrated segments provided more consistent and rational predictions of flow, especially during the summer, and consequently, predictions of lower nutrient concentrations during the same period. For the segments where the similarity of spatial distribution of land use was not clearly established with a calibrated segment, the similarity of the location of the most impervious areas was also used as a criterion for the transference of parameter values. The model predictions from the 28 ungaged segments were verified through comparison with measured in-stream concentrations from local and nearby streams provided by the Delaware Department of Natural Resources and Environmental Control. Model results indicated that the predicted edge-of-stream total suspended solids loads in the Inland Bays watershed were low in comparison to loads reported for the Eastern Shore of Maryland from the Chesapeake Bay watershed model.
The flatness of the ter
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
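The objective driving such a calibration, the total sum of weighted squared residuals, can be sketched with a toy linear model; the design matrix, observations, and weights are invented for illustration, and PEST itself handles far more general nonlinear models:

```python
import numpy as np

# Phi(p) = sum_i w_i * (obs_i - sim_i(p))^2, the quantity minimized in
# weighted least-squares calibration.
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])  # toy linear "model"
obs = np.array([0.1, 1.9, 4.1, 5.9])
w = np.array([1., 1., 1., 4.])         # weights encode trust in each observation

def phi(p):
    return float(np.sum(w * (obs - X @ p) ** 2))

# For a linear model the minimizer solves the weighted normal equations.
W = np.diag(w)
p_est = np.linalg.solve(X.T @ W @ X, X.T @ W @ obs)
```

The 56% reduction reported above is this same quantity, evaluated before and after parameter estimation.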
A Particle and Energy Balance Model of the Orificed Hollow Cathode
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.
2002-01-01
A particle and energy balance model of orificed hollow cathodes was developed to assist in cathode design. The model presented here is an ensemble of original work by the author and previous work by others. The processes in the orifice region are considered to be one of the primary drivers in determining cathode performance, since the current density was greatest in this volume (up to 1.6 x 10(exp 8) A/m2). The orifice model contains comparatively few free parameters, and its results are used to bound the free parameters for the insert model. Next, the insert region model is presented. The sensitivity of the results to the free parameters is assessed, and variation of the free parameters in the orifice dominates the calculated power consumption and plasma properties. The model predictions are compared to data from a low-current orificed hollow cathode. The predicted power consumption exceeds the experimental results. Estimates of the plasma properties in the insert region overlap Langmuir probe data, and the predicted orifice plasma suggests the presence of one or more double layers. Finally, the model is used to examine the operation of higher current cathodes.
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, the gravity database contains many large data gaps. These gaps are filled initially using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
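Least-squares prediction with a generalized Hirvonen covariance can be sketched in one dimension; the constants C0 and d, the curvature-related exponent p, and the toy gravity anomalies are all illustrative assumptions:

```python
import numpy as np

# Generalized Hirvonen-type covariance: p controls the curvature at the origin.
def hirvonen(r, C0=100.0, d=50.0, p=1.0):
    return C0 / (1.0 + (r / d) ** 2) ** p

x_obs = np.array([0.0, 40.0, 120.0])   # observation locations (km)
l = np.array([12.0, 9.0, 3.0])         # observed anomalies (mGal)
D = 4.0 * np.eye(3)                    # observation error covariance

# Least-squares prediction: s_hat = C_st (C_tt + D)^-1 l
C_tt = hirvonen(np.abs(x_obs[:, None] - x_obs[None, :]))
C_st = hirvonen(np.abs(20.0 - x_obs))  # covariances to a point in the gap
s_hat = C_st @ np.linalg.solve(C_tt + D, l)
```

Because p reshapes the covariance near the origin, it directly controls how strongly nearby observations dominate the prediction inside a data gap, which is the sensitivity the paper investigates.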
Garitte, B.; Shao, H.; Wang, X. R.; ...
2017-01-09
Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay of the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, which were derived from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from those for choice predictions yet reflect second-order perspective taking. PMID:27853440
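As a toy illustration of the fixed-effects core of such a logistic model (the latent task- and subject-specific random effects of the Bayesian version are omitted), a maximum-likelihood fit by gradient ascent; the single predictor, data, and learning settings are invented:

```python
import math

def sigmoid(x):
    """Logistic link function."""
    return 1.0 / (1.0 + math.exp(-x))

def fit_logistic(x, y, lr=0.1, n_iter=2000):
    """Fit P(y = 1) = sigmoid(a + b*x) by gradient ascent on the log-likelihood.
    A mixed model would add subject-specific intercepts drawn from a shared prior."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(n_iter):
        resid = [yi - sigmoid(a + b * xi) for xi, yi in zip(x, y)]
        a += lr * sum(resid) / n
        b += lr * sum(r * xi for r, xi in zip(resid, x)) / n
    return a, b
```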
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used in the prediction system we are developing, which has a Web-based user interface and is aimed at people who are not familiar with the operation of computers and numerical simulations or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss tendencies of the model.
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters of molecular volume, molecular surface area, free volume, and pre-exponential factors. In the physical picture underlying the model, the body that actually moves corresponds to the volume of the penetrant molecular core, and collisions with neighbouring molecules are governed by its surface area. In the present study, a semiempirical quantum chemical calculation was used to compute both of these parameters. The model and the newly developed parameters offer fairly good predictive ability. © 2011 American Chemical Society
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
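One way to picture kinetically derived flux limits is a Michaelis-Menten rate evaluated at the endpoints of measured enzyme and metabolite concentration ranges: because the rate is monotone in both E and S, the extremes occur at the range endpoints and can be handed to a constraint-based model as bounds. The rate law and all numbers here are an illustrative sketch, not the paper's kinetic model:

```python
def mm_flux(vmax, s, km):
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def flux_bounds(e_range, s_range, kcat, km):
    """Max/min flux from independent enzyme (E) and substrate (S) concentration
    ranges. v = kcat*E*S/(Km+S) increases in both E and S, so the extremes occur
    at the endpoints of the measured ranges."""
    lo = mm_flux(kcat * e_range[0], s_range[0], km)
    hi = mm_flux(kcat * e_range[1], s_range[1], km)
    return lo, hi
```

The resulting `(lo, hi)` pair would replace the default reaction bounds in a flux-balance problem.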
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01.
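The bootstrap AUC testing mentioned above can be sketched with a rank-based AUC and resampling with replacement. This is a minimal sketch, not the study's pipeline; the scores and labels are invented:

```python
import random

def auc(scores, labels):
    """Rank-based AUC: probability a positive outscores a negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, n_boot=200, seed=0):
    """Average AUC over bootstrap resamples of the cohort."""
    rng = random.Random(seed)
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        by = [labels[i] for i in sample]
        if 0 < sum(by) < len(by):  # skip resamples missing a class
            aucs.append(auc([scores[i] for i in sample], by))
    return sum(aucs) / len(aucs)
```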
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability
NASA Astrophysics Data System (ADS)
Hamdan, Lubna
Global solar radiation data are very important for a wide variety of applications and scientific studies. However, these data are not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. A wide variety of models have been introduced by researchers to estimate and/or predict global solar radiation and its components (direct and diffuse radiation) using other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability, by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis to the data of the National Solar Radiation Database 1991--2010 Update. The analysis showed that the main atmospheric parameters that affect the amount of global radiation received on earth's surface are cloud cover and relative humidity. Global radiation correlates negatively with both variables. Linear models are excellent approximations for the relationship between atmospheric parameters and global radiation. A linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters that affect the amount received on earth's surface are cloud cover and aerosol optical depth. Diffuse radiation correlates positively with both variables. Linear models are very good approximations for the relationship between atmospheric parameters and diffuse radiation. A linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation.
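The multilinear regression used above reduces to solving the normal equations (X'X)β = X'y. A self-contained sketch follows; the predictor names and toy data are invented, not the NSRDB values:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_linear(X, y):
    """Ordinary least squares with intercept: solves (X'X) beta = X'y.
    Rows of X hold predictors such as cloud cover and relative humidity."""
    Xa = [[1.0] + row for row in X]
    p = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(XtX, Xty)
```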
Prediction analysis showed that the linear models we fitted were able to predict diffuse radiation with test adjusted R² values of 0.93, using the data on total cloud cover, aerosol optical depth, relative humidity and extraterrestrial radiation. However, for prediction purposes, adding nonlinear terms or using nonlinear models might enhance the prediction of diffuse radiation.
The predicted influence of climate change on lesser prairie-chicken reproductive parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.
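The simulation step above (drawing a future weather value and pushing it through each fitted linear equation) can be sketched as a small Monte Carlo routine; the intercept, slope, and the mean/spread of the future winter temperature are invented placeholders, not the paper's estimates:

```python
import random

def simulate_parameter(intercept, slope, future_mean, future_sd, n_sim=1000, seed=1):
    """Draw the future weather variable from its predicted distribution and push
    each draw through the fitted linear equation
        parameter = intercept + slope * weather."""
    rng = random.Random(seed)
    draws = [intercept + slope * rng.gauss(future_mean, future_sd)
             for _ in range(n_sim)]
    return sum(draws) / n_sim, draws
```

With a negative slope, warmer predicted winters translate directly into a lower simulated mean for the reproductive parameter.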
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
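At its core, posterior sampling of a model parameter (here done with DREAM-ZS) can be illustrated with a plain random-walk Metropolis sampler on a one-parameter toy problem. This sketch assumes a Gaussian likelihood and a flat prior; it is not the paper's algorithm or error model:

```python
import math
import random

def log_post(theta, data, err_sd):
    """Gaussian log-likelihood of the observations around theta, flat prior."""
    return -0.5 * sum(((d - theta) / err_sd) ** 2 for d in data)

def metropolis(data, err_sd=1.0, n_iter=5000, step=0.5, seed=2):
    """Random-walk Metropolis sampler for a single model parameter."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_post(theta, data, err_sd)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data, err_sd)
        # accept with probability min(1, exp(lp_prop - lp))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

An additive structural-error term would enter through `log_post`; DREAM-ZS replaces the random-walk proposal with differential-evolution proposals drawn from an archive of past states.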
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prediction for analog circuits. Existing methods extract and calculate features without reference to circuit analysis, so the resulting fault indicator (FI) often lacks a sound rationale, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Because single-component faults account for the largest share of faults in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model, in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of the single components of analog circuits. Finally, it uses a particle filter (PF) to update parameters of the model and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
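The particle-filter update used for tracking a degrading parameter can be sketched as a bootstrap filter: propagate particles through a state model, weight them by the measurement likelihood, and resample. The random-walk state model, Gaussian noise levels, and data here are illustrative assumptions, not the paper's degeneration model:

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_sd=0.05, obs_sd=0.1, seed=3):
    """Bootstrap particle filter tracking a slowly drifting component parameter.
    State model: random walk; measurement model: Gaussian noise around the state."""
    rng = random.Random(seed)
    particles = [observations[0] + rng.gauss(0.0, obs_sd) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # propagate through the (random-walk) degradation model
        particles = [p + rng.gauss(0.0, proc_sd) for p in particles]
        # weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # stratified resampling
        cumw, acc = [], 0.0
        for w in weights:
            acc += w
            cumw.append(acc)
        new, j = [], 0
        for i in range(n_particles):
            pos = (i + rng.random()) / n_particles
            while j < n_particles - 1 and cumw[j] < pos:
                j += 1
            new.append(particles[j])
        particles = new
    return estimates
```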
Predictors of outcome for severe IgA Nephropathy in a multi-ethnic U.S. cohort.
Arroyo, Ana Huerta; Bomback, Andrew S; Butler, Blake; Radhakrishnan, Jai; Herlitz, Leal; Stokes, M Barry; D'Agati, Vivette; Markowitz, Glen S; Appel, Gerald B; Canetta, Pietro A
2015-09-01
Although IgA nephropathy (IgAN) is the leading cause of glomerulonephritis worldwide, there are few large cohorts representative of the U.S. population. Prognosis remains challenging, particularly as more patients are treated with RAAS blockade and immunosuppression. We analyzed a retrospective cohort of IgAN patients followed at Columbia University Medical Center from 1980 to 2010. We evaluated two outcomes - halving of eGFR and ESRD - using three proportional hazards models: 1) a model with only clinical parameters, 2) a model with only histopathologic parameters, and 3) a model combining clinical and histopathologic parameters. Of 154 patients with biopsy-proven IgAN, 126 had follow-up data available and 93 had biopsy slides re-read. Median follow-up was 47 months. The cohort was 64% male, 60% white, and the average age was 34 years at diagnosis. Median (IQR) eGFR and proteinuria at diagnosis were 64.1 (38.0-88.7) mL/min/1.73 m² and 2.7 (1.3-4.5) g/day. Over 90% of subjects were treated with RAAS blockade, and over 66% received immunosuppression. In the clinical parameters-only model, baseline eGFR and African-American race predicted both halving of eGFR and ESRD. In the histopathologic parameters-only model, no parameter significantly predicted outcome. In the combined model, baseline eGFR remained the strongest predictor of both halving of eGFR (p = 0.03) and ESRD (p = 0.001), while the presence of IgG by immunofluorescence microscopy also predicted progression to ESRD. In this diverse U.S. IgAN cohort in which the majority of patients received RAAS blockade and immunosuppression, baseline eGFR, African-American race, and co-staining of IgG predicted poor outcome.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes.
These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
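The probability-weighted averaging step can be sketched as follows: model weights from information-criterion values, then a model-averaged mean whose variance includes both within-model kriging variance and between-model spread. The criterion values and predictions are illustrative, not the report's:

```python
import math

def posterior_model_probs(criteria, priors=None):
    """Model probabilities from information-criterion values (e.g. AIC/BIC/KIC):
    p_k proportional to prior_k * exp(-0.5 * (IC_k - IC_min))."""
    n = len(criteria)
    priors = priors or [1.0 / n] * n
    ic_min = min(criteria)
    w = [p * math.exp(-0.5 * (ic - ic_min)) for p, ic in zip(priors, criteria)]
    total = sum(w)
    return [x / total for x in w]

def model_average(predictions, variances, probs):
    """Posterior-probability-weighted mean and total variance; the second term
    in the variance captures between-model spread."""
    mean = sum(p * m for p, m in zip(probs, predictions))
    var = sum(p * (v + (m - mean) ** 2)
              for p, v, m in zip(probs, variances, predictions))
    return mean, var
```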
Larrosa, José Manuel; Moreno-Montañés, Javier; Martinez-de-la-Casa, José María; Polo, Vicente; Velázquez-Villoria, Álvaro; Berrozpe, Clara; García-Granero, Marta
2015-10-01
The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma by using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified in five groups based on visual field damage. Sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic curves (ROC) and areas under the ROC (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) that used a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. The best AUC parameters were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimal GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show that the GCIPL parameters were better than those of the RNFL in early and advanced glaucoma. The highest AUC was in the combined predictive model (0.937; 95% confidence interval, 0.911-0.957) and was significantly (P = 0.0001) higher than the other isolated parameters considered in early and advanced glaucoma. The validation group displayed similar results to those of the study group. Best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved the glaucoma detection compared to the best isolated parameters evaluated. The diagnostic calculator obtained good classification from participants in both the study and validation groups.
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials so as to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim, and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.
Small-amplitude acoustics in bulk granular media
NASA Astrophysics Data System (ADS)
Henann, David L.; Valenza, John J., II; Johnson, David L.; Kamrin, Ken
2013-10-01
We propose and validate a three-dimensional continuum modeling approach that predicts small-amplitude acoustic behavior of dense-packed granular media. The model is obtained through a joint experimental and finite-element study focused on the benchmark example of a vibrated container of grains. Using a three-parameter linear viscoelastic constitutive relation, our continuum model is shown to quantitatively predict the effective mass spectra in this geometry, even as geometric parameters for the environment are varied. Further, the model's predictions for the surface displacement field are validated mode-by-mode against experiment. A primary observation is the importance of the boundary condition between grains and the quasirigid walls.
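A three-parameter linear viscoelastic element of the kind used above is often idealised as a spring in parallel with a Maxwell arm (spring in series with a dashpot). A sketch of its complex dynamic stiffness, plus a toy mass-on-element effective mass, follows; the parameter values and the lumped-mass idealisation are assumptions, not the paper's finite-element geometry or constants:

```python
def zener_modulus(omega, k0, k1, eta):
    """Complex dynamic stiffness of a three-parameter (standard linear solid)
    model: spring k0 in parallel with a Maxwell arm (spring k1 + dashpot eta).
    Limits: k0 at low frequency, k0 + k1 at high frequency."""
    iw_tau = 1j * omega * eta / k1
    return k0 + k1 * iw_tau / (1.0 + iw_tau)

def effective_mass(omega, m, k0, k1, eta):
    """Effective mass of a lumped mass m driven through the Zener element
    (toy one-degree-of-freedom idealisation)."""
    k = zener_modulus(omega, k0, k1, eta)
    return m / (1.0 - m * omega ** 2 / k)
```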
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
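The inverse step of such a Bayesian network can be sketched with a minimal discrete example, assuming a made-up linear wave-transformation rule and Gaussian observation noise; the real network encodes far richer surf-zone physics:

```python
import numpy as np

# Hypothetical discretization: offshore wave-height bins (m), uniform prior
H_off = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
prior = np.full(5, 0.2)

def likelihood(h_on_obs, sigma=0.15):
    """Forward stand-in model: onshore height = 0.6 * offshore height,
    compared against the observation under Gaussian noise."""
    h_on_pred = 0.6 * H_off
    return np.exp(-0.5 * ((h_on_obs - h_on_pred) / sigma) ** 2)

post = prior * likelihood(0.9)   # observe 0.9 m onshore
post /= post.sum()               # normalize -> inverse (offshore) estimate
best = H_off[np.argmax(post)]    # most probable offshore height
```

The normalized posterior is exactly the kind of uncertainty estimate the abstract highlights: it widens when the observation is uninformative and sharpens when it pins down a single bin.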
2012-09-01
make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first… …distribution at a given single time point kP, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
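The null-space Monte Carlo construction can be sketched for a linear toy problem, assuming a random sensitivity matrix in place of the calibrated Culebra model: an SVD of the Jacobian identifies parameter directions the observations cannot constrain, and only those directions are perturbed, so every ensemble member preserves the calibrated fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 5, 12                    # more parameters than observations
J = rng.normal(size=(n_obs, n_par))     # sensitivity (Jacobian) matrix
p_cal = rng.normal(size=n_par)          # single calibrated parameter set

# Split parameter space: directions J "sees" (solution space) vs. its null space
U, s, Vt = np.linalg.svd(J)
null_dims = Vt[n_obs:]                  # rows spanning the null space

# NSMC ensemble: perturb only null-space components; calibration is preserved
ensemble = [p_cal + null_dims.T @ rng.normal(size=n_par - n_obs)
            for _ in range(100)]

# For a linear model y = J p, every member reproduces the calibrated fit
misfits = [np.linalg.norm(J @ p - J @ p_cal) for p in ensemble]
```

The abstract's caveat follows directly from this construction: because every member shares one solution-space vector, predictions that depend on solution-space components are biased toward the single calibrated field, motivating the multiple-calibration M-NSMC variant.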
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consists of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques that allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results, it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
Carbon dioxide emission prediction using support vector machine
NASA Astrophysics Data System (ADS)
Saleh, Chairul; Rachman Dzakiyullah, Nur; Bayu Nugroho, Jonathan
2016-02-01
In this paper, a support vector machine (SVM) model is proposed for predicting carbon dioxide (CO2) emissions. Energy consumption, in the form of electrical energy and burned coal, is the input variable that directly drives increases in CO2 emissions and was used to build the model. Our objective is to monitor CO2 emissions based on the electrical energy and coal used in the production process. The electrical energy and coal consumption data were obtained from the alcohol industry for training and testing the models, and were divided by a cross-validation technique into 90% training data and 10% testing data. The optimal parameters of the SVM model were found by trial and error, adjusting the C and epsilon parameters across experiments. The results show that the SVM model is optimal at C = 0.1 and epsilon = 0. Model error was measured using the root mean square error (RMSE), giving an error value of 0.004; a smaller error indicates a more accurate prediction. In practice, this paper helps executive managers make effective decisions for business operations by monitoring CO2 emissions.
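A minimal sketch of this workflow, using scikit-learn's SVR on synthetic stand-in data (the alcohol-industry data are not reproduced here) with the paper's reported optimum C = 0.1 and epsilon = 0:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Invented stand-in data: electricity use, coal burned -> CO2 emission
X = rng.uniform(0, 1, size=(100, 2))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.01, 100)

# 90% / 10% split, as in the paper
X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]

# Paper's reported optimum: C = 0.1, epsilon = 0
model = SVR(kernel="rbf", C=0.1, epsilon=0.0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
```

With epsilon = 0 every training point contributes to the loss, so the trade-off is controlled entirely by C, which matches the trial-and-error search the abstract describes.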
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
NASA Astrophysics Data System (ADS)
Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar
2018-06-01
Modeling of the uranium dioxide pellet fabrication process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder), and prediction of the fuel rod temperature distribution, are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000, designed by Westinghouse Electric Corporation, with the pellet fabrication parameters taken from this study, served as input data for the code. The predictions suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature profile in the nuclear reactor.
Majnarić-Trtica, Ljiljana; Vitale, Branko
2011-10-01
To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge, because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters, selected in this way, were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that by using a systems biology approach it is possible to answer unresolved complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.
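As a sketch of the final modelling step, a logistic regression combining candidate health parameters with past virus exposure might look as follows; the predictors, coefficients, and data below are entirely synthetic stand-ins, not the study's patient records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 93                                   # sample size matching the study
age = rng.uniform(50, 89, n)             # invented predictor values
lab = rng.normal(0, 1, n)                # stand-in for a haematological test
exposure = rng.integers(0, 2, n)         # past influenza-virus exposure (0/1)

# Simulate vaccination outcomes from an assumed true model
logit = -6 + 0.07 * age + 0.8 * exposure + 0.5 * lab
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the prediction model on the selected parameters plus exposure
X = np.column_stack([age, lab, exposure])
model = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = model.score(X, y)             # in-sample fit, for illustration only
```

In the study itself the candidate predictors are first screened by data mining before entering the regression; this sketch shows only the final fitting step.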
NASA Astrophysics Data System (ADS)
Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh
2018-03-01
Groundwater is regarded as a considerable water source, relied upon mainly in arid and semi-arid regions with deficient surface water sources. Forecasting of hydrological variables is a suitable tool in water resources management, and time series concepts are an efficient means in the forecasting process. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) from 17 groundwater wells in the Mehran Plain were used to model the trend of parameter change over time. Using the selected model, the qualitative parameters of the groundwater were predicted for the next seven years. Data from 2003 to 2016 were collected and fitted with AR, MA, ARMA, ARIMA and SARIMA models, and the best model was determined using the Akaike information criterion (AIC) and the correlation coefficient. After modeling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the average predicted SAR (sodium adsorption ratio) in all wells will increase in 2023 compared to 2016, while the average EC (electrical conductivity) will increase in the ninth and fifteenth wells and decrease in the others. The results indicate that groundwater quality for agriculture in the Mehran Plain will decline over the next seven years.
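The fit-select-forecast loop can be sketched for the simplest family member, an AR(1) with drift, on a synthetic EC series; the numbers are invented, and the study's full AR/MA/ARMA/ARIMA/SARIMA comparison is reduced here to a single AIC computation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic annual EC series (2003-2016) drifting upward, loosely mimicking
# the study's wells; values are invented.
years = np.arange(2003, 2017)
ec = 1200 + 15 * (years - 2003) + rng.normal(0, 10, years.size)

# Fit AR(1) with drift, x_t = c + phi * x_{t-1} + e_t, by least squares
X = np.column_stack([np.ones(ec.size - 1), ec[:-1]])
c, phi = np.linalg.lstsq(X, ec[1:], rcond=None)[0]

# Akaike information criterion (Gaussian errors, 2 parameters) --
# the study's model-selection criterion
resid = ec[1:] - (c + phi * ec[:-1])
aic = (ec.size - 1) * np.log(resid.var()) + 2 * 2

# Iterate the fitted model forward seven years (2017-2023)
forecast = []
x = ec[-1]
for _ in range(7):
    x = c + phi * x
    forecast.append(x)
forecast = np.array(forecast)
```

In the study the AIC would be computed for each candidate model and the minimizer used for the seven-year projection; here the upward-drifting fit plays the role of the wells whose EC is predicted to rise.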
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters at sites with measurement data such as NEE, and application of those parameters at unmeasured sites with the same PFT. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions; DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. It was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period, and additionally at independent sites situated more than 500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement) and the annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) parameters estimated from seasonal NEE data outperformed parameters estimated from yearly data; iii) those seasonal parameters were also often significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling.
However, the simulation results also indicate that the estimated parameters may mask other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Influences of misprediction costs on solar flare prediction
NASA Astrophysics Data System (ADS)
Huang, Xin; Wang, HuaNing; Dai, XingHua
2012-10-01
The misprediction costs of flaring and non-flaring samples differ among applications of solar flare prediction; hence, solar flare prediction is considered a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate, with an exhaustive search strategy, is used to determine the optimal combination of magnetic field parameters in an active region, and these selected parameters are applied as the inputs of the solar flare prediction model. The performance of the cost-sensitive model is evaluated for different thresholds of solar flares. It is found that more flaring samples are correctly predicted and more non-flaring samples are wrongly predicted as the cost of wrongly predicting flaring samples as non-flaring increases, and that a larger such cost is required for higher solar flare thresholds. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
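Cost sensitivity of this kind can be sketched with scikit-learn's class weighting, which scales each class's contribution inside the basic decision tree algorithm; the data and costs below are synthetic stand-ins for the magnetic field parameters, not the paper's implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Synthetic "magnetic field parameters": flaring regions (1) overlap
# non-flaring regions (0) in feature space
X0 = rng.normal(0.0, 1.0, size=(300, 3))
X1 = rng.normal(1.0, 1.0, size=(100, 3))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(300), np.ones(100)]

def flare_recall(cost_fn):
    """Fraction of flaring samples correctly predicted when predicting a
    flare as non-flare costs cost_fn times a false alarm."""
    tree = DecisionTreeClassifier(max_depth=3,
                                  class_weight={0: 1, 1: cost_fn},
                                  random_state=0).fit(X, y)
    pred = tree.predict(X)
    return (pred[y == 1] == 1).mean()

recall_cheap = flare_recall(1)    # symmetric costs
recall_costly = flare_recall(10)  # missed flares 10x more expensive
```

Raising the cost of missed flares pushes the tree's splits toward catching more flaring samples at the price of more false alarms, reproducing the trade-off the abstract reports.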
Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.
Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders
2013-03-12
In this paper a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two step process where, initially, polymer parameters are obtained from a previously published pure polymer dissolution model. The results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of a HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted, using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the sparse amount of usable pure polymer dissolution data. In addition to the case study, a sensitivity analysis of model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.
Bayesian calibration of the Community Land Model using surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
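The surrogate-plus-MCMC workflow can be sketched in a few lines, assuming a made-up one-parameter polynomial surrogate in place of the CLM and a plain random-walk Metropolis sampler in place of the authors' sampler:

```python
import numpy as np

rng = np.random.default_rng(4)

def surrogate(theta):
    """Hypothetical polynomial surrogate of the land model: latent heat
    flux as a quadratic in one hydrological parameter theta."""
    return 50.0 + 30.0 * theta - 10.0 * theta**2

theta_true, sigma = 0.8, 2.0
obs = surrogate(theta_true) + rng.normal(0.0, sigma, 48)  # 48 "months"

def log_post(theta):
    """Gaussian likelihood with a flat prior on [0, 2]."""
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    return -0.5 * np.sum((obs - surrogate(theta)) ** 2) / sigma**2

# Random-walk Metropolis over the cheap surrogate; the expensive model is
# never called inside the chain, which is the point of surrogate calibration
chain, theta, lp = [], 0.2, log_post(0.2)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[1000:])  # discard burn-in
```

The retained samples approximate the posterior density for the parameter; in the paper the same construction is run over three parameters with polynomial and Gaussian process surrogates.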
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lekadir, Karim, E-mail: karim.lekadir@upf.edu; Hoogendoorn, Corné; Armitage, Paul
Purpose: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. Methods: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets, to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The description of the MRI appearance is achieved between subjects by using a collection of feature descriptors, which describe the texture properties inside the cancellous bone and which are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least squares regression between the input MRI features and the output trabecular parameters. Results: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI based on the proposed statistical model and the values extracted from high-resolution HR-pQCT. Conclusions: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is under development and growth.
Predicting human chronically paralyzed muscle force: a comparison of three mathematical models.
Frey Law, Laura A; Shields, Richard K
2006-03-01
Chronic spinal cord injury (SCI) induces detrimental musculoskeletal adaptations that adversely affect health status, ranging from muscle paralysis and skin ulcerations to osteoporosis. SCI rehabilitative efforts may increasingly focus on preserving the integrity of paralyzed extremities to maximize health quality using electrical stimulation for isometric training and/or functional activities. Subject-specific mathematical muscle models could prove valuable for predicting the forces necessary to achieve therapeutic loading conditions in individuals with paralyzed limbs. Although numerous muscle models are available, three modeling approaches were chosen that can accommodate a variety of stimulation input patterns. To our knowledge, no direct comparisons between models using paralyzed muscle have been reported. The three models include 1) a simple second-order linear model with three parameters and 2) two six-parameter nonlinear models (a second-order nonlinear model and a Hill-derived nonlinear model). Soleus muscle forces from four individuals with complete, chronic SCI were used to optimize each model's parameters (using an increasing and decreasing frequency ramp) and to assess the models' predictive accuracies for constant and variable (doublet) stimulation trains at 5, 10, and 20 Hz in each individual. Despite the large differences in modeling approaches, the mean predicted force errors differed only moderately (8-15% error; P=0.0042), suggesting physiological force can be adequately represented by multiple mathematical constructs. The two nonlinear models predicted specific force characteristics better than the linear model in nearly all stimulation conditions, with minimal differences between the two nonlinear models. Either nonlinear mathematical model can provide reasonable force estimates; individual application needs may dictate the preferred modeling strategy.
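The simplest of the three approaches, the second-order linear model with three parameters, can be sketched as follows; the gain, natural frequency, and damping values here are arbitrary placeholders rather than fitted soleus values, and the stimulation train is idealized as an impulse train:

```python
import numpy as np

def muscle_force(freq_hz, T=1.0, dt=1e-3, gain=5.0, wn=30.0, zeta=1.0):
    """Second-order linear model (3 parameters: gain, natural frequency wn,
    damping ratio zeta) driven by an impulse train at the stimulation
    frequency. Returns the simulated force trace."""
    n = int(T / dt)
    u = np.zeros(n)
    period = int(1.0 / freq_hz / dt)
    u[::period] = 1.0 / dt               # unit impulses at each stimulus
    x = v = 0.0
    force = np.empty(n)
    for i in range(n):                   # explicit Euler integration
        a = gain * u[i] - 2 * zeta * wn * v - wn**2 * x
        v += a * dt
        x += v * dt
        force[i] = x
    return force

f5, f20 = muscle_force(5), muscle_force(20)  # 5 Hz vs. 20 Hz trains
```

At higher stimulation frequencies successive twitches summate before decaying, so the 20 Hz train produces larger peak force than the 5 Hz train; the nonlinear models in the paper add terms precisely to capture where real paralyzed muscle departs from this linear summation.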
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways comes with a lot of variability within, ranging from difference in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of such variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, and driveway density and horizontal and vertical curvatures are randomly distributed across locations. The paper is concluded with a discussion on modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.
2015-12-01
Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows the users to simulate the magnetospheric response to different solar wind conditions including extraordinary events, like geomagnetic storms. Systematic validation of GMHD models against observations still continues to be a challenge, as well as comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which the MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after the north-south interplanetary magnetic field turning. According to validation results, none of the models emerges as an absolute leader. 
The new approach suggested for evaluating model performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.
Chen, Li; Han, Ting-Ting; Li, Tao; Ji, Ya-Qin; Bai, Zhi-Peng; Wang, Bin
2012-07-01
Due to the lack of a current wind erosion prediction model for China and the slow development of such models, this study aims to predict the wind erosion of soil and the resulting dust emission, and to develop a wind erosion prediction model for Tianjin, by investigating the structure, parameter systems and relationships among the parameter systems of wind erosion prediction models for typical areas, using the U.S. Wind Erosion Prediction System (WEPS) as a reference. Based on remote sensing techniques and test data, a parameter system was established for the wind erosion and dust emission prediction model, and a model suitable for predicting wind erosion and dust emission in Tianjin was developed. Tianjin was divided into 11,080 blocks at a resolution of 1 × 1 km², among which 7,778 dust-emitting blocks were selected. The block parameters were localized, including longitude, latitude, elevation and direction, as were the block database files, including the wind, climate, soil and management files, and the weps.run file was edited. Based on Microsoft Visual Studio 2008, secondary development was carried out in C++, and the dust fluxes of the 7,778 blocks were estimated, comprising creep and saltation fluxes, suspension fluxes and PM10 fluxes. Based on the parameters of wind tunnel experiments in Inner Mongolia and on soil measurement and climate data from the suburbs of Tianjin, the wind erosion modulus and fluxes and the dust emission modulus and fluxes were calculated for the four seasons and the whole year in the suburbs of Tianjin. In 2009, the total creep and saltation, suspension and PM10 fluxes in the suburbs of Tianjin were 2.54 × 10^6 t, 1.25 × 10^7 t and 9.04 × 10^5 t, respectively, of which the portions directed toward the central district were 5.61 × 10^5 t, 2.89 × 10^6 t and 2.03 × 10^5 t, respectively.
NASA Astrophysics Data System (ADS)
Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J.; Pitkänen, M. A.; Holli, K.; Ojala, A. T.; Hyödynmaa, S.; Järvenpää, Ritva; Lind, Bengt K.; Kappas, Constantin
2006-02-01
The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit on clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets to the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving a mean dose or equivalent uniform dose (EUD) higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived.
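The LKB formulation discussed above reduces a dose-volume histogram to a generalized EUD and maps it to a complication probability through a probit function. A minimal sketch is given below; the parameter values (n, m, TD50) and the dose-volume bins are illustrative placeholders, not a fitted set from the study:

```python
import math

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP: probit of the generalized EUD.

    doses: bin doses (Gy); volumes: fractional volumes (should sum to 1).
    n: volume-effect parameter; m: slope; td50: dose for 50% complications.
    """
    # Generalized EUD reduces the dose-volume histogram to a single dose.
    eud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (eud - td50) / (m * td50)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative parameter set and DVH (placeholders, not fitted values).
p = lkb_ntcp([10, 20, 30], [0.5, 0.3, 0.2], n=0.87, m=0.18, td50=29.2)
```

Swapping in a different published parameter set, as the study does, only changes the (n, m, TD50) arguments, which is what makes the comparison across sets straightforward.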
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
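The three uncertainty sources listed above can be propagated with a plain Monte Carlo loop: draw a perturbed measurement, perturbed parameters and a residual on each pass, and summarize the spread of predictions. A minimal sketch, assuming a hypothetical increment model and made-up error magnitudes (none of the coefficients below come from the study):

```python
import math
import random
import statistics

random.seed(42)

def growth_model(dbh, b0, b1):
    # Hypothetical diameter-increment model; form and coefficients are illustrative.
    return b0 * math.exp(-b1 * dbh)

def simulate(dbh_obs, n=10000):
    """Propagate measurement error, parameter uncertainty and residual
    variability through the growth model via Monte Carlo sampling."""
    preds = []
    for _ in range(n):
        dbh = dbh_obs + random.gauss(0, 0.5)   # (1) predictor measurement error
        b0 = random.gauss(1.2, 0.1)            # (3) parameter estimate uncertainty
        b1 = random.gauss(0.03, 0.005)
        eps = random.gauss(0, 0.2)             # (2) residual variability
        preds.append(growth_model(dbh, b0, b1) + eps)
    return statistics.mean(preds), statistics.stdev(preds)

mean, sd = simulate(25.0)
```

The resulting standard deviation combines all three sources; zeroing one source at a time (e.g. fixing b0 and b1 at their means) isolates its individual contribution.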
DOE-EPSCOR SPONSORED PROJECT FINAL REPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jianting
Concern over the quality of environmental management and restoration has motivated model development for predicting water and solute transport in the vadose zone. Soil hydraulic properties are required inputs to subsurface models of water flow and contaminant transport in the vadose zone. Computer models are now routinely used in research and management to predict the movement of water and solutes into and through the vadose zone of soils. Such models can be used successfully only if reliable estimates of the soil hydraulic parameters are available. The hydraulic parameters considered in this project consist of the saturated hydraulic conductivity and four parameters of the water retention curves. Quantifying hydraulic parameters for heterogeneous soils is both difficult and time consuming. The overall objective of this project was to better quantify soil hydraulic parameters, which are critical in predicting water flows and contaminant transport in the vadose zone, through a comprehensive and quantitative study to predict heterogeneous soil hydraulic properties and the associated uncertainties. Systematic and quantitative consideration of parametric heterogeneity and uncertainty can properly address and further reduce predictive uncertainty for contamination characterization and environmental restoration at DOE-managed sites. We conducted a comprehensive study to assess soil hydraulic parameter heterogeneity and uncertainty, and addressed a number of important issues related to soil hydraulic property characterization. The main focus centered on new methods to characterize the anisotropy of unsaturated hydraulic properties typical of layered soil formations, an uncertainty-updating method, and artificial-neural-network-based pedo-transfer functions to predict hydraulic parameters from easily available data.
The work also involved upscaling of hydraulic properties applicable to large-scale flow and contaminant transport modeling in the vadose zone, and geostatistical characterization of hydraulic parameter heterogeneity. The project also examined the validity of some simple averaging schemes for unsaturated hydraulic properties widely used in previous studies. A new suite of pedo-transfer functions was developed to improve the predictability of hydraulic parameters. We also explored the concept of tension-dependent hydraulic conductivity anisotropy of unsaturated layered soils. This project strengthens collaboration between researchers at the Desert Research Institute in the EPSCoR State of Nevada and their colleagues at the Pacific Northwest National Laboratory. The results of numerical simulations of a field injection experiment at the Hanford site in this project could be used to provide insights for the DOE mission of appropriate contamination characterization and environmental remediation.
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij, partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
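For context, the classical van der Waals mixing rules referenced above combine pure-component Peng-Robinson parameters through the binary interaction parameter kij. A minimal sketch follows; the critical constants are textbook values for a methane/n-butane pair and the kij is a placeholder, not a value from the correlation developed in the work:

```python
import math

R = 8.314  # J/(mol K)

def pr_pure_a_b(tc, pc, omega, t):
    """Pure-component Peng-Robinson parameters a(T) and b."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(t / tc))) ** 2
    a = 0.45724 * R ** 2 * tc ** 2 / pc * alpha
    b = 0.07780 * R * tc / pc
    return a, b

def vdw_mixing(x, a, b, kij):
    """Classical van der Waals one-fluid mixing rules with binary kij."""
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(len(x)) for j in range(len(x)))
    b_mix = sum(xi * bi for xi, bi in zip(x, b))
    return a_mix, b_mix

# Methane / n-butane at 300 K; kij = 0.02 is a placeholder value.
a1, b1 = pr_pure_a_b(190.6, 45.99e5, 0.012, 300.0)
a2, b2 = pr_pure_a_b(425.1, 37.96e5, 0.200, 300.0)
amix, bmix = vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2],
                        [[0.0, 0.02], [0.02, 0.0]])
```

A composition- or temperature-dependent kij correlation, as developed in the paper, would simply replace the constant entries of the kij matrix before the mixing step.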
Predictability of malaria parameters in Sahel under the S4CAST Model.
NASA Astrophysics Data System (ADS)
Diouf, Ibrahima; Rodríguez-Fonseca, Belen; Deme, Abdoulaye; Cisse, Moustapha; Ndione, Jaques-Andre; Gaye, Amadou; Suárez-Moreno, Roberto
2016-04-01
An extensive literature documents the impacts of ENSO on infectious diseases, including malaria; other studies have focused on cholera, dengue and Rift Valley Fever. This study explores the seasonal predictability of malaria outbreaks over the Sahel from antecedent SSTs in the Pacific and Atlantic basins. SST may be considered a source of predictability due to its direct influence on rainfall and temperature, and hence on related variables such as malaria parameters. In this work, the S4CAST model has been applied to study the predictability of Sahelian malaria parameters from the leading MCA covariability mode in the framework of climate and health. The results of this work will help decision makers gain better access to climate forecasts and apply them to malaria transmission risk.
ERIC Educational Resources Information Center
Jastrzembski, Tiffany S.; Charness, Neil
2007-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20;…
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
A square-force cohesion model and its extraction from bulk measurements
NASA Astrophysics Data System (ADS)
Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine
2017-11-01
Cohesive particles remain poorly understood, with order-of-magnitude differences exhibited by prior physical predictions of agglomerate size. A major obstacle lies in the absence of robust models of particle-particle cohesion, thereby precluding accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity. However, both roughness measurement and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model, where the cohesive force remains constant up to a cut-off separation. Via DEM simulations, we demonstrate the validity of the square-force model as a surrogate for more rigorous models, when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely the maximum cohesive force and the critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters of the square-force model via defluidization, owing to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible.
Curtis L. Vanderschaaf
2008-01-01
Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...
Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico
2017-12-08
In finite element (FE) models knee ligaments can be represented either by a group of one-dimensional springs, or by three-dimensional continuum elements based on segmentations. Continuum models more closely approximate the anatomy and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature, or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint, and evaluated the effect of literature-based versus specimen-specific optimized material parameters. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, each collateral ligament was modelled with three single-element bundles and each cruciate ligament with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element bundles. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcome.
Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...
2017-01-24
An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.
Prediction and Computation of Corrosion Rates of A36 Mild Steel in Oilfield Seawater
NASA Astrophysics Data System (ADS)
Paul, Subir; Mondal, Rajdeep
2018-04-01
The parameters which primarily control the corrosion rate and life of steel structures are numerous, and they vary across different oceans and seawaters as well as with depth. While the effect of a single parameter on corrosion behavior is known, the conjoint effects of multiple parameters and the interrelationships among the variables are complex; millions of experiments would be required to understand the mechanism of corrosion failure. Statistical modeling such as an ANN is one solution that can reduce the amount of experimentation. An ANN model was developed using 170 sets of experimental data for A36 mild steel in simulated seawater, with the corrosion-influencing parameters SO4^2-, Cl^-, HCO3^-, CO3^2-, CO2, O2, pH and temperature as inputs and the corrosion current as output. About 60% of the experimental data were used to train the model, 20% for testing and 20% for validation. The model was developed by programming in Matlab, and 80% of the validation data could predict the corrosion rate correctly. Corrosion rates predicted by the ANN model are displayed in 3D graphics, which show many interesting conjoint effects of multiple variables that might suggest new ideas for mitigating corrosion by simply modifying the chemistry of the constituents. The model could also predict the corrosion rates of some real systems.
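A regression network of the kind described, with a train/test/validation-style split, can be sketched in miniature. The toy one-hidden-layer regressor below is a stand-in only; the architecture, learning rate and data are illustrative assumptions, not the study's Matlab model or its seawater dataset:

```python
import math
import random

random.seed(3)

def train_mlp(X, y, hidden=4, lr=0.05, epochs=2000):
    """Tiny one-hidden-layer regression net (tanh hidden, linear output),
    trained with plain online gradient descent on squared error."""
    n_in = len(X[0])
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(hidden)]
            out = sum(w * hj for w, hj in zip(W2, h)) + b2
            err = out - t
            for j in range(hidden):
                grad_h = err * W2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    W1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
             for j in range(hidden)]
        return sum(w * hj for w, hj in zip(W2, h)) + b2
    return predict

# Toy data and a 60/40 hold-out split standing in for the 60/20/20 scheme.
data = [([i / 10.0], math.sin(i / 10.0)) for i in range(20)]
random.shuffle(data)
train, rest = data[:12], data[12:]
predict = train_mlp([x for x, _ in train], [t for _, t in train])
```

In practice the eight water-chemistry inputs would replace the single toy feature, and the corrosion current would replace the sine target.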
NASA Astrophysics Data System (ADS)
Zhan, Liwei; Li, Chengwei
2017-02-01
A hybrid PSO-SVM-based model is proposed to predict the friction coefficient between aircraft tire and coating. The presented hybrid model combines a support vector machine (SVM) with the particle swarm optimization (PSO) technique. SVM has been successfully adopted to solve regression problems, and its regression accuracy depends strongly on the choice of parameters such as the regularization constant C, the RBF kernel parameter γ and the epsilon parameter ε used in the SVM training procedure. However, SVM-based prediction of the friction coefficient between aircraft tire and coating has yet to be explored. The experiments reveal that drop height and tire rotational speed are the factors affecting the friction coefficient. With this in mind, the friction coefficient can be predicted by the hybrid PSO-SVM-based model from the measured friction coefficients between aircraft tire and coating. To compare regression accuracy, a grid search (GS) method and a genetic algorithm (GA) are used to optimize the relevant parameters (C, γ and ε), respectively. Regression accuracy is measured by the coefficient of determination (R²). The results show that the hybrid PSO-RBF-SVM-based model has better accuracy than the GS-RBF-SVM- and GA-RBF-SVM-based models. The agreement of this model (PSO-RBF-SVM) with the experimental data confirms its good performance.
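The PSO stage of such a hybrid searches a box of candidate (C, γ, ε) values for the vector minimizing a validation error. A minimal sketch of the optimizer itself, with a stand-in quadratic objective in place of the SVM training loop (all hyperparameters and bounds below are illustrative assumptions):

```python
import random

random.seed(0)

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus pulls toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in objective; in the hybrid model this would be the SVM's
# cross-validation error as a function of (C, gamma, epsilon).
best, best_f = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                   bounds=[(-5, 5), (-5, 5)])
```

The grid-search and genetic-algorithm baselines from the paper would plug into the same objective, which is what makes the R² comparison between the three tuners like-for-like.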
NASA Astrophysics Data System (ADS)
Shi, Ming F.; Zhang, Li; Zhu, Xinhai
2016-08-01
The Yoshida nonlinear isotropic/kinematic hardening material model is often selected in forming simulations where an accurate springback prediction is required. Many successful application cases on industrial-scale automotive components using advanced high strength steels (AHSS) have been reported to give better springback predictions. Several issues have been raised recently in the use of the model for higher strength AHSS, including the use of two-C vs. one-C material parameters in the Armstrong-Frederick (AF) model, the original Yoshida model vs. the Yoshida model with a modified hardening law, and a constant Young's modulus vs. a Young's modulus that decays as a function of plastic strain. In this paper, an industrial-scale automotive component using 980 MPa strength material is selected to study the effect of the two-C and one-C material parameters in the AF model on both forming and springback prediction using the Yoshida model with and without the modified hardening law. The effect of a decayed Young's modulus on springback prediction for AHSS is also evaluated. In addition, the limitations of material parameters determined from tension and compression tests without multiple-cycle tests are discussed for components undergoing several bending and unbending deformations.
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory
ERIC Educational Resources Information Center
Anil, Duygu
2008-01-01
In this study, the predictive power of item characteristics based on experts' predictions, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed according to classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…
Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L
2017-05-07
In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.
Fieberg, J.; Jenkins, Kurt J.
2005-01-01
Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
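First-order Sobol' indices of the kind used here can be estimated with a Saltelli-style sampling scheme: evaluate the model on two independent sample matrices and on hybrids that swap one input column at a time. A minimal sketch for an additive toy model with independent uniform inputs (the model and sample size are illustrative, not the elk-wolf simulation):

```python
import random
import statistics

random.seed(1)

def sobol_first_order(model, dim, n=20000):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices
    for independent Uniform(0, 1) inputs."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    var = statistics.variance(fA + fB)  # total output variance
    indices = []
    for i in range(dim):
        # A with column i taken from B isolates the main effect of input i.
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = statistics.mean(fb * (fabi - fa)
                              for fb, fabi, fa in zip(fB, fABi, fA)) / var
        indices.append(s_i)
    return indices

# Toy model: the output is twice as sensitive to x2 as to x1,
# so the variance shares should come out near S1 = 0.2, S2 = 0.8.
s = sobol_first_order(lambda x: x[0] + 2.0 * x[1], dim=2)
```

For a simulation model like the elk-wolf system, `model` would wrap one run of the simulator and the uniform draws would be mapped through each parameter's uncertainty distribution.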
Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling
NASA Astrophysics Data System (ADS)
Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean
2018-01-01
Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of water column, with over 2 °C bias at some of the mooring stations. HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of water column at all mooring station locations. 
While HYCOM predictions of ocean current speed were generally more accurate than BRAN, BRAN predictions of both ocean current speed and direction were more accurate than HYCOM along the southeast coast of Australia and Tasmania. This study identified important inaccuracies in the hydrodynamic models' estimates of real ocean parameters on time scales relevant to larval dispersal studies. These findings highlight the importance of the choice and validation of hydrodynamic models, and call for estimates of such biases to be incorporated in dispersal studies.
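The kind of model-versus-mooring comparison described above reduces to a few standard verification metrics. A minimal sketch, with entirely hypothetical temperature series (the exact skill scores used in the study are not specified here):

```python
import numpy as np

# Hypothetical 5-day series of observed vs. modelled sea temperature (°C);
# the model has a constant +2 °C warm bias, as reported for some moorings.
obs = np.array([14.0, 15.0, 16.0, 17.0, 18.0])
mod = np.array([16.0, 17.0, 18.0, 19.0, 20.0])

bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))

# Murphy skill score against a climatology (mean of observations) reference:
# SS = 1 - MSE_model / MSE_climatology; SS <= 0 means no skill over climatology.
mse_model = np.mean((mod - obs) ** 2)
mse_clim = np.mean((np.mean(obs) - obs) ** 2)
skill = 1.0 - mse_model / mse_clim

print(bias, rmse, skill)   # 2.0 2.0 -1.0
```

Here the systematic bias alone is enough to make the model score worse than climatology, which is why bias correction matters before feeding currents and temperature into a dispersal model.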
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
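The essence of one-step ("global") regression is that all isothermal curves are fitted simultaneously with a single shared parameter vector, instead of fitting each curve and then regressing the rate parameters on temperature. A minimal sketch with a hypothetical log-linear primary model and a Ratkowsky square-root secondary model (not necessarily the model pairs shipped with IPMP-Global Fit):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Hypothetical secondary (Ratkowsky square-root) model: sqrt(mu) = b*(T - T0)
def mu(b, T0, T):
    return (b * (T - T0)) ** 2

# Hypothetical primary model: log10 N(t) = log10 N0 + mu(T) * t
b_true, T0_true, logN0_true = 0.02, 2.0, 3.0
temps = np.array([10.0, 15.0, 20.0, 25.0])
times = np.linspace(0.0, 24.0, 7)

# Synthetic isothermal growth curves with measurement noise.
data = {T: logN0_true + mu(b_true, T0_true, T) * times
           + rng.normal(0.0, 0.05, times.size) for T in temps}

# One-step fit: residuals from every temperature are stacked and minimized
# together, so the global error is what gets minimized.
def residuals(theta):
    b, T0, logN0 = theta
    return np.concatenate([data[T] - (logN0 + mu(b, T0, T) * times)
                           for T in temps])

fit = least_squares(residuals, x0=[0.05, 0.0, 1.0])
b_hat, T0_hat, logN0_hat = fit.x
```

The shared-parameter formulation is also why the choice of initial parameters matters, as the abstract notes: a poor starting point can strand the global optimizer in a local minimum.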
Estimation of soil hydraulic properties with microwave techniques
NASA Technical Reports Server (NTRS)
Oneill, P. E.; Gurney, R. J.; Camillo, P. J.
1985-01-01
Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for saturated hydraulic conductivity, saturated matrix potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring and the other parameters agreed with values in the literature.
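The calibration loop described above ("vary the hydraulic parameters until predicted temperatures agree with the remotely sensed values") is, in its simplest one-parameter form, a root-finding problem. The forward model and numbers below are purely hypothetical placeholders for the coupled soil-physics/emission model:

```python
import math

# Hypothetical monotone forward model: predicted brightness temperature (K)
# as a function of saturated hydraulic conductivity K_s (cm/h).
def forward(K_s):
    return 280.0 + 5.0 * math.log10(K_s)

observed_tb = 278.0   # hypothetical remotely sensed brightness temperature

# Bisection on a log scale (conductivity spans orders of magnitude):
# shrink the bracket until the predicted temperature matches the observation.
lo, hi = 1e-3, 10.0
for _ in range(60):
    mid = math.sqrt(lo * hi)          # geometric midpoint
    if forward(mid) < observed_tb:
        lo = mid
    else:
        hi = mid
K_est = math.sqrt(lo * hi)

print(round(K_est, 4))   # 0.3981, i.e. 10**(-0.4)
```

Real inversions fit several parameters at once against a time series of temperatures, so a multidimensional least-squares search replaces this one-dimensional bracket, but the agree-then-stop logic is the same.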
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardiansyah, Deni
2016-09-15
Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs). Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters and were used to construct true mathematical patient phantoms (MPPs). Biokinetics of 150 MBq 68Ga-DOTATATE-PET was simulated with different noise levels [fractional standard deviation (FSD) 10%, 1%, 0.1%, and 0.01%], and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as 30 min infusion of 90Y-DOTATATE of 3.3 GBq in both true and predicted MPPs. Prediction accuracy was then calculated as relative variability v_organ between TIACs from both MPPs. Results: Large variability values of one time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%. Conclusions: 68Ga-DOTATATE-PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two-time-point measurement at 1 and 4 h p.i. with a noise up to FSD = 1% allows an accurate prediction of the TIACs in kidneys.
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data.
The comparison of the mean and variance of the 3-year (2002-2004) observed data with those predicted by the selected best models shows that the boron models from the ARIMA modeling approach can be used reliably, since the predicted values preserve the basic statistics of the observed data. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
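The identification step above (fit candidate orders, pick the minimum-AIC model) can be sketched without full ARIMA machinery by fitting pure autoregressive models via ordinary least squares; the AR(2) series below is a synthetic stand-in for a boron record, not the Büyük Menderes data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(2) series: y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + eps_t
n = 500
y = np.zeros(n + 2)
eps = rng.normal(0.0, 1.0, n + 2)
for t in range(2, n + 2):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + eps[t]
y = y[2:]

def fit_ar(y, p):
    """OLS fit of an AR(p) model; returns (coefficients, AIC)."""
    Y = y[p:]
    X = np.column_stack([y[p - i:len(y) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ coef) ** 2)
    aic = len(Y) * np.log(rss / len(Y)) + 2 * (p + 1)
    return coef, aic

# Identification: pick the order with minimum AIC, as in Box-Jenkins.
aics = {p: fit_ar(y, p)[1] for p in range(1, 5)}
best_p = min(aics, key=aics.get)
coef2, _ = fit_ar(y, 2)
```

Diagnostic checking would then test the residuals of the selected model for independence and normality, exactly as described in the abstract.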
Using state-space models to predict the abundance of juvenile and adult sea lice on Atlantic salmon.
Elghafghuf, Adel; Vanderstichel, Raphael; St-Hilaire, Sophie; Stryhn, Henrik
2018-04-11
Sea lice are marine parasites affecting salmon farms, and are considered one of the most costly pests of the salmon aquaculture industry. Infestations of sea lice on farms significantly increase opportunities for the parasite to spread in the surrounding ecosystem, making control of this pest a challenging issue for salmon producers. The complexity of controlling sea lice on salmon farms requires frequent monitoring of the abundance of different sea lice stages over time. Industry-based data sets of counts of lice are amenable to multivariate time-series data analyses. In this study, two sets of multivariate autoregressive state-space models were applied to Chilean sea lice data from six Atlantic salmon production cycles on five isolated farms (at least 20 km seaway distance away from other known active farms), to evaluate the utility of these models for predicting sea lice abundance over time on farms. The models were constructed with different parameter configurations, and the analysis demonstrated large heterogeneity between production cycles for the autoregressive parameter, the effects of chemotherapeutant bath treatments, and the process-error variance. A model allowing for different parameters across production cycles had the best fit and the smallest overall prediction errors. However, pooling information across cycles for the drift and observation error parameters did not substantially affect model performance, thus reducing the number of necessary parameters in the model. Bath treatments had strong but variable effects for reducing sea lice burdens, and these effects were stronger for adult lice than juvenile lice. Our multivariate state-space models were able to handle different sea lice stages and provide predictions for sea lice abundance with reasonable accuracy up to five weeks out. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
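A univariate version of the autoregressive state-space idea can be sketched with a Kalman filter: a latent abundance follows an AR(1) process with drift, and weekly counts observe it with error. The parameters below are fixed, illustrative values (in the study they are estimated, per stage and per production cycle):

```python
import numpy as np

rng = np.random.default_rng(7)

# State-space model: x_t = a*x_{t-1} + u + w_t (process),  y_t = x_t + v_t (counts)
a, u, q, r = 0.8, 0.2, 0.1, 1.0   # illustrative: AR term, drift, process/obs variance
n = 300

x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + u + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

# Kalman filter producing one-step-ahead predictions of the next count.
m, P = 0.0, 1.0
preds = np.zeros(n)
for t in range(1, n):
    m_pred, P_pred = a * m + u, a * a * P + q      # predict step
    preds[t] = m_pred
    K = P_pred / (P_pred + r)                      # Kalman gain
    m = m_pred + K * (y[t] - m_pred)               # update with observation y_t
    P = (1 - K) * P_pred

kf_rmse = np.sqrt(np.mean((preds[1:] - y[1:]) ** 2))
naive_rmse = np.sqrt(np.mean((y[:-1] - y[1:]) ** 2))
```

Because the filter averages over observation noise, its one-step predictions beat the naive "next count equals last count" forecast; the multivariate models in the study extend this with multiple lice stages and treatment covariates.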
NASA Astrophysics Data System (ADS)
Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.
2015-07-01
Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters that are only weakly constrained by observations and that can potentially lead to a crash of the simulation. A recent study by Lucas et al. (2013) showed that machine learning methods can be used for predicting which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the dataset used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and the relative importance of variables, hence only a cross-validated approach can deliver a robust estimate of performance and relevance of variables.
Kesorn, Kraisak; Ongruk, Phatsavee; Chompoosri, Jakkrawarn; Phumee, Atchara; Thavara, Usavadee; Tawatsin, Apiwat; Siriyasatien, Padet
2015-01-01
Background In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. Methods and Findings Areas with high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of the following three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larvae mosquito infection rates were significantly positively associated with the morbidity rate. Thus, the increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and the prediction performance increased when those predictors were integrated into a predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% when used on the test set data, and these parameters led to the highest performance compared to state-of-the-art forecasting models. Conclusions The infection rates of the Ae. aegypti female mosquitoes and larvae improved the morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. 
We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and mean absolute error (MAE). PMID:25961289
Retrospective forecast of ETAS model with daily parameters estimate
NASA Astrophysics Data System (ADS)
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type of Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real time and the revised catalogs. The main cause of the failure was in the underestimation of the forecasted events, due to model parameters maintained fixed during the test. Moreover, the absence in the learning catalog of an event similar to the magnitude of the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters not suitable to describe the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of same model where the parameters remain fixed during the test time.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
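The mean structure of the dual change score model is simple enough to sketch directly: each change score combines a constant-change (slope) component with a proportional-change (autoproportion) component. This deterministic sketch omits latent variability and measurement error, and the parameter values are illustrative:

```python
# Dual change score mean structure:
#   delta_t = slope + beta * y_{t-1}
#   y_t     = y_{t-1} + delta_t
slope = 2.0      # constant change component (slope factor mean)
beta = -0.2      # autoproportion coefficient
y = [0.0]
for t in range(50):
    delta = slope + beta * y[-1]
    y.append(y[-1] + delta)

# With beta < 0 the trajectory decelerates toward the equilibrium -slope/beta,
# producing the classic leveling-off growth curve.
equilibrium = -slope / beta
print(round(y[-1], 3), equilibrium)
```

The interplay of these two parameters is exactly why incorrectly constraining either of them over time distorts the recovered trajectory shape even when the mean curve still looks well fit.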
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. 
The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
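The Bayesian calibration step described above (iid Gaussian daily misfits, uninformative priors, Markov Chain Monte Carlo sampling) can be sketched with a random-walk Metropolis sampler on a one-parameter stand-in for the NEE model; the linear model, noise level, and all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observations": y = a*x + Gaussian noise with known instrument
# error sigma, a one-parameter stand-in for the daily NEE misfit model.
a_true, sigma = 1.5, 0.2
x = np.linspace(0.0, 1.0, 50)
y = a_true * x + rng.normal(0.0, sigma, x.size)

def log_post(a):
    # Flat (uninformative) prior; iid Gaussian likelihood with known variance.
    return -0.5 * np.sum((y - a * x) ** 2) / sigma**2

# Random-walk Metropolis sampler.
samples = []
a, lp = 1.0, log_post(1.0)
for _ in range(5000):
    prop = a + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
        a, lp = prop, lp_prop
    samples.append(a)
post = np.array(samples[1000:])   # discard burn-in
```

A parameter that the data inform (like the slope here) shows a posterior much narrower than its prior; a parameter like LEAFALL, which the abstract reports as uninformed by the NEE data, would keep an essentially prior-width posterior despite its large sensitivity index.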
On the predictiveness of single-field inflationary models
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Patil, Subodh P.; Trott, Michael
2014-06-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_s, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cutoff-scale arguments by further developing the formalism of a non-linear realization of SU_L(2) × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post-BICEP2 attempts to modify the Higgs Inflation scenario.
National Variation in Crop Yield Production Functions
NASA Astrophysics Data System (ADS)
Devineni, N.; Rising, J. A.
2017-12-01
A new multilevel model for yield prediction at the county scale using regional climate covariates is presented in this paper. A new crop specific water deficit index, growing degree days, extreme degree days, and time-trend as an approximation of technology improvements are used as predictors to estimate annual crop yields for each county from 1949 to 2009. Every county in the United States is allowed to have unique parameters describing how these weather predictors are related to yield outcomes. County-specific parameters are further modeled as varying according to climatic characteristics, allowing the prediction of parameters in regions where crops are not currently grown and into the future. The structural relationships between crop yield and regional climate as well as trends are estimated simultaneously. All counties are modeled in a single multilevel model with partial pooling to automatically group and reduce estimation uncertainties. The model captures up to 60% of the variability in crop yields after removing the effect of technology, does well in out of sample predictions and is useful in relating the climate responses to local bioclimatic factors. We apply the predicted growing models in a cost-benefit analysis to identify the most economically productive crop in each county.
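The partial-pooling idea at the heart of the multilevel model can be sketched with the simplest shrinkage estimator: each county mean is pulled toward the grand mean with a weight that depends on how much data the county has. The counties, yields, and the shrinkage constant below are all hypothetical:

```python
import numpy as np

# Hypothetical county yield records (t/ha); counties differ greatly in how
# many observations they have, so unpooled means are unequally reliable.
counties = {
    "A": np.array([3.1, 3.3, 2.9, 3.2, 3.0, 3.4, 3.1, 3.2]),
    "B": np.array([4.0, 4.4]),
    "C": np.array([2.0]),
}
grand_mean = np.mean(np.concatenate(list(counties.values())))

# Partial pooling in its simplest form: shrink each county mean toward the
# grand mean with weight n / (n + k), where k acts like a prior sample size.
k = 4.0
pooled = {name: (len(v) * v.mean() + k * grand_mean) / (len(v) + k)
          for name, v in counties.items()}
```

Counties with little data (like "C") are shrunk strongly toward the grand mean, while data-rich counties barely move; the full model does the same thing but shrinks regression parameters toward climate-dependent group values, which is what lets it predict parameters for counties where a crop is not currently grown.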
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Ginn, T. R.; Rousseau-Gueutin, P.; Leray, S.
2015-06-01
While central in groundwater resources and contaminant fate, Transit Time Distributions (TTDs) are never directly accessible from field measurements but always deduced from a combination of tracer data and more or less involved models. We evaluate the predictive capabilities of approximate distributions (Lumped Parameter Models, abbreviated as LPMs) instead of fully developed aquifer models. We develop a generic assessment methodology based on synthetic aquifer models to establish references for observable quantities such as tracer concentrations and prediction targets such as groundwater renewal times. Candidate LPMs are calibrated on the observable tracer concentrations and used to infer renewal time predictions, which are compared with the reference ones. This methodology is applied to the produced crystalline aquifer of Plœmeur (Brittany, France) where flows leak through a micaschists aquitard to reach a sloping aquifer where they radially converge to the producing well, yielding broad rather than multi-modal TTDs. One-, two- and three-parameter LPMs were calibrated to a corresponding number of simulated reference anthropogenic tracer concentrations (CFC-11, 85Kr and SF6). Extensive statistical analysis over the aquifer shows that a good fit of the anthropogenic tracer concentrations is neither a necessary nor a sufficient condition to reach acceptable predictive capability. Prediction accuracy is however strongly conditioned by the use of a priori relevant LPMs. Only adequate LPM shapes yield unbiased estimations. In the case of Plœmeur, relevant LPMs should have two parameters to capture the mean and the standard deviation of the residence times and cover the first few decades [0; 50 years]. Inverse Gaussian and shifted exponential models performed equally well for the wide variety of the reference TTDs, from strongly peaked in recharge zones where flows are diverging to broadly distributed in more converging zones. 
When using two sufficiently different atmospheric tracers like CFC-11 and 85Kr, groundwater renewal time predictions are accurate at 1-5 years for estimating mean transit times of some decades (10-50 years). 1-parameter LPMs calibrated on a single atmospheric tracer lead to substantially larger errors of the order of 10 years, while 3-parameter LPMs calibrated with a third atmospheric tracer (SF6) do not improve the prediction capabilities. Based on a specific site, this study highlights the high predictive capacity of two atmospheric tracers on the same time range with sufficiently different atmospheric concentration histories.
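Mechanically, an LPM predicts a well-water tracer concentration by convolving the atmospheric input history with the assumed transit time distribution. A minimal sketch with an exponential TTD and a hypothetical ramp-shaped input chronicle (steady flow and conservative tracer assumed):

```python
import numpy as np

# Exponential lumped parameter model with mean transit time tau_m (years):
# g(tau) = exp(-tau / tau_m) / tau_m
tau_m = 20.0
dtau = 0.1
tau = np.arange(0.0, 200.0, dtau)
g = np.exp(-tau / tau_m) / tau_m

# The TTD integrates to 1 and its first moment recovers tau_m.
mass = g.sum() * dtau
mean_tt = (tau * g).sum() * dtau

# Predicted concentration at the well: convolution of a hypothetical rising
# atmospheric input chronicle with the TTD.
years = np.arange(1940.0, 2001.0, 1.0)
c_in = np.clip(years - 1950.0, 0.0, None)     # ramp input starting in 1950
c_well = np.array([(np.interp(t - tau, years, c_in, left=0.0) * g).sum() * dtau
                   for t in years])
```

Calibrating tau_m means adjusting it until c_well matches the measured concentrations; with two tracers of different input shapes, a two-parameter TTD (e.g. inverse Gaussian) can match both mean and spread, which is the abstract's central point.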
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids by considering reduced temperature, acentric factor and molecular weight of ionic liquids, and nanoparticle concentration as input parameters. In order to accomplish modeling, 528 experimental data points extracted from the literature have been divided into training and testing subsets. The training set has been used to estimate model coefficients and the testing set has been applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
Modelling decremental ramps using 2- and 3-parameter "critical power" models.
Morton, R Hugh; Billat, Veronique
2013-01-01
The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.
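The 2-parameter case is compact enough to state in code. At constant power, time to exhaustion is t = W'/(P - CP). On a decremental ramp P(t) = P0 - S·t, W' is spent at rate P(t) - CP while P(t) > CP, so exhaustion requires (P0-CP)·t - S·t²/2 = W' to have a root before the ramp reaches CP. The parameter values below are illustrative, not from the paper:

```python
import math

def time_to_exhaustion_constant(P, CP, W):
    """2-parameter CP model at constant power P: t = W' / (P - CP)."""
    return W / (P - CP) if P > CP else math.inf

def time_to_exhaustion_decremental(P0, S, CP, W):
    """Decremental ramp P(t) = P0 - S*t: solve (P0-CP)*t - S*t**2/2 = W'.
    A negative discriminant means W' is never fully depleted before the
    ramp drops to CP, i.e. no exhaustion occurs."""
    disc = (P0 - CP) ** 2 - 2.0 * S * W
    if disc < 0:
        return None
    return ((P0 - CP) - math.sqrt(disc)) / S   # earlier of the two roots

CP, W = 200.0, 20000.0   # illustrative: CP in watts, W' in joules
print(time_to_exhaustion_constant(300.0, CP, W))          # 200.0 s
print(time_to_exhaustion_decremental(300.0, 0.2, CP, W))  # ~276.4 s
print(time_to_exhaustion_decremental(300.0, 2.0, CP, W))  # None: ramp falls too fast
```

The discriminant condition (P0-CP)² ≥ 2·S·W' is exactly the paper's dividing line between starting intensity/gradient combinations that do and do not lead to exhaustion before the ramp reaches CP.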
The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide
Folly, Walter Sydney Dutra
2011-01-01
Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
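For concreteness, one widely used published coupling function (the Newell et al. 2007 form, shown here only as an example of the concept; it is not necessarily the new version developed in this work) combines solar wind speed, transverse IMF magnitude, and IMF clock angle:

```python
import math

def newell_coupling(v, by, bz):
    """Example solar wind coupling function (Newell et al. 2007 form):
    dPhi/dt ~ v**(4/3) * B_T**(2/3) * sin(theta_c/2)**(8/3),
    where B_T = sqrt(By**2 + Bz**2) is the transverse IMF magnitude (nT),
    theta_c = atan2(By, Bz) is the IMF clock angle, and v is in km/s."""
    bt = math.hypot(by, bz)
    theta = math.atan2(by, bz)
    return v ** (4.0 / 3.0) * bt ** (2.0 / 3.0) * abs(math.sin(theta / 2.0)) ** (8.0 / 3.0)

south = newell_coupling(450.0, 0.0, -5.0)   # southward IMF: strong driving
north = newell_coupling(450.0, 0.0, 5.0)    # northward IMF: coupling vanishes
```

The sin(theta_c/2) factor captures the essential physics: a southward IMF (clock angle near 180°) reconnects efficiently with the dayside magnetosphere, while a purely northward IMF drives essentially no activity, which is why the functional form of the coupling function matters so much for prediction reliability.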
A prediction model of signal degradation in LMSS for urban areas
NASA Technical Reports Server (NTRS)
Matsudo, Takashi; Minamisono, Kenichi; Karasawa, Yoshio; Shiokawa, Takayasu
1993-01-01
A prediction model of signal degradation in a Land Mobile Satellite Service (LMSS) for urban areas is proposed. This model treats shadowing effects caused by buildings statistically and can predict a Cumulative Distribution Function (CDF) of signal diffraction losses in urban areas as a function of system parameters, such as frequency and elevation angle, and environmental parameters, such as the number of building stories. In order to examine the validity of the model, we compared the percentage of locations where diffraction losses were smaller than 6 dB obtained by the CDF with satellite visibility measured by a radiometer. As a result, it was found that the proposed model is useful for estimating the feasibility of providing LMSS in urban areas.
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorous (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. 
Such insight can then be used to guide future data collection and model development and evaluation efforts.
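The comparison of parameter uncertainty versus input uncertainty described above can be sketched with Monte-Carlo propagation through a toy linear regression: vary one source at a time and compare output variances. The equation, coefficients, and uncertainty magnitudes below are invented for illustration and are not APLE's.

```python
import random
import statistics

random.seed(42)

def p_loss(b0, b1, runoff):
    # Hypothetical linear regression for P loss as a function of runoff
    # (illustrative only; not one of APLE's internal equations).
    return b0 + b1 * runoff

N = 10000
# Nominal values and standard deviations (all illustrative).
B0, B0_SD = 0.5, 0.05   # regression intercept and its uncertainty
B1, B1_SD = 0.8, 0.20   # regression slope and its uncertainty
X, X_SD = 2.0, 0.10     # runoff input and its uncertainty

# Vary only the regression parameters (input fixed at its nominal value).
par_only = [p_loss(random.gauss(B0, B0_SD), random.gauss(B1, B1_SD), X)
            for _ in range(N)]
# Vary only the model input (parameters fixed).
inp_only = [p_loss(B0, B1, random.gauss(X, X_SD)) for _ in range(N)]
# Vary both sources together.
both = [p_loss(random.gauss(B0, B0_SD), random.gauss(B1, B1_SD),
               random.gauss(X, X_SD)) for _ in range(N)]

var_par = statistics.variance(par_only)
var_inp = statistics.variance(inp_only)
var_both = statistics.variance(both)
```

Here the parameter uncertainty dominates; in the study, which source dominates depends on the P loss pathway and the field's management and characteristics.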
Mathematical modeling of a thermovoltaic cell
NASA Technical Reports Server (NTRS)
White, Ralph E.; Kawanami, Makoto
1992-01-01
A new type of battery named 'Vaporvolt' cell is in the early stage of its development. A mathematical model of a CuO/Cu 'Vaporvolt' cell is presented that can be used to predict the potential and the transport behavior of the cell during discharge. A sensitivity analysis of the various transport and electrokinetic parameters indicates which parameters have the most influence on the predicted energy and power density of the 'Vaporvolt' cell. This information can be used to decide which parameters should be optimized or determined more accurately through further modeling or experimental studies. The optimal thicknesses of electrodes and separator, the concentration of the electrolyte, and the current density are determined by maximizing the power density. These parameter sensitivities and optimal design parameter values will help in the development of a better CuO/Cu 'Vaporvolt' cell.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
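One geometric relation of the kind the paper derives can be written down for the simplest configuration, a vertical fault: the rupture distance follows from the Joyner-Boore distance and the depth to the top of rupture by the Pythagorean theorem. The function name and example values below are ours; the paper gives the general relations for dipping faults.

```python
import math

def rrup_vertical_fault(rjb, ztor):
    """Rupture distance for a vertical fault (dip = 90 degrees): the closest
    point on the rupture lies directly below the point defining the
    Joyner-Boore distance rjb, at the depth to top of rupture ztor (km)."""
    return math.sqrt(rjb ** 2 + ztor ** 2)

# A site 10 km from the surface projection of a rupture whose top is 3 km deep.
r = rrup_vertical_fault(10.0, 3.0)
```

For a site directly above the rupture (rjb = 0), the rupture distance reduces to ztor, as expected.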
Ensemble Kalman Filter Data Assimilation in a Solar Dynamo Model
NASA Astrophysics Data System (ADS)
Dikpati, M.
2017-12-01
Despite great advancement in solar dynamo models since the first model by Parker in 1955, there remain many challenges in the quest to build a dynamo-based prediction scheme that can accurately predict solar cycle features. One of these challenges is to implement modern data assimilation techniques, which have been used in oceanic and atmospheric prediction models. Development of data assimilation in solar models is in its early stages. Recently, observing system simulation experiments (OSSEs) have been performed using Ensemble Kalman Filter data assimilation, in the framework of the Data Assimilation Research Testbed of NCAR (NCAR-DART), for estimating parameters in a solar dynamo model. I will demonstrate how the selection of ensemble size, number of observations, amount of error in observations, and choice of assimilation interval play an important role in parameter estimation. I will also show how the results of parameter reconstruction improve when accuracy in low-latitude observations is increased, despite large errors in polar region data. I will then describe how implementation of data assimilation in a solar dynamo model can bring more accuracy to the prediction of polar fields in the North and South hemispheres during the declining phase of cycle 24. Recent evidence indicates that the strength of the Sun's polar field during the cycle minima might be a reliable predictor of the next sunspot cycle's amplitude; it is therefore crucial to accurately predict the polar field strength and pattern.
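A minimal sketch of Ensemble Kalman Filter parameter estimation, the technique described above, using a perturbed-observation update of a single unknown amplitude in a toy forward model. The forward model, noise levels, and ensemble size are illustrative assumptions, not the dynamo setup.

```python
import random
import statistics

random.seed(1)

def model(theta, t):
    # Toy forward model standing in for the dynamo code:
    # the unknown parameter theta scales a known signal.
    return theta * (1.0 + 0.5 * t)

TRUE_THETA = 2.0
OBS_ERR = 0.1
N_ENS = 50

# Prior ensemble of the unknown parameter.
ensemble = [random.gauss(1.0, 0.5) for _ in range(N_ENS)]

for t in range(10):  # assimilate one observation per "cycle"
    obs = model(TRUE_THETA, t) + random.gauss(0, OBS_ERR)
    preds = [model(th, t) for th in ensemble]
    th_mean = statistics.mean(ensemble)
    y_mean = statistics.mean(preds)
    cov_ty = sum((th - th_mean) * (y - y_mean)
                 for th, y in zip(ensemble, preds)) / (N_ENS - 1)
    var_y = statistics.variance(preds)
    gain = cov_ty / (var_y + OBS_ERR ** 2)  # Kalman gain
    # Perturbed-observation EnKF update of each ensemble member.
    ensemble = [th + gain * (obs + random.gauss(0, OBS_ERR) - y)
                for th, y in zip(ensemble, preds)]

estimate = statistics.mean(ensemble)
```

As the abstract notes, in practice the ensemble size, number of observations, observation error, and assimilation interval all affect how well such an estimate converges.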
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
Predictive control of thermal state of blast furnace
NASA Astrophysics Data System (ADS)
Barbasova, T. A.; Filimonova, A. A.
2018-05-01
The work describes the structure of a model for predictive control of the thermal state of a blast furnace. The proposed model contains the following input parameters: coke rate; theoretical combustion temperature, comprising natural gas consumption, blasting temperature, humidity, oxygen, and blast furnace cooling water; and blast furnace gas utilization rate. The output parameter is the cast iron temperature. The cast iron temperature was determined following system identification with the Hammerstein-Wiener model. The cast iron temperature stabilization problem was then solved for the calculated values of the process parameters of the target area of the respective blast furnace operation mode.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
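The hallmark of sloppiness described above -- sensitivities to different parameter combinations spanning many decades -- can be reproduced with a toy sum-of-two-exponentials model: the eigenvalues of its Fisher information matrix (J^T J) differ by orders of magnitude even with only two parameters. The model, sample times, and parameter values below are illustrative.

```python
import math

def residual_jacobian(theta1, theta2, times):
    # Rows of the Jacobian of y(t) = exp(-theta1*t) + exp(-theta2*t)
    # with respect to (theta1, theta2).
    return [(-t * math.exp(-theta1 * t), -t * math.exp(-theta2 * t))
            for t in times]

times = [0.1 * k for k in range(1, 51)]
J = residual_jacobian(1.0, 1.2, times)  # two similar decay rates

# Fisher information matrix J^T J for the two parameters.
a = sum(j1 * j1 for j1, _ in J)
b = sum(j1 * j2 for j1, j2 in J)
c = sum(j2 * j2 for _, j2 in J)

# Eigenvalues of the 2x2 symmetric matrix [[a, b], [b, c]].
mean = (a + c) / 2
disc = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam_big, lam_small = mean + disc, mean - disc
ratio = lam_big / lam_small
```

The stiff direction (roughly the sum of the two rates) is constrained orders of magnitude better than the sloppy direction (their difference), which is why linear uncertainty approximations along sloppy directions can be dangerous.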
Ratzinger, Franz; Dedeyan, Michel; Rammerstorfer, Matthias; Perkmann, Thomas; Burgmann, Heinz; Makristathis, Athanasios; Dorffner, Georg; Loetsch, Felix; Blacky, Alexander; Ramharter, Michael
2015-01-01
Adequate early empiric antibiotic therapy is pivotal for the outcome of patients with bloodstream infections. In clinical practice the use of surrogate laboratory parameters is frequently proposed to predict underlying bacterial pathogens; however there is no clear evidence for this assumption. In this study, we investigated the discriminatory capacity of predictive models consisting of routinely available laboratory parameters to predict the presence of Gram-positive or Gram-negative bacteremia. Major machine learning algorithms were screened for their capacity to maximize the area under the receiver operating characteristic curve (ROC-AUC) for discriminating between Gram-positive and Gram-negative cases. Data from 23,765 patients with clinically suspected bacteremia were screened and 1,180 bacteremic patients were included in the study. A relative predominance of Gram-negative bacteremia (54.0%), which was more pronounced in females (59.1%), was observed. The final model achieved 0.675 ROC-AUC resulting in 44.57% sensitivity and 79.75% specificity. Various parameters presented a significant difference between both genders. In gender-specific models, the discriminatory potency was slightly improved. The results of this study do not support the use of surrogate laboratory parameters for predicting classes of causative pathogens. In this patient cohort, gender-specific differences in various laboratory parameters were observed, indicating differences in the host response between genders. PMID:26522966
Prediction of breakdown strength of cellulosic insulating materials using artificial neural networks
NASA Astrophysics Data System (ADS)
Singh, Sakshi; Mohsin, M. M.; Masood, Aejaz
In this research work, a few sets of experiments have been performed in high voltage laboratory on various cellulosic insulating materials like diamond-dotted paper, paper phenolic sheets, cotton phenolic sheets, leatheroid, and presspaper, to measure different electrical parameters like breakdown strength, relative permittivity, loss tangent, etc. Considering the dependency of breakdown strength on other physical parameters, different Artificial Neural Network (ANN) models are proposed for the prediction of breakdown strength. The ANN model results are compared with those obtained experimentally and also with the values already predicted from an empirical relation suggested by Swanson and Dall. The reported results indicated that the breakdown strength predicted from the ANN model is in good agreement with the experimental values.
NASA Astrophysics Data System (ADS)
Zhao, Xiuliang; Cheng, Yong; Wang, Limei; Ji, Shaobo
2017-03-01
Accurate combustion parameters are the foundation of effective closed-loop control of the engine combustion process. Some combustion parameters, including the start of combustion, the location of peak pressure, the maximum pressure rise rate and its location, can be identified from the engine block vibration signals. These signals often include non-combustion related contributions, which limit the prompt acquisition of the combustion parameters computationally. The main component in these non-combustion related contributions is considered to be caused by the reciprocating inertia force excitation (RIFE) of the engine crank train. A mathematical model is established to describe the response of the RIFE. The parameters of the model are recognized with a pattern recognition algorithm, and the response of the RIFE is predicted and then the related contributions are removed from the measured vibration velocity signals. The combustion parameters are extracted from the feature points of the corrected vibration velocity signals. There are angle deviations between the feature points in the vibration velocity signals and those in the cylinder pressure signals. For the start of combustion, a systematic bias is adopted to correct the deviation, and the error bound of the predicted parameters is within 1.1°. To predict the location of the maximum pressure rise rate and the location of the peak pressure, algorithms based on the proportion of high frequency components in the vibration velocity signals are introduced. Test results show that the two parameters can be predicted within 0.7° and 0.8° error bounds respectively. The increase from the knee point preceding the peak value point to the peak value in the vibration velocity signals is used to predict the value of the maximum pressure rise rate. Finally, a monitoring framework is inferred to realize the combustion parameter prediction.
Satisfactory prediction for combustion parameters in successive cycles is achieved, which validates the proposed methods.
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
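The two-state assumption the paper moves beyond has a simple closed form: with a single folding equilibrium, the fraction folded at temperature T is K/(1+K) with K = exp(-ΔG°/RT) and ΔG° = ΔH° - TΔS°. The sketch below uses illustrative (not fitted) helix thermodynamics to trace such an all-or-none melting curve.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def two_state_fraction_folded(temp_k, dh, ds):
    """Fraction folded in a two-state (all-or-none) unimolecular model."""
    dg = dh - temp_k * ds            # kcal/mol
    keq = math.exp(-dg / (R * temp_k))
    return keq / (1.0 + keq)

# Illustrative helix thermodynamics (not fitted values): dH = -50 kcal/mol,
# with dS chosen so the melting temperature Tm = dH/dS is 330 K.
DH, DS = -50.0, -50.0 / 330.0
f_low = two_state_fraction_folded(300.0, DH, DS)   # well below Tm: folded
f_tm = two_state_fraction_folded(330.0, DH, DS)    # at Tm: half folded
f_high = two_state_fraction_folded(360.0, DH, DS)  # well above Tm: denatured
```

The partition function approach in the paper replaces this single folded state with an ensemble of conformations, including partially denatured helices, when fitting the melting curves.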
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D) using an Unbiased Ensemble Square Root Filter (UnEnSRF) originating from the Ensemble Kalman Filter (EnKF) to update parameters and states, separately or simultaneously. To simplify the coupling between unsaturated and saturated zone, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike EnKF, the UnEnSRF updates parameter ensemble mean and ensemble perturbations separately. In order to keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One is called damping factor to dampen the update amplitude of the posterior ensemble mean to avoid nonrealistic values. The other is called inflation factor to relax the posterior ensemble perturbations close to prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate the further analysis. This study demonstrated that the data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty while the data assimilation of only model states provides a smaller predictive uncertainty but with a larger model prediction error. 
Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful base for applications in real time modelling or real time controlling strategies in groundwater resources management.
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
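A rough sketch of the failure-classification idea above: label each ensemble run as success or failure and train a classifier on its parameter values. To keep the example stdlib-only, logistic regression trained by gradient descent stands in for the SVM used in the study; the synthetic two-parameter failure region and all numbers are invented.

```python
import math
import random

random.seed(3)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic UQ ensemble: two scaled "mixing/viscosity" parameters per run;
# runs fail (label 1) when their sum is large, mimicking a blow-up region.
data = []
for _ in range(400):
    x1, x2 = random.uniform(0, 1), random.uniform(0, 1)
    label = 1 if x1 + x2 > 1.3 else 0
    data.append((x1, x2, label))

# Logistic-regression classifier (a stand-in for the study's SVM),
# trained by full-batch gradient descent on the cross-entropy loss.
w1 = w2 = b = 0.0
lr = 1.0
n = len(data)
for _ in range(2000):
    g1 = g2 = gb = 0.0
    for x1, x2, y in data:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

correct = sum(1 for x1, x2, y in data
              if (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1))
accuracy = correct / n
```

Once trained, the fitted decision surface can be probed (as in the study's global sensitivity analysis) to see which parameters drive the failure probability.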
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
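The GLUE-style workflow the procedure builds on -- Monte-Carlo sampling of parameters, retention of "behavioral" sets that fit the observations acceptably, and a coefficient-of-variation summary of the resulting predictions -- can be sketched for a toy first-order decay model. The data, likelihood threshold, and parameter range below are illustrative assumptions, not the wetland models compared in the study.

```python
import math
import random
import statistics

random.seed(7)

# Synthetic "observed" outflow concentrations generated from a known
# first-order decay rate (an illustrative stand-in for field data).
TRUE_K = 0.3
C_IN = 10.0
times = [1, 2, 3, 4, 5]
obs = [C_IN * math.exp(-TRUE_K * t) + random.gauss(0, 0.2) for t in times]

def simulate(k):
    return [C_IN * math.exp(-k * t) for t in times]

# GLUE step: Monte-Carlo sample the parameter and keep "behavioral" sets
# whose RMSE beats a chosen (illustrative) threshold.
behavioral = []
for _ in range(5000):
    k = random.uniform(0.05, 1.0)
    sim = simulate(k)
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))
    if rmse < 0.5:
        behavioral.append(k)

# Predict at a new time with every behavioral set, and summarize the
# predictive uncertainty as a coefficient of variation (CV).
preds = [C_IN * math.exp(-k * 6.0) for k in behavioral]
cv = statistics.stdev(preds) / statistics.mean(preds)
```

Comparing such characteristic CV values across competing model structures is the core of the procedure described above.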
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
NASA Astrophysics Data System (ADS)
Yaya, Kamel; Bechir, Hocine
2018-05-01
We propose a new hyper-elastic model that is based on the standard invariants of Green-Cauchy. Experimental data reported by Treloar (Trans. Faraday Soc. 40:59, 1944) are used to identify the model parameters. To this end, the data of uni-axial tension and equi-bi-axial tension are used simultaneously. The new model has four material parameters; their identification leads to a linear optimisation problem, and the model is able to predict the multi-axial behaviour of rubber-like materials. We show that the response quality of the new model is equivalent to that of the well-known six-parameter Ogden model. Thereafter, the new model is implemented in an FE code. Then, we investigate the inflation of a rubber balloon with the new model and the Ogden models. We compare both the analytic and numerical solutions derived from these models.
A study of hyperelastic models for predicting the mechanical behavior of extensor apparatus.
Elyasi, Nahid; Taheri, Kimia Karimi; Narooei, Keivan; Taheri, Ali Karimi
2017-06-01
In this research, the nonlinear elastic behavior of the human extensor apparatus was investigated. To this end, firstly the best material parameters of hyperelastic strain energy density functions consisting of the Mooney-Rivlin, Ogden, invariants, and general exponential models were derived for the simple tension experimental data. Due to the significance of stress response in other deformation modes of nonlinear models, the calculated parameters were used to study the pure shear and balanced biaxial tension behavior of the extensor apparatus. The results indicated that the Mooney-Rivlin model predicts an unstable behavior in the balanced biaxial deformation of the extensor apparatus, while the Ogden order 1 represents a stable behavior, although the fitting of experimental data and theoretical model was not satisfactory. However, the Ogden order 6 model was unstable in the simple tension mode and the Ogden order 5 and general exponential models presented accurate and stable results. In order to reduce the material parameters, the invariants model with four material parameters was investigated and this model presented the minimum error and stable behavior in all deformation modes. The ABAQUS Explicit solver was coupled with the VUMAT subroutine code of the invariants model to simulate the mechanical behavior of the central and terminal slips of the extensor apparatus during passive finger flexion, which is important in the prediction of boutonniere deformity and chronic mallet finger injuries, respectively. Also, to evaluate the adequacy of the constitutive models in simulations, the results of the Ogden order 5 were presented. The difference between the predictions was attributed to the better fitting of the invariants model compared with the Ogden model.
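For reference, the incompressible uniaxial nominal-stress expressions for two of the hyperelastic families compared above (Mooney-Rivlin, and Ogden in the classical convention) can be coded directly. The parameter values in the example are illustrative, not the fitted extensor-apparatus values.

```python
def mooney_rivlin_uniaxial(stretch, c1, c2):
    """Nominal (first Piola-Kirchhoff) stress for an incompressible
    Mooney-Rivlin solid in uniaxial tension:
    P = 2*(lam - lam^-2)*(C1 + C2/lam)."""
    return 2.0 * (stretch - stretch ** -2) * (c1 + c2 / stretch)

def ogden_uniaxial(stretch, mus, alphas):
    """Nominal stress for an incompressible Ogden model of arbitrary order
    (classical convention): P = sum_p mu_p*(lam^(a_p-1) - lam^(-a_p/2-1))."""
    return sum(mu * (stretch ** (a - 1.0) - stretch ** (-a / 2.0 - 1.0))
               for mu, a in zip(mus, alphas))

# Illustrative parameter values (in arbitrary stress units):
s_mr = mooney_rivlin_uniaxial(2.0, 0.2, 0.05)  # Mooney-Rivlin at stretch 2
s_og = ogden_uniaxial(2.0, [0.4], [2.0])       # one-term Ogden at stretch 2
```

Fitting such expressions only to simple tension data, and then checking the other deformation modes for stability, is exactly the exercise the abstract describes; a one-term Ogden model with alpha = 2 reduces to the neo-Hookean response.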
Roll paper pilot. [mathematical model for predicting pilot rating of aircraft in roll task
NASA Technical Reports Server (NTRS)
Naylor, F. R.; Dillow, J. D.; Hannen, R. A.
1973-01-01
A mathematical model for predicting the pilot rating of an aircraft in a roll task is described. The model includes: (1) the lateral-directional aircraft equations of motion; (2) a stochastic gust model; (3) a pilot model with two free parameters; and (4) a pilot rating expression that is a function of rms roll angle and the pilot lead time constant. The pilot gain and lead time constant are selected to minimize the pilot rating expression. The pilot parameters are then adjusted to provide a 20% stability margin, and the adjusted pilot parameters are used to compute a roll paper pilot rating of the aircraft/gust configuration. The roll paper pilot rating was computed for 25 aircraft/gust configurations. A range of actual ratings from 2 to 9 was encountered, and the roll paper pilot ratings agree quite well with the actual ratings. In addition, there is good correlation between predicted and measured rms roll angle.
A model for phase noise generation in amplifiers.
Tomlin, T D; Fynn, K; Cantoni, A
2001-11-01
In this paper, a model is presented for predicting the phase modulation (PM) and amplitude modulation (AM) noise in bipolar junction transistor (BJT) amplifiers. The model correctly predicts the dependence of phase noise on the signal frequency (at a particular carrier offset frequency), explains the noise shaping of the phase noise about the signal frequency, and shows the functional dependence on the transistor parameters and the circuit parameters. Experimental studies on common emitter (CE) amplifiers have been used to validate the PM noise model at carrier frequencies between 10 and 100 MHz.
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Yaning
2018-07-01
A methodology was developed to use a hyperelastic softening model to predict the constitutive behavior and the spatial damage propagation of nonlinear materials with damage-induced softening under mixed-mode loading. A user subroutine (ABAQUS/VUMAT) was developed for numerical implementation of the model. A 3D-printed wavy soft rubbery interfacial layer was used as a material system to verify and validate the methodology. The Arruda-Boyce hyperelastic model is incorporated with the softening model to capture the nonlinear pre- and post-damage behavior of the interfacial layer under mixed Mode I/II loads. To characterize the model parameters of the 3D-printed rubbery interfacial layer, a series of scarf-joint specimens were designed, which enabled systematic variation of stress triaxiality via a single geometric parameter, the slant angle. It was found that the important model parameter m is exponentially related to the stress triaxiality. Compact tension specimens of the sinusoidal wavy interfacial layer with different waviness were designed and fabricated via multi-material 3D printing. Finite element (FE) simulations were conducted to predict the spatial damage propagation of the material within the wavy interfacial layer. Compact tension experiments were performed to verify the model prediction. The results show that the model developed is able to accurately predict the damage propagation of the 3D-printed rubbery interfacial layer under complicated stress states without pre-defined failure criteria.
PredicT-ML: a tool for automating machine learning model building with big clinical data.
Luo, Gang
2016-01-01
Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
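The second barrier above -- picking a temporal aggregation period and operator for each repeatedly recorded attribute -- amounts to a small grid search. The toy sketch below scores each (period, operator) pair by how well a simple threshold rule on the aggregate predicts the outcome; the records, threshold, and search grid are invented for illustration and are not PredicT-ML's actual search.

```python
import statistics

# Toy time series of a repeatedly recorded clinical attribute, paired with a
# binary outcome that happens to track the recent maximum (illustrative data).
records = [
    ([5, 6, 12, 5, 6, 11], 1),
    ([5, 5, 5, 5, 5, 5], 0),
    ([6, 6, 6, 13, 6, 6], 1),
    ([4, 5, 6, 5, 4, 5], 0),
    ([7, 6, 5, 12, 6, 12], 1),
    ([5, 6, 5, 6, 5, 6], 0),
]

OPERATORS = {"mean": statistics.mean, "max": max}
PERIODS = [3, 6]   # aggregate over the last 3 or all 6 measurements
THRESHOLD = 10.0   # predict outcome 1 when the aggregate exceeds this

best = None  # (correct count, period, operator name)
for period in PERIODS:
    for name, op in OPERATORS.items():
        correct = sum(
            1 for series, outcome in records
            if (op(series[-period:]) > THRESHOLD) == (outcome == 1)
        )
        if best is None or correct > best[0]:
            best = (correct, period, name)
```

On this data the max operator recovers the outcome perfectly while the mean smooths the spikes away, illustrating why the choice of aggregation pair matters and why automating the search saves the manual iterations the abstract describes.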
Comparing basal area growth models, consistency of parameters, and accuracy of prediction
J.J. Colbert; Michael Schuckers; Desta Fekedulegn
2002-01-01
We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...
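A minimal sketch of this kind of model comparison: two common sigmoid growth forms (logistic and Gompertz) rated by their squared-error fit to a small synthetic basal-area series. The data and parameter values are invented for illustration, not taken from the study.

```python
import math

# Synthetic basal-area series (age in years, basal area in cm^2).
ages = [10, 20, 40, 60, 80]
basal_area = [5.0, 14.0, 38.0, 52.0, 58.0]

def logistic(t, K=60.0, r=0.08, t0=30.0):
    # Symmetric sigmoid: asymptote K, rate r, inflection at t0.
    return K / (1.0 + math.exp(-r * (t - t0)))

def gompertz(t, K=60.0, b=0.05, t0=20.0):
    # Asymmetric sigmoid with an earlier inflection point.
    return K * math.exp(-math.exp(-b * (t - t0)))

def sse(model):
    # Sum of squared errors against the synthetic series.
    return sum((model(t) - y) ** 2 for t, y in zip(ages, basal_area))

best = min((logistic, gompertz), key=sse)
```

In practice the parameters would first be fitted per site (e.g. by nonlinear least squares), and consistency of the fitted parameters across sites would be rated alongside goodness of fit.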
Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model
NASA Astrophysics Data System (ADS)
Washington, M. H.; Kumar, S.
2017-12-01
The wind size parameter, which is the distance from the center of the storm to the location of maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to compare the accuracy of the model's water-level predictions using the realistic wind size parameter against the default constant wind size parameter for Hurricane Matthew, with water level data observed from October 4, 2016 to October 9, 2016 by the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water level output for the realistic wind size parameter matches the NOAA reference water level data more accurately than the output for the default constant wind size parameter.
Housing price prediction: parametric versus semi-parametric spatial hedonic models
NASA Astrophysics Data System (ADS)
Montero, José-María; Mínguez, Román; Fernández-Avilés, Gema
2018-01-01
House price prediction is a hot topic in the economic literature. House price prediction has traditionally been approached using a-spatial linear (or intrinsically linear) hedonic models. It has been shown, however, that spatial effects are inherent in house pricing. This article considers parametric and semi-parametric spatial hedonic model variants that account for spatial autocorrelation, spatial heterogeneity and (smooth, nonparametrically specified) nonlinearities using penalized splines methodology. The models are represented as a mixed model, which allows the smoothing parameters to be estimated along with the other parameters of the model. To assess the out-of-sample performance of the models, the paper uses a database containing the price and characteristics of 10,512 homes in Madrid, Spain (Q1 2010). The results obtained suggest that nonlinear models accounting for spatial heterogeneity and flexible nonlinear relationships between some of the individual or areal characteristics of the houses and their prices are the best strategies for house price prediction.
NASA Technical Reports Server (NTRS)
Mitchell, David L.; Chai, Steven K.; Dong, Yayi; Arnott, W. Patrick; Hallett, John
1993-01-01
The 1 November 1986 FIRE I case study was used to test an ice particle growth model which predicts bimodal size spectra in cirrus clouds. The model was developed from an analytically based model which predicts the height evolution of monomodal ice particle size spectra from the measured ice water content (IWC). Size spectra from the monomodal model are represented by a gamma distribution, N(D) = N₀ D^ν exp(−λD), where D is the ice particle maximum dimension. The slope parameter λ and the parameter N₀ are predicted from the IWC through the growth processes of vapor diffusion and aggregation. The model formulation is analytical, computationally efficient, and well suited for incorporation into larger models. The monomodal model has been validated against two other cirrus cloud case studies. From the monomodal size spectra, the size distributions which determine concentrations of ice particles smaller than about 150 μm are predicted.
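The gamma form above is straightforward to evaluate; the parameter values below are illustrative assumptions, not those retrieved in the case study.

```python
import math

# Gamma size distribution N(D) = N0 * D**nu * exp(-lam * D), with D the
# ice particle maximum dimension in cm. N0, nu, lam are invented values.
N0, nu, lam = 1.0e5, 1.5, 50.0

def n_of_D(D_cm):
    # Number-concentration density at maximum dimension D (cm).
    return N0 * D_cm ** nu * math.exp(-lam * D_cm)

# Setting dN/dD = 0 gives the mode of the spectrum at D = nu / lam.
D_mode = nu / lam
```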
Process-based soil erodibility estimation for empirical water erosion models
USDA-ARS?s Scientific Manuscript database
A variety of modeling technologies exist for water erosion prediction each with specific parameters. It is of interest to scrutinize parameters of a particular model from the point of their compatibility with dataset of other models. In this research, functional relationships between soil erodibilit...
NASA Astrophysics Data System (ADS)
Augustine, Starrlight; Rosa, Sara; Kooijman, Sebastiaan A. L. M.; Carlotti, François; Poggiale, Jean-Christophe
2014-11-01
Parameters for the standard Dynamic Energy Budget (DEB) model were estimated for the purple mauve stinger, Pelagia noctiluca, using literature data. Overall, the model predictions are in good agreement with data covering the full life-cycle. The parameter set we obtain suggests that P. noctiluca is well adapted to survive long periods of starvation since the predicted maximum reserve capacity is extremely high. Moreover we predict that the reproductive output of larger individuals is relatively insensitive to changes in food level while wet mass and length are. Furthermore, the parameters imply that even if food were scarce (ingestion levels only 14% of the maximum for a given size) an individual would still mature and be able to reproduce. We present detailed model predictions for embryo development and discuss the developmental energetics of the species such as the fact that the metabolism of ephyrae accelerates for several days after birth. Finally we explore a number of concrete testable model predictions which will help to guide future research. The application of DEB theory to the collected data allowed us to conclude that P. noctiluca combines maximizing allocation to reproduction with rather extreme capabilities to survive starvation. The combination of these properties might explain why P. noctiluca is a rapidly growing concern to fisheries and tourism.
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2017-12-01
Data assimilation (DA) is a widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for the fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g., less than 10 years). Using limited records within a DA framework imposes non-stationarity on the estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, is a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to take up water, transpire, and assimilate carbon under favorable conditions and to shut down quickly at the onset of water stress.
Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania
NASA Astrophysics Data System (ADS)
Block, P. J.; Leung, K.
2013-12-01
Anticipating and preparing for elevated water quality parameter levels in critical water sources, using weather forecasts, is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season-ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models, to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.
Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2011-03-01
(18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols up to 0-30 min regarding the accuracy for data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting parameters for the model were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was accurate at predicting the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake values is required. Shortened acquisition protocols can be used to predict the parameters of the 2-tissue-compartment model. Either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is accurate for analysis with a 2-tissue-compartment model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang
An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as a user material subroutine (UMAT). Pure stretch experiments with a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of damage parameters is carried out and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predicts the failure behavior in copper sheets compared to experimental data is reported as well.
Can arsenic occurrence rate in bedrock aquifers be predicted?
Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan
2012-01-01
A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg L–1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg L–1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.
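As a sketch of what such a logistic model looks like when applied, assuming invented coefficients (the study's fitted coefficients are not reproduced here):

```python
import math

# Toy logistic model for the probability that arsenic exceeds 10 ug/L,
# using two of the geochemical predictors named above. The intercept and
# coefficients are illustrative assumptions, not fitted values.
def p_exceed(pH, dissolved_oxygen_mgL, coef=(-12.0, 1.5, -0.8)):
    b0, b_ph, b_do = coef
    z = b0 + b_ph * pH + b_do * dissolved_oxygen_mgL
    return 1.0 / (1.0 + math.exp(-z))

# High pH and low dissolved oxygen push the predicted probability up.
high_risk = p_exceed(pH=8.5, dissolved_oxygen_mgL=0.5)
low_risk = p_exceed(pH=6.5, dissolved_oxygen_mgL=8.0)
```

Averaging such per-sample probabilities over an area gives the kind of occurrence-rate estimate the models above target.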
Prediction of silicon oxynitride plasma etching using a generalized regression neural network
NASA Astrophysics Data System (ADS)
Kim, Byungwhan; Lee, Byung Teak
2005-08-01
A prediction model of silicon oxynitride (SiON) etching was constructed using a neural network. Model prediction performance was improved by means of a genetic algorithm. The etching was conducted in a C2F6 inductively coupled plasma. A 2^4 full factorial experiment was employed to systematically characterize parameter effects on SiON etching. The process parameters include radio frequency source power, bias power, pressure, and C2F6 flow rate. To test the appropriateness of the trained model, an additional 16 experiments were conducted. For comparison, four types of statistical regression models were built. Compared to the best regression model, the optimized neural network model demonstrated an improvement of about 52%. The optimized model was used to infer etch mechanisms as a function of the parameters. The pressure effect was noticeably large only when relatively strong ion bombardment was maintained in the process chamber. Ion-bombardment-activated polymer deposition played the most significant role in interpreting the complex effects of bias power and C2F6 flow rate. Moreover, [CF2] was expected to be the predominant precursor to polymer deposition.
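The generalized regression neural network named in the title is essentially Nadaraya-Watson kernel regression over the training runs. A toy sketch with invented process settings and etch rates:

```python
import math

# Invented training runs: (source power W, bias power W) -> etch rate (nm/min).
train_X = [(300, 50), (300, 100), (500, 50), (500, 100)]
train_y = [120.0, 150.0, 200.0, 260.0]
sigma = 80.0  # kernel width (smoothing parameter)

def grnn_predict(x):
    # Gaussian-kernel weighted average of the training responses.
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_X
    ]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

In a setup like the study's, a genetic algorithm would tune the smoothing parameter (here `sigma`) to minimize prediction error on held-out runs.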
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions about the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
2013-01-01
Background Our previous model of the non-isometric muscle fatigue that occurs during repetitive functional electrical stimulation included models of force, motion, and fatigue and accounted for applied load but not stimulation pulse duration. Our objectives were to: 1) further develop, 2) validate, and 3) present outcome measures for a non-isometric fatigue model that can predict the effect of a range of pulse durations on muscle fatigue. Methods A computer-controlled stimulator sent electrical pulses to electrodes on the thighs of 25 able-bodied human subjects. Isometric and non-isometric non-fatiguing and fatiguing knee torques and/or angles were measured. Pulse duration (170–600 μs) was the independent variable. Measurements were divided into parameter identification and model validation subsets. Results The fatigue model was simplified by removing two of three non-isometric parameters. The third remained a function of other model parameters. Between 66% and 77% of the variability in the angle measurements was explained by the new model. Conclusion Muscle fatigue in response to different stimulation pulse durations can be predicted during non-isometric repetitive contractions. PMID:23374142
Hyperspectral imaging technique for determination of pork freshness attributes
NASA Astrophysics Data System (ADS)
Li, Yongyu; Zhang, Leilei; Peng, Yankun; Tang, Xiuying; Chao, Kuanglin; Dhakal, Sagar
2011-06-01
Freshness of pork is an important quality attribute, which can vary greatly during storage and logistics. The specific objectives of this research were to develop a hyperspectral imaging system to predict pork freshness based on quality attributes such as total volatile basic nitrogen (TVB-N), pH value and color parameters (L*, a*, b*). Pork samples were packed in sealed plastic bags and then stored at 4°C. Every 12 hours, hyperspectral scattering images were collected from the pork surface over the range of 400 nm to 1100 nm. Two different methods were used to extract scattering feature spectra from the hyperspectral scattering images. First, the spectral scattering profiles at individual wavelengths were fitted accurately by a three-parameter Lorentzian distribution (LD) function; second, reflectance spectra were extracted from the scattering images. The Partial Least Squares Regression (PLSR) method was used to establish models to predict pork freshness. The results showed that the PLSR models based on reflectance spectra were better than those based on combinations of LD "parameter spectra" in predicting TVB-N, with a correlation coefficient (r) = 0.90 and a standard error of prediction (SEP) = 7.80 mg/100g. Moreover, a prediction model for pork freshness was established using a combination of TVB-N, pH and color parameters. It gave good prediction results, with r = 0.91 for pork freshness. The research demonstrated that the hyperspectral scattering technique is a valid tool for real-time and nondestructive detection of pork freshness.
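As a sketch, one common three-parameter Lorentzian-style profile for scattering curves is shown below; the exact functional form and parameter values used in the study are not reproduced here, so both are assumptions.

```python
# Lorentzian-style scattering profile: a is the peak reflectance, b the
# half-width at half-maximum (scattering distance), c the slope exponent.
# All three values are illustrative assumptions.
def lorentzian(x, a=1.0, b=5.0, c=2.0):
    return a / (1.0 + (x / b) ** c)

peak = lorentzian(0.0)  # profile maximum at the incident point
half = lorentzian(5.0)  # falls to a/2 at x = b by construction
```

Fitting a, b, c at each wavelength yields the "parameter spectra" that serve as regression inputs.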
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters and machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of each system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. The CSA-based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
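The AUC used to compare the three classifiers above can be computed directly from scores via the rank (Mann-Whitney) formulation; the scores and labels below are a toy example, not data from the study.

```python
# Toy classifier scores and true labels (1 = distant failure).
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 0]

def roc_auc(scores, labels):
    # Probability that a random positive outscores a random negative,
    # counting ties as half.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc(scores, labels)
```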
Kalman, J; Smith, B D; Riba, I; Blasco, J; Rainbow, P S
2010-06-01
Biodynamic parameters of the ragworm Nereis diversicolor from southern Spain and southern England were experimentally derived to assess the inter-population variability of physiological parameters of the bioaccumulation of Ag, Cd and Zn from water and sediment. Although there were some limited variations, these were consistent neither with local metal bioavailability nor with temperature changes. Incorporating the biodynamic parameters into a defined biodynamic model confirmed that sediment is the predominant source of Cd and Zn accumulated by the worms, accounting in each case for 99% of the overall accumulated metals, whereas the contribution of dissolved Ag to the total accumulated by the worm increased from about 27% to about 53% with increasing dissolved Ag concentration. Standardised values of metal-specific parameters were chosen to generate a generalised model to be extended to N. diversicolor populations across a wide geographical range from western Europe to North Africa. Under the assumptions of this model, predicted steady state concentrations of Cd and Zn in N. diversicolor were overestimated and those of Ag underestimated, but both were still comparable to independent field measurements. We conclude that species-specific physiological metal bioaccumulation parameters are relatively constant over large geographical distances, and a single generalised biodynamic model does have potential to predict accumulated Ag, Cd and Zn concentrations in this polychaete from a single sediment metal concentration.
Yoschenko, V I; Kashparov, V A; Levchuk, S E; Glukhovskiy, A S; Khomutinin, Yu V; Protsak, V P; Lundin, S M; Tschiersch, J
2006-01-01
To predict parameters of radionuclide resuspension, transport and deposition during forest and grassland fires, several model modules were developed and adapted. Experimental data from controlled burning of prepared experimental plots in the Chernobyl exclusion zone were used to evaluate the prognostic power of the models. The predicted trajectories and elevations of the plume match those visually observed during the fire experiments at the grassland and forest sites. Experimentally determined parameters could be used successfully to calculate the initial plume parameters, which provide the tools for describing various fire scenarios and enable prognostic calculations. In summary, the model predicts a release of a few parts per thousand of the radionuclide inventory of the fuel material by grassland fires. During a forest fire, up to 4% of the (137)Cs and (90)Sr and up to 1% of the Pu isotopes can be released from the forest litter according to the model calculations. However, these results depend on the parameters of the fire events. In general, the modeling results are in good accordance with the experimental data. Therefore, the considered models were successfully validated and can be recommended for assessment of the resuspension and redistribution of radionuclides during grassland and forest fires in contaminated territories.
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSM) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also make it impossible to exhaustively investigate the parameter space of an LSM over large domains (e.g., greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements, and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameter regionalisation (MPR) technique, which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and far fewer global parameters (i.e., coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate global parameters at coarser spatial resolutions and then transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparsely to densely forested regions. Results on the transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g. EOBS forcing 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high resolution information (REGNIE forcing 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
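The MPR idea can be caricatured in a few lines: a pedo-transfer function maps high-resolution land surface data to parameters, which are then upscaled to the model grid. The transfer function, its global coefficients, and the averaging upscaling operator below are all invented for illustration.

```python
# Subgrid clay fractions inside one coarse model cell (invented values).
clay_fraction_highres = [0.10, 0.20, 0.30, 0.40]

def pedo_transfer(clay, g0=0.05, g1=0.5):
    # Hypothetical linear pedo-transfer function; g0 and g1 play the role
    # of the few global coefficients inferred by calibration.
    return g0 + g1 * clay

# Upscale: apply the transfer function at high resolution, then average.
effective_param = sum(pedo_transfer(c) for c in clay_fraction_highres) / len(
    clay_fraction_highres
)
```

Because only g0 and g1 are calibrated, they can be transferred to other basins and resolutions while the high-resolution fields supply the local detail.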
NASA Astrophysics Data System (ADS)
Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.
2018-02-01
The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by safeguards inspectors; (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories; and (3) the latter gamma-spectrum-based model extended to account for contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory than the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2% to 8.1% and 7.8%, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways to address such differences, which may allow for even higher prediction capabilities, are discussed. Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, namely a 30% discrepancy between measured and predicted intensities, while some false detections occur with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30% discrepancy limit could potentially be lowered.
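The partial-defect screening step reduces to comparing the relative discrepancy between measured and predicted intensity against the 30% limit; a toy sketch with invented intensities:

```python
# Invented relative Cherenkov intensities (measured, predicted) per assembly.
measured = [1.02, 0.95, 0.60]
predicted = [1.00, 1.00, 1.00]

def flagged(measured, predicted, limit=0.30):
    # Flag assemblies whose measured intensity deviates from the
    # prediction by more than the partial-defect detection limit.
    return [abs(m - p) / p > limit for m, p in zip(measured, predicted)]

flags = flagged(measured, predicted)
```

A more precise prediction model narrows the spread of these discrepancies, which is what would allow the limit to be lowered.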
A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth
NASA Astrophysics Data System (ADS)
Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.
2018-04-01
A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observations. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of the phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints from World Ocean Atlas nutrient data further reduced the model-data misfit, showing that assimilation with extensive data sets is necessary.
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper presents the procedure used to validate the migration equation and the migration parameter values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. The model validation was carried out by checking experimentally obtained activity concentration values against those predicted by the model. These experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before the model's predicted values were tested, their uncertainty was assessed with an appropriate uncertainty analysis. Once the model's uncertainty was established, experimental and model-predicted activity concentration values were compared. Model validation was performed by analyzing the model's accuracy, both as a whole and at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
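A common form of the prediction-versus-measurement check described above is to accept agreement when the difference lies within the combined expanded uncertainty. The k = 2 coverage factor and the activity values below are illustrative assumptions, not numbers from the study.

```python
import math

# Hedged sketch of a model-validation criterion: a prediction agrees with a
# measurement when |difference| is within the combined expanded uncertainty.

def validates(predicted, u_pred, measured, u_meas, k=2.0):
    """True if |predicted - measured| <= k * sqrt(u_pred**2 + u_meas**2)."""
    return abs(predicted - measured) <= k * math.hypot(u_pred, u_meas)

# e.g. hypothetical 137Cs activity concentrations (Bq/kg) at one depth interval
print(validates(52.0, 4.0, 57.0, 3.0))   # True: within combined uncertainty
print(validates(52.0, 1.0, 65.0, 1.5))   # False: discrepancy too large
```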
Yao, Xiaojun; Zhang, Xiaoyun; Zhang, Ruisheng; Liu, Mancang; Hu, Zhide; Fan, Botao
2002-05-16
A new method for the prediction of retention indices for a diverse set of compounds from their physicochemical parameters has been proposed. The two input parameters used to represent molecular properties are boiling point and molar volume. Models relating these physicochemical parameters to the retention indices of compounds are constructed by means of radial basis function neural networks (RBFNNs). To get the best prediction results, some strategies are also employed to optimize the topology and learning parameters of the RBFNNs. For the test set, a predictive correlation coefficient R=0.9910 and a root mean squared error of 14.1 are obtained. Results show that radial basis function networks can give satisfactory prediction ability, and that their optimization is less time-consuming and easy to implement.
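A minimal RBF network of the kind described above can be sketched with Gaussian hidden units at fixed centers and output weights fitted by linear least squares. The 1-D synthetic data stand in for the boiling-point / molar-volume descriptors used in the paper.

```python
import math

# Minimal radial basis function network: Gaussian hidden units with fixed
# centers, output weights fitted via the normal equations.

def phi(x, c, width=1.0):
    return math.exp(-((x - c) / width) ** 2)

centers = [0.0, 2.0, 4.0]
xs = [0.5 * i for i in range(9)]                  # inputs on [0, 4]
true_w = [1.0, 2.0, -0.5]                         # target lies in the RBF span
ys = [sum(w * phi(x, c) for w, c in zip(true_w, centers)) for x in xs]

# normal equations A^T A w = A^T y, A[k][j] = phi(x_k, c_j)
A = [[phi(x, c) for c in centers] for x in xs]
n = len(centers)
AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
Aty = [sum(row[i] * y for row, y in zip(A, ys)) for i in range(n)]

# solve the 3x3 symmetric positive-definite system by Gauss-Jordan elimination
M = [r[:] + [b] for r, b in zip(AtA, Aty)]
for i in range(n):
    M[i] = [v / M[i][i] for v in M[i]]
    for r in range(n):
        if r != i:
            M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]
w = [M[i][n] for i in range(n)]

def predict(x):
    return sum(wi * phi(x, c) for wi, c in zip(w, centers))

print([round(v, 3) for v in w])   # recovers [1.0, 2.0, -0.5]
```

A real RBFNN would also optimize the centers and widths, which is the "topology and learning parameter" tuning the abstract refers to.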
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. 
Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution
DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...
2017-06-09
The k-ε turbulence model has been described as perhaps "the most widely used complete turbulence model." This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by requiring agreement with well-established canonical flows such as homogeneous shear flow and log-law behavior. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near-field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
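The canonical-flow constraints mentioned above tie the k-ε constants together. One widely quoted log-law consistency relation is κ² = σ_ε √C_μ (C_ε2 − C_ε1), so the nominal constant set implies a von Kármán constant near 0.43; the values below are the commonly cited nominal set, not the calibrated ones from this paper.

```python
import math

# Log-law consistency check for the nominal k-epsilon constants:
#   kappa**2 = sigma_eps * sqrt(C_mu) * (C_eps2 - C_eps1)

c_mu, c_eps1, c_eps2, sigma_eps = 0.09, 1.44, 1.92, 1.3   # nominal values
kappa = math.sqrt(sigma_eps * math.sqrt(c_mu) * (c_eps2 - c_eps1))
print(round(kappa, 3))   # 0.433, close to the measured von Karman constant
```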
Synthetic calibration of a Rainfall-Runoff Model
Thompson, David B.; Westphal, Jerome A.; ,
1990-01-01
A method for synthetically calibrating storm-mode parameters for the U.S. Geological Survey's Precipitation-Runoff Modeling System is described. Synthetic calibration is accomplished by adjusting storm-mode parameters to minimize deviations between the pseudo-probability distributions represented by regional regression equations and actual frequency distributions fitted to model-generated peak discharge and runoff volume. Results of modeling storm hydrographs using synthetic and analytic storm-mode parameters are presented. Comparisons are made between model results from both parameter sets and between model results and observed hydrographs. Although mean storm runoff is reproducible to within about 26 percent of the observed mean storm runoff for five or six parameter sets, runoff from individual storms is subject to large disparities. Predicted storm runoff volume ranged from 2 percent to 217 percent of commensurate observed values. Furthermore, simulation of peak discharges was poor. Predicted peak discharges from individual storm events ranged from 2 percent to 229 percent of commensurate observed values. The model was incapable of satisfactorily executing storm-mode simulations for the study watersheds. This result is not considered a particular fault of the model, but instead is indicative of deficiencies in similar conceptual models.
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that its residuals can be used to assess the uncertainty of the model prediction; all sources of uncertainty, including input, parameter and model structure uncertainty, are assumed to be manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty explicitly (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimates of model prediction uncertainty.
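The Monte Carlo step described above can be sketched directly: draw N parameter realizations, pool the model residuals over all realizations, and use empirical quantiles of that pooled distribution as prediction limits. The linear model, parameter prior, and noise levels below are synthetic, not from the Lake Lugano case study.

```python
import random

# Sketch of the parameter-uncertainty step: pooled residual quantiles over
# Monte Carlo parameter realizations serve as prediction bounds.

random.seed(1)
xs = [i / 10.0 for i in range(100)]
obs = [2.0 * x + random.gauss(0.0, 0.3) for x in xs]   # synthetic observations

residuals = []
for _ in range(200):                       # N Monte Carlo realizations
    slope = random.gauss(2.0, 0.1)         # sampled model parameter
    residuals.extend(o - slope * x for x, o in zip(xs, obs))

residuals.sort()
lo = residuals[int(0.05 * len(residuals))]   # empirical 5% quantile
hi = residuals[int(0.95 * len(residuals))]   # empirical 95% quantile
print(lo < 0.0 < hi)   # True: the bounds bracket the deterministic prediction
```

In the full UNEEC-P method these quantiles are then learned by a machine learning model as a function of the input state, rather than used as global constants.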
Summertime Thunderstorms Prediction in Belarus
NASA Astrophysics Data System (ADS)
Lapo, Palina; Sokolovskaya, Yaroslava; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei
2015-04-01
Mesoscale modeling with the Weather Research & Forecasting (WRF) system makes it possible to predict thunderstorm formation events by direct numerical simulation. In the present study, we analyze the feasibility and quality of thunderstorm prediction on the territory of Belarus for the summer period of 2014, based on analysis of several characteristic parameters in WRF modeling results that can serve as indicators of thunderstorm formation. These parameters include vertical velocity distribution, convective available potential energy (CAPE), K-index, SWEAT index, Thompson index, lifted condensation level (LCL), and others, all of them being indicators of atmospheric conditions favorable for thunderstorm development. We perform mesoscale simulations of several cases of thunderstorm development in Belarus with the WRF-ARW modeling system using 3 km grid spacing, WSM6 microphysics parameterization and explicit convection (no convective parameterization). A typical modeling run covers 48 hours, which is equivalent to next-day thunderstorm prediction in operational use. We focus our attention on the most prominent cases of intense thunderstorms in Minsk. For validation purposes, we use radar and satellite data in addition to surface observations. In summertime, the territory of Belarus is quite often under the influence of atmospheric fronts and stationary anticyclones. In this study, we subdivide the thunderstorm cases under consideration into two categories: thunderstorms related to free convection and those related to forced convection processes. Our aim is to study the differences in thunderstorm indicator parameters between these two categories in order to identify a set of parameters that can be used for operational thunderstorm forecasting. For that purpose, we analyze characteristic features of thunderstorm development on cold atmospheric fronts as well as thunderstorm formation in stable air masses.
Modeling results demonstrate good predictive skill for summertime thunderstorm forecasting, which improves further in cases of atmospheric front passage. Combined use of the thunderstorm indicator parameters makes it possible to improve the predictive skill still further.
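One of the indicator parameters named above, the K-index, is a simple function of temperatures and dewpoints at three pressure levels: K = (T850 − T500) + Td850 − (T700 − Td700), in degrees Celsius. The 30 °C threshold below is a common textbook cut-off, not a value tuned in this study.

```python
# K-index thunderstorm indicator from 850/700/500 hPa temperatures (deg C).

def k_index(t850, t500, td850, t700, td700):
    return (t850 - t500) + td850 - (t700 - td700)

k = k_index(t850=20.0, t500=-15.0, td850=16.0, t700=8.0, td700=5.0)
print(k)   # 48.0
print("thunderstorms likely" if k > 30.0 else "thunderstorms unlikely")
```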
NASA Astrophysics Data System (ADS)
Baldwin, D.; Manfreda, S.; Keller, K.; Smithwick, E. A. H.
2017-03-01
Satellite-based near-surface (0-2 cm) soil moisture estimates have global coverage, but do not capture variations of soil moisture in the root zone (up to 100 cm depth) and may be biased with respect to ground-based soil moisture measurements. Here, we present an ensemble Kalman filter (EnKF) hydrologic data assimilation system that predicts bias in satellite soil moisture data to support the physically based Soil Moisture Analytical Relationship (SMAR) infiltration model, which estimates root zone soil moisture with satellite soil moisture data. The SMAR-EnKF model estimates a regional-scale bias parameter using available in situ data. The regional bias parameter is added to satellite soil moisture retrievals before their use in the SMAR model, and the bias parameter is updated continuously over time with the EnKF algorithm. In this study, the SMAR-EnKF assimilates in situ soil moisture at 43 Soil Climate Analysis Network (SCAN) monitoring locations across the conterminous U.S. Multivariate regression models are developed to estimate SMAR parameters using soil physical properties and the moderate resolution imaging spectroradiometer (MODIS) evapotranspiration data product as covariates. SMAR-EnKF root zone soil moisture predictions are in relatively close agreement with in situ observations when using optimal model parameters, with root mean square errors averaging 0.051 [cm3 cm-3] (standard error, s.e. = 0.005). The average root mean square error associated with a 20-fold cross-validation analysis with permuted SMAR parameter regression models increases moderately (0.082 [cm3 cm-3], s.e. = 0.004). The expected regional-scale satellite correction bias is negative in four out of six ecoregions studied (mean = -0.12 [-], s.e. = 0.002), excluding the Great Plains and Eastern Temperate Forests (0.053 [-], s.e. = 0.001). 
With its capability of estimating regional-scale satellite bias, the SMAR-EnKF system can predict root zone soil moisture over broad extents and has applications in drought prediction and other operational hydrologic modeling.
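The bias-updating idea above can be illustrated with a toy scalar ensemble Kalman filter: an ensemble of additive satellite-bias estimates is updated whenever an in situ measurement arrives. All values below are synthetic; this is a sketch of the EnKF mechanism, not the SMAR-EnKF implementation itself.

```python
import math, random, statistics

# Toy scalar EnKF estimating an additive bias in a satellite retrieval,
# using perturbed-observation updates against in situ measurements.

random.seed(0)
TRUE_BIAS = -0.12                    # additive bias in the satellite retrieval
R = 0.02 ** 2                        # in situ observation error variance

ens = [random.gauss(0.0, 0.1) for _ in range(100)]   # ensemble of bias estimates
for step in range(60):
    truth = 0.25 + 0.05 * math.sin(step / 5.0)       # "true" soil moisture
    sat = truth + TRUE_BIAS                          # biased satellite retrieval
    in_situ = truth + random.gauss(0.0, 0.02)        # ground observation

    preds = [sat - b for b in ens]                   # bias-corrected predictions
    # Kalman gain from ensemble statistics; cov(b, sat - b) = -var(b)
    gain = -statistics.pvariance(ens) / (statistics.pvariance(preds) + R)
    ens = [b + gain * (in_situ + random.gauss(0.0, 0.02) - p)
           for b, p in zip(ens, preds)]              # perturbed-obs update

bias_est = statistics.fmean(ens)
print(round(bias_est, 2))   # close to -0.12
```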
Olmez, Hülya Kaptan; Aran, Necla
2005-02-01
Mathematical models describing the growth kinetic parameters (lag phase duration and growth rate) of Bacillus cereus as a function of temperature, pH, sodium lactate and sodium chloride concentrations were obtained in this study. In order to obtain a residual distribution closer to a normal distribution, the natural logarithm of the growth kinetic parameters was used in modeling. For reasons of parsimony, the polynomial models were reduced to contain only the coefficients significant at a level of p
Klinzing, Gerard R; Zavaliangos, Antonios
2016-08-01
This work establishes a predictive model that explicitly recognizes microstructural parameters in the description of the overall mass uptake and local gradients of moisture into tablets. Model equations were formulated based on local tablet geometry to describe the transient uptake of moisture. An analytical solution to a simplified set of model equations was derived to predict the overall mass uptake and moisture gradients within the tablets. The analytical solution takes into account individual diffusion mechanisms in different scales of porosity and diffusion into the solid phase. The time constant of mass uptake was found to be a function of several key material properties, such as tablet relative density, pore tortuosity, and equilibrium moisture content of the material. The predictions of the model are in excellent agreement with experimental results for microcrystalline cellulose tablets without the need for parameter fitting. The model presented provides a new method to analyze the transient uptake of moisture into hydrophilic materials with knowledge of only a few fundamental material and microstructural parameters. In addition, the model allows for quick and insightful predictions of moisture diffusion for a variety of practical applications, including pharmaceutical tablets, porous polymer systems, and cementitious materials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
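The time-constant behaviour described above can be illustrated with the simplest first-order uptake law, m(t) = m_eq·(1 − exp(−t/τ)). The equilibrium content and time constant below are illustrative numbers, not values fitted to the microcrystalline cellulose data.

```python
import math

# First-order approach to equilibrium moisture content: after one time
# constant tau, about 63.2% of the equilibrium mass has been taken up.

def mass_uptake(t, m_eq, tau):
    """Moisture mass gained at time t."""
    return m_eq * (1.0 - math.exp(-t / tau))

m_eq, tau = 5.0, 120.0                        # mg and minutes, illustrative
frac_at_tau = mass_uptake(tau, m_eq, tau) / m_eq
print(round(frac_at_tau, 3))                  # 0.632: one time constant
```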
Mathematics as a Conduit for Translational Research in Post-Traumatic Osteoarthritis
Ayati, Bruce P.; Kapitanov, Georgi I.; Coleman, Mitchell C.; Anderson, Donald D.; Martin, James A.
2016-01-01
Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a “conduit of translation”. The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model predicts qualitative responses presently, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. PMID:27653021
Icing Analysis of a Swept NACA 0012 Wing Using LEWICE3D Version 3.48
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.
2014-01-01
Icing calculations were performed for a NACA 0012 swept wing tip using LEWICE3D Version 3.48 coupled with the ANSYS CFX flow solver. The calculated ice shapes were compared to experimental data generated in the NASA Glenn Icing Research Tunnel (IRT). The IRT tests were designed to test the performance of the LEWICE3D ice void density model, which was developed to improve the prediction of swept wing ice shapes. Icing tests were performed for a range of temperatures at two different droplet inertia parameters and two different sweep angles. The predicted mass agreed well with the experiment, with an average difference of 12%. The LEWICE3D ice void density model under-predicted void density by an average of 30% for the large inertia parameter cases and by 63% for the small inertia parameter cases. This under-prediction in void density resulted in an over-prediction of ice area by an average of 115%. The LEWICE3D ice void density model produced a larger average area difference with experiment than the standard LEWICE density model, which does not account for the voids in the swept wing ice shape (115% and 75%, respectively), but it produced ice shapes that were deemed more appropriate because they were conservative (larger than experiment). Major contributors to the overly conservative ice shape predictions were deficiencies in the leading edge heat transfer and the sensitivity of the void ice density model to the particle inertia parameter. The scallop features present on the ice shapes were thought to generate interstitial flow and horseshoe vortices that enhance the leading edge heat transfer. A set of changes to improve the leading edge heat transfer and the void density model was tested. The changes improved the ice shape predictions considerably. More work needs to be done to evaluate the performance of these modifications for a wider range of geometries and icing conditions.
Fast integration-based prediction bands for ordinary differential equation models.
Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel
2016-04-15
To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties of the model's parameters and in turn to uncertainties of predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Contact: helge.hass@fdm.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
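The "sampling approach" that the integration-based method above improves on can be sketched directly: draw parameter samples, integrate the ODE for each, and form pointwise 5%-95% bands from the ensemble. The decay model dy/dt = −k·y and the prior on k are illustrative, not one of the paper's signalling models.

```python
import random

# Naive sampling-based prediction bands for an ODE model: feasible for one
# parameter, but this is exactly what becomes infeasible in high dimensions.

random.seed(2)

def simulate(k, y0=1.0, dt=0.01, steps=300):
    """Explicit-Euler trajectory of dy/dt = -k*y."""
    ys, y = [], y0
    for _ in range(steps):
        y += dt * (-k * y)
        ys.append(y)
    return ys

runs = [simulate(random.gauss(1.0, 0.2)) for _ in range(500)]

bands = []
for i in range(300):                 # pointwise 5%-95% band at each time step
    col = sorted(run[i] for run in runs)
    bands.append((col[25], col[475]))

lo, hi = bands[-1]                   # band at t = 3
print(lo < simulate(1.0)[-1] < hi)   # True: nominal trajectory inside band
```

The paper's contribution is to obtain such bands by integrating an auxiliary ODE system instead of brute-force sampling, which scales to many parameters.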
NASA Astrophysics Data System (ADS)
Simon, E.; Meixner, F. X.; Ganzeveld, L.; Kesselmeier, J.
2005-04-01
Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe coupled water-carbon exchange between the terrestrial vegetation and the lower atmosphere. Within the present study, a modified CANVEG scheme is described. A generic parameterization and characterization of biophysical properties of Amazon rain forest canopies is inferred using available field measurements of canopy structure, in-canopy profiles of horizontal wind speed and radiation, canopy albedo, soil heat flux and soil respiration, photosynthetic capacity and leaf nitrogen, as well as leaf level enclosure measurements made on sunlit and shaded branches of several Amazonian tree species during the wet and dry season. The sensitivity of calculated canopy energy and CO2 fluxes to the uncertainty of individual parameter values is assessed. In the companion paper, the predicted seasonal exchange of energy, CO2, ozone and isoprene is compared to observations.
A bi-modal distribution of leaf area density with a total leaf area index of 6 is inferred from several observations in Amazonia. Predicted light attenuation within the canopy agrees reasonably well with observations made at different field sites. A comparison of predicted and observed canopy albedo shows a high model sensitivity to the leaf optical parameters for near-infrared short-wave radiation (NIR). The predictions agree much better with observations when the leaf reflectance and transmission coefficients for NIR are reduced by 25-40%. Available vertical distributions of photosynthetic capacity and leaf nitrogen concentration suggest a low but significant light acclimation of the rain forest canopy that scales nearly linearly with accumulated leaf area.
Evaluation of the biochemical leaf model using the enclosure measurements showed that recommended parameter values describing the photosynthetic light response have to be optimized; otherwise, predicted net assimilation is overestimated by 30-50%. Two stomatal models have been tested, which apply a well-established semi-empirical relationship between stomatal conductance and net assimilation. The models differ in the way they describe the influence of humidity on stomatal response, but they show very similar performance within the range of observed environmental conditions. The agreement between predicted and observed stomatal conductance rates is reasonable. In general, the leaf level data suggest seasonal physiological changes, which can be reproduced reasonably well by assuming increased stomatal conductance rates during the wet season and decreased assimilation rates during the dry season.
The sensitivity of the predicted canopy fluxes of energy and CO2 to the parameterization of canopy structure, the leaf optical parameters, and the scaling of photosynthetic parameters is relatively low (1-12%), with respect to parameter uncertainty. In contrast, modifying leaf model parameters within their uncertainty range results in much larger changes of the predicted canopy net fluxes (5-35%).
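One widely used form of the semi-empirical stomatal relationship mentioned above is the Ball-Berry model, g_s = g0 + g1·A·h_s/c_s, with net assimilation A (µmol m⁻² s⁻¹), leaf-surface relative humidity h_s and CO2 mole fraction c_s (µmol mol⁻¹). The parameter values below are generic C3 defaults, not values calibrated in this study.

```python
# Ball-Berry stomatal conductance sketch: conductance rises with assimilation
# and humidity, falls with leaf-surface CO2.

def ball_berry(a_net, rh, cs, g0=0.01, g1=9.0):
    """Stomatal conductance g_s (mol m-2 s-1)."""
    return g0 + g1 * a_net * rh / cs

g_wet = ball_berry(a_net=10.0, rh=0.8, cs=400.0)   # humid, wet-season-like
g_dry = ball_berry(a_net=7.0, rh=0.5, cs=400.0)    # drier conditions
print(round(g_wet, 2), round(g_dry, 2))            # 0.19 0.09
```

The two stomatal variants compared in the study differ mainly in how the humidity term (here rh) is formulated.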
NASA Astrophysics Data System (ADS)
Simon, E.; Meixner, F. X.; Ganzeveld, L.; Kesselmeier, J.
2005-09-01
Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe coupled water-carbon exchange between the terrestrial vegetation and the lower atmosphere. Within the present study, a modified CANVEG scheme is described. A generic parameterization and characterization of biophysical properties of Amazon rain forest canopies is inferred using available field measurements of canopy structure, in-canopy profiles of horizontal wind speed and radiation, canopy albedo, soil heat flux and soil respiration, photosynthetic capacity and leaf nitrogen, as well as leaf level enclosure measurements made on sunlit and shaded branches of several Amazonian tree species during the wet and dry season. The sensitivity of calculated canopy energy and CO2 fluxes to the uncertainty of individual parameter values is assessed. In the companion paper, the predicted seasonal exchange of energy, CO2, ozone and isoprene is compared to observations.
A bi-modal distribution of leaf area density with a total leaf area index of 6 is inferred from several observations in Amazonia. Predicted light attenuation within the canopy agrees reasonably well with observations made at different field sites. A comparison of predicted and observed canopy albedo shows a high model sensitivity to the leaf optical parameters for near-infrared short-wave radiation (NIR). The predictions agree much better with observations when the leaf reflectance and transmission coefficients for NIR are reduced by 25-40%. Available vertical distributions of photosynthetic capacity and leaf nitrogen concentration suggest a low but significant light acclimation of the rain forest canopy that scales nearly linearly with accumulated leaf area.
Evaluation of the biochemical leaf model using the enclosure measurements showed that recommended parameter values describing the photosynthetic light response have to be optimized; otherwise, predicted net assimilation is overestimated by 30-50%. Two stomatal models have been tested, which apply a well-established semi-empirical relationship between stomatal conductance and net assimilation. The models differ in the way they describe the influence of humidity on stomatal response, but they show very similar performance within the range of observed environmental conditions. The agreement between predicted and observed stomatal conductance rates is reasonable. In general, the leaf level data suggest seasonal physiological changes, which can be reproduced reasonably well by assuming increased stomatal conductance rates during the wet season and decreased assimilation rates during the dry season.
The sensitivity of the predicted canopy fluxes of energy and CO2 to the parameterization of canopy structure, the leaf optical parameters, and the scaling of photosynthetic parameters is relatively low (1-12%), with respect to parameter uncertainty. In contrast, modifying leaf model parameters within their uncertainty range results in much larger changes of the predicted canopy net fluxes (5-35%).
Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V
2008-05-01
The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress τy, consistency index K, and flow behavior index n) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μapp), yield stress (τy), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τy and K can be predicted as well as μapp. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τy, 0.209 Pa s^n for K, and 0.0288 Pa s for μapp, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
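The Herschel-Bulkley model referred to above is τ = τy + K·γ̇ⁿ, from which the apparent viscosity follows as μapp = τ/γ̇. The broth-like parameter values below are illustrative, not the fitted values from the study.

```python
# Herschel-Bulkley rheology sketch: shear stress and apparent viscosity as a
# function of shear rate; with n < 1 the broth is shear-thinning.

def hb_stress(gamma_dot, tau_y, K, n):
    """Shear stress (Pa) at shear rate gamma_dot (1/s)."""
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau_y, K, n):
    return hb_stress(gamma_dot, tau_y, K, n) / gamma_dot

# illustrative values: yield stress 3 Pa, K = 1.5 Pa s^n, n fixed at 0.45
for rate in (1.0, 10.0, 100.0):
    # apparent viscosity falls with shear rate (shear-thinning behaviour)
    print(rate, round(apparent_viscosity(rate, 3.0, 1.5, 0.45), 3))
```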
Hassel, Erlend; Stensvold, Dorthe; Halvorsen, Thomas; Wisløff, Ulrik; Langhammer, Arnulf; Steinshamn, Sigurd
2017-01-01
Peak oxygen uptake (VO2peak) is an indicator of cardiovascular health and a useful tool for risk stratification. Direct measurement of VO2peak is resource-demanding and may be contraindicated. There exist several non-exercise models to estimate VO2peak that utilize easily obtainable health parameters, but none of them includes lung function measures or hemoglobin concentrations. We aimed to test whether addition of these parameters could improve prediction of VO2peak compared to an established model that includes age, waist circumference, self-reported physical activity and resting heart rate. We included 1431 subjects aged 69-77 years who completed a laboratory test of VO2peak, spirometry, and a gas diffusion test. Prediction models for VO2peak were developed with multiple linear regression, and goodness of fit was evaluated. Forced expiratory volume in one second (FEV1), diffusing capacity of the lung for carbon monoxide and blood hemoglobin concentration significantly improved the ability of the established model to predict VO2peak. The explained variance of the model increased from 31% to 48% for men and from 32% to 38% for women (p<0.001). FEV1, diffusing capacity of the lungs for carbon monoxide and hemoglobin concentration substantially improved the accuracy of VO2peak prediction when added to an established model in an elderly population.
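The reported gain in explained variance amounts to comparing nested ordinary-least-squares models by R(2). A sketch on synthetic data; the covariates below merely stand in for the study's predictors (e.g. FEV1 and hemoglobin) and are not its data:

```python
import numpy as np

def fit_and_r2(X, y):
    """Ordinary least squares with intercept; returns coefficients and R^2
    (the 'explained variance' reported in the study)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return beta, 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Illustrative comparison: an 'established' predictor set versus the same
# set extended with extra covariates. All values are synthetic.
rng = np.random.default_rng(0)
n = 200
base = rng.normal(size=(n, 3))    # stand-ins for age, waist, resting HR
extra = rng.normal(size=(n, 2))   # stand-ins for FEV1, hemoglobin
y = base @ np.array([1.0, -0.5, 0.3]) + extra @ np.array([0.8, 0.6]) \
    + rng.normal(scale=1.0, size=n)
_, r2_base = fit_and_r2(base, y)
_, r2_full = fit_and_r2(np.column_stack([base, extra]), y)
```

When the added covariates carry independent signal, as here, the extended model's in-sample R^2 is strictly higher, mirroring the 31% to 48% improvement the abstract reports.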
NASA Astrophysics Data System (ADS)
Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.
2016-04-01
Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy to improve streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET resulted in the best streamflow predictions, and its efficacy was superior for catchments with medium to high average runoff. Synthetic experiments revealed that an accurate ET product can improve the streamflow predictions in catchments with low average runoff.
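The best-performing objective function, the RMSD of ET, is straightforward to express. A minimal sketch of the criterion as it might be evaluated inside an SCE-UA style calibration loop; the numbers are invented for illustration:

```python
import numpy as np

def rmsd(sim, obs):
    """Root mean square deviation between simulated and observed series,
    the form of objective function that gave the best streamflow results."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

# In a calibration loop the parameter set minimizing this value would be
# retained; here we simply compare two candidate model runs.
obs = np.array([1.0, 2.0, 3.0, 4.0])
run_a = np.array([1.1, 1.9, 3.2, 3.8])
run_b = np.array([0.5, 2.5, 2.0, 5.0])
best = min((run_a, run_b), key=lambda s: rmsd(s, obs))
```

An optimizer such as SCE-UA would repeat this evaluation over many sampled parameter sets, shuffling complexes toward the minimum of the objective.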
Petroleum-resource appraisal and discovery rate forecasting in partially explored regions
Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.
1980-01-01
PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. 
PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2? years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. 
The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Two topics are emphasized with respect to large space structures: uncertainty of frequency response using the fuzzy set method, and on-orbit response prediction using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Mathematical models of human paralyzed muscle after long-term training.
Law, L A Frey; Shields, R K
2007-01-01
Spinal cord injury (SCI) results in major musculoskeletal adaptations, including muscle atrophy, faster contractile properties, increased fatigability, and bone loss. The use of functional electrical stimulation (FES) provides a method to prevent paralyzed muscle adaptations in order to sustain force-generating capacity. Mathematical muscle models may be able to predict optimal activation strategies during FES; however, muscle properties further adapt with long-term training. The purpose of this study was to compare the accuracy of three muscle models, one linear and two nonlinear, for predicting paralyzed soleus muscle force after exposure to long-term FES training. Further, we contrasted the findings between the trained and untrained limbs. The three models' parameters were best fit to a single force train in the trained soleus muscle (N=4). Nine additional force trains (test trains) were predicted for each subject using the developed models. Model errors between predicted and experimental force trains were determined, including specific muscle force properties. The mean overall error was greatest for the linear model (15.8%) and least for the nonlinear Hill-Huxley-type model (7.8%). No significant error differences were observed between the trained versus untrained limbs, although model parameter values were significantly altered with training. This study confirmed that nonlinear models most accurately predict both trained and untrained paralyzed muscle force properties. Moreover, the optimized model parameter values were responsive to the relative physiological state of the paralyzed muscle (trained versus untrained). These findings are relevant for the design and control of neuro-prosthetic devices for those with SCI.
NASA Technical Reports Server (NTRS)
Curry, Timothy J.; Batterson, James G. (Technical Monitor)
2000-01-01
Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between levels 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal rating.
DRAINMOD-GIS: a lumped parameter watershed scale drainage and water quality model
G.P. Fernandez; G.M. Chescheir; R.W. Skaggs; D.M. Amatya
2006-01-01
A watershed scale lumped parameter hydrology and water quality model that includes an uncertainty analysis component was developed and tested on a lower coastal plain watershed in North Carolina. Uncertainty analysis was used to determine the impacts of uncertainty in field and network parameters of the model on the predicted outflows and nitrate-nitrogen loads at the...
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Agha, Syed A; Kalogeropoulos, Andreas P; Shih, Jeffrey; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Anarado, Perry; Mangalat, Deepa; Hussain, Imad; Book, Wendy; Laskar, Sonjoy; Smith, Andrew L; Martin, Randolph; Butler, Javed
2009-09-01
Incremental value of echocardiography over clinical parameters for outcome prediction in advanced heart failure (HF) is not well established. We evaluated 223 patients with advanced HF receiving optimal therapy (91.9% angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, 92.8% beta-blockers, 71.8% biventricular pacemaker, and/or defibrillator use). The Seattle Heart Failure Model (SHFM) was used as the reference clinical risk prediction scheme. The incremental value of echocardiographic parameters for event prediction (death or urgent heart transplantation) was measured by the improvement in fit and discrimination achieved by addition of standard echocardiographic parameters to the SHFM. After a median follow-up of 2.4 years, there were 38 (17.0%) events (35 deaths; 3 urgent transplants). The SHFM had likelihood ratio (LR) chi(2) 32.0 and C statistic 0.756 for event prediction. Left ventricular end-systolic volume, stroke volume, and severe tricuspid regurgitation were independent echocardiographic predictors of events. The addition of these parameters to the SHFM improved LR chi(2) to 72.0 and C statistic to 0.866 (P < .001 and P = .019, respectively). Reclassifying the SHFM-predicted risk with use of the echocardiography-added model resulted in improved prognostic separation. Addition of standard echocardiographic variables to the SHFM results in significant improvement in risk prediction for patients with advanced HF.
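The C statistic used above to measure discrimination is the concordance probability over event/non-event pairs. A small sketch of how it can be computed directly; the risks and outcomes are toy values, not the study's cohort:

```python
import numpy as np

def c_statistic(risk, event):
    """Concordance (C) statistic: the probability that a randomly chosen
    subject with an event received a higher predicted risk than a randomly
    chosen subject without one; ties count as one half."""
    risk = np.asarray(risk, float)
    event = np.asarray(event, bool)
    pos, neg = risk[event], risk[~event]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: two events, two non-events.
auc = c_statistic([0.9, 0.8, 0.4, 0.2], [True, False, True, False])  # -> 0.75
```

Comparing this value between the clinical model and the echocardiography-added model quantifies the improvement in discrimination, as in the 0.756 versus 0.866 result above.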
A novel auto-tuning PID control mechanism for nonlinear systems.
Cetin, Meric; Iplikci, Serdar
2015-09-01
In this paper, a novel Runge-Kutta (RK) discretization-based model-predictive auto-tuning proportional-integral-derivative controller (RK-PID) is introduced for the control of continuous-time nonlinear systems. The parameters of the PID controller are tuned using an RK model of the system through prediction error-square minimization, where the predicted information of tracking error provides an enhanced tuning of the parameters. Based on the model-predictive control (MPC) approach, the proposed mechanism provides necessary PID parameter adaptations while generating additive correction terms to assist the initially inadequate PID controller. Efficiency of the proposed mechanism has been tested on two experimental real-time systems: an unstable single-input single-output (SISO) nonlinear magnetic-levitation system and a nonlinear multi-input multi-output (MIMO) liquid-level system. RK-PID has been compared to standard PID, standard nonlinear MPC (NMPC), RK-MPC and conventional sliding-mode control (SMC) methods in terms of control performance, robustness, computational complexity and design issues. The proposed mechanism exhibits acceptable tuning and control performance with very small steady-state tracking errors, and provides a very short settling time for parameter convergence. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
An online air pollution forecasting system using neural networks.
Kurt, Atakan; Gulbagci, Betul; Karaca, Ferhat; Alagha, Omar
2008-07-01
In this work, an online air pollution forecasting system for the Greater Istanbul Area is developed. The system predicts the levels of three air pollution indicators (SO(2), PM(10), and CO) for the next three days (+1, +2, and +3 days) using neural networks. AirPolTool, a user-friendly website (http://airpol.fatih.edu.tr), publishes +1, +2, and +3 day predictions of air pollutants updated twice a day. Experiments presented in this paper show that quite accurate predictions of air pollutant indicator levels are possible with a simple neural network. It is shown that further optimizations of the model can be achieved using different input parameters and different experimental setups. Firstly, the +1, +2, and +3 day pollution levels are predicted independently using the same training data; then the +2 and +3 days are predicted cumulatively using the previous days' predicted values. Better prediction results are obtained with the cumulative method. Secondly, the size of the training data set used in the model is optimized. The best modeling performance with minimum error rate is achieved using the past 3-15 days in the training data set. Finally, the effect of the day of the week as an input parameter is investigated. Better forecasts with higher accuracy are observed when the day of the week is used as an input parameter.
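The cumulative scheme, in which each day's forecast is fed back as an input for the next day, can be sketched with a toy AR(1) model standing in for the paper's neural network; all data here are synthetic:

```python
import numpy as np

def fit_ar1(series):
    """Ordinary least-squares AR(1) fit: x[t+1] ~ a*x[t] + b."""
    x, y = np.asarray(series[:-1], float), np.asarray(series[1:], float)
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def forecast_cumulative(series, a, b, horizon=3):
    """+1/+2/+3 day forecasts in the cumulative scheme: each predicted
    value is fed back as the input for the next day's prediction."""
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = a * last + b
        preds.append(last)
    return preds

# Noise-free toy series generated by x[t+1] = 0.5*x[t] + 1.0.
xs = [1.0]
for _ in range(9):
    xs.append(0.5 * xs[-1] + 1.0)
a, b = fit_ar1(xs)
preds = forecast_cumulative(xs, a, b)
```

The independent scheme would instead train separate models mapping day t directly to days t+2 and t+3; the paper found the recursive variant above more accurate.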
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.
2017-05-01
Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between data assimilation using the PMCMC and uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter, with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
A fluidized bed technique for estimating soil critical shear stress
USDA-ARS?s Scientific Manuscript database
Soil erosion models, depending on how they are formulated, always have erodibilitiy parameters in the erosion equations. For a process-based model like the Water Erosion Prediction Project (WEPP) model, the erodibility parameters include rill and interrill erodibility and critical shear stress. Thes...
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657
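The AR coefficients used as model-based features can be estimated per channel by least squares. A toy sketch, a stand-in for the Burg or Yule-Walker estimators usually applied to EEG; the signal here is synthetic:

```python
import numpy as np

def ar_features(signal, order=2):
    """Least-squares AR(p) coefficients of a single channel: predict x[t]
    from x[t-1], ..., x[t-p]. The coefficient vector itself serves as the
    model-based feature for task discrimination."""
    x = np.asarray(signal, float)
    # Column k holds x[t-1-k] for t = order .. len(x)-1.
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Noise-free AR(2) sequence x[t] = 0.5*x[t-1] - 0.3*x[t-2]: the fit
# recovers the generating coefficients.
xs = [1.0, 0.5]
for _ in range(18):
    xs.append(0.5 * xs[-1] - 0.3 * xs[-2])
coef = ar_features(xs, order=2)
```

In a BCI pipeline, the coefficient vectors from windows of each channel would then be fed to a classifier (or, as proposed above, to the model-predictive controller).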
Parameter uncertainty analysis of a biokinetic model of caesium
Li, W. B.; Klein, W.; Blanchardon, Eric; ...
2014-04-17
Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under the assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and whole-body retention of Cs. For the urinary excretion, the parameters of the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty of 43 in whole-body retention at later times, say, after Day 500, on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS.
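The uncertainty factor defined above, the square root of the ratio of the 97.5th to the 2.5th percentile, is simple to compute from Monte Carlo samples of a predicted quantity. A sketch with an assumed lognormal spread (illustrative, not the paper's distributions):

```python
import numpy as np

def uncertainty_factor(samples):
    """UF as defined in the paper: square root of the ratio between the
    97.5th and 2.5th percentiles of the sampled prediction."""
    hi, lo = np.percentile(samples, [97.5, 2.5])
    return np.sqrt(hi / lo)

# Example: a lognormal spread in a predicted retention value; for
# sigma = 0.5 the theoretical UF is exp(1.96 * 0.5) ~ 2.66.
rng = np.random.default_rng(1)
uf = uncertainty_factor(rng.lognormal(mean=0.0, sigma=0.5, size=100_000))
```

Applying this to the propagated model outputs at each time point yields the UF tables quoted in the abstract (e.g. 1.5-2.5 at early times, up to 43 at 1000 d).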
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control, with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
The Rangeland Hydrology and Erosion Model: A dynamic approach for predicting soil loss on rangelands
USDA-ARS?s Scientific Manuscript database
In this study we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed agains...
A Biomathematical Model of Pneumococcal Lung Infection and Antibiotic Treatment in Mice.
Schirm, Sibylle; Ahnert, Peter; Wienhold, Sandra; Mueller-Redetzky, Holger; Nouailles-Kursar, Geraldine; Loeffler, Markus; Witzenrath, Martin; Scholz, Markus
2016-01-01
Pneumonia is considered to be one of the leading causes of death worldwide. The outcome depends on both proper antibiotic treatment and the effectiveness of the host immune response. However, due to the complexity of the immunologic cascade initiated during infection, the latter cannot be predicted easily. We construct a biomathematical model of the murine immune response during infection with pneumococcus, aiming at predicting the outcome of antibiotic treatment. The model consists of a number of non-linear ordinary differential equations describing the dynamics of the pneumococcal population, the inflammatory cytokine IL-6, and the neutrophils and macrophages fighting the infection, as well as the destruction of alveolar tissue due to pneumococcus. Equations were derived by translating known biological mechanisms and assuming certain response kinetics. Antibiotic therapy is modelled by a transient depletion of bacteria. Unknown model parameters were determined by fitting the predictions of the model to data sets derived from mouse experiments of pneumococcal lung infection with and without antibiotic treatment. Time series of pneumococcal population, debris, neutrophils, activated epithelial cells, macrophages, monocytes and IL-6 serum concentrations were available for this purpose. The antibiotics ampicillin and moxifloxacin were considered. Parameter fitting resulted in good agreement of model and data for all experimental scenarios. Identifiability of parameters is also estimated. The model can be used to predict the performance of alternative schedules of antibiotic treatment. We conclude that we established a biomathematical model of pneumococcal lung infection in mice allowing predictions regarding the outcome of different schedules of antibiotic treatment. We aim at translating the model to the human situation in the near future.
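The abstract's full ODE system is fitted to mouse data; as a minimal sketch of its "antibiotic therapy as transient bacterial depletion" idea, consider logistic bacterial growth with a temporary kill term (the function name and all rates, times, and capacities below are hypothetical stand-ins, not the paper's values):

```python
def simulate(days=10.0, dt=0.01, treat=False,
             r=1.2, k=1e8, kill=2.5, t_on=2.0, t_off=5.0):
    """Toy logistic bacterial growth with a transient antibiotic kill term
    (explicit Euler integration; all rates and times are illustrative)."""
    b, t = 1e3, 0.0                      # initial bacterial load
    while t < days:
        growth = r * b * (1.0 - b / k)   # logistic growth toward capacity k
        death = kill * b if (treat and t_on <= t <= t_off) else 0.0
        b = max(b + dt * (growth - death), 0.0)
        t += dt
    return b
```

Comparing a treated and an untreated run shows the transient depletion keeping the bacterial load orders of magnitude lower, qualitatively mirroring how the model can compare alternative treatment schedules.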
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it effectively supports management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.
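The state-based model's a priori unknown transition rates are estimated above by simulated annealing; a minimal generic sketch of that optimizer follows (the cooling schedule, step-size rule, and the quadratic test objective are illustrative assumptions, not the study's setup):

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, n_iter=4000, t0=2.0, seed=1):
    """Minimise f on [lo, hi] with Metropolis acceptance and geometric
    cooling; returns the best point seen and its objective value."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_iter):
        cand = min(max(x + rng.gauss(0.0, t), lo), hi)  # temperature-scaled step
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= 0.999                                      # geometric cooling
    return best_x, best_f
```

Early, high-temperature iterations explore broadly (escaping local minima, as needed for global rate estimation); late, cold iterations refine the best candidate.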
Nava, Michele M; Raimondi, Manuela T; Pietrabissa, Riccardo
2013-11-01
The main challenge in engineered cartilage consists in understanding and controlling the growth process towards a functional tissue. Mathematical and computational modelling can help in the optimal design of the bioreactor configuration and in a quantitative understanding of important culture parameters. In this work, we present a multiphysics computational model for the prediction of cartilage tissue growth in an interstitial perfusion bioreactor. The model consists of two separate sub-models, one two-dimensional (2D) and one three-dimensional (3D), which are coupled to each other. These sub-models account for the hydrodynamic microenvironment imposed by the bioreactor, using the Navier-Stokes equation, as well as for mass transport and biomass growth. The biomass, assumed to be a phase comprising cells and the synthesised extracellular matrix, has been modelled using a moving boundary approach. In particular, the boundary at the fluid-biomass interface moves with a velocity depending on the local oxygen concentration and viscous stress. In this work, we show that the parameters predicted by the 2D sub-model, such as oxygen concentration and wall shear stress, are systematically overestimated with respect to those predicted by the 3D sub-model, and thus so is the tissue growth, which directly depends on these parameters. This implies that future predictive models for tissue growth should take into account the three-dimensionality of the problem for any scaffold microarchitecture.
Lothe, Anjali G; Sinha, Alok
2017-05-01
The leachate pollution index (LPI) is an environmental index which quantifies the pollution potential of leachate generated at a landfill site. Calculation of the LPI is based on the concentrations of 18 parameters present in the leachate. However, when not all 18 parameters are available, evaluation of the actual value of the LPI becomes difficult. In this study, a model has been developed to predict the actual value of the LPI in the case of partial availability of parameters. This model generates eleven equations that help determine the upper and lower limits of the LPI; the geometric mean of these two values gives the LPI value. Application of this model to three landfill sites yields LPI values with an error of ±20% for ∑ᵢ wᵢ ≥ 0.6, where wᵢ are the weights of the available parameters.
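A minimal sketch of the two aggregation steps this abstract relies on, assuming the standard weighted sub-index form of the LPI and the geometric-mean combination of the model's upper and lower limits (function names and example numbers are illustrative):

```python
import math

def lpi(weights, subindices):
    """Weighted aggregation of pollutant sub-index scores; with only a subset
    of the 18 parameters available, normalise by the available weight sum."""
    total = sum(w * p for w, p in zip(weights, subindices))
    return total / sum(weights)

def lpi_from_bounds(lower, upper):
    """Combine model-derived upper and lower LPI limits by their geometric
    mean, as the abstract describes."""
    return math.sqrt(lower * upper)
```

For example, two parameters with equal weights 0.5 and sub-index scores 10 and 20 aggregate to an LPI of 15.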
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
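A small sketch of the multiresolution-emulation idea, assuming resolution is encoded as an input feature so a random forest trained mostly on cheap low-resolution members can be queried at high resolution (the synthetic data, feature layout, and all numbers are illustrative, not the ensemble in the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic "perturbed parameter ensemble": two model parameters plus a
# resolution flag (0 = low, 1 = high), with far more cheap low-res members.
n_low, n_high = 400, 40
X_low = np.column_stack([rng.uniform(0, 1, (n_low, 2)), np.zeros(n_low)])
X_high = np.column_stack([rng.uniform(0, 1, (n_high, 2)), np.ones(n_high)])
X = np.vstack([X_low, X_high])
# Toy scalar output (standing in for e.g. TOA energy flux) with a small
# resolution-dependent shift and observation noise.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, len(X))

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
# Query the emulator at high resolution for a new parameter setting.
pred = rf.predict(np.array([[0.5, 0.5, 1.0]]))[0]
```

Because the trees can split on the resolution flag, the abundant low-resolution runs carry most of the parameter-response signal while the few high-resolution runs calibrate the resolution offset.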
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
Mining manufacturing data for discovery of high productivity process characteristics.
Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou
2010-06-01
Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses.
NASA Astrophysics Data System (ADS)
Bo, T. L.; Fu, L. T.; Liu, L.; Zheng, X. J.
2017-06-01
Studies of wind-blown sand are crucial for understanding the change of climate and landscape on Mars. However, shortcomings of existing saltation models may result in unreliable predictions. In this paper, the saltation model has been improved in two main aspects: the aerodynamic surface roughness and the lift-off parameters. The aerodynamic surface roughness is expressed as a function of particle size, wind strength, air density, and air dynamic viscosity. The lift-off parameters are improved by including the dependence of the restitution coefficient on incident parameters and the correlation between saltating speed and angle. The improved model proved capable of reproducing the observed data well, both in the stable stage and during the evolution process. All of the improvements benefit the modeling of wind-blown sand, and the dependence of the restitution coefficient on incident parameters cannot be ignored. A constant restitution coefficient and uncorrelated lift-off parameter distributions would lead to overestimation of both the sand transport rate and the apparent surface roughness, as well as a delay of the evolution process. The distribution of lift-off speed and the evolution of lift-off parameters on Mars are found to differ from those on Earth. This suggests that it is inappropriate to predict the evolution of wind-blown sand using the lift-off velocity obtained in steady-state saltation, and that it may also be problematic to predict wind-blown sand on Mars by directly applying the lift-off velocity obtained under terrestrial conditions.
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
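At the core of the Kalman-filter assimilation discussed above is the linear correction of a model prediction by an observation; a scalar sketch of the measurement update (symbols follow the usual Kalman notation; the numbers in the defaults are illustrative):

```python
def kf_update(x, P, z, H=1.0, R=0.25):
    """Scalar Kalman measurement update: correct predicted state x (with
    variance P) using observation z (with noise variance R) and
    observation operator H."""
    K = P * H / (H * P * H + R)   # Kalman gain
    x_new = x + K * (z - H * x)   # linear correction of the prediction
    P_new = (1.0 - K * H) * P     # reduced posterior variance
    return x_new, P_new
```

Smoothing variants such as the paper's one-step-ahead scheme apply this same gain/update algebra, but use the current observation to correct the state and parameters one step back in time.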
Modelling the growth of Populus species using Ecosystem Demography (ED) model
NASA Astrophysics Data System (ADS)
Wang, D.; Lebauer, D. S.; Feng, X.; Dietze, M. C.
2010-12-01
Hybrid poplar plantations are an important source being evaluated for biomass production. Effective management of such plantations requires adequate growth and yield models. The Ecosystem Demography model (ED) makes predictions about the large scales of interest in above- and belowground ecosystem structure and the fluxes of carbon and water from a description of the fine-scale physiological processes. In this study, we used a workflow management tool, the Predictive Ecophysiological Carbon flux Analyzer (PECAn), to integrate literature data, field measurement and the ED model to provide predictions of ecosystem functioning. Parameters for the ED ensemble runs were sampled from the posterior distribution of ecophysiological traits of Populus species compiled from the literature using a Bayesian meta-analysis approach. Sensitivity analysis was performed to identify the parameters which contribute the most to the uncertainties of the ED model output. Model emulation techniques were used to update parameter posterior distributions using field-observed data in northern Wisconsin hybrid poplar plantations. Model results were evaluated with 5-year field-observed data in a hybrid poplar plantation at New Franklin, MO. ED was then used to predict the spatial variability of poplar yield in the coterminous United States (United States minus Alaska and Hawaii). Sensitivity analysis showed that root respiration, dark respiration, growth respiration, stomatal slope and specific leaf area contribute the most to the uncertainty, which suggests that our field measurements and data collection should focus on these parameters. The ED model successfully captured the inter-annual and spatial variability of the yield of poplar. Analyses in progress with the ED model focus on evaluating the ecosystem services of short-rotation woody plantations, such as impacts on soil carbon storage, water use, and nutrient retention.
Bayesian Modeling of Exposure and Airflow Using Two-Zone Models
Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy
2009-01-01
Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show nice concordance with the true values, indicating that the two-zone model assumptions agree with the reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. 
The approach illustrates how exposure models and information on model parameters together with the knowledge of uncertainty and variability in these quantities can be used to not only provide better estimates of model outputs but also model parameters.
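For reference, the deterministic two-zone model has a well-known steady-state solution against which the Bayesian estimates can be checked: the far-field concentration is G/Q and the near field adds G/β, where G is the generation rate, Q the ventilation rate, and β the interzonal airflow. A sketch with illustrative numbers:

```python
def two_zone_steady(G, Q, beta):
    """Steady-state two-zone concentrations: far field G/Q, near field
    G/Q + G/beta (G = generation rate, Q = ventilation rate,
    beta = interzonal airflow; units must be consistent, e.g. mg/min
    and m^3/min giving mg/m^3)."""
    c_far = G / Q
    c_near = c_far + G / beta
    return c_near, c_far
```

This equilibrium structure is why the paper's finding that estimated interzonal airflows sit close to the equilibrium solutions indicates the method works efficiently.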
Erguler, Kamil; Stumpf, Michael P H
2011-05-01
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
Yousefzadeh, Behrooz; Hodgson, Murray
2012-09-01
A beam-tracing model was used to study the acoustical responses of three empty, rectangular rooms with different boundary conditions. The model is wave-based (accounting for sound phase) and can be applied to rooms with extended-reaction surfaces that are made of multiple layers of solid, fluid, or poroelastic materials; the acoustical properties of these surfaces are calculated using Biot theory. Three room-acoustical parameters were studied in various room configurations: sound strength, reverberation time, and RApid Speech Transmission Index. The main objective was to investigate the effects of modeling surfaces as either local or extended reaction on predicted values of these three parameters. Moreover, the significance of modeling interference effects was investigated, including the study of sound phase-change on surface reflection. Modeling surfaces as of local or extended reaction was found to be significant for surfaces consisting of multiple layers, specifically when one of the layers is air. For multilayers of solid materials with an air-cavity, this was most significant around their mass-air-mass resonance frequencies. Accounting for interference effects made significant changes in the predicted values of all parameters. Modeling phase change on reflection, on the other hand, was found to be relatively much less significant.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T₀), ΔS(T₀), and ΔC_P for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for ¹tr and 2.1% for ²tr.
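The estimation strategy, minimizing squared retention-time error with the gradient-free Nelder-Mead simplex, can be sketched on a toy two-parameter model (the exponential form and all values below are stand-ins, not the paper's thermodynamic retention model):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic noiseless "retention" data from a toy two-parameter model
# y = a * exp(b * x); a and b stand in for thermodynamic parameters.
x = np.linspace(0.0, 2.0, 20)
y_obs = 2.0 * np.exp(0.5 * x)

def sse(p):
    """Sum of squared errors between model predictions and observations."""
    a, b = p
    return float(np.sum((a * np.exp(b * x) - y_obs) ** 2))

# Nelder-Mead needs no gradients, matching the simplex-search approach.
res = minimize(sse, x0=[1.0, 0.1], method="Nelder-Mead")
a_fit, b_fit = res.x
```

Starting from a rough guess, the simplex contracts onto the parameters that reproduce the observed data, which is the same pattern used to recover the three thermodynamic parameters from temperature-programmed runs.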
Mass Transport through Nanostructured Membranes: Towards a Predictive Tool
Darvishmanesh, Siavash; Van der Bruggen, Bart
2016-01-01
This study proposes a new mechanism to understand the transport of solvents through nanostructured membranes from a fundamental point of view. The findings are used to develop readily applicable mathematical models to predict solvent fluxes and solute rejections through solvent resistant membranes used for nanofiltration. The new model was developed based on a pore-flow type of transport. New parameters found to be of fundamental importance were introduced to the equation, i.e., the affinity of the solute and the solvent for the membrane expressed as the hydrogen-bonding contribution of the solubility parameter for the solute, solvent and membrane. A graphical map was constructed to predict the solute rejection based on the hydrogen-bonding contribution of the solubility parameter. The model was evaluated with performance data from the literature. Both the solvent flux and the solute rejection calculated with the new approach were similar to values reported in the literature.
Modeling Patterns of Total Dissolved Solids Release from Central Appalachia, USA, Mine Spoils.
Clark, Elyse V; Zipper, Carl E; Daniels, W Lee; Orndorff, Zenah W; Keefe, Matthew J
2017-01-01
Surface mining in the central Appalachian coalfields (USA) influences water quality because the interaction of infiltrated waters and O₂ with freshly exposed mine spoils releases elevated levels of total dissolved solids (TDS) to streams. Modeling and predicting the short- and long-term TDS release potentials of mine spoils can aid in the management of current and future mining-influenced watersheds and landscapes. In this study, the specific conductance (SC, a proxy variable for TDS) patterns of 39 mine spoils during a sequence of 40 leaching events were modeled using a five-parameter nonlinear regression. Estimated parameter values were compared to six rapid spoil assessment techniques (RSATs) to assess predictive relationships between model parameters and RSATs. Spoil leachates reached maximum values, 1108 ± 161 μS cm⁻¹ on average, within the first three leaching events, then declined exponentially to a breakpoint at the 16th leaching event on average. After the breakpoint, SC release remained linear, with most spoil samples exhibiting declines in SC release with successive leaching events. The SC asymptote averaged 276 ± 25 μS cm⁻¹. Only three samples had SCs >500 μS cm⁻¹ at the end of the 40 leaching events. Model parameters varied with mine spoil rock and weathering type, and RSATs were predictive of four model parameters. Unweathered samples released higher SCs throughout the leaching period relative to weathered samples, and rock type influenced the rate of SC release. The RSATs for SC, total S, and neutralization potential may best predict certain phases of mine spoil TDS release.
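A sketch of a five-parameter piecewise model consistent with the description above: exponential decline toward an asymptote up to a breakpoint event, then a linear decline (the functional form and parameter names are an assumption for illustration; the paper's exact regression may differ):

```python
import math

def sc_release(event, c0, k, b, m, c_inf):
    """Hypothetical five-parameter SC model: exponential decline from c0
    toward asymptote c_inf up to breakpoint event b, then a linear decline
    with slope m per event (floored at zero)."""
    if event <= b:
        return c_inf + (c0 - c_inf) * math.exp(-k * (event - 1))
    sc_b = c_inf + (c0 - c_inf) * math.exp(-k * (b - 1))  # value at breakpoint
    return max(sc_b - m * (event - b), 0.0)
```

With parameters in the ranges reported (initial SC near 1100 μS cm⁻¹, breakpoint near event 16, asymptote near 276 μS cm⁻¹), the curve reproduces the qualitative pattern of early peak, exponential decline, and slow linear tail-off.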
Identification and synthetic modeling of factors affecting American black duck populations
Conroy, Michael J.; Miller, Mark W.; Hines, James E.
2002-01-01
We reviewed the literature on factors potentially affecting the population status of American black ducks (Anas rubripes). Our review suggests that there is some support for the influence of 4 major, continental-scope factors in limiting or regulating black duck populations: 1) loss in the quantity or quality of breeding habitats; 2) loss in the quantity or quality of wintering habitats; 3) harvest; and 4) interactions (competition, hybridization) with mallards (Anas platyrhynchos) during the breeding and/or wintering periods. These factors were used as the basis of an annual life cycle model in which reproduction rates and survival rates were modeled as functions of the above factors, with parameters of the model describing the strength of these relationships. Variation in the model parameter values allows for consideration of scientific uncertainty as to the degree to which each of these factors may be contributing to declines in black duck populations, and thus allows for the investigation of the possible effects of management (e.g., habitat improvement, harvest reductions) under different assumptions. We then used available, historical data on black duck populations (abundance, annual reproduction rates, and survival rates) and possible driving factors (trends in breeding and wintering habitats, harvest rates, and abundance of mallards) to estimate model parameters. Our estimated reproduction submodel included parameters describing negative density feedback of black ducks, positive influence of breeding habitat, and negative influence of mallard densities; our survival submodel included terms for positive influence of winter habitat on survival rates, and negative influences of black duck density (i.e., compensation to harvest mortality). Individual models within each group (reproduction, survival) involved various combinations of these factors, and each was given an information-theoretic weight for use in subsequent prediction.
The reproduction model with highest AIC weight (0.70) predicted black duck age ratios increasing as a function of decreasing mallard abundance and increasing acreage of breeding habitat; all models considered involved negative density dependence for black ducks. The survival model with highest AIC weight (0.51) predicted nonharvest survival increasing as a function of increasing acreage of wintering habitat and decreasing harvest rates (additive mortality); models involving compensatory mortality effects received ≈0.12 total weight, vs. 0.88 for additive models. We used the combined model, together with our historical data set, to perform a series of 1-year population forecasts, similar to those that might be performed under adaptive management. Initial model forecasts over-predicted observed breeding populations by ≈25%. Least-squares calibration reduced the bias to ≈0.5% underprediction. After calibration, model-averaged predictions over the 16 alternative models (4 reproduction × 4 survival, weighted by AIC model weights) explained 67% of the variation in annual breeding population abundance for black ducks, suggesting that it might have utility as a predictive tool in adaptive management. We investigated the effects of statistical uncertainty in parameter values on predicted population growth rates for the combined annual model, via sensitivity analyses. Parameter sensitivity varied in relation to the parameter values over the estimated confidence intervals, and in relation to harvest rates and mallard abundance. Forecasts of black duck abundance were extremely sensitive to variation in parameter values for the coefficients for breeding and wintering habitat effects.
Model-averaged forecasts of black duck abundance were also sensitive to changes in harvest rate and mallard abundance, with rapid declines in black duck abundance predicted for a range of harvest rates and mallard abundance higher than current levels of either factor, but easily envisaged, particularly given current rates of growth for mallard populations. Because of concerns about sensitivity to habitat coefficients, and particularly in light of deficiencies in the historical data used to estimate these parameters, we developed a simplified model that excludes habitat effects. We also developed alternative models involving a calibration adjustment for reproduction rates, survival rates, or neither. Calibration of survival rates performed best (AIC weight 0.59, % BIAS = -0.280, R2=0.679), with reproduction calibration somewhat inferior (AIC weight 0.41, % BIAS = -0.267, R2=0.672); models without calibration received virtually no AIC weight and were discarded. We recommend that the simplified model set (4 biological models × 2 alternative calibration factors) be retained as the best working set of alternative models for research and management. Finally, we provide some preliminary guidance for the development of adaptive harvest management for black ducks, using our working set of models.
Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K
2015-04-01
Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
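The abstract does not specify the empirical model form; one common formulation in which g0 appears as the intercept is a Ball-Berry-type stomatal conductance model, sketched here purely to illustrate where a genetically parameterized g0 would enter. All numeric values are hypothetical.

```python
def ball_berry_gs(a_net, rh_surface, co2_surface, g0, g1):
    """Ball-Berry-type empirical stomatal conductance (mol m-2 s-1):
    g0 is the minimum conductance (intercept), g1 the slope; a_net is net
    assimilation, rh_surface and co2_surface the leaf-surface humidity and
    CO2. Illustrative only; not the paper's exact transpiration model."""
    return g0 + g1 * a_net * rh_surface / co2_surface

# Two hypothetical genotypes differing only in the g0 QTL effect:
gs_low  = ball_berry_gs(a_net=12.0, rh_surface=0.65, co2_surface=380.0,
                        g0=0.01, g1=9.0)
gs_high = ball_berry_gs(a_net=12.0, rh_surface=0.65, co2_surface=380.0,
                        g0=0.05, g1=9.0)
```

Because g0 enters additively, a genotype's g0 offset shifts gs (and hence transpiration) across all conditions, which is why errors in this one parameter propagate strongly.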
Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane
2018-03-01
A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that the use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so the equation can be used for a reasoned overestimation of diffusivity.
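The semi-empirical equation referred to is commonly cited in the Piringer form below, where A'_P and τ are the two polymer-dependent parameters the study proposes to update for PS. The constants follow the form usually quoted in the migration-modelling literature; the placeholder parameter values are not the paper's, and the exact coefficients should be verified against current EU guidance before any regulatory use.

```python
import math

def piringer_diffusivity(m_r, temp_k, a_p_prime, tau):
    """Piringer-type worst-case diffusion coefficient (cm^2/s) for a
    migrant of relative molecular mass m_r in a polymer at temperature
    temp_k. a_p_prime and tau are the polymer-dependent parameters."""
    a_p = a_p_prime - tau / temp_k
    return 1e4 * math.exp(a_p - 0.1351 * m_r ** (2.0 / 3.0)
                          + 0.003 * m_r - 10454.0 / temp_k)

# Illustrative call with placeholder PS parameters (not the paper's values):
d_est = piringer_diffusivity(m_r=400.0, temp_k=313.0, a_p_prime=0.0, tau=0.0)
```

Raising A'_P raises the predicted diffusivity for every migrant, which is how an updated parameter set turns a systematic underestimation into a reasoned overestimation.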
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of error in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
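The dominance of errors in n at long times can be seen from the power-law transient term D1*t^n alone: a fixed relative error in the exponent grows with t. The compliance values below are hypothetical, and this single term is only one piece of the full seven-parameter Schapery model.

```python
def power_law_compliance(t, d0, d1, n):
    """Creep compliance with a power-law transient term, D(t) = D0 + D1*t^n.
    Illustrative values; the full Schapery model has seven parameters."""
    return d0 + d1 * t ** n

# A 5% error in n produces a prediction error that grows with time:
d0, d1, n = 0.5, 0.05, 0.2          # hypothetical parameter values
errors = []
for t in (1e2, 1e4, 1e6):           # short-, mid-, and long-term times
    exact = power_law_compliance(t, d0, d1, n)
    perturbed = power_law_compliance(t, d0, d1, 1.05 * n)
    errors.append(abs(perturbed - exact) / exact)
```

Because the exponent error compounds as t^(Δn), the relative prediction error keeps increasing with time, whereas errors in D0 or D1 stay bounded.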
NASA Astrophysics Data System (ADS)
Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.
2016-03-01
Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
NASA Astrophysics Data System (ADS)
Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate
2017-04-01
Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials. 
The diffusivity function predicted values of a similar range as shown in other studies. Overall, the model was able to emulate soil moisture time series at shallow measurement depths, but deviated increasingly at greater depths. This indicates that some of the model parameters are not constant throughout the profile. However, overall seepage fluxes were still predicted correctly. In the near future we will apply the inversion method to lower frequency soil moisture data from different sites to evaluate the model's ability to predict preferential flow seepage fluxes at the field scale.
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict the injury parameters of the occupants corresponding to different experimental parameters and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work. The work was based on finite experimental observation values obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which combined the characteristics of the back propagation-artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model was well suited to studies of occupant impact injuries and allowed multiple-parameter forecasts of the occupant impact injuries to be realized assuming values for various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting accuracy evaluation methods. All of the results demonstrated the accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 5.2 percent and the ARE values did not exceed 3.1 percent. When the occupant faced another occupant, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.3 percent and the ARE values did not exceed 3.8 percent.
The injury forecast model established in this article reduced repeat experiment times and improved the design efficiency of the internal compartment's structure parameters, and it provided a new way for assessing the safety performance of the interior structural parameters in existing, and newly designed, railway vehicle compartments.
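The accuracy measures quoted above can be computed as follows; standard definitions of relative error and average relative error (ARE) are assumed, since the abstract does not spell them out, and the toy values are hypothetical.

```python
def relative_errors(predicted, experimental):
    """Per-sample relative error (%) between predicted and experimental
    injury parameter values; a standard definition is assumed."""
    return [100.0 * (p - e) / e for p, e in zip(predicted, experimental)]

def average_relative_error(predicted, experimental):
    """ARE (%): mean of the absolute per-sample relative errors."""
    errs = relative_errors(predicted, experimental)
    return sum(abs(x) for x in errs) / len(errs)

# Toy check against the reported band for the occupant-facing-table case
# (relative errors within +/- 6.0%, ARE below 3.0%):
pred = [102.0, 97.5, 101.0]     # hypothetical model predictions
expt = [100.0, 100.0, 100.0]    # hypothetical experimental values
errs = relative_errors(pred, expt)
are = average_relative_error(pred, expt)
```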
NASA Astrophysics Data System (ADS)
Kim, M. S.; Onda, Y.; Kim, J. K.
2015-01-01
The SHALSTAB model was applied to rainfall-induced shallow landslides to evaluate soil properties and the effect of soil depth for a granite area in the Jinbu region, Republic of Korea. Soil depth measured by a knocking pole test, two soil parameters from a direct shear test (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), while soil data representing the effect of measured soil depth and of average soil depth (derived from the Case I data) were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the accuracy of prediction. The ROC results for the first simulation showed low values, under 0.75, possibly due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. The ROC analysis for Case II and Case III showed higher accuracy than the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when soil parameters represent the effect of soil depth.
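The ROC accuracy values cited above can be reproduced with the rank (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen observed-landslide cell receives a higher instability score than a randomly chosen stable cell. The scores and labels below are toy values, not the study's data.

```python
def roc_auc(scores, labels):
    """ROC area under the curve via pairwise comparison of positive
    (label 1, observed landslide) and negative (label 0) samples.
    Ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy instability scores; higher should flag the observed landslides (1):
auc = roc_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
```

An AUC of 0.5 corresponds to a random classifier, which is why values under 0.75 in Case I were judged a weak prediction.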
NASA Astrophysics Data System (ADS)
Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.
2015-12-01
One of the most uncertain modeling tasks in hydrology is the prediction of ungauged stream sediment load and concentration statistics. This study presents integrated artificial neural networks (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the input parameters used include: soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on the randomly selected 2/3 of the dataset of 94 gauged streams in Ontario, Canada and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality for ungauged basins.
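The two quantities the ANNs predict are the coefficient and exponent of the standard power-law sediment rating curve, C = αQ^β. The sketch below shows how predicted α and β would then be applied to an ungauged basin's flow series; the discharge values and parameters are illustrative assumptions.

```python
def rating_curve_concentration(q, alpha, beta):
    """Sediment rating curve C = alpha * Q**beta: concentration as a
    power-law function of discharge Q. Units and values are illustrative."""
    return alpha * q ** beta

# Once alpha and beta are predicted for an ungauged basin, applying the
# curve to a discharge series yields the concentrations needed for
# concentration-duration curve (CDC) analysis:
flows = [0.5, 1.0, 2.0, 5.0, 10.0]   # hypothetical discharges, m3/s
concs = [rating_curve_concentration(q, alpha=12.0, beta=1.3) for q in flows]
```

Note that at Q = 1 the curve returns α directly, which is why α sets the overall concentration scale while β controls how steeply concentration rises with discharge.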
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Ruyck, Kim, E-mail: kim.deruyck@UGent.be; Sabbe, Nick; Oberije, Cary
2011-10-01
Purpose: To construct a model for the prediction of acute esophagitis in lung cancer patients receiving chemoradiotherapy by combining clinical data, treatment parameters, and genotyping profile. Patients and Methods: Data were available for 273 lung cancer patients treated with curative chemoradiotherapy. Clinical data included gender, age, World Health Organization performance score, nicotine use, diabetes, chronic disease, tumor type, tumor stage, lymph node stage, tumor location, and medical center. Treatment parameters included chemotherapy, surgery, radiotherapy technique, tumor dose, mean fractionation size, mean and maximal esophageal dose, and overall treatment time. A total of 332 genetic polymorphisms were considered in 112 candidate genes. The prediction model was obtained by lasso logistic regression for predictor selection, followed by classic logistic regression for unbiased estimation of the coefficients. Performance of the model was expressed as the area under the curve of the receiver operating characteristic and as the false-negative rate in the optimal point on the receiver operating characteristic curve. Results: A total of 110 patients (40%) developed acute esophagitis Grade {>=}2 (Common Terminology Criteria for Adverse Events v3.0). The final model contained chemotherapy treatment, lymph node stage, mean esophageal dose, gender, overall treatment time, radiotherapy technique, rs2302535 (EGFR), rs16930129 (ENG), rs1131877 (TRAF3), and rs2230528 (ITGB2). The area under the curve was 0.87, and the false-negative rate was 16%. Conclusion: Prediction of acute esophagitis can be improved by combining clinical, treatment, and genetic factors. A multicomponent prediction model for acute esophagitis with a sensitivity of 84% was constructed with two clinical parameters, four treatment parameters, and four genetic polymorphisms.
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase flexibility of model parameters. I collected known fates data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e. prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e. 
integrated AAH model) about survival outside the hunting season from known fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the predictions to the parameters were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e. MABE and RMSE) were less than 20%. The approach we propose here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface for different locations, with the use of readily available data (i.e. latitude and longitude of the location) as inputs.
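The four performance statistics cited above can be computed as below. The abstract does not state its exact error normalisation, so relative (percentage) errors are assumed for MABE and RMSE; the toy prediction/observation values are illustrative.

```python
import math

def fit_metrics(pred, obs):
    """Correlation coefficient r, Mean Absolute Bias Error (MABE, %),
    Root Mean Square Error (RMSE, %), and coefficient of determination R^2,
    using standard definitions with relative errors for MABE and RMSE."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    r = cov / (sp * so)
    mabe = 100.0 * sum(abs(p - o) / o for p, o in zip(pred, obs)) / n
    rmse = 100.0 * math.sqrt(sum(((p - o) / o) ** 2
                                 for p, o in zip(pred, obs)) / n)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return r, mabe, rmse, r2

# Toy hourly-radiation predictions vs. observations (hypothetical W/m^2):
r, mabe, rmse, r2 = fit_metrics([105.0, 210.0, 290.0], [100.0, 200.0, 300.0])
```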
Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia
2018-04-24
A reliable Monte Carlo prediction of proton-induced brain tissue activation used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements is crucial for in vivo treatment verification. Major limitations to overcome in current approaches include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields a generally higher similarity for most of the patients, except in highly pathological areas leading to tissue misclassification. Using washout model parameters deduced from clinical patient data could considerably improve the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can be improved with MR-based tissue classification and data-driven washout parameters, thus providing a more reliable basis for PT-PET verification. Copyright © 2018 Elsevier B.V. All rights reserved.
New Secondary Batteries Utilizing Electronically Conductive Polypyrrole Cathode. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Yeu, Taewhan
1991-01-01
To gain a better understanding of the dynamic behavior in electronically conducting polypyrroles and to provide guidance toward designs of new secondary batteries based on these polymers, two mathematical models are developed; one for the potentiostatically controlled switching behavior of polypyrrole film, and one for the galvanostatically controlled charge/discharge behavior of lithium/polypyrrole secondary battery cell. The first model is used to predict the profiles of electrolyte concentrations, charge states, and electrochemical potentials within the thin polypyrrole film during switching process as functions of applied potential and position. Thus, the detailed mechanisms of charge transport and electrochemical reaction can be understood. Sensitivity analysis is performed for independent parameters, describing the physical and electrochemical characteristic of polypyrrole film, to verify their influences on the model performance. The values of independent parameters are estimated by comparing model predictions with experimental data obtained from identical conditions. The second model is used to predict the profiles of electrolyte concentrations, charge state, and electrochemical potentials within the battery system during charge and discharge processes as functions of time and position. Energy and power densities are estimated from model predictions and compared with existing battery systems. The independent design criteria on the charge and discharge performance of the cell are provided by studying the effects of design parameters.
Silitonga, Arridina Susan; Hassan, Masjuki Haji; Ong, Hwai Chyuan; Kusumo, Fitranto
2017-11-01
The purpose of this study is to investigate the performance, emission and combustion characteristics of a four-cylinder common-rail turbocharged diesel engine fuelled with Jatropha curcas biodiesel-diesel blends. A kernel-based extreme learning machine (KELM) model is developed in this study using MATLAB software in order to predict the performance, combustion and emission characteristics of the engine. To acquire the data for training and testing the KELM model, the engine speed was selected as the input parameter, whereas the performance, exhaust emissions and combustion characteristics were chosen as the output parameters of the KELM model. The performance, emissions and combustion characteristics predicted by the KELM model were validated by comparing the predicted data with the experimental data. The results show that the coefficient of determination of the parameters is within a range of 0.9805-0.9991 for both the KELM model and the experimental data. The mean absolute percentage error is within a range of 0.1259-2.3838. This study shows that KELM modelling is a useful technique in biodiesel research since it enables scientists and researchers to predict the performance, exhaust emissions and combustion characteristics of internal combustion engines with high accuracy.
Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M
2015-09-01
Predicting the secondary structure of RNA is an intermediate step in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
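The sequence-dependent model above translates directly into a lookup. The penalty and bonus values come from the abstract; the helper names and the treatment of the two closing pairs as a simple list are this sketch's assumptions.

```python
PURINES = {"A", "G"}

# Base free energy penalties (kcal/mol) by bulge composition, 5'->3',
# as reported in the abstract:
BULGE_BASE = {
    (True, True): 3.06,    # two purines
    (False, False): 2.93,  # two pyrimidines
    (True, False): 2.71,   # 5'-purine-pyrimidine-3'
    (False, True): 2.41,   # 5'-pyrimidine-purine-3'
}

def bulge_delta_g(bulge, adjacent_pairs):
    """Free energy penalty for a dinucleotide bulge. bulge: two-nucleotide
    string, 5'->3'; adjacent_pairs: the closing base pairs flanking the
    bulge, e.g. ["AU", "GC"]. Adds +0.45 kcal/mol per adjacent A-U pair
    and -0.28 kcal/mol per adjacent G-U pair."""
    dg = BULGE_BASE[(bulge[0] in PURINES, bulge[1] in PURINES)]
    for pair in adjacent_pairs:
        if set(pair) == {"A", "U"}:
            dg += 0.45
        elif set(pair) == {"G", "U"}:
            dg -= 0.28
    return dg

dg = bulge_delta_g("GA", ["AU", "GC"])   # two purines, one adjacent A-U pair
```

Values computed this way could feed a nearest-neighbor free energy minimization in place of the sequence-independent bulge term.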
Relating Data and Models to Characterize Parameter and Prediction Uncertainty
Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...
Watershed scale rainfall‐runoff models are used for environmental management and regulatory modeling applications, but their effectiveness is limited by predictive uncertainties associated with model input data. This study evaluated the effect of temporal and spatial rainfall re...
Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben
2017-08-22
A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [ Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992 , 8 , 2680 ]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.
NASA Astrophysics Data System (ADS)
Zhao, Xiang-Feng; Shang, De-Guang; Sun, Yu-Juan; Song, Ming-Liang; Wang, Xiao-Wei
2018-01-01
The maximum shear strain and the normal strain excursion on the critical plane are regarded as the primary crack driving force parameters to establish a new short crack model in this paper. An equivalent strain-based intensity factor is proposed to correlate the short crack growth rate under multiaxial loading. Building on the short crack model, a new method is proposed for multiaxial fatigue life prediction based on crack growth analysis. It is demonstrated that the method can be used under proportional and non-proportional loadings. The predicted results showed good agreement with experimental lives in both high-cycle and low-cycle regions.
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Selection of fire spread model for Russian fire behavior prediction system
Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov
2010-01-01
Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for the modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...
Imposing constraints on parameter values of a conceptual hydrological model using baseflow response
NASA Astrophysics Data System (ADS)
Dunn, S. M.
Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.
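The efficiency-based likelihood measures described above can be sketched with a Nash-Sutcliffe efficiency computed separately on total flow and on the baseflow component. The combination rule shown here (taking the minimum of the two efficiencies) is an assumption for illustration; the paper defines its own combined measure.

```python
# Illustrative sketch of total-flow, baseflow and combined efficiency
# measures. The min() combination is an assumed rule, not the paper's.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def combined_efficiency(obs_total, sim_total, obs_base, sim_base):
    # A parameter set is only as acceptable as its worst-performing component,
    # so a good total-flow fit cannot mask a poor baseflow fit.
    return min(nse(obs_total, sim_total), nse(obs_base, sim_base))
```

Parameter sets whose combined efficiency falls below a chosen threshold would be rejected, narrowing the acceptable ranges.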
Modeling the shape and composition of the human body using dual energy X-ray absorptiometry images
Shepherd, John A.; Fan, Bo; Schwartz, Ann V.; Cawthon, Peggy; Cummings, Steven R.; Kritchevsky, Stephen; Nevitt, Michael; Santanasto, Adam; Cootes, Timothy F.
2017-01-01
There is growing evidence that body shape and regional body composition are strong indicators of metabolic health. The purpose of this study was to develop statistical models that accurately describe holistic body shape, thickness, and leanness. We hypothesized that there are unique body shape features that are predictive of mortality beyond standard clinical measures. We developed algorithms to process whole-body dual-energy X-ray absorptiometry (DXA) scans into body thickness and leanness images. We performed statistical appearance modeling (SAM) and principal component analysis (PCA) to efficiently encode the variance of body shape, leanness, and thickness across a sample of 400 older Americans from the Health ABC study. The sample included 200 cases and 200 controls based on 6-year mortality status, matched on sex, race and BMI. The final model contained 52 points outlining the torso, upper arms, thighs, and bony landmarks. Correlation analyses were performed on the PCA parameters to identify body shape features that vary across groups and with metabolic risk. Stepwise logistic regression was performed to identify sex and race, and to predict mortality risk as a function of body shape parameters. These parameters are novel body composition features that uniquely identify body phenotypes of different groups and predict mortality risk. Three parameters from a SAM of body leanness and thickness accurately identified sex (training AUC = 0.99) and six accurately identified race (training AUC = 0.91) in the sample dataset. Three parameters from a SAM of only body thickness predicted mortality (training AUC = 0.66, validation AUC = 0.62). Further study is warranted to identify specific shape/composition features that predict other health outcomes. PMID:28423041
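The PCA step at the core of the statistical appearance modeling above can be sketched generically: stack one feature vector per subject (landmark coordinates plus thickness/leanness values), center, and keep the leading components as compact "shape parameters". This is a generic illustration, not the Health ABC pipeline.

```python
# Generic PCA sketch for statistical appearance modeling: per-subject
# feature vectors are reduced to a few shape-parameter scores.
import numpy as np

def pca_shape_parameters(X, n_components):
    """X: (n_subjects, n_features). Returns (scores, components, mean)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of centered data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    scores = Xc @ components.T      # per-subject shape parameters
    return scores, components, mean
```

The resulting scores are the low-dimensional inputs that downstream correlation analyses or logistic regressions would consume.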
Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore
2009-01-01
Currently, the assessment of sperm function in a raw or processed semen sample cannot reliably predict sperm ability to withstand freezing and thawing procedures, in vivo fertility, and/or assisted reproductive biotechnology (ART) outcomes. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability and hence blastocyst output in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). In order to assess the predictive value of viability, computer assisted sperm analyzer (CASA) motility parameters and intracellular ATP concentration before and after thawing, and of DNA integrity after thawing, on subsequent embryo output after an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained a deviance of 72% (p < 0.0001), directly related to the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value for IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of the many possible ways to explain differences found in embryo output following IVF with different semen donors and may represent a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288
Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Camanho, Pedro P.
2001-01-01
An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single mode delamination and in the mixed-mode bending tests.
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Barrera, Ernesto L; Spanjers, Henri; Solon, Kimberly; Amerlinck, Youri; Nopens, Ingmar; Dewulf, Jo
2015-03-15
This research presents the modeling of the anaerobic digestion of cane-molasses vinasse, hereby extending the Anaerobic Digestion Model No. 1 with sulfate reduction for a very high strength and sulfate rich wastewater. Based on a sensitivity analysis, four parameters of the original ADM1 and all sulfate reduction parameters were calibrated. Although some deviations were observed between model predictions and experimental values, it was shown that sulfates, total aqueous sulfide, free sulfides, methane, carbon dioxide and sulfide in the gas phase, gas flow, propionic and acetic acids, chemical oxygen demand (COD), and pH were accurately predicted during model validation. The model showed high (±10%) to medium (10%-30%) accuracy predictions with a mean absolute relative error ranging from 1% to 26%, and was able to predict failure of methanogenesis and sulfidogenesis when the sulfate loading rate increased. Therefore, the kinetic parameters and the model structure proposed in this work can be considered as valid for the sulfate reduction process in the anaerobic digestion of cane-molasses vinasse when sulfate and organic loading rates range from 0.36 to 1.57 kg [Formula: see text] m(-3) d(-1) and from 7.66 to 12 kg COD m(-3) d(-1), respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
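The accuracy classes quoted above (high: within ±10%; medium: 10%-30%) can be reproduced from a mean absolute relative error (MARE). The thresholds are those stated in the abstract; the functions themselves are an illustrative sketch, not the authors' code.

```python
# Sketch of the mean absolute relative error (MARE) accuracy metric and the
# high/medium accuracy classes quoted in the abstract.

def mare(observed, predicted):
    """Mean absolute relative error, as a fraction (0.10 == 10%)."""
    return sum(abs((p - o) / o) for o, p in zip(observed, predicted)) / len(observed)

def accuracy_class(m):
    if m <= 0.10:
        return "high"
    elif m <= 0.30:
        return "medium"
    return "low"
```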
Sources of Uncertainty in the Prediction of LAI / fPAR from MODIS
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Ganapol, Barry D.; Brass, James A. (Technical Monitor)
2002-01-01
To explicate the sources of uncertainty in the prediction of biophysical variables over space, consider the general equation z = f(y, B), where z is a variable with values on some nominal, ordinal, interval or ratio scale; y is a vector of input variables; u is the spatial support of y and z; x and u are the spatial locations of y and z, respectively; f is a model and B is the vector of the parameters of this model. Any y or z has a value and a spatial extent, which is called its support. Viewed in this way, categories of uncertainty are from variable (e.g. measurement), parameter, positional, support and model (e.g. structural) sources. The prediction of Leaf Area Index (LAI) and the fraction of absorbed photosynthetically active radiation (fPAR) are examples of z variables predicted using model(s) as a function of y variables and spatially constant parameters. The MOD15 algorithm is an example of f, called f(sub 1), with parameters including those defined by one of six biome types and solar and view angles. The Leaf Canopy Model (LCM2), a nested model that combines leaf radiative transfer with a full canopy reflectance model through the phase function, is a simpler though similar radiative transfer approach to f(sub 1). In a previous study, MOD15 and LCM2 gave similar results for the broadleaf forest biome. Differences between these two models can be used to consider the structural uncertainty in prediction results. In an effort to quantify each of the five sources of uncertainty and rank their relative importance for the LAI/fPAR prediction problem, we used recent data for an EOS Core Validation Site in the broadleaf biome with coincident surface reflectance, vegetation index, fPAR and LAI products from the Moderate Resolution Imaging Spectrometer (MODIS). Uncertainty due to support on the input reflectance variable was characterized using Landsat ETM+ data.
Input uncertainties were propagated through the LCM2 model and compared with published uncertainties from the MOD15 algorithm.
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each with a corresponding stability criterion, are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary; a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize actual measured signals in the flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
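The AR-based stability idea can be sketched as follows: fit an AR(n) model to a response interval by least squares, then examine the roots of the characteristic polynomial z^n - a1 z^(n-1) - ... - an. The process is stable, in the sense Jury's criterion tests, when all roots lie inside the unit circle. The margin used here as a flutter-onset indicator is an illustrative choice, not the paper's exact stability parameter.

```python
# Minimal sketch: least-squares AR fit plus a root-based stability margin
# (>0 means all characteristic roots are inside the unit circle).
import numpy as np

def fit_ar(x, order):
    """Least-squares AR coefficients a such that x[t] ~ sum_i a_i * x[t-i]."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = np.asarray(x[order:])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def stability_margin(a):
    """1 - max|root| of the AR characteristic polynomial."""
    roots = np.roots(np.concatenate(([1.0], -np.asarray(a))))
    return 1.0 - np.max(np.abs(roots))
```

As airspeed increases toward flutter, the dominant root approaches the unit circle and the margin shrinks toward zero, which is what makes the quantity usable for boundary prediction.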
An empirical propellant response function for combustion stability predictions
NASA Technical Reports Server (NTRS)
Hessler, R. O.
1980-01-01
An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.
Thermal cut-off response modelling of universal motors
NASA Astrophysics Data System (ADS)
Thangaveloo, Kashveen; Chin, Yung Shin
2017-04-01
This paper presents a model to predict the thermal cut-off (TCO) response behaviour of universal motors. The mathematical model includes calculations of heat loss in the universal motor and of the flow characteristics around the TCO component, which together are the main inputs for TCO response prediction. To accurately predict the TCO component temperature, factors such as the TCO component resistance, ambient conditions, and the flow conditions through the motor are taken into account to improve the prediction accuracy of the model.
Brakefield, Linzy K.; White, Jeremy T.; Houston, Natalie A.; Thomas, Jonathan V.
2015-01-01
Predictive results of total spring discharge during the 7-year period, as well as head predictions at Bexar County index well J-17, were much different from the dissolved-solids concentration change results at the production wells. These upper bounds are an order of magnitude larger than the actual prediction, which implies that (1) the predictions of total spring discharge at Comal and San Marcos Springs and of head at Bexar County index well J-17 made with this model are not reliable, and (2) the parameters that control these predictions are not informed well by the observation dataset during history-matching, even though the history-matching process yielded parameters that reproduce spring discharges and heads at these locations during the history-matching period. Furthermore, because spring discharges at these two springs and heads at Bexar County index well J-17 represent more of a cumulative effect of upstream conditions over a larger distance (and longer time), many more parameters (with their own uncertainties) potentially control these predictions than the prediction of dissolved-solids concentration change at the prediction wells, and therefore contribute to a large posterior uncertainty.
Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D
2001-04-01
A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
Satellite Remote Sensing is Key to Water Cycle Integrator
NASA Astrophysics Data System (ADS)
Koike, T.
2016-12-01
To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the Global Earth Observation System of Systems (GEOSS) is now developing a "GEOSS Water Cycle Integrator (WCI)", which integrates "Earth observations", "modeling", "data and information", "management systems" and "education systems". GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to ultimately respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space). GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEOSS/WCI archives various satellite data to provide various hydrological information such as cloud, rainfall, soil moisture, or land-surface snow. These satellite products were validated using land observation in-situ data. Water cycle models can be developed by coupling in-situ and satellite data. River flows and other hydrological parameters can be simulated and validated by in-situ data. Model outputs from weather-prediction, seasonal-prediction, and climate-prediction models are archived. Some of these model outputs are archived on an online basis, but other models, e.g., climate-prediction models are archived on an offline basis. After models are evaluated and biases corrected, the outputs can be used as inputs into the hydrological models for predicting the hydrological parameters. Additionally, we have already developed a data-assimilation system by combining satellite data and the models. This system can improve our capability to predict hydrological phenomena. 
The WCI can provide better predictions of the hydrological parameters for integrated water resources management (IWRM) and also assess the impact of climate change and calculate adaptation needs.
Technique for predicting high-frequency stability characteristics of gaseous-propellant combustors
NASA Technical Reports Server (NTRS)
Priem, R. J.; Jefferson, Y. S. Y.
1973-01-01
A technique for predicting the stability characteristics of a gaseous-propellant rocket combustion system is developed based on a model that assumes coupling between the flow through the injector and the oscillating chamber pressure. The theoretical model uses a lumped parameter approach for the flow elements in the injection system plus wave dynamics in the combustion chamber. The injector flow oscillations are coupled to the chamber pressure oscillations with a delay time. Frequency and decay (or growth) rates are calculated for various combustor design and operating parameters to demonstrate the influence of various parameters on stability. Changes in oxidizer design parameters had a much larger influence on stability than a similar change in fuel parameters. A complete description of the computer program used to make these calculations is given in an appendix.
Modelling the growth of Leuconostoc mesenteroides by Artificial Neural Networks.
García-Gimeno, R M; Hervás-Martínez, C; Rodríguez-Pérez, R; Zurera-Cosano, G
2005-12-15
The combined effect of temperature (10.5 to 24.5 degrees C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the predicted specific growth rate (Gr), lag-time (Lag) and maximum population density (yEnd) of Leuconostoc mesenteroides under aerobic and anaerobic conditions was studied using an Artificial Neural Network-based model (ANN) in comparison with Response Surface Methodology (RS). For both aerobic and anaerobic conditions, two types of ANN model were elaborated: unidimensional, for each of the growth parameters, and multidimensional, in which the three parameters Gr, Lag, and yEnd are combined. Although in general no significant statistical differences were observed between the two types of model, we opted for the unidimensional model because it obtained the lowest mean value for the standard error of prediction (SEP) for generalisation. The ANN models developed provided reliable estimates for the three kinetic parameters studied; the SEP values in aerobic conditions were 2.82% for Gr, 6.05% for Lag and 10% for yEnd, a higher degree of accuracy than those of the RS model (Gr: 9.54%; Lag: 8.89%; yEnd: 10.27%). Similar results were observed for anaerobic conditions. During external validation, a higher degree of accuracy (Af) and bias (Bf) was observed for the ANN model compared with the RS model. ANN predictive growth models are a valuable tool, enabling swift determination of L. mesenteroides growth parameters.
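The %SEP statistic used above to compare the ANN and RS models is a root-mean-square residual scaled by the mean observed value. The sketch below is a standard formulation assumed to match the abstract's usage.

```python
# Sketch of the standard error of prediction expressed as a percentage
# (%SEP): RMSE of the predictions divided by the mean observed value.
import math

def sep_percent(observed, predicted):
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```

Lower %SEP on held-out data is the criterion by which the unidimensional ANN models were preferred.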
Mikami, Akiko; Hori, Satoko; Ohtani, Hisakazu; Sawada, Yasufumi
2017-01-01
The purpose of the study was to quantitatively estimate and predict drug interactions between terbinafine and the tricyclic antidepressants (TCAs) amitriptyline and nortriptyline, based on in vitro studies. Inhibition of TCA-metabolizing activity by terbinafine was investigated using human liver microsomes. Based on the unbound Ki values obtained in vitro and reported pharmacokinetic parameters, a pharmacokinetic model of drug interaction was fitted to the reported plasma concentration profiles of TCAs administered concomitantly with terbinafine to obtain the drug-drug interaction parameters. The model was then used to predict the nortriptyline plasma concentration with concomitant administration of terbinafine and the changes in area under the curve (AUC) of nortriptyline after cessation of terbinafine. The CYP2D6 inhibitory potency of terbinafine was unaffected by preincubation, so the inhibition appears to be reversible. Terbinafine competitively inhibited amitriptyline and nortriptyline E-10-hydroxylation, with unbound Ki values of 13.7 and 12.4 nM, respectively. Observed plasma concentrations of TCAs administered concomitantly with terbinafine were successfully simulated with the drug interaction model using the in vitro parameters. The model-predicted nortriptyline plasma concentration after concomitant nortriptyline/terbinafine administration for two weeks exceeded the toxic level, and the drug interaction was predicted to be prolonged; the AUC of nortriptyline was predicted to be increased by 2.5-, 2.0- and 1.5-fold at 0, 3 and 6 months after cessation of terbinafine, respectively. The developed model enables us to quantitatively predict the prolonged drug interaction between terbinafine and TCAs. The model should be helpful for clinical management of terbinafine-CYP2D6 substrate drug interactions, which are difficult to predict due to their time-dependency.
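The study fits a full time-dependent PK interaction model; as a hedged back-of-the-envelope companion, the standard static equation for competitive inhibition predicts the substrate AUC ratio from the unbound inhibitor concentration I_u, the unbound Ki, and the fraction of clearance through the inhibited pathway (fm). The numbers in the test are illustrative, not the paper's fitted values.

```python
# Static competitive-inhibition sketch (not the paper's fitted PK model):
# AUC ratio = 1 / (fm / (1 + I_u/Ki) + (1 - fm)).

def auc_ratio(i_unbound_nM, ki_unbound_nM, fm):
    """Predicted fold-increase in substrate AUC under competitive inhibition."""
    r = 1.0 + i_unbound_nM / ki_unbound_nM   # fold-reduction of inhibited pathway
    return 1.0 / (fm / r + (1.0 - fm))
```

With fm = 1 and an unbound inhibitor concentration equal to Ki, the equation predicts a 2-fold AUC increase; as the inhibitor washes out after cessation, I_u falls and the ratio decays toward 1, qualitatively matching the prolonged 2.5- to 1.5-fold time course described above.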
Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P
2015-10-01
Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAb exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, and with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAb that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
Evolution of non-interacting entropic dark energy and its phantom nature
NASA Astrophysics Data System (ADS)
Mathew, Titus K.; Murali, Chinthak; Shejeelammal, J.
2016-04-01
Assuming the form of the entropic dark energy (EDE) as it arises from the surface term in the Einstein-Hilbert action, its evolution was analyzed in an expanding flat universe. The model parameters were evaluated by constraining the model using the Union data on Type Ia supernovae. We found that in the non-interacting case, the model predicts an early decelerated phase and a later accelerated phase at the background level. The evolutions of the Hubble parameter, dark energy (DE) density, equation of state parameter and deceleration parameter were obtained. The model hardly seems to support the linear perturbation growth required for structure formation. We also found that the EDE shows phantom nature for redshifts z < 0.257. During the phantom epoch, the model predicts a big rip effect at which both the scale factor of expansion and the DE density become infinitely large; the big rip time is found to be around 36 billion years from now.
Valdez-Jasso, Daniela; Bia, Daniel; Zócalo, Yanina; Armentano, Ricardo L.; Haider, Mansoor A.; Olufsen, Mette S.
2013-01-01
A better understanding of the biomechanical properties of the arterial wall provides important insight into arterial vascular biology under normal (healthy) and pathological conditions. This insight has potential to improve tracking of disease progression and to aid in vascular graft design and implementation. In this study, we use linear and nonlinear viscoelastic models to predict biomechanical properties of the thoracic descending aorta and the carotid artery under ex vivo and in vivo conditions in ovine and human arteries. Models analyzed include a four-parameter (linear) Kelvin viscoelastic model and two five-parameter nonlinear viscoelastic models (an arctangent and a sigmoid model) that relate changes in arterial blood pressure to the vessel cross-sectional area (via estimation of vessel strain). These models were developed using the framework of Quasilinear Viscoelasticity (QLV) theory and were validated using measurements from the thoracic descending aorta and the carotid artery obtained from human and ovine arteries. In vivo measurements were obtained from ten ovine aortas and ten human carotid arteries. Ex vivo measurements (from both locations) were made in eleven male Merino sheep. Biomechanical properties were obtained through constrained estimation of model parameters. To further investigate the parameter estimates we computed standard errors and confidence intervals, and we used analysis of variance to compare results within and between groups. Overall, our results indicate that optimal model selection depends on the arterial type. Results showed that for the thoracic descending aorta (under both experimental conditions) the best predictions were obtained with the nonlinear sigmoid model, while in the carotid arteries under healthy physiological pressure loading, nonlinear stiffening with increasing pressure is negligible; consequently, the linear (Kelvin) viscoelastic model better describes the pressure-area dynamics in this vessel.
Results comparing biomechanical properties show that the Kelvin and sigmoid models were able to predict the zero-pressure vessel radius; that under ex vivo conditions vessels are more rigid, and comparatively, that the carotid artery is stiffer than the thoracic descending aorta; and that the viscoelastic gain and relaxation parameters do not differ significantly between vessels or experimental conditions. In conclusion, our study demonstrates that the proposed models can predict pressure-area dynamics and that model parameters can be extracted for further interpretation of biomechanical properties. PMID:21203846
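The four-parameter Kelvin (standard linear solid) pressure-area relation used in studies like the one above can be sketched numerically. The following is a minimal illustration, assuming the generic form A + τε·dA/dt = (1/E)(p + τσ·dp/dt) with invented parameter values; it is not the authors' fitted QLV formulation:

```python
import math

def kelvin_area(pressure, dt, A0, E, tau_sigma, tau_eps):
    """Explicit-Euler integration of a standard-linear-solid (Kelvin)
    pressure-area relation: A + tau_eps*dA/dt = (p + tau_sigma*dp/dt)/E."""
    A = [A0]
    for i in range(1, len(pressure)):
        dp = (pressure[i] - pressure[i - 1]) / dt
        dA = ((pressure[i] + tau_sigma * dp) / E - A[-1]) / tau_eps
        A.append(A[-1] + dt * dA)
    return A

# synthetic 1 Hz pulsatile pressure (mmHg); all parameter values are invented
dt = 0.001
p = [90 + 20 * math.sin(2 * math.pi * i * dt) for i in range(2000)]
A = kelvin_area(p, dt, A0=18.0, E=5.0, tau_sigma=0.05, tau_eps=0.02)
```

Because tau_sigma differs from tau_eps, the area waveform is phase-shifted relative to the pressure, which is what produces the pressure-area hysteresis loop characteristic of viscoelastic vessel walls.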
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was demonstrated in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
The use of the logistic model in space motion sickness prediction
NASA Technical Reports Server (NTRS)
Lin, Karl K.; Reschke, Millard F.
1987-01-01
The one-equation and the two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (by an average of 13 percent) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
2014-01-01
Background: Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results: The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data.
Our analysis revealed that model parameters could be constrained to a standard deviation of, on average, 15% of the mean values over the succeeding parameter sets. Conclusions: Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identifying redundant components of large biophysical models and for increasing their predictive capacity. PMID:24886522
Merei, Bilal; Badel, Pierre; Davis, Lindsey; Sutton, Michael A; Avril, Stéphane; Lessner, Susan M
2017-03-01
Finite element analyses using cohesive zone models (CZM) can be used to predict the fracture of atherosclerotic plaques, but this requires setting appropriate values of the model parameters. In this study, material parameters of a CZM were identified for the first time in two groups of mice (ApoE-/- and ApoE-/-Col8-/-) using the measured force-displacement curves acquired during delamination tests. To this end, a 2D finite-element model of each plaque was solved using an explicit integration scheme. Each constituent of the plaque was modeled with a neo-Hookean strain energy density function and a CZM was used for the interface. The model parameters were calibrated by minimizing the quadratic deviation between the experimental force-displacement curves and the model predictions. The elastic parameter of the plaque and the CZM interfacial parameter were successfully identified for a cohort of 11 mice. The results revealed that only the elastic parameter was significantly different between the two groups, ApoE-/-Col8-/- plaques being less stiff than ApoE-/- plaques. Finally, this study demonstrated that a simple 2D finite element model with cohesive elements can reproduce the global plaque-peeling response fairly well. Future work will focus on understanding the main biological determinants of regional and inter-individual variations of the material parameters used in the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
NASA Astrophysics Data System (ADS)
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is especially important to emphasize the role of uncertainty when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of the predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins), because the results obtained with Bayesian Joint Inference depend strongly on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful to achieve less biased hydrological parameter estimation. For this approach, firstly the predictive distribution is obtained through the Model Conditional Processor.
Secondly, this predictive distribution is used to derive the corresponding additive error model which is employed for the hydrological parameter estimation with the Bayesian Joint Inference methodology.
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
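A thermal performance curve of the kind estimated above is often given a simple symmetric (Gaussian) shape. The sketch below fits only the thermal optimum of such a curve to noisy synthetic "field" growth data by grid search; the functional form, parameter values and data are all invented for illustration and are far simpler than the hierarchical Bayesian model of the study:

```python
import math, random

def tpc(T, p_max, T_opt, width):
    # Gaussian-shaped thermal performance curve (a common simple form)
    return p_max * math.exp(-((T - T_opt) / width) ** 2)

# noisy synthetic growth observations with a true optimum at 16 degrees C
random.seed(7)
obs = [(T, tpc(T, 1.0, 16.0, 5.0) + random.gauss(0, 0.05))
       for T in range(4, 26)]

def sse(T_opt):
    # sum of squared errors with p_max and width held at their true values
    return sum((g - tpc(T, 1.0, T_opt, 5.0)) ** 2 for T, g in obs)

# grid search for the best-fitting thermal optimum
T_hat = min((sse(c / 10), c / 10) for c in range(100, 220))[1]
```

In the paper's field setting, the curve sits inside a state-space model with individual and environmental effects and latent growth; this sketch conveys only the curve-fitting idea.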
Predictive model for convective flows induced by surface reactivity contrast
NASA Astrophysics Data System (ADS)
Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali
2018-05-01
Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning
ERIC Educational Resources Information Center
MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.
2015-01-01
Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states which are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential in identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons are demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods were applied to biological neurons in the bed nucleus of the stria terminalis (BNST) of rats. States and parameters of the neurons were estimated with both approaches, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
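The MCMC idea described above can be conveyed with a much simpler example. The sketch below uses a random-walk Metropolis sampler to recover a single decay-rate parameter of a toy exponential "voltage recovery" model from noisy observations; the model, noise level and flat prior are invented for illustration, and real neuron models involve many coupled parameters and states:

```python
import math, random

random.seed(1)

def log_post(k, ts, data, sigma=0.05):
    # Gaussian log-likelihood for v(t) = exp(-k*t); flat prior on k > 0
    if k <= 0:
        return -math.inf
    return -sum((v - math.exp(-k * t)) ** 2
                for t, v in zip(ts, data)) / (2 * sigma ** 2)

# synthetic trace with true decay rate k = 2.0
ts = [0.05 * i for i in range(40)]
data = [math.exp(-2.0 * t) + random.gauss(0, 0.05) for t in ts]

k, samples = 1.0, []
lp = log_post(k, ts, data)
for _ in range(5000):
    prop = k + random.gauss(0, 0.1)       # random-walk proposal
    lp_prop = log_post(prop, ts, data)
    if math.log(random.random()) < lp_prop - lp:
        k, lp = prop, lp_prop             # accept
    samples.append(k)

k_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

The posterior mean of the chain should land near the true rate of 2.0, and the spread of the retained samples gives the error bound mentioned in the abstract.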
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa as functions of various environmental factors to estimate survival or extinction. Parameter and input uncertainty are propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
Ogungbenro, Kayode; Aarons, Leon
2014-04-01
6-mercaptopurine (6-MP) is a purine antimetabolite and prodrug that undergoes extensive intracellular metabolism to produce thionucleotides, active metabolites which have cytotoxic and immunosuppressive properties. Combination therapies involving 6-MP and methotrexate have shown remarkable results in the cure of childhood acute lymphoblastic leukaemia (ALL) in the last 30 years. 6-MP undergoes very extensive intestinal and hepatic metabolism following oral dosing due to the activity of xanthine oxidase, leading to very low and highly variable bioavailability, and methotrexate has been demonstrated to be an inhibitor of xanthine oxidase. Despite the success recorded in the use of 6-MP in ALL, there is still a lack of effect and life-threatening toxicity in some patients due to variability in the pharmacokinetics of 6-MP. Also, dose adjustment during treatment is still based on toxicity. The aim of the current work was to develop a mechanistic model that can be used to simulate trial outcomes and help to improve dose individualisation and dosage regimen optimisation. A physiologically based pharmacokinetic model was proposed for 6-MP, with compartments for the stomach, gut lumen, enterocytes, gut tissue, spleen, liver vascular space, liver tissue, kidney vascular space, kidney tissue, skin, bone marrow, thymus, muscle, rest of body and red blood cells. The model was based on the assumption of the same elimination pathways in adults and children. Parameters of the model include physiological parameters and drug-specific parameters, which were obtained from the literature or estimated using plasma and red blood cell concentration data. Age-dependent changes in parameters were implemented for scaling, and variability was also introduced into the parameters for prediction. Inhibition of the 6-MP first-pass effect by methotrexate was implemented to predict the observed clinical interaction between the two drugs.
The model was developed successfully and plasma and red blood cell concentrations were adequately predicted both in terms of mean prediction and variability. The predicted interaction between 6-MP and methotrexate was slightly lower than the reported clinical interaction between the two drugs. The model can be used to predict plasma and tissue concentration in adults and children following oral and intravenous dosing and may ultimately help to improve treatment outcome in childhood ALL patients.
Can plantar soft tissue mechanics enhance prognosis of diabetic foot ulcer?
Naemi, R; Chatzistergos, P; Suresh, S; Sundar, L; Chockalingam, N; Ramachandran, A
2017-04-01
To investigate if the assessment of the mechanical properties of plantar soft tissue can increase the accuracy of predicting Diabetic Foot Ulceration (DFU), 40 patients with diabetic neuropathy and no DFU were recruited. Commonly assessed clinical parameters along with plantar soft tissue stiffness and thickness were measured at baseline using an ultrasound elastography technique. Seven patients developed foot ulceration during a 12-month follow-up. Logistic regression was used to identify parameters that contribute to predicting the incidence of DFU. The effect of using parameters related to the mechanical behaviour of plantar soft tissue on the specificity, sensitivity, prediction strength and accuracy of the predictive models for DFU was assessed. Patients with higher plantar soft tissue thickness and lower stiffness at the 1st metatarsal head area showed an increased risk of DFU. Adding plantar soft tissue stiffness and thickness to the model improved its specificity (by 3%), sensitivity (by 14%), prediction accuracy (by 5%) and prognosis strength (by 1%). The model containing all predictors was able to effectively (χ²(8, N = 40) = 17.55, P < 0.05) distinguish between the patients with and without DFU incidence. The mechanical properties of plantar soft tissue can be used to improve the predictability of DFU in moderate/high-risk patients. Copyright © 2017 Elsevier B.V. All rights reserved.
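The logistic-regression step used above can be sketched in plain Python. Everything below (cohort size, coefficients, data) is synthetic, chosen only to mirror the reported directions of effect (higher thickness and lower stiffness raising DFU risk); it is not the study's data or fitted model:

```python
import math, random

random.seed(0)

# synthetic cohort: standardized stiffness and thickness at the metatarsal head
X, y = [], []
for _ in range(200):
    stiff, thick = random.gauss(0, 1), random.gauss(0, 1)
    p_true = 1 / (1 + math.exp(-(-1.0 - 1.5 * stiff + 1.0 * thick)))
    X.append((stiff, thick))
    y.append(1 if random.random() < p_true else 0)

# logistic regression fitted by plain gradient ascent on the log-likelihood
w = [0.0, 0.0, 0.0]  # intercept, stiffness, thickness
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for (s, t), yi in zip(X, y):
        err = yi - 1 / (1 + math.exp(-(w[0] + w[1] * s + w[2] * t)))
        grad = [grad[0] + err, grad[1] + err * s, grad[2] + err * t]
    w = [wi + 0.05 * g / len(X) for wi, g in zip(w, grad)]

# sensitivity and specificity at a 0.5 probability threshold
tp = fp = tn = fn = 0
for (s, t), yi in zip(X, y):
    pred = 1 / (1 + math.exp(-(w[0] + w[1] * s + w[2] * t))) >= 0.5
    if pred and yi == 1:
        tp += 1
    elif pred:
        fp += 1
    elif yi == 1:
        fn += 1
    else:
        tn += 1
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

The fitted weights recover the assumed effect directions (negative for stiffness, positive for thickness), and the same confusion-matrix counts give the specificity and sensitivity figures quoted in such studies.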
Acoustic energy relations in Mudejar-Gothic churches.
Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel
2007-01-01
Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO-3382 standard. Monaural objective measures in the 125-4000 Hz frequency range, and their spatial distributions, were obtained. The acoustic parameters clarity (C80), definition (D50), sound strength (G) and center time (Ts) were derived in each church from impulse response analysis using a maximum-length-sequence measurement system. These parameters, spectrally averaged according to the criteria most widely used for rating auditoria, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical and other existing theoretical models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength, and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.
Height extrapolation of wind data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikhail, A.S.
1982-11-01
Hourly average data for a period of 1 year from three tall meteorological towers - the Erie tower in Colorado, the Goodnoe Hills tower in Washington and the WKY-TV tower in Oklahoma - were used to analyze the variability of the wind shear exponent with various parameters such as thermal stability, anemometer-level wind speed, projection height and surface roughness. Different proposed models for prediction of the height variability of short-term average wind speeds were discussed. Other models that predict the height dependence of Weibull distribution parameters were tested. The observed power law exponent for all three towers showed strong dependence on the anemometer-level wind speed and stability (nighttime and daytime). It also exhibited a high degree of dependence on extrapolation height with respect to anemometer height. These dependences became less severe as the anemometer-level wind speeds increased, due to the turbulent mixing of the atmospheric boundary layer. The three models used for Weibull distribution parameter extrapolation were the velocity-dependent power law model (Justus), the velocity, surface roughness, and height-dependent model (Mikhail) and the velocity and surface roughness-dependent model (NASA). The models projected the scale parameter C fairly accurately for the Goodnoe Hills and WKY-TV towers and were less accurate for the Erie tower. However, all models overestimated the C value. The maximum error for the Mikhail model was less than 2% for Goodnoe Hills, 6% for WKY-TV and 28% for Erie. The error associated with the prediction of the shape factor (K) was similar for the NASA, Mikhail and Justus models; it ranged from 20 to 25%. The effect of the misestimation of hub-height distribution parameters (C and K) on average power output is briefly discussed.
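The velocity-dependent (Justus-type) extrapolation of Weibull parameters discussed above has a commonly quoted closed form, sketched below. The coefficients 0.37 and 0.088 are those usually cited for this family of models, but treat the snippet as an illustration rather than the exact formulation tested in the report:

```python
import math

def justus_exponent(c1, z1):
    # commonly quoted velocity-dependent power-law exponent for the
    # Weibull scale factor (coefficients as usually cited; illustrative)
    return (0.37 - 0.088 * math.log(c1)) / (1 - 0.088 * math.log(z1 / 10))

def extrapolate_weibull(c1, k1, z1, z2):
    """Project Weibull scale C and shape K from anemometer height z1 to z2."""
    n = justus_exponent(c1, z1)
    c2 = c1 * (z2 / z1) ** n
    k2 = k1 * (1 - 0.088 * math.log(z1 / 10)) / (1 - 0.088 * math.log(z2 / 10))
    return c2, k2

# scale 6 m/s and shape 2.0 at a 10 m anemometer, projected to 50 m hub height
c2, k2 = extrapolate_weibull(c1=6.0, k1=2.0, z1=10.0, z2=50.0)
```

Both parameters grow with height: the scale factor because mean speeds increase aloft, and the shape factor because the flow becomes steadier above the surface layer.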
Anhydrous Weight Loss Prediction of Meranti Sawdust during Torrefaction using Rousset Model
NASA Astrophysics Data System (ADS)
Harun, Nur Hazirah Huda Mohd; Samad, Noor Asma Fazli Abdul; Saleh, Suriyati
2018-03-01
In torrefaction, the mass loss distribution is evaluated in terms of anhydrous weight loss (AWL). Since temperature has a significant effect on AWL and the behaviour of biomass is closely associated with the AWL, a suitable model for estimating the reaction kinetics is necessary for describing the thermal degradation and predicting the AWL in order to improve the process. In this study, the kinetic parameters of Meranti sawdust are estimated by applying a three-parallel-reaction model, namely the Rousset model, to torrefaction of Meranti sawdust at temperatures of 240°C, 270°C and 300°C. All kinetic parameters are estimated according to the degradation of the biomass constituents (lignin, cellulose and hemicellulose), following the Arrhenius law. The results show that the AWL estimated using the kinetic parameters predicted from the Rousset model is in good agreement with the experimental result, with an R² value of 0.99. The Rousset model successfully describes the degradation of lignin, cellulose and hemicellulose as well as the formation of char, volatiles, tar and intermediate compounds. It can therefore be concluded that the Rousset model is applicable for representing the torrefaction behaviour.
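The parallel-reaction Arrhenius scheme underlying such models can be sketched as three independent first-order decays, one per constituent. The pre-exponential factors, activation energies and mass fractions below are invented placeholders, not the fitted Rousset parameters:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def awl_isothermal(T_celsius, t_end_s, dt=1.0):
    """AWL from three parallel first-order Arrhenius reactions
    (hemicellulose, cellulose, lignin); all kinetic values are
    illustrative placeholders, not the fitted Rousset parameters."""
    T = T_celsius + 273.15
    comps = [  # (initial dry-mass fraction, A [1/s], Ea [J/mol])
        (0.30, 1.0e8, 1.1e5),   # hemicellulose: degrades fastest
        (0.45, 1.0e9, 1.4e5),   # cellulose
        (0.25, 1.0e6, 1.2e5),   # lignin: most refractory
    ]
    masses = [m for m, _, _ in comps]
    rates = [A * math.exp(-Ea / (R * T)) for _, A, Ea in comps]
    for _ in range(int(t_end_s / dt)):   # explicit Euler in time
        for i, k in enumerate(rates):
            masses[i] -= k * masses[i] * dt
    return 1.0 - sum(masses)             # fraction of dry mass lost

awl_240 = awl_isothermal(240, 3600)  # 1 h at 240 deg C
awl_300 = awl_isothermal(300, 3600)  # 1 h at 300 deg C
```

Raising the hold temperature from 240°C to 300°C sharply increases the predicted AWL, reproducing the qualitative temperature sensitivity the abstract describes.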
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95 % confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture or industrial use.
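The time-series idea can be illustrated with the simplest member of the ARIMA family, an AR(1) model, fitted here by least squares to a synthetic pH-like series and used to produce point forecasts with naive 95 % limits. All numbers are invented, and the paper's full ARIMA machinery and validation metrics are beyond this sketch:

```python
import random

def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi*x[t-1] + e[t]."""
    x0, x1 = series[:-1], series[1:]
    n = len(x0)
    mx, my = sum(x0) / n, sum(x1) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x0, x1))
           / sum((a - mx) ** 2 for a in x0))
    return my - phi * mx, phi

def forecast(series, c, phi, steps, z=1.96):
    """Point forecasts with naive 95% limits from the residual variance."""
    resid = [b - (c + phi * a) for a, b in zip(series[:-1], series[1:])]
    s2 = sum(r * r for r in resid) / len(resid)
    out, x, var = [], series[-1], 0.0
    for h in range(steps):
        x = c + phi * x
        var += s2 * phi ** (2 * h)   # forecast-error variance grows with horizon
        out.append((x, x - z * var ** 0.5, x + z * var ** 0.5))
    return out

# synthetic pH-like series: AR(1) around a mean of 7.5 with phi = 0.6
random.seed(3)
series = [7.5]
for _ in range(499):
    series.append(7.5 * 0.4 + 0.6 * series[-1] + random.gauss(0, 0.1))

c, phi = fit_ar1(series)
preds = forecast(series, c, phi, steps=12)
```

As the horizon grows, the point forecasts revert toward the series mean and the confidence band widens, the same qualitative behaviour reported for the fitted ARIMA model.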
Predicting perturbation patterns from the topology of biological networks.
Santolini, Marc; Barabási, Albert-László
2018-06-20
High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase up to 80% the recovery rate of the true perturbation patterns. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation-spread modeling, with direct implications for medicine and drug development.
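The "simple distance-based model" mentioned above can be sketched in a few lines: predict that perturbing a source node impacts node j with a strength that decays as the inverse of the network (hop) distance. The toy network and the exact 1/d form are illustrative choices, not necessarily the paper's estimator:

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src over a directed network."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distance_perturbation(adj, source):
    """Dynamically agnostic prediction: the impact of perturbing `source`
    on a reachable node j falls off as 1/d(source, j). Sign and kinetics
    are ignored; this conveys the spirit of a distance-based estimator."""
    d = bfs_distances(adj, source)
    return {j: (1.0 if j == source else 1.0 / d[j]) for j in d}

# toy signaling cascade: R -> K1 -> K2 -> TF, with a shortcut R -> K2
net = {"R": ["K1", "K2"], "K1": ["K2"], "K2": ["TF"]}
impact = distance_perturbation(net, "R")
```

Nodes one hop from the receptor R get full predicted impact, while the transcription factor TF, two hops away, gets half; ranking nodes by this score is the kind of kinetics-free prediction the paper benchmarks.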
N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I
2016-09-13
In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Cote d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best-fitting multivariate model (ARIMAX (2,0,0)_RF) included the number of influenza cases during 1 week and 2 weeks prior, and the rainfall during the current week and 5 weeks prior. This model improved on the reference univariate ARIMA (2,0,0) by >3 % in Akaike Information Criterion (AIC) and 2.5 % in Bayesian Information Criterion (BIC). When the weekly number of influenza cases during 2011-2012 was predicted with the best-fitting multivariate model (ARIMAX (2,0,0)_RF), the observed values fell within the 95 % confidence interval of the predicted values during 97 of 104 weeks. Including rainfall improves the performance of both the fitted and predictive models. The timing of influenza in Abidjan can be partially explained by the influence of rainfall, in a setting with little change in temperature throughout the year. These findings can help clinicians anticipate influenza cases during the rainy season by implementing preventive measures.
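The lagged cross-correlations that motivate the ARIMAX covariate choice (e.g. rainfall 5 weeks prior) can be computed directly. A sketch under simplifying assumptions — plain Pearson correlation of a covariate at lag k against the case series; names are ours:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_correlation(rainfall, cases, lag):
    """Correlate rainfall at week t - lag with cases at week t."""
    return pearson(rainfall[:len(rainfall) - lag], cases[lag:])
```

Scanning `lag` over a plausible range and picking peaks is the usual prefilter before entering covariates into an ARIMAX model.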
Detecting influential observations in nonlinear regression modeling of groundwater flow
Yager, Richard M.
1998-01-01
Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS were computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
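For the simple linear regression case, Cook's D can be computed by brute-force leave-one-out refitting, which is also how the statistic generalizes conceptually to approximately linear nonlinear models. A sketch of that computation (our illustration, not the paper's MODFLOW workflow):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def cooks_distance(xs, ys):
    """Cook's D for each observation in a simple linear regression (p = 2):
    squared shift in all fitted values when observation i is omitted,
    scaled by p times the residual variance."""
    n, p = len(xs), 2
    a, b = fit_line(xs, ys)
    fitted = [a + b * x for x in xs]
    s2 = sum((y - f) ** 2 for y, f in zip(ys, fitted)) / (n - p)
    ds = []
    for i in range(n):
        ai, bi = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        shift = sum((f - (ai + bi * x)) ** 2 for x, f in zip(xs, fitted))
        ds.append(shift / (p * s2))
    return ds
```

A high-leverage outlier dominates the statistic, which is exactly the behavior used in the paper to flag influential head and discharge observations.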
Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.
Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K
2017-07-27
Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power-law decay as a function of the length of available ground truth data. Using CARP, we also estimate real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report, a disparate dataset that likewise contains dependencies. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.
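The alternating renewal building block of CARP is easy to simulate in isolation: a unit alternates between "up" and "down" states with random durations, and the long-run down-time fraction approaches mean_down / (mean_up + mean_down). A toy sketch without the cascading coupling; the exponential durations and parameter values are illustrative assumptions:

```python
import random

def alternating_renewal(mean_up, mean_down, horizon, rng):
    """Simulate one alternating renewal process over [0, horizon];
    return the fraction of time spent in the 'down' state."""
    t, down_time, is_up = 0.0, 0.0, True
    while t < horizon:
        dur = rng.expovariate(1.0 / (mean_up if is_up else mean_down))
        dur = min(dur, horizon - t)  # truncate the final interval at the horizon
        if not is_up:
            down_time += dur
        t += dur
        is_up = not is_up
    return down_time / horizon
```

With mean up-time 9 and mean down-time 1, the simulated down fraction converges to about 0.1; CARP layers cascading dependence between many such processes on top of this renewal skeleton.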
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. 
We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate its ability to strongly accelerate the early phases of process development.
Hahn, Seokyung; Moon, Min Kyong; Park, Kyong Soo; Cho, Young Min
2016-01-01
Background: Various diabetes risk scores composed of non-laboratory parameters have been developed, but only a few studies performed cross-validation of these scores and a comparison with laboratory parameters. We evaluated the performance of diabetes risk scores composed of non-laboratory parameters, including a recently published Korean risk score (KRS), and compared them with laboratory parameters. Methods: The data of 26,675 individuals who visited the Seoul National University Hospital Healthcare System Gangnam Center for a health screening program were reviewed for cross-sectional validation. The data of 3,029 individuals with a mean of 6.2 years of follow-up were reviewed for longitudinal validation. The KRS and 16 other risk scores were evaluated and compared with a laboratory prediction model developed by logistic regression analysis. Results: For the screening of undiagnosed diabetes, the KRS exhibited a sensitivity of 81%, a specificity of 58%, and an area under the receiver operating characteristic curve (AROC) of 0.754. Other scores showed AROCs that ranged from 0.697 to 0.782. For the prediction of future diabetes, the KRS exhibited a sensitivity of 74%, a specificity of 54%, and an AROC of 0.696. Other scores had AROCs ranging from 0.630 to 0.721. The laboratory prediction model composed of fasting plasma glucose and hemoglobin A1c levels showed a significantly higher AROC (0.838, P < 0.001) than the KRS. The addition of the KRS to the laboratory prediction model increased the AROC (0.849, P = 0.016) without a significant improvement in the risk classification (net reclassification index: 4.6%, P = 0.264). Conclusions: The non-laboratory risk scores, including KRS, are useful to estimate the risk of undiagnosed diabetes but are inferior to the laboratory parameters for predicting future diabetes. PMID:27214034
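The AROC values reported above are areas under ROC curves, which are equivalent to the normalized Mann-Whitney U statistic: the probability that a randomly chosen case scores higher than a randomly chosen non-case. A compact sketch of that computation (our own illustration, not the study's code):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.
    labels: 1 for cases, 0 for non-cases; ties count half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Perfect separation gives 1.0 and a useless score gives 0.5, which is why an AROC of 0.838 for the laboratory model versus 0.754 for the KRS is a meaningful gap.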
Characterization of chemical agent transport in paints.
Willis, Matthew P; Gordon, Wesley; Lalain, Teri; Mantooth, Brent
2013-09-15
A combination of vacuum-based vapor emission measurements with a mass transport model was employed to determine the interaction of chemical warfare agents with various materials, including transport parameters of agents in paints. Accurate determination of mass transport parameters enables the simulation of the chemical agent distribution in a material for decontaminant performance modeling. The evaluation was performed with the chemical warfare agents bis(2-chloroethyl) sulfide (distilled mustard, known as the chemical warfare blister agent HD) and O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (VX), an organophosphate nerve agent, deposited onto two different types of polyurethane paint coatings. The results demonstrated alignment between the experimentally measured vapor emission flux and the predicted vapor flux. Mass transport modeling demonstrated rapid transport of VX into the coatings; VX penetrated through the aliphatic polyurethane-based coating (100 μm) within approximately 107 min. By comparison, while HD was more soluble in the coatings, its penetration depth in the coatings was approximately half that of VX. Applications of mass transport parameters include the ability to predict agent uptake, and subsequent long-term vapor emission or contact transfer where the agent could present exposure risks. Additionally, these parameters and the model enable decontamination modeling to predict how decontaminants remove agent from these materials.
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model
NASA Technical Reports Server (NTRS)
Boone, Spencer
2017-01-01
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.
The Predicted Influence of Climate Change on Lesser Prairie-Chicken Reproductive Parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, Dawn M.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001–2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter’s linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival. PMID:23874549
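The simulation step described — drawing a reproductive parameter 1,000 times from its fitted linear weather relationship and a predicted future weather value — can be sketched as follows. All coefficients and weather moments below are placeholders, not the study's estimates:

```python
import random

def simulate_parameter(intercept, slope, weather_mean, weather_sd, n_sims, rng):
    """Monte Carlo draws of a reproductive parameter from its linear weather model,
    propagating uncertainty in the future weather value.
    Coefficients and weather moments are illustrative placeholders."""
    draws = [intercept + slope * rng.gauss(weather_mean, weather_sd)
             for _ in range(n_sims)]
    return sum(draws) / n_sims, draws
```

The spread of the draws, not just their mean, is what supports statements like "nest survival was predicted to be below viable levels" under model uncertainty.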
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. 
Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
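The linear equation for prediction standard deviation that underlies both statistics can be illustrated for a two-parameter model: sd = sqrt(s² · zᵀ(XᵀX)⁻¹z), where X holds observation sensitivities and z the prediction sensitivities. The sketch below computes an OPR-style percent increase when one observation is dropped; unit weights and unit error variance are simplifying assumptions relative to the full OPR-PPR formulation:

```python
def pred_sd(X, z, s2=1.0):
    """Prediction standard deviation sqrt(s2 * z^T (X^T X)^-1 z) for a
    two-parameter model; X^T X is 2x2 and inverted in closed form."""
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    det = a * d - b * b
    inv = ((d / det, -b / det), (-b / det, a / det))
    quad = sum(z[i] * inv[i][j] * z[j] for i in range(2) for j in range(2))
    return (s2 * quad) ** 0.5

def opr_percent_increase(X, z, omit):
    """OPR-style statistic: percent increase in prediction SD when
    observation `omit` is removed from the calibration set."""
    full = pred_sd(X, z)
    reduced = pred_sd(X[:omit] + X[omit + 1:], z)
    return 100.0 * (reduced - full) / full
```

Dropping a high-leverage observation inflates the prediction standard deviation far more than dropping a redundant one, which is exactly the ranking the OPR statistic reports.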
Modelling biological invasions: species traits, species interactions, and habitat heterogeneity.
Cannas, Sergio A; Marco, Diana E; Páez, Sergio A
2003-05-01
In this paper we explore the integration of different factors to understand, predict and control ecological invasions, through a general cellular automaton model developed especially for this purpose. The model includes life history traits of several species in a modular structure of multiple interacting cellular automata. We performed simulations using field values corresponding to the exotic Gleditsia triacanthos and native co-dominant trees in a montane area. The presence of a G. triacanthos juvenile bank was a determinant condition for invasion success. The main parameters influencing invasion velocity were mean seed dispersal distance and minimum reproductive age. Seed production had a small influence on the invasion velocity. Velocities predicted by the model agreed well with estimations from field data. Predicted population densities closely matched field values. The modular structure of the model, the explicit interaction between the invader and the native species, and the simplicity of parameters and transition rules are novel features of the model.
Preliminary study of soil permeability properties using principal component analysis
NASA Astrophysics Data System (ADS)
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is laborious, time-consuming, and costly. It is therefore desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from Indonesian soils, which suggests that these permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 91 samples collected at 37 sites in the Batanghari Watershed. The findings indicated five variables that correlate strongly with soil permeability, and we recommend a preliminary permeability model with potential for further development.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
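The bias mechanism described — measurement error in the input attenuating the estimated rainfall-runoff relationship — is the classical errors-in-variables effect and is easy to demonstrate by simulation. The sketch below uses a noise-free linear runoff response for clarity; all values are illustrative:

```python
import random

def slope_with_input_error(true_slope, n, noise_sd, rng):
    """OLS slope of runoff on *measured* rainfall when rainfall carries
    measurement error. Classical errors-in-variables: the estimate is
    attenuated by var(rain) / (var(rain) + noise_sd**2)."""
    rain = [rng.gauss(0.0, 1.0) for _ in range(n)]
    runoff = [true_slope * r for r in rain]              # noise-free response
    measured = [r + rng.gauss(0.0, noise_sd) for r in rain]
    mx, my = sum(measured) / n, sum(runoff) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, runoff))
    return sxy / sxx
```

With unit-variance rainfall and unit-SD input noise, a true slope of 2.0 is estimated near 1.0 — the bias toward zero that then propagates into flood frequency estimates.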
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
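The "decomposition of mean square error" mentioned above is commonly done with Theil's partition into bias, variance, and covariance (random) components, which sum exactly to the MSE; the covariance term is the random-error share. A generic sketch, not tied to the Mysis data:

```python
import math

def mse_decomposition(pred, obs):
    """Theil's decomposition of mean square error:
    MSE = (mean bias)^2 + (SD mismatch)^2 + 2*(1 - r)*sp*so.
    The three returned terms sum to the MSE."""
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred) / n)
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    r = sum((p - mp) * (o - mo) for p, o in zip(pred, obs)) / (n * sp * so)
    bias = (mp - mo) ** 2
    variance = (sp - so) ** 2
    covariance = 2.0 * (1.0 - r) * sp * so
    return bias, variance, covariance
```

Reporting the covariance share (70% in the study) tells a reader how much of the prediction error is irreducible scatter rather than systematic model failure.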
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process models are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, and thus offers potential merit for application to GSH fermentation process modeling.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters
NASA Astrophysics Data System (ADS)
Wetterer, C.; Sheppard, D.; Hunt, B.
The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is to have a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g. solar panel east-west offsets to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all those parameters simultaneously, because changes in light curve features can now be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single and specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example of this in which a complex model is used to create simulated light curves, and then a simple model is used in an estimation filter to extract a particular feature of interest. In order for this to be successful, however, the simple model must first be constructed using training data where the feature of interest is known or at least known to be constant.
Prediction Model for Relativistic Electrons at Geostationary Orbit
NASA Technical Reports Server (NTRS)
Khazanov, George V.; Lyatsky, Wladislaw
2008-01-01
We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source, we used a coupling function that is a best-fit combination of the solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes, for the whole four-year period as well as for each of these years, is stable and high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
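The core of such a model — two linked continuity equations integrated forward in time — can be sketched with a forward-Euler step: a seed population n1 is fed by a solar-wind-driven source, and the relativistic population n2 is fed by acceleration out of n1. The source term, acceleration rate, and loss rates below are stand-ins, not the model's actual coupling or loss functions:

```python
def two_step_acceleration(source, accel, loss1, loss2, dt, steps):
    """Euler integration of two linked continuity equations:
    dn1/dt = source(t) - loss1*n1 - accel*n1
    dn2/dt = accel*n1 - loss2*n2
    Returns the history of n2 (the relativistic population)."""
    n1, n2, history = 0.0, 0.0, []
    for k in range(steps):
        s = source(k * dt)
        n1 += dt * (s - loss1 * n1 - accel * n1)
        n2 += dt * (accel * n1 - loss2 * n2)
        history.append(n2)
    return history
```

With a constant source the system relaxes to the steady state n1 = s/(loss1 + accel), n2 = accel·n1/loss2, which is a quick sanity check on the integration.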
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model) based on the Penman-Monteith model was analyzed. First, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; second, the daily ET in 2015 calculated by each model was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised with coefficients calculated for the different growth stages, and the performance of the revised KP-PM model was evaluated. The statistical parameters indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that calculated by the KP-PM model. The daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. These results provide some guidelines on predicting ET with the two models.
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program develops and utilizes QSAR modeling approaches across a broad range of applications. In terms of physical chemistry we have a particular interest in the prediction of basic physicochemical parameters ...
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees have historically been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as the case study and an elevation-distributed HBV model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer-quality precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores.
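The Nash-Sutcliffe efficiency used to score the predicted flows above is defined as one minus the ratio of the model error variance to the variance of the observations. A minimal sketch:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NS efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1.0 indicates a perfect fit; 0.0 means the model is no better than
    always predicting the mean of the observations.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

ns_perfect = nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 1.0
ns_mean = nash_sutcliffe([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])     # 0.0
```

Because NS normalizes by observed variance, a model producing smoother predictions can lose NS score while still gaining reliability, which is exactly the trade-off the abstract reports.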
NASA Technical Reports Server (NTRS)
Sun, C. T.; Yoon, K. J.
1990-01-01
A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.
Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model
NASA Technical Reports Server (NTRS)
Sun, C. T.; Yoon, K. J.
1992-01-01
A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung
2018-02-01
The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
Moreira, Luiz Felipe Pompeu Prado; Ferrari, Adriana Cristina; Moraes, Tiago Bueno; Reis, Ricardo Andrade; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi
2016-05-19
Time-domain nuclear magnetic resonance and chemometrics were used to predict color parameters, such as lightness (L*), redness (a*), and yellowness (b*), of beef (Longissimus dorsi muscle) samples. By analyzing the relaxation decays with multivariate models built with partial least-squares regression, color quality parameters were predicted. The partial least-squares models showed low errors independent of the sample size, indicating the potential of the method. Mincing and weighing were not necessary to improve the predictive performance of the models. The reduction of the transverse relaxation time (T₂) measured by the Carr-Purcell-Meiboom-Gill pulse sequence in darker beef compared with lighter beef can be explained by the lower relaxivity of Fe²⁺ present in deoxymyoglobin and oxymyoglobin (red beef) relative to the higher relaxivity of Fe³⁺ present in metmyoglobin (brown beef). These results indicate that time-domain nuclear magnetic resonance spectroscopy can become a useful tool for quality assessment of beef cattle on the bulk sample and through packages, because this technique is also widely applied to measure sensorial parameters, such as flavor, juiciness and tenderness, and physicochemical parameters, such as cooking loss, fat and moisture content, and instrumental tenderness using Warner-Bratzler shear force. Copyright © 2016 John Wiley & Sons, Ltd.
Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova
2016-01-01
The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a-priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
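A classical closed-form point of comparison for obstructed diffusion past stationary impermeable spheres is Maxwell's dilute-limit result, D_eff/D_0 = 2(1 − φ)/(2 + φ), where φ is the obstacle volume fraction. This is a standard textbook approximation offered here for orientation, not necessarily the formula derived by the authors via homogenization theory.

```python
def maxwell_effective_diffusivity(phi):
    """Maxwell's dilute-limit estimate of relative effective diffusivity
    for diffusion around impermeable spherical obstacles.

    phi: obstacle volume fraction (0 <= phi < 1).
    Returns the ratio D_eff / D_0.
    """
    return 2.0 * (1.0 - phi) / (2.0 + phi)

ratio_dilute = maxwell_effective_diffusivity(0.0)  # 1.0: no obstacles
ratio_10pct = maxwell_effective_diffusivity(0.1)   # < 1.0: hindered diffusion
```

Like the abstract's formula, this expression depends only on an easily measurable geometric parameter (the volume fraction), which makes it a convenient baseline against which Monte Carlo or homogenization results can be checked.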
NASA Astrophysics Data System (ADS)
Goktan, R. M.; Gunes Yılmaz, N.
2017-09-01
The present study was undertaken to investigate the potential usability of Knoop micro-hardness, both as a single parameter and in combination with operational parameters, for sawblade specific wear rate (SWR) assessment in the machining of ornamental granites. The sawing tests were performed on different commercially available granite varieties by using a fully instrumented side-cutting machine. During the sawing tests, two fundamental productivity parameters, namely the workpiece feed rate and cutting depth, were varied at different levels. The good correspondence observed between the measured Knoop hardness and SWR values for different operational conditions indicates that it has the potential to be used as a rock material property that can be employed in preliminary wear estimations of diamond sawblades. Also, a multiple regression model directed to SWR prediction was developed which takes into account the Knoop hardness, cutting depth and workpiece feed rate. The relative contribution of each independent variable in the prediction of SWR was determined by using test statistics. The prediction accuracy of the established model was checked against new observations. The strong prediction performance of the model suggests that its framework may be applied to other granites and operational conditions for quantifying or differentiating the relative wear performance of diamond sawblades.
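The multiple regression model for SWR described above (Knoop hardness, cutting depth, and workpiece feed rate as predictors) can be sketched with ordinary least squares. The data and coefficients below are synthetic placeholders, not the study's measurements; the ranges and units are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictors: Knoop hardness, cutting depth, workpiece feed rate
n = 40
hardness = rng.uniform(4000.0, 6000.0, n)  # MPa (illustrative)
depth = rng.uniform(10.0, 40.0, n)         # mm (illustrative)
feed = rng.uniform(100.0, 400.0, n)        # cm/min (illustrative)

# Synthetic "specific wear rate" generated from known coefficients plus noise
true_beta = np.array([0.5, 2e-4, 0.03, 0.01])  # intercept + three slopes
X = np.column_stack([np.ones(n), hardness, depth, feed])
swr = X @ true_beta + rng.normal(0.0, 0.05, n)

# Fit the multiple linear regression by least squares
beta_hat, *_ = np.linalg.lstsq(X, swr, rcond=None)
fitted = X @ beta_hat
```

The relative contribution of each predictor, as the abstract describes, would then be judged from test statistics on the fitted coefficients, and the model checked against held-out observations.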
NASA Astrophysics Data System (ADS)
Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik
2018-07-01
Selective laser melting (SLM) shows a positive prospect as an additive manufacturing (AM) technique for fabrication of 3D parts with complicated structures. A transient thermal model was developed using the finite element method (FEM) to simulate the thermal behavior and predict the time evolution of the temperature field and melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied the design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (the laser power, scanning speed, preheating temperature and hatch space) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window was further implemented via two criteria on the width and depth of the melt pool to screen out impractical combinations of the four parameters and delimit their practical ranges. The FEM simulations confirmed the good accuracy of the RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
Viability of using seismic data to predict hydrogeological parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mela, K.
1997-10-01
The design of modern contaminant mitigation and fluid extraction projects makes use of solutions from stochastic hydrogeologic models. These models rely heavily on two hydraulic parameters: hydraulic conductivity and its correlation length. Reliable values of these parameters must be acquired to successfully predict flow of fluids through the aquifer of interest. An inexpensive method of acquiring these parameters by use of seismic reflection surveying would be beneficial. Relationships between seismic velocity and porosity, together with empirical observations relating porosity to permeability, may lead to a method of extracting the correlation length of hydraulic conductivity from shallow high-resolution seismic data, making the use of inexpensive high-density data sets commonplace for these studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Zongrui; Stocks, George Malcolm
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Faulhammer, E; Llusa, M; Wahl, P R; Paudel, A; Lawrence, S; Biserni, S; Calzolari, V; Khinast, J G
2016-01-01
The objectives of this study were to develop a predictive statistical model for low-fill-weight capsule filling of inhalation products with dosator nozzles via the quality by design (QbD) approach and, based on that, to create refined models that include quadratic terms for significant parameters. Various controllable process parameters and uncontrolled material attributes of 12 powders were initially screened using a linear model with partial least squares (PLS) regression to determine their effect on the critical quality attributes (CQA; fill weight and weight variability). After identifying critical material attributes (CMAs) and critical process parameters (CPPs) that influenced the CQA, model refinement was performed to study whether interactions or quadratic terms influence the model. Based on the assessment of the effects of the CPPs and CMAs on fill weight and weight variability for low-fill-weight inhalation products, we developed an excellent linear predictive model for fill weight (R² = 0.96, Q² = 0.96 for powders with good flow properties and R² = 0.94, Q² = 0.93 for cohesive powders) and a model that provides a good approximation of the fill weight variability for each powder group. We validated the model, established a design space for the performance of different types of inhalation-grade lactose on low-fill-weight capsule filling and successfully used the CMAs and CPPs to predict the fill weight of powders that were not included in the development set.
Xue, Ling; Holford, Nick; Ding, Xiao-Liang; Shen, Zhen-Ya; Huang, Chen-Rong; Zhang, Hua; Zhang, Jing-Jing; Guo, Zhe-Ning; Xie, Cheng; Zhou, Ling; Chen, Zhi-Yao; Liu, Lin-Sheng; Miao, Li-Yan
2017-04-01
The aim of this study is to apply a theory-based mechanistic model to describe the pharmacokinetics (PK) and pharmacodynamics (PD) of S- and R-warfarin. Clinical data were obtained from 264 patients. Total concentrations of S- and R-warfarin were measured by ultra-high performance liquid tandem mass spectrometry. Genotypes were measured using pyrosequencing. A sequential population PK parameter with data method was used to describe the international normalized ratio (INR) time course. Data were analyzed with NONMEM. Model evaluation was based on parameter plausibility and prediction-corrected visual predictive checks. Warfarin PK was described using a one-compartment model. The CYP2C9 *1/*3 genotype had reduced clearance for S-warfarin, but increased clearance for R-warfarin. The in vitro parameters for the relationship between prothrombin complex activity (PCA) and INR were markedly different (A = 0.560, B = 0.386) from the theory-based values (A = 1, B = 0). There was a small difference between healthy subjects and patients. A sigmoid Emax PD model inhibiting PCA synthesis as a function of S-warfarin concentration predicted INR. A small R-warfarin effect was described by competitive antagonism of S-warfarin inhibition. Patients with VKORC1 AA and CYP4F2 CC or CT genotypes had lower C50 for S-warfarin. A theory-based PKPD model describes warfarin concentrations and clinical response. Expected PK and PD genotype effects were confirmed. The role of predicted fat-free mass with theory-based allometric scaling of PK parameters was identified. R-warfarin had a minor effect compared with S-warfarin on PCA synthesis. INR is predictable from 1/PCA in vivo. © 2016 The British Pharmacological Society.
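The one-compartment model used for warfarin PK above can be sketched as follows. The sketch assumes a single intravenous bolus with first-order elimination; the dose, volume, and clearance values are hypothetical placeholders, not the parameters estimated in the study.

```python
import math

def one_compartment_conc(dose, v, cl, t):
    """Concentration at time t for a one-compartment model with
    first-order elimination: C(t) = (dose / V) * exp(-(CL / V) * t).

    dose: amount (mg), v: volume of distribution (L),
    cl: clearance (L/h), t: time (h).
    """
    k = cl / v  # elimination rate constant (1/h)
    return (dose / v) * math.exp(-k * t)

# Hypothetical parameters for illustration only
c0 = one_compartment_conc(dose=5.0, v=10.0, cl=0.2, t=0.0)  # initial conc, mg/L
half_life = math.log(2) * 10.0 / 0.2                        # t1/2 = ln(2) * V / CL
```

In the study's sequential PKPD approach, concentrations predicted by a model like this drive the sigmoid Emax inhibition of PCA synthesis, from which INR is derived.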
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P <.001) and between-method (P <.001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that accompany exhaustive manual calculations. PMID:19641642
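The replicate-estimate idea above rests on the Fick principle, flow = oxygen consumption divided by the arteriovenous oxygen content difference. The sketch below applies it across several predicted VO2 values to bracket a likely range; the VO2 values and O2 contents are hypothetical placeholders, not outputs of the five prediction equations used in the study.

```python
def fick_flow(vo2, ca_o2, cv_o2):
    """Blood flow (L/min) by the Fick principle.

    vo2: oxygen consumption (mL O2/min),
    ca_o2 / cv_o2: arterial / venous O2 content (mL O2 per L blood).
    """
    return vo2 / (ca_o2 - cv_o2)

# Replicate estimates from several predicted VO2 values (hypothetical)
predicted_vo2 = [110.0, 120.0, 130.0, 140.0, 150.0]
flows = [fick_flow(v, ca_o2=180.0, cv_o2=140.0) for v in predicted_vo2]
low, high = min(flows), max(flows)  # likely range for the flow parameter
```

Reporting the low-high interval rather than a single point estimate is exactly what makes the estimation "more realistic" in the authors' terms.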
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. 
The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
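The GLUE approach mentioned above (keeping "behavioral" parameter sets whose likelihood exceeds a threshold and deriving prediction limits from that ensemble) can be sketched as follows. The toy one-parameter model, the Nash-Sutcliffe likelihood measure, and the 0.8 threshold are all illustrative assumptions, far simpler than HYMOD and the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hydrological model": streamflow proportional to a single parameter
def toy_model(param, forcing):
    return param * forcing

forcing = np.linspace(1.0, 10.0, 50)
observed = 2.0 * forcing + rng.normal(0.0, 0.5, forcing.size)  # true param = 2.0

# GLUE step 1: Monte Carlo sampling of the parameter space
samples = rng.uniform(0.5, 4.0, 5000)

# GLUE step 2: informal likelihood (here NS efficiency) for each sample
ns = np.array([1.0 - np.sum((observed - toy_model(p, forcing)) ** 2)
                   / np.sum((observed - observed.mean()) ** 2)
               for p in samples])
behavioral = samples[ns > 0.8]  # illustrative behavioral threshold

# GLUE step 3: prediction limits from the behavioral ensemble
preds = np.array([toy_model(p, forcing) for p in behavioral])
lower, upper = np.percentile(preds, [5, 95], axis=0)
```

Because the likelihood is informal and the threshold subjective, GLUE sidesteps the statistical assumptions of formal Bayesian MCMC at the cost of a less rigorous uncertainty interpretation, which is the trade-off the study compares.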
Hutson, J R; Garcia-Bournissen, F; Davis, A; Koren, G
2011-07-01
Dual perfusion of a single placental lobule is the only experimental model to study human placental transfer of substances in organized placental tissue. To date, there has not been any attempt at a systematic evaluation of this model. The aim of this study was to systematically evaluate the perfusion model in predicting placental drug transfer and to develop a pharmacokinetic model to account for nonplacental pharmacokinetic parameters in the perfusion results. In general, the fetal-to-maternal drug concentration ratios matched well between placental perfusion experiments and in vivo samples taken at the time of delivery of the infant. After modeling for differences in maternal and fetal/neonatal protein binding and blood pH, the perfusion results were able to accurately predict in vivo transfer at steady state (R² = 0.85, P < 0.0001). Placental perfusion experiments can be used to predict placental drug transfer when adjusting for extra parameters and can be useful for assessing drug therapy risks and benefits in pregnancy.
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. 
Furthermore, it can be used to identify the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.