NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
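The combination of space-filling sampling, emulation, and variance-based indices described in this abstract can be sketched in a few lines. The sketch below uses a toy three-parameter emulator as a stand-in for VIC (the quadratic response surface is invented for illustration), with Latin Hypercube sampling and the standard Saltelli A/B/AB estimator for first-order Sobol indices:

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, d, rng):
    """Space-filling LHS: exactly one sample per stratum in each dimension."""
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

# Toy emulator standing in for the (expensive) VIC model: three uncertain
# parameters in [0, 1]; x1 enters nonlinearly and dominates the variance.
def emulator(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def first_order_sobol(f, n, d, rng):
    """Variance-based first-order indices via the Saltelli A/B/AB scheme."""
    a = latin_hypercube(n, d, rng)
    b = latin_hypercube(n, d, rng)
    fa, fb = f(a), f(b)
    var = np.var(np.concatenate([fa, fb]))
    s = np.empty(d)
    for i in range(d):
        ab = a.copy()
        ab[:, i] = b[:, i]          # resample only column i from B
        s[i] = np.mean(fb * (f(ab) - fa)) / var
    return s

s = first_order_sobol(emulator, 20000, 3, rng)
print(s)  # x1 (quadratic, weight 2) dominates; x2 (weight 0.1) is negligible
```

The same estimator scales to the paper's 46-parameter setting; only the emulator and the sample size change.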
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraint improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
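The dual change score model's data-generating process can be made concrete with a small simulation. This is a minimal sketch (parameter values invented): each subject's latent score changes by a constant-change (slope) term plus an autoproportion feedback on the previous score, which produces the exponential approach toward an equilibrium that makes misspecified constrained models fit mean trajectories deceptively well:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical dual change score model: delta_y[t] = slope + beta * y[t-1],
# i.e. a constant-change (slope) factor plus an autoproportion term.
n_subj, n_waves = 500, 6
beta = -0.10                                  # autoproportion coefficient
intercepts = rng.normal(10.0, 1.0, n_subj)    # latent intercept factor
slopes = rng.normal(2.0, 0.3, n_subj)         # latent slope (constant change) factor

y = np.empty((n_subj, n_waves))
y[:, 0] = intercepts
for t in range(1, n_waves):
    y[:, t] = y[:, t - 1] + slopes + beta * y[:, t - 1]

obs = y + rng.normal(0.0, 0.5, y.shape)       # add measurement error
means = obs.mean(axis=0)
print(means)  # exponential approach toward the equilibrium -slope/beta = 20
```

Fitting this model with mistakenly time-constrained parameters would still reproduce `means` closely, which is exactly why the fit indices in the study failed to flag the misspecification.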
USDA-ARS?s Scientific Manuscript database
Classic rainfall-runoff models usually use historical data to estimate model parameters and mean values of parameters are considered for predictions. However, due to climate changes and human effects, the parameters of model change temporally. To overcome this problem, Normalized Difference Vegetati...
Time-varying parameter models for catchments with land use change: the importance of model structure
NASA Astrophysics Data System (ADS)
Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid
2018-05-01
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
Simulated discharge trends indicate robustness of hydrological models in a changing climate
NASA Astrophysics Data System (ADS)
Addor, Nans; Nikolova, Silviya; Seibert, Jan
2016-04-01
Assessing the robustness of hydrological models under contrasted climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models revealed that assuming transposability of parameter values in changing climatic conditions can lead to significant biases in discharge simulations. This raises the question whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2 km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess possible trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty.
We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported by others previously, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.
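The windowed-calibration check described above can be illustrated with a toy model. This is a minimal sketch (a linear rainfall-runoff relation with invented numbers, calibrated in closed form rather than with a genetic algorithm): fitting the parameter separately on eight successive 5-year windows and inspecting the fitted values for a trend is the basic test of parameter stationarity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 40-year daily record: runoff q = k_true * p plus noise, with a
# stationary true parameter. Calibrating on successive 5-year windows should
# then recover a near-constant parameter; a trend in the fitted values would
# point to non-stationarity, or to a modeling artifact.
k_true = 0.6
days_per_window, n_windows = 5 * 365, 8
p = rng.gamma(shape=2.0, scale=3.0, size=days_per_window * n_windows)  # precip
q = k_true * p + rng.normal(0.0, 0.5, p.size)                          # runoff

fitted = []
for w in range(n_windows):
    sl = slice(w * days_per_window, (w + 1) * days_per_window)
    # least-squares estimate of k on this window: k = sum(p*q) / sum(p*p)
    fitted.append(np.sum(p[sl] * q[sl]) / np.sum(p[sl] * p[sl]))
fitted = np.array(fitted)
print(fitted)  # all close to 0.6 -> no evidence of parameter drift
```

With a real conceptual model the per-window fit would be done by an optimizer (the paper uses a genetic algorithm), but the interpretation of the eight fitted values is the same.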
Identification of hydrological model parameter variation using ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in the ability of model prediction. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
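The state-augmentation idea behind the EnKF approach can be sketched with a one-parameter toy model (the actual TWBM has two parameters and a storage state; the linear runoff model, forcing, and noise levels below are invented for illustration). The drifting parameter is carried as an augmented state with a small artificial random walk and updated from runoff observations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: monthly runoff q_t = a_t * p_t, where the runoff coefficient a_t
# drifts slowly (e.g. land-cover change). The EnKF treats a as an augmented
# state with a small random walk and updates the ensemble from runoff obs.
T, n_ens = 240, 200
p = rng.uniform(50.0, 150.0, T)                 # monthly precipitation forcing
a_true = np.linspace(0.30, 0.60, T)             # slowly increasing parameter
sigma_obs, sigma_walk = 2.0, 0.01
q_obs = a_true * p + rng.normal(0.0, sigma_obs, T)

ens = rng.normal(0.45, 0.1, n_ens)              # initial parameter ensemble
est = np.empty(T)
for t in range(T):
    ens = ens + rng.normal(0.0, sigma_walk, n_ens)   # forecast: random walk
    q_ens = ens * p[t]                               # predicted runoff
    # Kalman gain from ensemble covariances
    k = np.cov(ens, q_ens)[0, 1] / (np.var(q_ens, ddof=1) + sigma_obs**2)
    # perturbed-observation update
    ens = ens + k * (q_obs[t] + rng.normal(0.0, sigma_obs, n_ens) - q_ens)
    est[t] = ens.mean()
print(est[0], est[-1])  # the ensemble mean should track the upward drift in a_t
```

The random-walk variance plays the role of a tuning knob: too small and the filter cannot follow genuine parameter change, too large and the estimate becomes noisy.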
Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.
2008-01-01
Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations, due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. 
The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models, otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
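The zero-dimensional experiment described above can be sketched with the standard Krone deposition and Partheniades erosion closures driven by a semidiurnal tide. All parameter values below are illustrative, not the calibrated San Francisco Bay values; the point demonstrated is only the direction of the response, namely that a 10% increase in the critical shear stress for erosion suppresses erosion and increases net deposition:

```python
import numpy as np

# Zero-dimensional well-mixed tidal cell over a cohesive bed.
rho_cd = 1000.0 * 0.0025         # water density times drag coefficient
ws, conc = 1.0e-4, 0.05          # settling velocity (m/s), suspended conc (kg/m3)
m_ero = 2.0e-6                   # erosion rate constant (kg/m2/s), near balance
tau_d = 0.08                     # critical shear stress for deposition (Pa)
u_max = 0.30                     # maximum tidal current (m/s)
period = 12.42 * 3600.0          # semidiurnal tide (s)

def net_deposition(tau_e, n_cycles=10, dt=60.0):
    """Integrate net bed mass change (kg/m2) over n_cycles tidal cycles."""
    t = np.arange(0.0, n_cycles * period, dt)
    u = u_max * np.abs(np.sin(2 * np.pi * t / period))
    tau = rho_cd * u**2                                     # bed shear stress
    dep = ws * conc * np.clip(1 - tau / tau_d, 0.0, None)   # Krone deposition
    ero = m_ero * np.clip(tau / tau_e - 1.0, 0.0, None)     # Partheniades erosion
    return np.sum((dep - ero) * dt)

base = net_deposition(tau_e=0.10)
perturbed = net_deposition(tau_e=0.11)   # +10% critical erosion stress
print(base, perturbed)  # raising tau_e suppresses erosion -> more net deposition
```

The analogous perturbation of the maximum tidal current would enter through `u_max` and, because shear stress scales with the square of the velocity, produces the stronger response reported in the abstract.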
NASA Astrophysics Data System (ADS)
Zhmud, V. A.; Reva, I. L.; Dimitrov, L. V.
2017-01-01
The design of robust feedback systems by means of numerical optimization is usually accomplished by modeling several systems simultaneously. Each such system uses the same regulator, but the object models differ, covering all edge values of the possible object model parameters. Even so, not all possible sets of model parameters are taken into account; hence the regulator may not be robust, i.e., it may fail to provide system stability in cases that were not tested during the optimization procedure. The paper proposes an alternative method: all parameters are varied continuously according to a harmonic law, with mutually incommensurable (aliquant) frequencies for the individual parameters. This provides full coverage of the parameter space.
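The coverage claim behind the harmonic-sweep idea is easy to demonstrate numerically. In this sketch (frequencies and horizon invented for illustration), three normalized parameter deviations oscillate at mutually incommensurable frequencies, and the resulting trajectory visits every sign-octant of the parameter box instead of retracing a closed curve:

```python
import numpy as np

# Each uncertain plant parameter oscillates around its nominal value at its
# own frequency. Mutually incommensurable frequencies (1, sqrt(2), sqrt(3))
# make the trajectory fill the parameter box rather than repeat itself.
freqs = np.array([1.0, np.sqrt(2.0), np.sqrt(3.0)])
t = np.arange(0.0, 500.0, 0.05)
devs = np.sin(np.outer(t, freqs))   # normalized deviations in [-1, 1]^3

# Count which of the 8 sign-octants of the box the trajectory visits.
octants = {tuple(s) for s in (devs > 0).astype(int)}
print(len(octants))  # all 8 corners' neighborhoods get exercised
```

With commensurable frequencies (e.g. 1, 2, 3) the trajectory closes after one common period and can systematically miss parameter combinations; the incommensurable choice rules that out.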
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
Using the Modification Index and Standardized Expected Parameter Change for Model Modification
ERIC Educational Resources Information Center
Whittaker, Tiffany A.
2012-01-01
Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.
Hedin, Emma; Bäck, Anna
2013-09-06
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
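The LKB model named in this record has a compact closed form, which makes the reported parameter sensitivity easy to reproduce qualitatively. In this sketch the DVH and the parameter values are illustrative only (not the published pneumonitis parameters): NTCP is the standard normal CDF of the distance of the generalized EUD from TD50, so shifting TD50, as happens when parameters are re-derived for a different dose algorithm, moves the NTCP estimate for the identical dose distribution:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def geud(doses, volumes, n):
    """Generalized EUD of a DVH: (sum v_i * D_i^(1/n))^n, volumes sum to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def ntcp_lkb(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman model: NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    return phi((geud(doses, volumes, n) - td50) / (m * td50))

# Illustrative (not published) parameter values and a toy two-bin lung DVH.
td50, m, n = 24.5, 0.37, 0.99
doses = [5.0, 35.0]        # Gy
volumes = [0.6, 0.4]       # fractional volume per bin
p = ntcp_lkb(doses, volumes, td50, m, n)
print(p)

# A 5% shift in TD50 (as when parameters are re-derived for another dose
# algorithm) changes the NTCP estimate for the same dose distribution.
p_shifted = ntcp_lkb(doses, volumes, td50 * 1.05, m, n)
print(p_shifted)  # lower, since the same gEUD now sits further below TD50
```

The relative seriality model follows the same pattern with a different response function; in both cases the parameter set and the dose algorithm must be matched, which is the point of the study.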
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF), which linearizes the harmonic model. As these parameters are representative of a given land cover, their applicability in near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high spatial resolution change maps of the given regions.
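A bootstrap particle filter for the harmonic parameters can be sketched in a few lines. This is a simplified stand-in for the paper's estimator (single annual harmonic, invented parameter values, 8-day cadence giving 46 composites per year): the hidden parameters are carried as particle states with a small artificial jitter and resampled against each NDVI observation:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical harmonic NDVI model for 8-day composites (46 per year):
#   ndvi_t = mean + amp_c*cos(w t) + amp_s*sin(w t) + noise
w = 2 * np.pi / 46.0
true = np.array([0.5, 0.3, 0.1])         # (mean, amp_c, amp_s)
T = 460                                  # ten years of composites
t = np.arange(T)
obs = (true[0] + true[1] * np.cos(w * t) + true[2] * np.sin(w * t)
       + rng.normal(0.0, 0.05, T))

n_part, sigma_obs, jitter = 2000, 0.05, 0.005
parts = rng.uniform(-1.0, 1.0, (n_part, 3))   # diffuse prior over parameters
for k in range(T):
    parts += rng.normal(0.0, jitter, parts.shape)       # artificial dynamics
    pred = parts[:, 0] + parts[:, 1] * np.cos(w * k) + parts[:, 2] * np.sin(w * k)
    logw = -0.5 * ((obs[k] - pred) / sigma_obs) ** 2    # Gaussian likelihood
    wts = np.exp(logw - logw.max())
    wts /= wts.sum()
    idx = rng.choice(n_part, n_part, p=wts)             # multinomial resampling
    parts = parts[idx]

est = parts.mean(axis=0)
print(est)  # should land near the generating parameters (0.5, 0.3, 0.1)
```

A change-detection metric in the spirit of the paper would then monitor the deviation of `est` from a per-pixel reference parameter vector; a sustained jump flags land cover change.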
Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J
2015-01-01
Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export.
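The USLE core of the InVEST load term is multiplicative, A = R * K * LS * C * P, which has a useful consequence for sensitivity analysis and is easy to verify. In this sketch (factor values are illustrative only), a one-at-a-time ±10% perturbation shows that every factor has unit local elasticity, so what separates "influential" from "non-influential" parameters in a global analysis like the one above is the width of each factor's plausible range, not the model form:

```python
# USLE: soil loss A = R * K * LS * C * P (all factor values illustrative).
factors = {"R": 1200.0, "K": 0.3, "LS": 1.8, "C": 0.05, "P": 0.9}

def usle(f):
    a = 1.0
    for v in f.values():
        a *= v
    return a

base = usle(factors)
elasticity = {}
for name in factors:
    bumped = dict(factors)
    bumped[name] *= 1.10                     # +10% one-at-a-time perturbation
    # elasticity = relative output change / relative input change
    elasticity[name] = ((usle(bumped) - base) / base) / 0.10
print(elasticity)  # ~1.0 for every factor, since the model is multiplicative
```

Hence the dominance of rainfall erosivity reported in the study reflects the large uncertainty and variability of extreme-rainfall statistics, exactly the quantity expected to shift under climate change in Mediterranean basins.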
Dynamic Computation of Change Operations in Version Management of Business Process Models
NASA Astrophysics Data System (ADS)
Küster, Jochen Malte; Gerth, Christian; Engels, Gregor
Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.
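The core idea of dynamically computed position parameters can be illustrated with a deliberately tiny model (names and structure invented; the paper's formalization uses graph transformation on process models). Each insert operation names only the new element and its intended predecessor; the concrete index is computed at application time, and operations whose anchor is not yet present are deferred, so dependent operations can be applied in any admissible order:

```python
def apply_changes(model, ops):
    """Apply insert operations, deferring any whose anchor is not yet present."""
    pending = list(ops)
    while pending:
        progressed = False
        for op in list(pending):
            node, anchor = op
            if anchor in model:
                # position parameter computed dynamically, at application time
                model.insert(model.index(anchor) + 1, node)
                pending.remove(op)
                progressed = True
        if not progressed:
            raise ValueError(f"unresolvable dependencies: {pending}")
    return model

# 'B' depends on 'A' having been inserted first; the resolver handles both orders.
ops = [("B", "A"), ("A", "start")]
out1 = apply_changes(["start", "end"], list(ops))
out2 = apply_changes(["start", "end"], list(reversed(ops)))
print(out1, out2)  # both: ['start', 'A', 'B', 'end']
```

Conflicts (two operations targeting the same position with incompatible intent) would need an additional check before `insert`; the paper's contribution is characterizing exactly those dependency and conflict conditions.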
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications
Bäck, Anna
2013-01-01
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant-head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in the storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased.
(Author's abstract)
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: (a) acceptable model performance at two different timescales, and (b) maintaining diversity in model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to identification of plausible parts of parameter space from which model variants can be selected for projection studies.
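The emulator-based filtering step can be sketched in a history-matching style. Everything here is invented for illustration (a quadratic stand-in for an emulated error surface over two parameters, and an arbitrary acceptance threshold); the point is only the mechanics of ruling out implausible parameter space cheaply via the emulator rather than by running the climate model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented emulated-error surface: error grows away from a "good" region.
def emulated_error(x):
    return (x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2

candidates = rng.random((100000, 2))        # dense sample of the unit square
err = emulated_error(candidates)
threshold = 0.05                            # maximum acceptable emulated error
keep = err < threshold
frac_kept = keep.mean()
print(frac_kept)  # most of the parameter space is filtered out
```

In the paper's setting the retained region would additionally be screened to preserve diversity in forcings and feedbacks, so that filtering does not narrow the spread of projected climate responses.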
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-01-01
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. PMID:26063822
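The core idea behind tensor-structured representations can be illustrated on a toy separable example: when a model property factorises across parameters, its values over the whole parameter grid are an outer product of small one-dimensional factors, so the full grid never has to be enumerated. This is a sketch of the storage principle only, not of the TPA algorithm; the grids and functions are invented.

```python
import numpy as np

k1 = np.linspace(0.1, 1.0, 50)      # parameter 1 grid
k2 = np.linspace(0.5, 2.0, 60)      # parameter 2 grid
k3 = np.linspace(1.0, 3.0, 70)      # parameter 3 grid

# separable model property f(k1, k2, k3) = g1(k1) * g2(k2) * g3(k3)
g1, g2, g3 = np.exp(-k1), k2 ** 2, 1.0 / k3

# full 50 x 60 x 70 tensor assembled from three small vectors
# (Kronecker / outer-product structure)
f = np.einsum('i,j,k->ijk', g1, g2, g3)

n_stored = g1.size + g2.size + g3.size   # 180 numbers represent 210,000 values
```

Tensor-structured methods generalise this idea to non-separable quantities via low-rank approximations, which is what makes simultaneous computation over all parameter values tractable.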
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
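A minimal one-dimensional sketch of the IMM recursion may help: two random-walk Kalman filters with different process noises (a "static parameter" model and a "changing parameter" model) run in parallel, are mixed through a Markov model-transition matrix, and are combined by their posterior model probabilities. The state, noise values and transition matrix below are illustrative, not the paper's pharmacokinetic models.

```python
import numpy as np

def kalman_step(x, P, z, q, r):
    """Random-walk Kalman update; returns new mean, variance, likelihood."""
    P_pred = P + q                        # predict (identity state transition)
    S = P_pred + r                        # innovation variance
    K = P_pred / S                        # Kalman gain
    innov = z - x
    lik = np.exp(-0.5 * innov ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    return x + K * innov, (1.0 - K) * P_pred, lik

def imm_step(xs, Ps, mu, z, qs, r, trans):
    xs, Ps = np.asarray(xs), np.asarray(Ps)
    c = trans.T @ mu                      # predicted model probabilities
    w = trans * mu[:, None] / c[None, :]  # mixing weights w[i, j]
    x_mix = w.T @ xs                      # mixed mean fed to each filter
    P_mix = np.array([np.sum(w[:, j] * (Ps + (xs - x_mix[j]) ** 2))
                      for j in range(len(xs))])
    out = [kalman_step(x_mix[j], P_mix[j], z, qs[j], r) for j in range(len(xs))]
    liks = np.array([o[2] for o in out])
    mu_new = liks * c
    mu_new = mu_new / mu_new.sum()        # updated model probabilities
    xs_new = np.array([o[0] for o in out])
    Ps_new = np.array([o[1] for o in out])
    x_hat = float(mu_new @ xs_new)        # combined estimate
    return xs_new, Ps_new, mu_new, x_hat

# track a parameter that jumps at t = 50
rng = np.random.default_rng(0)
trans = np.array([[0.97, 0.03], [0.03, 0.97]])
xs, Ps, mu = [1.0, 1.0], [1.0, 1.0], np.array([0.5, 0.5])
qs, r = [1e-6, 0.05], 0.01                # "static" vs "changing" model
for t in range(100):
    true_value = 1.0 if t < 50 else 2.0
    z = true_value + 0.1 * rng.standard_normal()
    xs, Ps, mu, x_hat = imm_step(xs, Ps, mu, z, qs, r, trans)
```

The combined estimate follows the jump because the "changing" model's likelihood dominates after the parameter moves, which is the behaviour the paper exploits for acutely ill, unstable patients.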
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. 
We concluded that the triple-ensemble probabilistic approach, which accounts for the uncertainties from multiple important sources, provides more comprehensive information for quantifying uncertainties in climate change impact assessments than conventional approaches that are deterministic or account for only one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
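The ANOVA attribution described above can be sketched with a synthetic factorial ensemble: one simulated yield change per (crop model, climate projection, parameter set) combination, with each factor's main-effect contribution computed as the variance of its factor means relative to the total variance. The effect sizes below are invented to mirror the qualitative finding that model structure dominates; this illustrates the ANOVA idea, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mod, n_clim, n_par = 7, 8, 5   # crop models x climate projections x parameter sets
model_eff = rng.normal(0.0, 4.0, n_mod)[:, None, None]     # model structure (dominant)
climate_eff = rng.normal(0.0, 1.0, n_clim)[None, :, None]  # climate projection
param_eff = rng.normal(0.0, 0.3, n_par)[None, None, :]     # model parameters
yield_change = (model_eff + climate_eff + param_eff
                + rng.normal(0.0, 0.2, (n_mod, n_clim, n_par)))  # residual

def main_effect_fraction(y, axis_keep):
    """Variance of the factor-level means divided by total variance."""
    other = tuple(a for a in range(y.ndim) if a != axis_keep)
    return y.mean(axis=other).var() / y.var()

fracs = [main_effect_fraction(yield_change, a) for a in range(3)]
# fracs[0] (model structure) should be the largest contribution
```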
Mathematical circulatory system model
NASA Technical Reports Server (NTRS)
Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)
2010-01-01
A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
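The logistic representation of a regulatory mechanism parameter can be sketched as follows; the gains, midpoint and slope are illustrative values, not those of the patent.

```python
import math

def logistic_regulation(p, low, high, p_mid, k):
    """Regulated parameter value as a logistic function of pressure p."""
    return low + (high - low) / (1.0 + math.exp(-k * (p - p_mid)))

values = [logistic_regulation(p, low=1.0, high=4.0, p_mid=100.0, k=0.1)
          for p in (0.0, 50.0, 100.0, 150.0, 200.0)]
# saturates at `low` for small p and at `high` for large p,
# passing through (low + high) / 2 at p = p_mid
```

The logistic form is attractive for lumped-parameter models because it is smooth, bounded, and monotone, matching the saturating behaviour of physiological regulation.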
Modelling of intermittent microwave convective drying: parameter sensitivity
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei
2017-06-01
The reliability of a mathematical model's predictions is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the microwave power level process shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change and becomes influential only when changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve, whereas the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
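The ±20% one-at-a-time procedure can be sketched generically: perturb each parameter by ±20% around its base value, rerun the model, and form a normalised sensitivity index. The response function and parameter names below are invented stand-ins for the multiphase COMSOL model.

```python
# toy response: strong in T_amb and k_evap, weak in h_mass (all hypothetical)
def drying_output(params):
    return params["T_amb"] ** 1.5 * params["k_evap"] ** 0.8 + 0.01 * params["h_mass"]

base = {"T_amb": 300.0, "k_evap": 2.0, "h_mass": 25.0}
y0 = drying_output(base)

sensitivity = {}
for name in base:
    hi_p, lo_p = dict(base), dict(base)
    hi_p[name] *= 1.2
    lo_p[name] *= 0.8
    # normalised index: relative output change per relative input change
    sensitivity[name] = (drying_output(hi_p) - drying_output(lo_p)) / (0.4 * y0)
```

Parameters whose index is near zero at ±20%, like `h_mass` here, are the candidates that only show influence under much larger (e.g. 10-fold) changes.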
Modeling spatially-varying landscape change points in species occurrence thresholds
Wagner, Tyler; Midway, Stephen R.
2014-01-01
Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.
Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
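A stripped-down version of the threshold idea: occurrence probability is flat below a change point and declines above it, and the change point is recovered from detection/nondetection data. A grid-search maximum likelihood fit stands in for the hierarchical Bayesian machinery, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def occ_prob(x, cp, b0, b1):
    """Logit is flat below the change point cp, sloped above it."""
    logit = b0 + b1 * np.maximum(x - cp, 0.0)
    return 1.0 / (1.0 + np.exp(-logit))

# synthetic detection/nondetection data with a true change point at 0.3
x = rng.random(2000)                       # e.g. proportion urban land use
y = rng.random(2000) < occ_prob(x, cp=0.3, b0=1.0, b1=-8.0)

def neg_log_lik(cp):
    p = occ_prob(x, cp, 1.0, -8.0)         # slope/intercept held fixed here
    return -np.sum(np.where(y, np.log(p), np.log(1.0 - p)))

grid = np.linspace(0.05, 0.95, 91)
cp_hat = grid[np.argmin([neg_log_lik(c) for c in grid])]
# cp_hat should land near the true change point of 0.3
```

The hierarchical version lets `cp`, `b0` and `b1` vary by region and places priors on them, which is what allows change points themselves to be modeled against covariates such as stream temperature.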
Uncertainty in BMP evaluation and optimization for watershed management
NASA Astrophysics Data System (ADS)
Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.
2012-12-01
Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads.
In addition, climate change scenarios affected uncertainties in SWAT-simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted from uncertainties in land use, climate change, and model parameter values.
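The uncertainty-aware evaluation can be sketched as simple Monte Carlo propagation: sample the uncertain inputs, run a (here, toy) load model for each sample, and summarise BMP performance as a distribution rather than a point value. The load model, distributions and ranges are illustrative, not SWAT.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

baseline_load = rng.normal(100.0, 15.0, n)      # uncertain P load (kg)
bmp_efficiency = rng.uniform(0.30, 0.60, n)     # uncertain removal efficiency
climate_factor = rng.normal(1.0, 0.10, n)       # uncertain climate scaling

load_with_bmp = baseline_load * climate_factor * (1.0 - bmp_efficiency)

p5, p50, p95 = np.percentile(load_with_bmp, [5, 50, 95])
# a deterministic run would report a single number; the ensemble gives
# an uncertainty band for the post-BMP load
```

The same sampled ensemble can feed an optimizer, so that BMP selection and placement are ranked by their load-reduction distributions rather than deterministic estimates.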
Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling
NASA Astrophysics Data System (ADS)
Thomas, M. E.; Neuberg, J. W.
2015-12-01
The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters.
Correlation coefficients among the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity, and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in the data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
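The two diagnostics, parameter variability as the standard deviation of range-normalised optimised values, and correlation with hydrometeorological drivers, can be sketched on synthetic window-by-window results. Parameter names, bounds and values below are invented, not the HBV-light calibration output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_windows = 55                   # five-year windows shifted by one year

params = {                       # optimised value per calibration window
    "refreezing_coeff": rng.uniform(0.0, 1.0, n_windows),    # highly variable
    "max_storage": 150.0 + rng.normal(0.0, 5.0, n_windows),  # stable
}
bounds = {"refreezing_coeff": (0.0, 1.0), "max_storage": (50.0, 500.0)}

def normalised_std(values, lo, hi):
    """Standard deviation of the parameter after scaling to its prior range."""
    return np.std((values - lo) / (hi - lo))

variability = {k: normalised_std(v, *bounds[k]) for k, v in params.items()}

# correlation of an optimised parameter with a driver (here: mean flow)
mean_flow = rng.normal(30.0, 8.0, n_windows)
r = np.corrcoef(params["refreezing_coeff"], mean_flow)[0, 1]
# |r| near 0 here: the parameter's variability is unrelated to the driver
```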
An IRT Model with a Parameter-Driven Process for Change
ERIC Educational Resources Information Center
Rijmen, Frank; De Boeck, Paul; van der Maas, Han L. J.
2005-01-01
An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative inter-individual differences and auto-dependencies are accounted for by assuming within-subject variability with respect to the…
Biotic responses buffer warming-induced soil organic carbon loss in Arctic tundra.
Liang, Junyi; Xia, Jiangyang; Shi, Zheng; Jiang, Lifen; Ma, Shuang; Lu, Xingjie; Mauritz, Marguerite; Natali, Susan M; Pegoraro, Elaine; Penton, C Ryan; Plaza, César; Salmon, Verity G; Celis, Gerardo; Cole, James R; Konstantinidis, Konstantinos T; Tiedje, James M; Zhou, Jizhong; Schuur, Edward A G; Luo, Yiqi
2018-05-26
Climate warming can result in both abiotic (e.g., permafrost thaw) and biotic (e.g., microbial functional gene) changes in Arctic tundra. Recent research has incorporated dynamic permafrost thaw in Earth system models (ESMs) and indicates that Arctic tundra could be a significant future carbon (C) source due to the enhanced decomposition of thawed deep soil C. However, warming-induced biotic changes may influence biologically related parameters and the consequent projections in ESMs. How model parameters associated with biotic responses will change under warming, and to what extent these changes affect projected C budgets, have not been carefully examined. In this study, we synthesized six data sets over five years from a soil warming experiment at Eight Mile Lake, Alaska, into the Terrestrial ECOsystem (TECO) model with a probabilistic inversion approach. The TECO model used multiple soil layers to track dynamics of thawed soil under different treatments. Our results show that warming increased the light use efficiency of vegetation photosynthesis but decreased baseline (i.e., environment-corrected) turnover rates of SOC in both the fast and slow pools in comparison with those under control. Moreover, the parameter changes generally amplified over time, suggesting processes of gradual physiological acclimation and functional gene shifts of both plants and microbes. The TECO model predicted that field warming from 2009 to 2013 resulted in cumulative C losses of 224 or 87 g m-2, respectively, without or with changes in those parameters. Thus, warming-induced parameter changes reduced predicted soil C loss by 61%. Our study suggests that it is critical to incorporate biotic changes in ESMs to improve model performance in predicting C dynamics in permafrost regions. This article is protected by copyright. All rights reserved.
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
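The per-cycle iteration that programs of this kind perform can be sketched as parabolic scale growth during each hot cycle followed by spallation of part of the retained oxide. This is a generic cyclic-oxidation sketch with invented constants, not COSP's actual growth laws or spalling geometries.

```python
def cyclic_oxidation(kp, q0, dt, n_cycles):
    """Return (retained oxide, cumulative spalled oxide) after each cycle."""
    w_retained = 0.0       # retained oxide per unit area
    w_spalled = 0.0        # cumulative spalled oxide
    history = []
    for _ in range(n_cycles):
        # parabolic growth over one hot cycle of duration dt
        w_retained = (w_retained ** 2 + kp * dt) ** 0.5
        # spall a fraction that grows with the retained oxide amount
        spall = min(q0 * w_retained ** 2, w_retained)
        w_retained -= spall
        w_spalled += spall
        history.append((w_retained, w_spalled))
    return history

hist = cyclic_oxidation(kp=0.01, q0=0.05, dt=1.0, n_cycles=200)
# retained oxide approaches a steady value while spalled oxide accumulates,
# so the specimen eventually shows a terminal rate of weight loss
```

Net specimen weight change would additionally account for oxygen gained in the retained scale and metal lost in the spall, which is what the full program tabulates.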
Understanding which parameters control shallow ascent of silicic effusive magma
NASA Astrophysics Data System (ADS)
Thomas, Mark E.; Neuberg, Jurgen W.
2014-11-01
The estimation of the magma ascent rate is key to predicting volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes of such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. We show that variability in the rate of low frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator for changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are amongst the dominant controlling variables, but the single most important parameter is the volatile content (assumed to be water only). Modeling this parameter in the range of reported values causes changes in the calculated ascent velocities of up to 800%.
Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi
2015-06-01
Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics in C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models against data, evaluate the improvements in performance of the calibrated models in predicting contemporary carbon stocks, and compare the SOC responses to climate change, and their uncertainties, between microbial and conventional models. Microbial models with calibrated parameters explained 51% of the variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses, compared with CMIP5 model projections, which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Sakurai, G.; Ono, K.; Mano, M.; Miyata, A.
2011-12-01
Agricultural activities, including cultivating crops, managing soil, harvesting and post-harvest treatments, are not only affected by the surrounding environment but also change the environment in turn. Changes in the environment, in temperature, radiation and precipitation, bring changes in crop productivity. On the other hand, the status of the crop, i.e. its growth and phenological stage, changes the exchange of energy, H2O and CO2 between the crop vegetation surface and the atmosphere. Securing stable agricultural harvests while reducing greenhouse gas (GHG) emissions and enhancing carbon sequestration in soil is a preferable win-win activity. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models, a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics the rice plant growth processes, including the formation of reproductive organs as well as leaf expansion, while the soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI and the final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, a type of Monte Carlo filter, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this paper, we applied the model-data fusion analysis to two datasets from paddy rice field sites in Japan: one with only a single rice cultivation, and one with a single rice and wheat cultivation.
We focused on the parameters related to crop production as well as soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years (Fig. 1). The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the no-cultivation period and 2.9 for the cultivation period (submerged soil conditions).
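The particle-filter step can be sketched on a toy problem: particles representing a time-varying parameter (a stand-in for Q10) evolve by a random walk, are reweighted by the likelihood of each new observation, and are resampled. The observation model and all constants are invented; this is the generic bootstrap filter, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_steps = 2000, 60
# true parameter shifts mid-series, mimicking a change in soil conditions
true_q10 = np.concatenate([np.full(30, 1.4), np.full(30, 2.9)])
obs = true_q10 + 0.2 * rng.standard_normal(n_steps)   # noisy "flux" data

particles = rng.uniform(1.0, 4.0, n_particles)        # prior on the parameter
estimates = []
for t in range(n_steps):
    particles = particles + 0.05 * rng.standard_normal(n_particles)  # evolve
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.2) ** 2)             # reweight
    w = w / w.sum()
    estimates.append(float(w @ particles))                           # posterior mean
    particles = particles[rng.choice(n_particles, n_particles, p=w)] # resample
```

Because the filter conditions only on data observed up to each time step, the posterior mean tracks the parameter shift, which is the property the study uses to follow time changes in crop and soil parameters.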
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
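The DELSA idea of a distribution of local sensitivities can be sketched as follows: compute derivative-based first-order indices at many points across parameter space, then inspect how often each parameter dominates. The toy two-parameter model is a stand-in for the paper's reservoir test case, and the θ² scaling is a simple variance proxy (DELSA proper uses prior parameter variances).

```python
import numpy as np

rng = np.random.default_rng(5)

def model(theta):
    k, n_exp = theta
    return k * 2.0 ** n_exp          # toy nonlinear two-parameter response

def local_first_order(theta, h=1e-4):
    """Scaled squared gradients, normalised to sum to 1 at this point."""
    g = np.zeros(len(theta))
    for i in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[i] += h
        dn[i] -= h
        g[i] = (model(up) - model(dn)) / (2.0 * h)   # central difference
    s = g ** 2 * theta ** 2          # theta^2 as a crude variance proxy
    return s / s.sum()

# evaluate the local indices at many points spread over parameter space
samples = rng.uniform([0.1, 0.1], [10.0, 3.0], size=(1000, 2))
indices = np.array([local_first_order(t) for t in samples])

# importance varies across parameter space: the first parameter dominates
# at some points but not others
frac_k_dominant = float(np.mean(indices[:, 0] > 0.5))
```

This per-point view, at the cost of one gradient per sample, is what lets DELSA show that a parameter important on average can be unimportant in sizeable regions of parameter space.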
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimates of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for all three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
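The parameter-shift effect can be reproduced with the Lyman NTCP model, NTCP = Φ((D − TD50)/(m·TD50)): if a second dose algorithm systematically reports higher doses, refitting TD50 so that the NTCP estimates match shifts the parameter by the same factor. The doses, TD50 and m below are illustrative values, not the paper's.

```python
import math

def phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ntcp(dose, td50, m):
    """Lyman model NTCP for a (scalar) dose metric."""
    return phi((dose - td50) / (m * td50))

# dose metrics (Gy) from a "pencil beam"-like algorithm, published parameters
doses_pb = [10.0, 14.0, 18.0, 22.0]
td50_pb, m = 30.0, 0.35
targets = [ntcp(d, td50_pb, m) for d in doses_pb]

# suppose the second algorithm systematically reports ~8% higher doses
doses_cc = [1.08 * d for d in doses_pb]

# refit TD50 by grid search so the CC-based NTCPs reproduce the PB-based ones
best = min((sum((ntcp(d, td, m) - t) ** 2 for d, t in zip(doses_cc, targets)), td)
           for td in [20.0 + 0.01 * i for i in range(2001)])[1]
# best shifts to ~1.08 * td50_pb: the fitted parameter value depends on the
# dose calculation algorithm, as the paper argues
```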
An approach to measure parameter sensitivity in watershed ...
Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. High and medium flows were impacted by most of the parameters, whereas the low-flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of climate change and land use changes on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
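The NRMSE-plus-flow-duration-curve idea can be sketched generically as follows; the exceedance thresholds used to separate flow regimes are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def nrmse(obs, sim):
    """Root mean square error normalized by the observed range."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return rmse / (obs.max() - obs.min())

def regime_nrmse(obs, sim, high_pct=30, low_pct=70):
    """Split NRMSE by flow regime using flow-duration-curve exceedance:
    flows above the `high_pct` exceedance threshold count as high/medium,
    flows below the `low_pct` exceedance threshold count as low."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    hi_thresh = np.percentile(obs, 100 - high_pct)
    lo_thresh = np.percentile(obs, 100 - low_pct)
    hi = obs >= hi_thresh
    lo = obs <= lo_thresh
    return {"high": nrmse(obs[hi], sim[hi]),
            "low": nrmse(obs[lo], sim[lo])}
```

Computing the index separately per regime is what lets a groundwater parameter show up as sensitive only in the low-flow tail, as reported above.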
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In the past years, several repositories of anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While these provide a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters, or they lacked key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best-performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models.
Such models can ultimately provide a valuable tool to investigate the pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
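The fitting procedure described above, a mean trend plus a gestation-dependent variability term per parameter, can be sketched as follows; the data values, the choice of polynomial functions, and the parameter itself are hypothetical:

```python
import numpy as np

# Hypothetical pooled measurements of a physiological parameter
# (e.g., cardiac output, L/min) at different gestational weeks
weeks = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
mean_obs = np.array([4.9, 5.8, 6.6, 7.0, 7.1])
sd_obs = np.array([0.8, 0.9, 1.1, 1.2, 1.3])

# Fit candidate functions to the mean and, separately, to the SD,
# so the database captures gestation-dependent variability too
mean_fit = np.polynomial.Polynomial.fit(weeks, mean_obs, deg=2)
sd_fit = np.polynomial.Polynomial.fit(weeks, sd_obs, deg=1)

def sample_parameter(week, rng):
    """Draw an individual value at a given gestational week."""
    return rng.normal(mean_fit(week), sd_fit(week))
```

A population PBPK simulation can then draw per-individual values at each gestational age rather than using a single fixed mean.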
Revisiting gamma-ray burst afterglows with time-dependent parameters
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Chen, Wei; Liao, Bin; Lei, Wei-Hua; Liu, Yu
2018-02-01
The relativistic external shock model of gamma-ray burst (GRB) afterglows has been established with five free parameters: the total kinetic energy E, the equipartition parameters for electrons (ε_e) and for the magnetic field (ε_B), the number density of the environment n, and the index of the power-law distribution of shocked electrons p. Many modified models have been constructed to account for the variety of GRB afterglows, such as the wind-medium environment, obtained by letting n change with radius, and the energy injection model, obtained by letting the kinetic energy change with time. In this paper, by assuming all four parameters (except p) change with time, we obtain a set of formulas for the dynamics and radiation, which can be used as a reference for modeling GRB afterglows. Some interesting results are obtained. For example, in some spectral segments, the radiated flux density does not depend on the number density or the profile of the environment. As an application, through modeling the afterglow of GRB 060607A, we find that it can be interpreted in the framework of the time-dependent parameter model within a reasonable range.
Daryasafar, Navid; Baghbani, Somaye; Moghaddasi, Mohammad Naser; Sadeghzade, Ramezanali
2014-01-01
We intend to design a broadband band-pass filter with a notch band that uses coupled transmission lines in its structure, based on new models of coupled transmission lines. To realize and present the new model, previous models are first simulated in the ADS program. Then, by modifying their equations and consequently the basic parameters of these models, we examine the optimization of and dependencies among these parameters, as well as their frequency responses, and use the results of these changes to converge on the design of a new filter.
NASA Astrophysics Data System (ADS)
Li, Yue; Yang, Hui; Wang, Tao; MacBean, Natasha; Bacour, Cédric; Ciais, Philippe; Zhang, Yiping; Zhou, Guangsheng; Piao, Shilong
2017-08-01
Reducing parameter uncertainty of process-based terrestrial ecosystem models (TEMs) is one of the primary targets for accurately estimating carbon budgets and predicting ecosystem responses to climate change. However, parameters in TEMs are rarely constrained by observations from Chinese forest ecosystems, which are an important carbon sink of the Northern Hemisphere land area. In this study, eddy covariance data from six forest sites in China are used to optimize parameters of the ORganizing Carbon and Hydrology In Dynamics EcosystEms (ORCHIDEE) TEM. The model-data assimilation through parameter optimization largely reduces the prior model errors and improves the simulated seasonal cycle and summer diurnal cycle of net ecosystem exchange, latent heat fluxes, gross primary production, and ecosystem respiration. Climate change experiments based on the optimized model indicate that forest net primary production (NPP) is suppressed in response to warming in southern China but stimulated in northeastern China. Altered precipitation has an asymmetric impact on forest NPP at sites in water-limited regions, with the optimization-induced reduction in the response of NPP to precipitation decline being as large as 61% at a deciduous broadleaf forest site. We find that seasonal optimization alters forest carbon cycle responses to environmental change, with the parameter optimization consistently reducing the simulated positive response of heterotrophic respiration to warming. Evaluations against independent observations suggest that improving model structure still matters most for long-term carbon stock and its changes, in particular nutrient- and age-related changes of photosynthetic rates, carbon allocation, and tree mortality.
Ganju, Neil K.; Jaffe, Bruce E.; Schoellhamer, David H.
2011-01-01
Simulations of estuarine bathymetric change over decadal timescales require methods for idealization and reduction of forcing data and boundary conditions. Continuous simulations are hampered by computational and data limitations and results are rarely evaluated with observed bathymetric change data. Bathymetric change data for Suisun Bay, California span the 1867–1990 period with five bathymetric surveys during that period. The four periods of bathymetric change were modeled using a coupled hydrodynamic-sediment transport model operated at the tidal-timescale. The efficacy of idealization techniques was investigated by discontinuously simulating the four periods. The 1867–1887 period, used for calibration of wave energy and sediment parameters, was modeled with an average error of 37% while the remaining periods were modeled with error ranging from 23% to 121%. Variation in post-calibration performance is attributed to temporally variable sediment parameters and lack of bathymetric and configuration data for portions of Suisun Bay and the Delta. Modifying seaward sediment delivery and bed composition resulted in large performance increases for post-calibration periods suggesting that continuous simulation with constant parameters is unrealistic. Idealization techniques which accelerate morphological change should therefore be used with caution in estuaries where parameters may change on sub-decadal timescales. This study highlights the utility and shortcomings of estuarine geomorphic models for estimating past changes in forcing mechanisms such as sediment supply and bed composition. The results further stress the inherent difficulty of simulating estuarine changes over decadal timescales due to changes in configuration, benthic composition, and anthropogenic forcing such as dredging and channelization.
Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.
2016-03-22
Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k^0_r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k^0_r,CO2, which had a very strong (R^2 = 0.9685) power-law relationship with total CO2 injected. Model sensitivity to k^0_r,CO2 points to the importance of accurate core flood and wettability measurements.
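To illustrate why the end-point dominates, a Corey-type relative permeability curve scales linearly with its end-point value, so any injectivity measure built from the curve inherits that sensitivity directly. This is a generic sketch, not the FEHM parameterization; the residual saturation and exponent are assumed values:

```python
import numpy as np

def kr_co2(s_co2, kr0_co2, s_wr=0.3, n_exp=2.0):
    """Corey-type CO2 relative permeability curve.
    `kr0_co2` is the end-point relative permeability (the dominant
    parameter in the sensitivity study); `s_wr` is the residual water
    saturation and `n_exp` the Corey exponent (illustrative values)."""
    s_eff = np.clip(s_co2 / (1.0 - s_wr), 0.0, 1.0)
    return kr0_co2 * s_eff ** n_exp

# One-at-a-time sweep over the end point, holding the curve shape fixed:
# an injectivity proxy built from the curve scales linearly with kr0_co2
s = np.linspace(0.0, 0.7, 50)
proxies = {kr0: float(kr_co2(s, kr0).mean()) for kr0 in (0.25, 0.5, 1.0)}
```

Because the end-point multiplies the whole curve, halving it halves the proxy, whereas shape parameters only redistribute permeability across saturations.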
Soil and vegetation parameter uncertainty on future terrestrial carbon sinks
NASA Astrophysics Data System (ADS)
Kothavala, Z.; Felzer, B. S.
2013-12-01
We examine the role of the terrestrial carbon cycle in a changing climate at the centennial scale using an intermediate complexity Earth system climate model that includes the effects of dynamic vegetation and the global carbon cycle. We present a series of ensemble simulations to evaluate the sensitivity of simulated terrestrial carbon sinks to three key model parameters: (a) The temperature dependence of soil carbon decomposition, (b) the upper temperature limits on the rate of photosynthesis, and (c) the nitrogen limitation of the maximum rate of carboxylation of Rubisco. We integrated the model in fully coupled mode for a 1200-year spin-up period, followed by a 300-year transient simulation starting at year 1800. Ensemble simulations were conducted varying each parameter individually and in combination with other variables. The results of the transient simulations show that terrestrial carbon uptake is very sensitive to the choice of model parameters. Changes in net primary productivity were most sensitive to the upper temperature limit on the rate of photosynthesis, which also had a dominant effect on overall land carbon trends; this is consistent with previous research that has shown the importance of climatic suppression of photosynthesis as a driver of carbon-climate feedbacks. Soil carbon generally decreased with increasing temperature, though the magnitude of this trend depends on both the net primary productivity changes and the temperature dependence of soil carbon decomposition. Vegetation carbon increased in some simulations, but this was not consistent across all configurations of model parameters. Comparing to global carbon budget observations, we identify the subset of model parameters which are consistent with observed carbon sinks; this serves to narrow considerably the future model projections of terrestrial carbon sink changes in comparison with the full model ensemble.
Hydrograph structure informed calibration in the frequency domain with time localization
NASA Astrophysics Data System (ADS)
Kumarasamy, K.; Belmont, P.
2015-12-01
Complex models with a large number of parameters are commonly used to estimate sediment yields and predict changes in sediment loads resulting from changes in management or conservation practice at large watershed (>2000 km2) scales. As sediment yield is a strongly non-linear function that responds to channel (peak or mean) velocity or flow depth, it is critical to represent flows accurately. The process of calibration in such models (e.g., SWAT) generally involves the adjustment of several parameters to obtain better estimates of goodness-of-fit metrics such as the Nash-Sutcliffe Efficiency (NSE). However, such indicators only provide a global view of model performance, potentially obscuring the accuracy of the timing or magnitude of specific flows of interest. We describe an approach to streamflow calibration that greatly reduces the black-box nature of calibration when the response to a parameter adjustment is not clearly known. The Fourier Transform or the Short-Time Fourier Transform could also be used to characterize model performance in the frequency domain; however, the ambiguity of a Fourier transform with regard to time localization renders its implementation in a model calibration setting of little use. Brief and sudden changes (e.g., streamflow peaks) in signals carry the most interesting information from parameter adjustments, and this information is completely lost in a transform without time localization. The wavelet transform captures the frequency content of the signal without compromising time and is applied to contrast changes in signal response to parameter adjustments. Here we employ the mother wavelet called the Mexican hat wavelet and apply a Continuous Wavelet Transform to understand the signal in the frequency domain. Further, with the use of the cross-wavelet spectrum, we examine the relationship between the two signals (prior to and after a parameter adjustment) in the time-scale plane (e.g., lower scales correspond to higher frequencies).
The non-stationarity of the streamflow signal does not hinder this assessment and regions of change called boundaries of influence (seasons or time when such change occurs in the hydrograph) for each parameter are delineated. In addition, we can discover the structural component of the signal (e.g., shifts or amplitude change) that has changed.
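The Mexican hat CWT described above can be sketched by direct convolution at each scale; the synthetic hydrograph and the scale choices below are illustrative:

```python
import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet sampled on `points` points, width `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)) \
        * (1.0 - x ** 2) * np.exp(-(x ** 2) / 2.0)

def cwt_mexican_hat(signal, scales):
    """Continuous wavelet transform by direct convolution at each scale."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * a, len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Synthetic hydrograph: baseflow plus one sharp storm peak near t = 300
t = np.arange(1000)
flow = 1.0 + 4.0 * np.exp(-(((t - 300) / 5.0) ** 2))
coeffs = cwt_mexican_hat(flow, scales=[2, 4, 8, 16])
# Small-scale (high-frequency) coefficients localize the sudden change
peak_loc = int(np.argmax(np.abs(coeffs[0])))
```

Because the Ricker wavelet has zero mean, the constant baseflow contributes nothing, and the small-scale coefficients pinpoint when the abrupt change occurs, which is exactly the time localization a plain Fourier transform discards.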
NASA Astrophysics Data System (ADS)
Yokozawa, M.
2017-12-01
Attention has been paid to agricultural fields, which can regulate ecosystem carbon exchange through water management and residue treatments. However, less is known about the dynamic responses of these ecosystems to environmental changes. In this study we focus on a paddy field, where CO2 emissions from microbial decomposition of organic matter are suppressed, and CH4 is instead emitted, under the flooded conditions of the rice growing season, with CO2 emissions resuming in the fallow season after harvest, and we examine the responses of ecosystem carbon exchange. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics rice plant growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI, and final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production and soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in subsequent years.
The temperature sensitivity (Q10) of the decomposition rate of soil organic carbon (SOC) was estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil conditions in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
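The Q10 formulation implied above is the standard exponential temperature scaling of a rate constant; a minimal sketch using the reported values (the reference temperature and baseline rate are assumed for illustration):

```python
def q10_rate(k_ref, t, t_ref=20.0, q10=2.0):
    """Temperature scaling of a decomposition rate constant:
    k(T) = k_ref * Q10 ** ((T - t_ref) / 10)."""
    return k_ref * q10 ** ((t - t_ref) / 10.0)

# Using the Q10 values reported above (1.4 fallow, 2.9 flooded season),
# a 10 degree C warming multiplies SOC decomposition by Q10 itself:
fallow = q10_rate(1.0, 30.0, q10=1.4)
flooded = q10_rate(1.0, 30.0, q10=2.9)
```

The same warming therefore accelerates decomposition roughly twice as strongly in the flooded cultivation period as in the fallow period, which is the asymmetry the abstract highlights.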
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
A cyclic oxidation interfacial spalling model has been developed in Part 1. The governing equations have been simplified here by substituting a new algebraic expression for the series (Good-Smialek approximation). This produced a direct relationship between cyclic oxidation weight change and model input parameters. It also allowed for the mathematical derivation of various descriptive parameters as a function of the inputs. It is shown that the maximum in weight change varies directly with the parabolic rate constant and cycle duration and inversely with the spall fraction, all to the 1/2 power. The number of cycles to reach maximum and zero weight change vary inversely with the spall fraction, and the ratio of these cycles is exactly 1:3 for most oxides. By suitably normalizing the weight change and cycle number, it is shown that all cyclic oxidation weight change model curves can be represented by one universal expression for a given oxide scale.
Identifiability, reducibility, and adaptability in allosteric macromolecules.
Bohner, Gergő; Venkataraman, Gaurav
2017-05-01
The ability of macromolecules to transduce stimulus information at one site into conformational changes at a distant site, termed "allostery," is vital for cellular signaling. Here, we propose a link between the sensitivity of allosteric macromolecules to their underlying biophysical parameters, the interrelationships between these parameters, and macromolecular adaptability. We demonstrate that the parameters of a canonical model of the mSlo large-conductance Ca 2+ -activated K + (BK) ion channel are non-identifiable with respect to the equilibrium open probability-voltage relationship, a common functional assay. We construct a reduced model with emergent parameters that are identifiable and expressed as combinations of the original mechanistic parameters. These emergent parameters indicate which coordinated changes in mechanistic parameters can leave assay output unchanged. We predict that these coordinated changes are used by allosteric macromolecules to adapt, and we demonstrate how this prediction can be tested experimentally. We show that these predicted parameter compensations are used in the first reported allosteric phenomena: the Bohr effect, by which hemoglobin adapts to varying pH. © 2017 Bohner and Venkataraman.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters with the most influence facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes substantially under upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values compared to the best-fit parameter values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the model is most sensitive.
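The one-at-a-time perturbation scheme described above can be sketched generically; the toy suitability index and the parameter names (tmax, smm) are invented for illustration and are not actual CLIMEX parameters:

```python
def one_at_a_time(model, best_fit, deltas):
    """Perturb each parameter up and down from its best-fit value and
    record the largest absolute change in model output."""
    base = model(best_fit)
    effects = {}
    for name, delta in deltas.items():
        up = dict(best_fit)
        up[name] += delta
        down = dict(best_fit)
        down[name] -= delta
        effects[name] = max(abs(model(up) - base), abs(model(down) - base))
    return effects

def toy_index(p):
    # Hypothetical suitability index driven mostly by a temperature parameter
    return 100.0 - 2.0 * (p["tmax"] - 30.0) ** 2 - 0.1 * p["smm"]

best = {"tmax": 32.0, "smm": 25.0}
sens = one_at_a_time(toy_index, best, {"tmax": 1.0, "smm": 1.0})
```

Ranking the entries of `sens` identifies the "sensitive" parameters in the abstract's terminology; in a real application the output would be an Ecoclimatic Index recomputed over a mapped region rather than a scalar.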
NASA Astrophysics Data System (ADS)
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. 
The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic parameters of the system.
NASA Astrophysics Data System (ADS)
Rasul, H.; Wu, M.; Olofsson, B.
2017-12-01
Modelling moisture and heat changes in road layers is very important for understanding road hydrology and for constructing and maintaining roads in a sustainable manner. In cold regions, the freezing/thawing process in the partially saturated road material makes the modelling task more complicated than a simple model of flow through porous media without consideration of pore freezing/thawing. This study presents a 2-D model simulation of a highway section that considers freezing/thawing and vapor changes. Partial differential equations (PDEs) are used to formulate the model. Parameters are optimized from modelling results based on measured data from a test station on the E18 highway near Stockholm. The impact of considering phase change in the modelling is assessed by comparing the modelled soil moisture with TDR-measured data. The results show that the model can be used to predict water and ice content in different layers of the road in different seasons. Parameter sensitivities are analyzed by implementing a calibration strategy. In addition, the phase change consideration is evaluated by comparing the PDE model with another model without consideration of freezing/thawing in roads. The PDE model shows high potential for understanding moisture dynamics in the road system.
2014-09-01
…has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper … seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in … orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a
Kong, Deguo; MacLeod, Matthew; Cousins, Ian T
2014-09-01
The effect of projected future changes in temperature, wind speed, precipitation and particulate organic carbon on concentrations of persistent organic chemicals in the Baltic Sea regional environment is evaluated using the POPCYCLING-Baltic multimedia chemical fate model. Steady-state concentrations of hypothetical perfectly persistent chemicals with property combinations that encompass the entire plausible range for non-ionizing organic substances are modelled under two alternative climate change scenarios (IPCC A2 and B2) and compared to a baseline climate scenario. The contributions of individual climate parameters are deduced in model experiments in which only one of the four parameters is changed from the baseline scenario. Of the four selected climate parameters, temperature is the most influential, and wind speed is least. Chemical concentrations in the Baltic region are projected to change by factors of up to 3.0 compared to the baseline climate scenario. For chemicals with property combinations similar to legacy persistent organic pollutants listed by the Stockholm Convention, modelled concentration ratios between two climate change scenarios and the baseline scenario range from factors of 0.5 to 2.0. This study is a first step toward quantitatively assessing climate change-induced changes in the environmental concentrations of persistent organic chemicals in the Baltic Sea region. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modelling exploration of non-stationary hydrological system
NASA Astrophysics Data System (ADS)
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2015-04-01
Traditional hydrological modelling assumes that the catchment does not change with time (i.e., stationary conditions), which means a model calibrated for the historical period is assumed valid for the future period. In reality, however, due to changes in climate and catchment conditions this stationarity assumption may not hold in the future. It is a challenge to make a hydrological model adaptive to future climate and catchment conditions that are not observable at the present time. In this study, a lumped conceptual rainfall-runoff model called IHACRES was applied to a catchment in southwest England. A long observation record from 1961 to 2008 was used, and because there are significant seasonal rainfall patterns the model was calibrated by season (only the summer period is explored further here, as it is more sensitive to climate and land cover change than the other three seasons). We expect that model performance can be improved by calibrating the model on individual seasons. The data are split into calibration and validation periods, with the validation period intended to represent future unobserved situations. The success of the non-stationary model depends not only on good performance during the calibration period but also during the validation period. Initially, the calibration is based on changing the model parameters with time. A methodology is proposed to adapt the parameters using step-forward and step-backward selection schemes. However, in validation, both the forward and backward multiple-parameter-changing models failed. One problem is that regression against time is unreliable, since the trend may not follow a monotonic linear relationship with time. The second issue is that changing multiple parameters makes the selection process very complex, which is time consuming and not effective in the validation period. As a result, two new concepts are explored.
First, only one parameter is selected for adjustment while the other parameters are held constant. Second, the regression is made against climate condition instead of against time. This new approach has proved very effective, and the resulting non-stationary model worked very well in both the calibration and validation periods. Although the study catchment is in southwest England and the data cover only the summer period, the methodology proposed in this study is general and applicable to other catchments. We hope this study will stimulate the hydrological community to explore a variety of sites so that valuable experience and knowledge can be gained to improve our understanding of such a complex modelling issue in climate change impact assessment.
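The second concept above, regressing a single model parameter against a climate covariate rather than against time, can be sketched with ordinary least squares. The parameter record and covariate below are synthetic illustrations, not IHACRES output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: one calibrated model parameter per summer,
# alongside that summer's mean temperature (the climate covariate).
temp = rng.uniform(14.0, 20.0, size=30)
param = 0.8 - 0.02 * temp + rng.normal(0.0, 0.005, size=30)

# Fit param = a + b * temp by least squares (regression against
# climate condition instead of against time).
A = np.column_stack([np.ones_like(temp), temp])
coef, *_ = np.linalg.lstsq(A, param, rcond=None)
a, b = coef

# Predict the parameter for an unobserved (e.g., future) climate.
future_temp = 22.0
print(a + b * future_temp)
```

The prediction step is what makes the model adaptive: the parameter is updated from the projected climate rather than extrapolated along a time trend.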
Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training
NASA Astrophysics Data System (ADS)
Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek
2016-07-01
This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.
An Extension of the Partial Credit Model with an Application to the Measurement of Change.
ERIC Educational Resources Information Center
Fischer, Gerhard H.; Ponocny, Ivo
1994-01-01
An extension of the partial credit model, the linear partial credit model, is considered under the assumption of a certain linear decomposition of the item × category parameters into basic parameters. A conditional maximum likelihood algorithm for estimating the basic parameters is presented and illustrated with a simulation study and an empirical study. (SLD)
Model-data integration to improve the LPJmL dynamic global vegetation model
NASA Astrophysics Data System (ADS)
Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno
2017-04-01
Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performances and thus to potentially reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetically active radiation), albedo and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in a better performance of LPJmL in representing global spatial patterns of biomass, tree cover, and the temporal dynamics of atmospheric CO2. In a second application, we therefore additionally used global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. 
This exercise demonstrates the strong influence of individual data streams on the simulated ecosystem dynamics, which in turn changed the development of ecosystem carbon stocks and fluxes under future climate and CO2 change. In summary, our results demonstrate both the challenges and the potential of using model-data integration approaches to improve a dynamic global vegetation model.
Image Discrimination Models With Stochastic Channel Selection
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels, rather than using a fixed set, can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.
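Stochastic channel selection as described can be sketched by drawing each channel's tuning parameters from continuous distributions instead of a fixed lattice. The particular distributions and numeric ranges below are illustrative assumptions, not values from the model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_channels = 500

# Draw each channel's tuning parameters from continuous distributions
# rather than a fixed grid, so model performance varies smoothly
# under image translation, scaling, and rotation.
channels = {
    # spatial position (deg of visual field), uniform over a patch
    "x": rng.uniform(-2.0, 2.0, n_channels),
    "y": rng.uniform(-2.0, 2.0, n_channels),
    # peak spatial frequency (cycles/deg), log-uniform over 0.5..16
    "freq": np.exp(rng.uniform(np.log(0.5), np.log(16.0), n_channels)),
    # orientation, uniform over 180 deg
    "ori": rng.uniform(0.0, 180.0, n_channels),
    # bandwidth (octaves): a broad distribution, echoing the spread
    # seen in physiological measurements
    "bw": rng.lognormal(mean=np.log(1.4), sigma=0.3, size=n_channels),
}

print(channels["freq"].min(), channels["freq"].max())
```

Resampling a fresh channel set per simulated observer also gives the bandwidth/spread variability the abstract argues is needed to predict some psychophysical results.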
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
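The two-state assumption that the new approach relaxes can be made concrete: under it, the fraction of folded molecules follows directly from the folding enthalpy and entropy changes. A minimal sketch, using a simple unimolecular two-state equilibrium and illustrative (not fitted nearest-neighbor) values:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_folded(T_K, dH, dS):
    """Two-state model: equilibrium fraction in the folded state.

    dH (kcal/mol) and dS (kcal/(mol*K)) are illustrative folding
    parameters, not fitted nearest-neighbor values.
    """
    dG = dH - T_K * dS          # folding free energy change
    K = math.exp(-dG / (R * T_K))  # folded/unfolded equilibrium constant
    return K / (1.0 + K)

dH, dS = -50.0, -0.14
Tm = dH / dS  # melting temperature, where dG = 0

# At Tm the two-state model predicts exactly half the molecules folded;
# an ensemble (partition function) model instead allows partially
# denatured intermediates, which is the refinement the paper fits.
print(round(fraction_folded(Tm, dH, dS), 3))
```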
NASA Technical Reports Server (NTRS)
Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue
2009-01-01
We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
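A variance-based (Sobol) first-order sensitivity analysis of the kind used above can be sketched with a Saltelli-style Monte Carlo estimator on a toy additive model; this is a generic illustration, not the Noah LSM:

```python
import numpy as np

def model(x):
    # Toy additive "model": one dominant and two minor parameters.
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

rng = np.random.default_rng(1)
n, k = 50_000, 3
A = rng.uniform(size=(n, k))   # two independent sample matrices
B = rng.uniform(size=(n, k))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli estimator for the first-order index of parameter i:
# S_i = E[ f(B) * (f(A with column i from B) - f(A)) ] / Var(f)
S = np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(fB * (model(ABi) - fA)) / var

print(np.round(S, 2))
```

For this additive function the indices are analytically c_i^2 / sum(c_j^2), so the dominant parameter should carry roughly 93% of the output variance, mirroring the finding that very few parameters control the variance.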
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Ogrodnik, Justyna; Piszczatowski, Szczepan
2017-01-01
The aim of the present study was to evaluate the influence of modified morphological parameters of the muscle model and excitation pattern on the results of musculoskeletal system numerical simulation in a cerebral palsy patient. The modelling of the musculoskeletal system was performed in the AnyBody Modelling System. The standard model (MoCap) was subjected to modifications consisting of changes in morphological parameters and excitation patterns of selected muscles. The research was conducted with the use of data of a 14-year-old cerebral palsy patient. A reduction of morphological parameters (variant MI) caused a decrease in the value of active force generated by the muscle with changed geometry, and as a consequence the changes in active force generated by other muscles. A simulation of the abnormal excitation pattern (variant MII) resulted in the muscle's additional activity during its lengthening. The simultaneous modification of the muscle morphology and excitation pattern (variant MIII) points to the interdependence of both types of muscle model changes. A significant increase in the value of the reaction force in the hip joint was observed as a consequence of modification of the hip abductor activity. The morphological parameters and the excitation pattern of modelled muscles have a significant influence on the results of numerical simulation of the musculoskeletal system functioning.
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
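The modified Gauss-Newton step for a weighted least-squares objective can be sketched on a toy exponential-decay model. This is a generic illustration, not MODFLOWP's implementation; the model, weights, damping bound, and positivity guard are assumptions:

```python
import numpy as np

# Toy "observations": y = p0 * exp(-p1 * t) + noise, with unit weights.
t = np.linspace(0.0, 4.0, 9)
true = np.array([10.0, 0.7])
rng = np.random.default_rng(3)
y = true[0] * np.exp(-true[1] * t) + rng.normal(0.0, 0.05, t.size)
w = np.ones_like(t)  # observation weights (1 / variance)

def f(p):
    return p[0] * np.exp(-p[1] * t)

def jac(p):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

# Modified Gauss-Newton: solve (J'WJ) dp = J'W r, then damp the step
# with a user-defined maximum parameter change per iteration.
p = np.array([5.0, 0.3])  # initial estimates
for _ in range(20):
    r = y - f(p)
    J = jac(p)
    JW = J.T * w
    dp = np.linalg.solve(JW @ J, JW @ r)
    dp = np.clip(dp, -1.0, 1.0)   # cap the per-iteration change
    p = p + dp
    p[1] = max(p[1], 1e-3)        # keep the decay rate positive

print(np.round(p, 2))
```

The per-iteration cap plays the role of MODFLOWP's user-defined maximum parameter change, keeping early steps from overshooting while the linearization is poor.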
NASA Astrophysics Data System (ADS)
Wang, Yong; Liu, Xiaohong
2014-12-01
We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736-741) into the Community Atmosphere Model version 5 (CAM5). This is the first time the SBM has been used in an atmospheric model to parameterize heterogeneous ice nucleation. The SBM, simplified here for suitable application in atmospheric models, uses classical nucleation theory to describe immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of the contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural (Saharan) dust datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters and find significant seasonal and regional differences in sensitivity among the three parameters. Changes in the mean contact angle and the number of surface sites lead to changes in cloud properties in the Arctic in spring, which can be attributed to the transport of dust ice nuclei to this region. In winter, significant changes in cloud properties induced by these two parameters occur mainly in the northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes in cloud properties caused by changes in the standard deviation are found in any season. These results are valuable for understanding heterogeneous ice nucleation behavior, and useful for guiding future model development.
Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.
Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M
2015-09-01
Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results are presented for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that produces the usual fast movement to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
A mathematical model of physiological processes and its application to the study of aging
NASA Technical Reports Server (NTRS)
Hibbs, A. R.; Walford, R. L.
1989-01-01
The behavior of a physiological system which, after displacement, returns by homeostatic mechanisms to its original condition can be described by a simple differential equation in which the "recovery time" is a parameter. Two such systems, which influence one another, can be linked mathematically by the use of "coupling" or "feedback" coefficients. These concepts are the basis for many mathematical models of physiological behavior, and we describe the general nature of such models. Next, we introduce the concept of a "fatal limit" for the displacement of a physiological system, and show how measures of such limits can be included in mathematical models. We show how the numerical values of such limits depend on the values of other system parameters, i.e., recovery times and coupling coefficients, and suggest ways of measuring all these parameters experimentally, for example by monitoring changes induced by X-irradiation. Next, we discuss age-related changes in these parameters, and show how the parameters of mortality statistics, such as the famous Gompertz parameters, can be derived from experimentally measurable changes. Concepts of onset-of-aging, critical or fatal limits, equilibrium value (homeostasis), recovery times and coupling constants are involved. Illustrations are given using published data from mouse and rat populations. We believe that this method of deriving survival patterns from a model that is experimentally testable is unique.
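The coupled homeostatic systems described above can be sketched as a pair of linear ODEs with recovery times and coupling coefficients, plus a fatal-limit check. All numeric values are illustrative:

```python
# Two coupled homeostatic variables x, y displaced from equilibrium 0:
#   dx/dt = -x/tau_x + c_xy * y
#   dy/dt = -y/tau_y + c_yx * x
tau_x, tau_y = 2.0, 5.0        # recovery times
c_xy, c_yx = 0.05, -0.03       # coupling/feedback coefficients
fatal_limit = 3.0              # displacement beyond this is "fatal"

dt, steps = 0.01, 5000
x, y = 2.0, -1.0               # initial displacement (e.g., an insult)
trace = []
for _ in range(steps):
    dx = -x / tau_x + c_xy * y  # Euler integration step
    dy = -y / tau_y + c_yx * x
    x, y = x + dt * dx, y + dt * dy
    trace.append((x, y))
    if abs(x) > fatal_limit or abs(y) > fatal_limit:
        print("fatal limit exceeded")
        break

print(round(x, 3), round(y, 3))
```

With these (stable) coefficients both variables recover toward equilibrium; age-related decline would correspond to lengthening recovery times or shrinking fatal limits until a displacement becomes irrecoverable.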
Nonlinear mixed effects modeling of the diurnal blood pressure profile in a multiracial population.
van Rijn-Bikker, Petra C; Snelder, Nelleke; Ackaert, Oliver; van Hest, Reinier M; Ploeger, Bart A; van Montfrans, Gert A; Koopmans, Richard P; Mathôt, Ron A
2013-09-01
Cardiac and cerebrovascular events in hypertensive patients are related to specific features of the 24-hour diurnal blood pressure (BP) profile (i.e., daytime and nighttime BP, nocturnal dip (ND), and morning surge (MS)). This investigation aimed to characterize 24-hour diurnal systolic BP (SBP) with parameters that correlate directly with daytime and nighttime SBP, ND, and MS using nonlinear mixed effects modeling. Ambulatory 24-hour SBP measurements (ABPM) of 196 nontreated subjects from three ethnic groups were available. A population model was parameterized in NONMEM to estimate and evaluate the parameters baseline SBP (BSL), nadir (minimum SBP during the night), and change (SBP difference between day and night). Associations were tested between these parameters and patient-related factors to explain interindividual variability. The diurnal SBP profile was adequately described as the sum of 2 cosine functions. The following typical values (interindividual variability) were found: BSL = 139 mm Hg (11%); nadir = 122 mm Hg (14%); change = 25 mm Hg (52%), and residual error = 12 mm Hg. The model parameters correlate well with daytime and nighttime SBP, ND, and MS (R² = 0.50-0.92). During covariate analysis, ethnicity was found to be associated with change; change was 40% higher in white Dutch subjects and 26.8% higher in South Asians than in blacks. The developed population model allows simultaneous estimation of BSL, nadir, and change for all individuals in the investigated population, regardless of individual number of SBP measurements. Ethnicity was associated with change. The model provides a tool to evaluate and optimize the sampling frequency for 24-hour ABPM.
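The sum-of-two-cosines structural model can be sketched directly. The harmonic amplitudes and phases below are illustrative, not the NONMEM population estimates, and the derived quantities are simple numeric analogues of BSL, nadir, and change:

```python
import numpy as np

# Diurnal profile as the sum of two cosine harmonics (24 h and 12 h):
#   SBP(t) = mesor + a1*cos(2*pi*(t - phi1)/24) + a2*cos(4*pi*(t - phi2)/24)
mesor, a1, phi1 = 130.0, 8.0, 15.0   # illustrative values (mm Hg, h)
a2, phi2 = 3.0, 11.0

t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)  # minute grid
sbp = (mesor
       + a1 * np.cos(2 * np.pi * (t - phi1) / 24.0)
       + a2 * np.cos(4 * np.pi * (t - phi2) / 24.0))

# Derived quantities analogous to the model parameters:
nadir = sbp.min()                       # minimum SBP during the night
bsl = sbp[(t >= 9) & (t <= 21)].mean()  # daytime baseline level
change = sbp.max() - nadir              # day-night SBP difference

print(round(bsl, 1), round(nadir, 1), round(change, 1))
```

In the population setting, each subject gets individual deviations on these parameters (the interindividual variability terms), which is what lets the model use sparse per-subject ABPM data.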
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Peter J.; Feddema, Johannes J.; Bonan, Gordon B.
To assess the climate impacts of historical and projected land cover change and land use in the Community Climate System Model (CCSM4), we have developed new time series of transient Community Land Model (CLM4) Plant Functional Type (PFT) parameters and wood harvest parameters. The new parameters capture the dynamics of the Coupled Model Intercomparison Project phase 5 (CMIP5) land cover change and wood harvest trajectories for the historical period from 1850 to 2005, and for the four Representative Concentration Pathway (RCP) periods from 2006 to 2100. Analysis of the biogeochemical impacts of land cover change in CCSM4 with the parameters found the model produced a historical cumulative land use flux of 148.4 PgC from 1850 to 2005, in good agreement with other global estimates of around 156 PgC for the same period. The biogeophysical impact of applying only the transient land cover change parameters in CCSM4 was a cooling of the near-surface atmosphere over land by -0.1 °C, through increased surface albedo and reduced shortwave radiation absorption. When combined with other transient climate forcings, the higher albedo from land cover change was overwhelmed at global scales by decreases in snow albedo from black carbon deposition and from high-latitude warming. At regional scales, however, the land cover change forcing persisted, resulting in reduced warming, with the biggest impacts in eastern North America. The future CCSM4 RCP simulations showed that the CLM4 transient PFT and wood harvest parameters can be used to represent a wide range of human land cover change and land use scenarios, from the RCP 4.5 reforestation scenario, which drew down 82.6 PgC from the atmosphere, to the RCP 8.5 wide-scale deforestation scenario, which released 171.6 PgC to the atmosphere.
Dietrich, Yvan; Eliat, Pierre-Antoine; Dieuset, Gabriel; Saint-Jalmes, Herve; Pineau, Charles; Wendling, Fabrice; Martin, Benoit
2016-08-01
An important issue in epilepsy research is to understand the structural and functional modifications leading to chronic epilepsy, characterized by spontaneous recurrent seizures, after an initial brain insult. To address this issue, we recorded and analyzed electroencephalography (EEG) and quantitative magnetic resonance imaging (MRI) data during epileptogenesis in the in vivo mouse model of Medial Temporal Lobe Epilepsy (MTLE, kainate). This model also represents a particular form of drug-resistant epilepsy. The results indicate that high-field (4.7 T) MRI parameters (T2-weighted; T2-quantitative) allow detection of the gradual neuro-anatomical changes that occur during epileptogenesis, while electrophysiological parameters (number and duration of Hippocampal Paroxysmal Discharges) allow assessment of the dysfunctional changes through quantification of epileptiform activity. We found a strong correlation between EEG-based markers (invasive recording) and MRI-based parameters (non-invasive) periodically computed over the 'latent period', which spans two weeks on average. These results indicate that both structural and functional changes occur in the considered epilepsy model and can be regarded as biomarkers of the installation of epilepsy. Such structural and functional changes can also be observed in human temporal lobe epilepsy. Interestingly, MRI imaging parameters could be used to track early (day 7) structural changes (gliosis, cell loss) in the lesioned brain and to quantify the evolution of epileptogenesis after traumatic brain injury.
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
Microfocal angiography of the pulmonary vasculature
NASA Astrophysics Data System (ADS)
Clough, Anne V.; Haworth, Steven T.; Roerig, David T.; Linehan, John H.; Dawson, Christopher A.
1998-07-01
X-ray microfocal angiography provides a means of assessing regional microvascular perfusion parameters using residue detection of vascular indicators. As an application of this methodology, we studied the effects of alveolar hypoxia, a pulmonary vasoconstrictor, on the pulmonary microcirculation to determine changes in regional blood mean transit time, volume and flow between control and hypoxic conditions. Video x-ray images of a dog lung were acquired as a bolus of radiopaque contrast medium passed through the lobar vasculature. X-ray time-absorbance curves were acquired from arterial and microvascular regions-of-interest during both control and hypoxic alveolar gas conditions. A mathematical model based on indicator-dilution theory for image residue curves was applied to the data to determine changes in microvascular perfusion parameters, and the sensitivity of the model parameters to the model assumptions was analyzed. Generally, the model parameter describing regional microvascular volume, corresponding to the area under the microvascular absorbance curve, was the most robust. The results of the model analysis applied to the experimental data suggest a significant decrease in microvascular volume with hypoxia. However, additional model assumptions concerning the flow kinematics within the capillary bed may be required for assessing changes in regional microvascular flow and mean transit time from image residue data.
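The volume estimate tied to the area under the residue curve can be sketched with synthetic time-absorbance curves. The gamma-variate shapes and scale factors below are assumptions for illustration, not the experimental data:

```python
import numpy as np

t = np.linspace(0.0, 20.0, 400)  # seconds

def gamma_variate(t, t0, alpha, beta, scale):
    """Synthetic time-absorbance (residue) curve for a contrast bolus."""
    s = np.clip(t - t0, 0.0, None)
    return scale * s**alpha * np.exp(-s / beta)

arterial = gamma_variate(t, 1.0, 2.0, 1.5, 1.0)
microvascular_control = gamma_variate(t, 2.0, 2.0, 2.5, 0.30)
microvascular_hypoxia = gamma_variate(t, 2.0, 2.0, 2.5, 0.21)

def auc(y):
    # trapezoidal area under the curve
    return float(((y[1:] + y[:-1]) * np.diff(t) / 2.0).sum())

# Indicator-dilution theory: regional blood volume is proportional to
# the area under the microvascular residue curve (normalized here by
# the arterial input area), so the hypoxia/control AUC ratio tracks
# the change in microvascular volume.
rel_volume_control = auc(microvascular_control) / auc(arterial)
rel_volume_hypoxia = auc(microvascular_hypoxia) / auc(arterial)

print(round(rel_volume_hypoxia / rel_volume_control, 2))
```

Because both microvascular curves share the same shape here, the ratio reduces to the ratio of their scale factors, i.e. a 30% volume decrease under hypoxia in this toy setup.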
Brownian motion model with stochastic parameters for asset prices
NASA Astrophysics Data System (ADS)
Ching, Soo Huei; Hin, Pooi Ah
2013-09-01
The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt, as well as on the present parameter value x(t) and the m-1 parameter values before time t, via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters against that of the model with stochastic parameters.
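A minimal simulation sketch: a geometric Brownian price whose drift and volatility themselves evolve between steps. The mean-reverting parameter updates below are a simple stand-in assumption, not the paper's conditional-distribution model:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 1.0 / 252.0, 252           # daily steps over one trading year

s = np.empty(n + 1)
s[0] = 100.0                       # asset price
mu, sigma = 0.05, 0.2              # initial drift and volatility

for i in range(n):
    # Geometric Brownian increment using the *current* parameters.
    s[i + 1] = s[i] * np.exp((mu - 0.5 * sigma**2) * dt
                             + sigma * np.sqrt(dt) * rng.normal())
    # Let the parameters themselves evolve stochastically (a simple
    # mean-reverting stand-in for the conditional-distribution update).
    mu += 0.5 * (0.05 - mu) * dt + 0.02 * np.sqrt(dt) * rng.normal()
    sigma += 2.0 * (0.2 - sigma) * dt + 0.05 * np.sqrt(dt) * rng.normal()
    sigma = max(sigma, 0.01)       # keep volatility positive

print(round(s[-1], 2))
```

With fixed (μ,σ) the loop reduces to ordinary geometric Brownian motion; letting the pair wander is what the stochastic-parameter model adds.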
The predicted influence of climate change on lesser prairie-chicken reproductive parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.
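The simulation step described (1,000 draws through a reproductive parameter's linear equation at a projected weather value) can be sketched as follows. The regression coefficients, their standard errors, and the projected temperature are invented for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative regression for a reproductive parameter (e.g., clutch
# size) against mean winter temperature.
intercept, slope = 12.0, -0.4
se_intercept, se_slope = 0.8, 0.1
future_winter_temp = 6.5  # projected covariate value (deg C) for 2050

# 1,000 simulations drawing coefficients from their sampling
# distributions and propagating them through the linear equation,
# which captures parameter uncertainty in the projection.
sims = (rng.normal(intercept, se_intercept, 1000)
        + rng.normal(slope, se_slope, 1000) * future_winter_temp)

print(round(sims.mean(), 2), round(np.percentile(sims, 2.5), 2))
```

The spread of the 1,000 simulated values is what conveys the "high degree of model uncertainty" the abstract reports for each reproductive parameter.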
NASA Astrophysics Data System (ADS)
Li, Y.; Chang, J.; Luo, L.
2017-12-01
Modeling the true hydrological process under a changing environment is of great importance for water resources management, especially where underlying surfaces have changed significantly, as in the Wei River Basin (WRB), where subsurface hydrology is strongly influenced by human activities; it is equally important to systematically investigate the interactions among LULC change, streamflow variation, and changes in the runoff generation process. We therefore propose the idea of evolving parameters in a hydrological model (SWAT) to reflect changes in the physical environment under different LULC conditions. With these evolving parameters, the spatiotemporal impacts of LULC changes on streamflow were quantified, and qualitative analysis was conducted to further explore how LULC changes affect streamflow from the perspective of the runoff generation mechanism. Results indicate the following: 1) evolving parameter calibration is not only effective but necessary to ensure the validity of the model when underlying surfaces change significantly due to human activities; 2) compared to the baseline period, streamflow in wet seasons increased in the 1990s but decreased in the 2000s, while at yearly and dry-season scales streamflow decreased in both decades; 3) the expansion of cropland is the major contributor to the reduction of the surface water component, causing the decline in streamflow at yearly and dry-season scales, while, compared to the 1990s, the expansion of woodland in the middle reaches and of grassland downstream is the main stressor that increased the soil water component, leading to the further decline of streamflow in the 2000s.
A probabilistic model framework for evaluating year-to-year variation in crop productivity
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Iizumi, T.; Tao, F.
2008-12-01
Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes of crop yield. For maintaining a stable food supply in the face of abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity, rather than the mean changes, is more essential. We here propose a new probabilistic model framework based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and is used widely in Japan. The model includes three sub-models describing phenological development, biomass accumulation, and maturing of the rice crop; these processes are formulated to capture the response of the rice plant to weather conditions. The model was originally developed to predict rice growth and yield at the paddy-plot scale. We applied it to evaluate large-scale rice production while keeping the same model structure, instead treating the parameters as stochastic variables. To let the model reproduce actual yield at the larger scale, model parameters were determined from agricultural statistical data for each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the model parameters were obtained using Bayesian inference, with an MCMC (Markov Chain Monte Carlo) algorithm used to numerically solve Bayes' theorem. For evaluating the year-to-year changes in rice growth and yield under this framework, we iterate simulations with parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted with the posterior PDFs. We will also present another example for maize productivity in China. The framework proposed here also provides information on the uncertainties, possibilities, and limitations of future improvements in crop models.
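A minimal sketch of the Bayesian-MCMC parameter estimation described above, using a toy linear crop model in place of PRYSBI/SIMRIW (all data and parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "crop model": yield responds linearly to a seasonal weather covariate.
# The true parameter (0.25) and the synthetic statistics are illustrative.
def crop_model(theta, temp):
    return theta * temp

temps = np.linspace(18.0, 26.0, 12)                 # regional weather covariate
obs = crop_model(0.25, temps) + rng.normal(0, 0.1, temps.size)

def log_posterior(theta):
    # Flat prior on theta > 0; Gaussian likelihood with known sigma = 0.1.
    if theta <= 0:
        return -np.inf
    resid = obs - crop_model(theta, temps)
    return -0.5 * np.sum((resid / 0.1) ** 2)

# Random-walk Metropolis sampler for the posterior PDF of theta.
samples, theta = [], 0.5
lp = log_posterior(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])                # drop burn-in
print(round(posterior.mean(), 3))
```

In the full framework, ensembles of yield simulations are then run with parameter values drawn from such posterior PDFs, and year-to-year variability is read off the resulting predictive distribution.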
Inference regarding multiple structural changes in linear models with endogenous regressors
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
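The break-fraction estimation by minimizing the 2SLS criterion can be sketched on simulated data; the data-generating process, instrument strength, and trimming are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model with an endogenous regressor and one slope break at the
# true fraction 0.5.
n = 400
z = rng.normal(size=n)                              # instrument
u = rng.normal(size=n)                              # structural error
x = 0.9 * z + 0.3 * u + 0.3 * rng.normal(size=n)    # endogenous regressor
beta = np.where(np.arange(n) < n // 2, 1.0, 3.0)    # slope jumps mid-sample
y = beta * x + u

# First stage of 2SLS: project x on the instrument over the full sample.
xhat = z * (z @ x) / (z @ z)

def ssr_2sls(k):
    # 2SLS criterion: squared second-stage residuals with separate slopes
    # before and after candidate break point k.
    s = 0.0
    for seg in (slice(0, k), slice(k, n)):
        b = (xhat[seg] @ y[seg]) / (xhat[seg] @ xhat[seg])
        s += np.sum((y[seg] - b * xhat[seg]) ** 2)
    return s

# Minimizing the 2SLS criterion over candidate breaks estimates the fraction.
k_hat = min(range(40, n - 40), key=ssr_2sls)
print(k_hat / n)   # true break fraction is 0.5
```

Replacing `xhat` with `x` in the criterion mimics a (here infeasible) least-squares fit; the paper's point is that a GMM criterion would not recover the break fractions consistently, whereas the 2SLS criterion does.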
Partitioning uncertainty in streamflow projections under nonstationary model conditions
NASA Astrophysics Data System (ADS)
Chawla, Ila; Mujumdar, P. P.
2018-02-01
Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters under changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. Streamflow in the UGB under the nonstationary model condition is found to reduce in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before using them for future streamflow projections and to segregate the contributions of various sources to the uncertainty.
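The ANOVA-style segregation of uncertainty sources can be illustrated with a toy two-way decomposition over hypothetical GCM x emission-scenario projections; the numbers are invented, and only two of the paper's five sources (plus their interaction) are shown:

```python
import numpy as np

# Hypothetical streamflow projections (% change) for 4 GCMs x 3 emission
# scenarios; values are illustrative, not from the paper.
q = np.array([[-12.0, -15.0, -20.0],
              [ -5.0,  -8.0, -11.0],
              [ -9.0, -14.0, -18.0],
              [ -2.0,  -4.0,  -7.0]])

grand = q.mean()
gcm_effect = q.mean(axis=1) - grand           # GCM main effects
scen_effect = q.mean(axis=0) - grand          # scenario main effects
interaction = q - grand - gcm_effect[:, None] - scen_effect[None, :]

# Two-way ANOVA sums of squares partition the spread across sources.
ss_gcm = q.shape[1] * np.sum(gcm_effect ** 2)
ss_scen = q.shape[0] * np.sum(scen_effect ** 2)
ss_int = np.sum(interaction ** 2)
ss_total = np.sum((q - grand) ** 2)

shares = {"gcm": ss_gcm / ss_total, "scenario": ss_scen / ss_total,
          "interaction": ss_int / ss_total}
print({k: round(v, 3) for k, v in shares.items()})
```

The three sums of squares add up exactly to the total, so each share can be read as that source's fractional contribution to projection uncertainty; the full five-way version used in the paper follows the same pattern with more factors.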
NASA Astrophysics Data System (ADS)
Truhanov, V. N.; Sultanov, M. M.
2017-11-01
In the present article, statistical data on the failures and malfunctions affecting the operability of thermal power installations are analyzed. A mathematical model is presented for the change in the output characteristics of a turbine as a function of the number of failures revealed during operation. The model is based on methods of mathematical statistics, probability theory, and matrix calculus. Its novelty is that it allows the change of the output characteristic over time to be predicted, with the control actions represented in explicit form. The Weibull distribution is adopted as the desired dynamics of the output characteristic (the reliability function), since it is universal: at various parameter values it reduces to other distributions (for example, exponential or normal). It should be noted that the choice of the desired control law makes it possible to determine the necessary control parameters from the accumulated change of the output characteristic as a whole. The output characteristic can be changed both through the rate of change of the control parameters and through their acceleration. The article describes in detail a technique for estimating the pseudo-inverse matrix by the least-squares method using standard Microsoft Excel functions, as well as a technique for finding the control actions under constraints on both the output characteristic and the control parameters. The order and sequence of finding the control parameters are stated, and a concrete example of finding the control actions in the course of long-term turbine operation is given.
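A brief sketch of the Weibull reliability law adopted above as the desired dynamics of the output characteristic; the parameter values are illustrative:

```python
import math

# Weibull reliability R(t) = exp(-(t/eta)**beta). With beta = 1 it reduces to
# the exponential law, illustrating the universality claimed in the abstract;
# beta > 1 gives wear-out (accelerating degradation). Parameters illustrative.
def weibull_reliability(t, eta, beta):
    return math.exp(-((t / eta) ** beta))

print(round(weibull_reliability(5.0, 10.0, 1.0), 4))   # exponential special case
print(round(weibull_reliability(5.0, 10.0, 3.0), 4))   # wear-out behaviour
```

Fitting eta and beta to the observed decline of a turbine's output characteristic is what fixes the "desired control law" from which the control parameters are then derived.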
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting a person's position. We therefore formulate an adaptive parameter estimation using a general state space model. We first explain how to formulate human tracking in a general state space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. The results show that sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
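The Bhattacharyya coefficient underlying the observation model can be sketched as follows; the histograms are invented for illustration:

```python
import numpy as np

# Bhattacharyya coefficient between two normalized colour histograms: the
# appearance-similarity measure on which the person recognition model is built.
def bhattacharyya(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

h1 = np.array([10.0, 20.0, 30.0, 40.0])   # illustrative appearance histograms
h2 = np.array([12.0, 18.0, 35.0, 35.0])

print(round(bhattacharyya(h1, h1), 4))    # identical histograms -> 1.0
print(round(bhattacharyya(h1, h2), 4))    # similar histograms -> close to 1
```

The coefficient lies in [0, 1], so an observation likelihood can be built as an increasing function of it, with the paper's single unknown parameter controlling how sharply likelihood falls off as appearance similarity drops.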
The heuristic value of redundancy models of aging.
Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon
2015-11-01
Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight into the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
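A minimal reliability-theory sketch in the spirit of the redundancy models discussed, with blocks in series and redundant elements in parallel; the structure and parameter values are illustrative, not fitted to any data in the paper:

```python
import numpy as np

# Minimal redundancy model of aging: m vital "blocks" in series, each with
# n redundant elements failing independently at constant rate lam.
# Parameters are illustrative, not fitted to the Drosophila data discussed.
m, n, lam = 5, 4, 0.05

t = np.linspace(0.1, 200.0, 4000)
block_surv = 1.0 - (1.0 - np.exp(-lam * t)) ** n   # a block survives while any
surv = block_surv ** m                             # element does; all blocks needed

# Mortality rate (hazard) mu(t) = -d ln S(t) / dt, computed numerically.
mu = -np.gradient(np.log(surv), t)

print(round(mu[0], 6), round(mu[-1], 3))
```

Although every element fails at a constant, age-independent rate, the organism's mortality rate rises with age as redundancy is depleted and then plateaus near m*lam once each block is down to its last element, which is the qualitative behaviour such models use to link mechanism to Gompertz-like demographic patterns.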
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhao, Yanxia; Wang, Chunyi; Chen, Sining
2017-11-01
Assessment of the impact of climate change on crop production, with uncertainties taken into account, is essential for properly identifying sustainable agricultural practices and making decisions about them. In this study, we employed 24 climate projections, consisting of the combinations of eight GCMs and three emission scenarios, to represent climate projection uncertainty, and two crop statistical models with 100 sets of parameters in each model to represent parameter uncertainty within the crop models. The goal of this study was to evaluate the impact of climate change on maize (Zea mays L.) yield at three locations (Benxi, Changling, and Hailun) across Northeast China (NEC) in the periods 2010-2039 and 2040-2069, taking 1976-2005 as the baseline period. The multi-model ensemble method is an effective way to deal with the uncertainties. The results of the ensemble simulations showed that maize yield reductions were less than 5 % in both future periods relative to the baseline. To further understand the contributions of individual sources of uncertainty, such as climate projections and crop model parameters, to the ensemble yield simulations, variance decomposition was performed. The results indicated that the uncertainty from climate projections was much larger than that contributed by crop model parameters. Increased ensemble yield variance revealed increasing uncertainty in the yield simulations in the future periods.
NASA Astrophysics Data System (ADS)
Chen, Y.
2017-12-01
Urbanization has been the global development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges: for example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the flood hydrological processes of highly urbanized watersheds in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and guides parameter optimization. On this basis, the model is set up in the Songmushan watershed, where hydrological observations are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with a PSO algorithm and updates them according to the changed LUC types. The model parameters in the Songmushan watershed are regionalized to the other Pearl River Delta watersheds based on their LUC types. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood response changes due to urbanization, and the results show that urbanization has a large impact on watershed flood responses: peak flow increased severalfold after urbanization, which is much higher than previously reported.
NASA Astrophysics Data System (ADS)
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. Using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three sources of uncertainty. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to the soil moisture content itself rather than to the moisture function parameter. Nitrification becomes more important at low moisture content and low temperature; however, how the importance of nitrification activity changes with temperature depends strongly on the selected model. Model averaging is suggested for assessing the nitrification (or denitrification) contribution while reducing possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not of temperature, plays the predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers face multiple ways of establishing nitrogen models.
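The variance-based (first-order, Sobol-type) sensitivity analysis can be sketched on an assumed surrogate reaction-rate function that borrows the parameter names from the abstract; the functional form, parameter ranges, and scenario values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative surrogate for a denitrification rate: optimal rate Kden scaled
# by a Q10 temperature response and a power-law moisture response. The form,
# ranges, and the fixed temperature/moisture scenario are assumptions.
def rate(kden, q10, m_exp, temp=15.0, theta=0.7):
    f_t = q10 ** ((temp - 20.0) / 10.0)   # Q10 temperature reduction function
    f_w = theta ** m_exp                  # empirical moisture reduction function
    return kden * f_t * f_w

bounds = {"kden": (0.1, 1.0), "q10": (1.5, 3.0), "m_exp": (0.5, 2.0)}

def sample(name, size):
    lo, hi = bounds[name]
    return rng.uniform(lo, hi, size)

# Total output variance from a large unconditional sample.
full = rate(sample("kden", 20000), sample("q10", 20000), sample("m_exp", 20000))
var_total = full.var()

# First-order index S_i = Var over x_i of E[f | x_i], divided by Var(f),
# estimated with a brute-force double-loop Monte Carlo.
n_outer, n_inner = 200, 200
indices = {}
for name in bounds:
    cond_means = []
    for xi in sample(name, n_outer):
        args = {k: sample(k, n_inner) for k in bounds}
        args[name] = np.full(n_inner, xi)
        cond_means.append(rate(**args).mean())
    indices[name] = np.var(cond_means) / var_total

print(max(indices, key=indices.get))   # Kden dominates in this toy setup
```

Because Kden enters multiplicatively with the widest relative range here, its first-order index dominates, echoing the ranking reported in the abstract; production analyses would typically use a more efficient estimator (e.g. Saltelli sampling) rather than this double loop.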
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2017-10-27
The sequential peritoneal equilibration test (sPET) is based on the consecutive performance of the peritoneal equilibration test (PET; 4-hour, glucose 2.27%) and the mini-PET (1-hour, glucose 3.86%), and the estimation of peritoneal transport parameters with the 2-pore model. It enables the assessment of the functional transport barrier for fluid and small solutes. The objective of this study was to check whether the estimated model parameters can serve as better and earlier indicators of changes in the peritoneal transport characteristics than directly measured transport indices, which depend on several transport processes. Seventeen patients were examined using sPET twice, with an interval of about 8 months (230 ± 60 days). There was no difference between the observational parameters measured in the 2 examinations. The indices for solute transport, but not net ultrafiltration, were well correlated between the examinations. Among the estimated parameters, a significant decrease between the 2 examinations was found only for hydraulic permeability (LpS) and osmotic conductance for glucose, whereas the other parameters remained unchanged. These fluid transport parameters did not correlate with D/P for creatinine, although the decrease in LpS values between the examinations was observed mostly in patients with low D/P for creatinine. We conclude that changes in fluid transport parameters, hydraulic permeability and osmotic conductance for glucose, as assessed by the pore model, may precede changes in small solute transport. The systematic assessment of fluid transport status needs specific clinical and mathematical tools besides the standard PET tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.
2013-12-01
With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill posed.
Experiments and modelling of rate-dependent transition delay in a stochastic subcritical bifurcation
NASA Astrophysics Data System (ADS)
Bonciolini, Giacomo; Ebi, Dominik; Boujo, Edouard; Noiray, Nicolas
2018-03-01
Complex systems exhibiting critical transitions when one of their governing parameters varies are ubiquitous in nature and in engineering applications. Despite a vast literature focusing on this topic, there are few studies dealing with the effect of the rate of change of the bifurcation parameter on the tipping points. In this work, we consider a subcritical stochastic Hopf bifurcation under two scenarios: the bifurcation parameter is first changed in a quasi-steady manner and then, with a finite ramping rate. In the latter case, a rate-dependent bifurcation delay is observed and exemplified experimentally using a thermoacoustic instability in a combustion chamber. This delay increases with the rate of change. This leads to a state transition of larger amplitude compared with the one that would be experienced by the system with a quasi-steady change of the parameter. We also bring experimental evidence of a dynamic hysteresis caused by the bifurcation delay when the parameter is ramped back. A surrogate model is derived in order to predict the statistics of these delays and to scrutinize the underlying stochastic dynamics. Our study highlights the dramatic influence of a finite rate of change of bifurcation parameters upon tipping points, and it pinpoints the crucial need to consider this effect when investigating critical transitions.
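The rate-dependent bifurcation delay can be reproduced qualitatively with a noisy normal-form amplitude equation; this is a generic sketch, not the paper's surrogate model of the combustor, and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Amplitude equation of a subcritical (Hopf-type) bifurcation with noise:
# da/dt = mu*a + a**3 - a**5 + noise, with mu ramped through mu = 0.
# Returns the value of mu at which the system jumps to the high branch.
def mu_at_transition(ramp_rate, dt=1e-3, noise=0.01):
    a, mu = 0.05, -0.5
    while mu < 3.0:
        a += (mu * a + a**3 - a**5) * dt + noise * np.sqrt(dt) * rng.normal()
        mu += ramp_rate * dt
        if abs(a) > 0.8:          # jumped to the high-amplitude branch
            return mu
    return mu

slow = mu_at_transition(0.05)
fast = mu_at_transition(0.5)
print(round(slow, 3), round(fast, 3))  # faster ramp -> larger bifurcation delay
```

The linearly stable state persists past the static bifurcation point mu = 0 because the amplitude needs finite time to grow out of the noise floor, so the faster the ramp, the further beyond the bifurcation the transition occurs, which is the delay mechanism studied in the paper.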
Detecting hydrological changes through conceptual model
NASA Astrophysics Data System (ADS)
Viola, Francesco; Caracciolo, Domenico; Pumo, Dario; Francipane, Antonio; Valerio Noto, Leonardo
2015-04-01
Natural changes and human modifications in hydrological systems coevolve and interact in a coupled and interlinked way. If, on the one hand, climatic changes are stochastic, non-steady, and affect hydrological systems, on the other hand, human-induced changes due to over-exploitation of soils and water resources modify the natural landscape, water fluxes, and their partitioning. Indeed, the traditional assumption of static systems in hydrological analysis, adopted for a long time, fails whenever transient climatic conditions and/or land use changes occur. Time series analysis is a way to explore environmental changes together with societal changes; unfortunately, the inability to distinguish between causes restricts the scope of this method. To overcome this limitation, time series analysis can be coupled with a suitable hydrological model, such as a conceptual hydrological model, which offers a schematization of the complex dynamics acting within a basin. Assuming that model parameters represent morphological basin characteristics and that calibration is a way to detect the hydrological signature at a specific moment, calibrating the model over different time windows could be a method for detecting potential hydrological changes. To test the capabilities of a conceptual model in detecting hydrological changes, this work presents different "in silico" experiments. A synthetic basin is forced with an ensemble of possible future scenarios generated with a stochastic weather generator able to simulate steady and non-steady climatic conditions. The experiments refer to a Mediterranean climate, which is characterized by marked seasonality, and consider the outcomes of the IPCC 5th report for describing climate evolution in the next century. In particular, to generate future climate change scenarios, a stochastic downscaling in space and time is carried out using realizations of an ensemble of General Circulation Models (GCMs) for the future periods 2046-2065 and 2081-2100. Land use changes (i.e., changes in the fraction of impervious area due to increasing urbanization) are explicitly simulated, while the reference hydrological responses are assessed by the spatially distributed, process-based hydrological model tRIBS, the TIN-based Real-time Integrated Basin Simulator. Several scenarios have been created, describing hypothetical centuries with steady conditions, climate change conditions, land use change conditions, and finally complex conditions involving both transient climatic modifications and gradual land use changes. A conceptual lumped model, the EHSM (EcoHydrological Streamflow Model), is calibrated for the above-mentioned scenarios over different time windows. The calibrated parameters show high sensitivity to anthropogenic variations in land use and/or climatic variability. Land use changes are clearly visible from the evolution of the parameters, especially when steady climatic conditions are considered. When the increase in urbanization is coupled with rainfall reduction, the ability to detect human interventions through the analysis of conceptual model parameters is weakened.
Prediction of flunixin tissue residue concentrations in livers from diseased cattle.
Wu, H; Baynes, R E; Tell, L A; Riviere, J E
2013-12-01
Flunixin, a widely used non-steroidal anti-inflammatory drug, was a leading cause of violative residues in cattle. The objective of this analysis was to explore how changes in pharmacokinetic (PK) parameters that may be associated with diseased animals affect the predicted liver residue of flunixin in cattle. Monte Carlo simulations of liver residues of flunixin were performed using the PK model structure and relevant PK parameter estimates from a previously published population PK model for flunixin in cattle. The magnitude of change in each PK parameter that resulted in violative residues in more than one percent of a cattle population was compared. In this regard, elimination clearance and volume of distribution affected withdrawal times. Pathophysiological factors that can change these parameters may contribute to the occurrence of violative residues of flunixin.
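The Monte Carlo logic of such an analysis can be sketched with a generic one-compartment model; all PK values, distributions, and the tolerance below are invented placeholders, not the published flunixin estimates:

```python
import numpy as np

rng = np.random.default_rng(11)

# Generic one-compartment sketch: draw PK parameters from population
# distributions and find the withdrawal time at which fewer than 1% of
# simulated cattle exceed a residue tolerance. All values illustrative.
n = 10000
dose = 2.2                                   # mg/kg, assumed dose
cl = rng.lognormal(np.log(0.1), 0.3, n)      # clearance, L/h/kg (assumed)
v = rng.lognormal(np.log(1.0), 0.2, n)       # volume of distribution, L/kg
tolerance = 0.125                            # mg/kg tissue tolerance (assumed)

def violation_rate(t_hours):
    conc = (dose / v) * np.exp(-(cl / v) * t_hours)   # tissue concentration
    return np.mean(conc > tolerance)

# Scan candidate withdrawal times until the violation rate drops below 1%.
t = 0
while violation_rate(t) > 0.01:
    t += 1
print(t, "hours")
```

Shifting the clearance or volume-of-distribution distributions toward "diseased" values and re-running the scan shows how withdrawal times lengthen, which is the comparison the analysis above performs with the published population PK model.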
Probabilistic projections of 21st century climate change over Northern Eurasia
NASA Astrophysics Data System (ADS)
Monier, E.; Sokolov, A. P.; Schlosser, C. A.; Scott, J. R.; Gao, X.
2013-12-01
We present probabilistic projections of 21st century climate change over Northern Eurasia using the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an earth system model of intermediate complexity, with a two-dimensional zonal-mean atmosphere, to a human activity model. Regional climate change is obtained by two downscaling methods: a dynamical downscaling, where the IGSM is linked to a three dimensional atmospheric model; and a statistical downscaling, where a pattern scaling algorithm uses climate-change patterns from 17 climate models. This framework allows for key sources of uncertainty in future projections of regional climate change to be accounted for: emissions projections; climate system parameters (climate sensitivity, strength of aerosol forcing and ocean heat uptake rate); natural variability; and structural uncertainty. Results show that the choice of climate policy and the climate parameters are the largest drivers of uncertainty. We also find that different initial conditions lead to differences in patterns of change as large as when using different climate models. Finally, this analysis reveals the wide range of possible climate change over Northern Eurasia, emphasizing the need to consider all sources of uncertainty when modeling climate impacts over Northern Eurasia.
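The statistical-downscaling step via pattern scaling can be sketched as follows; the spatial pattern and the global temperature trajectory are invented numbers, not output from the IGSM or the 17 climate models:

```python
import numpy as np

# Pattern scaling sketch: a fixed regional pattern (deg C of local warming per
# deg C of global-mean warming) is scaled by a global temperature trajectory.
pattern = np.array([[1.4, 1.6], [1.1, 1.2]])   # local warming per 1 C global
global_warming = np.array([0.5, 1.0, 2.0])     # global-mean dT at three times

# Regional projection: broadcast the trajectory over the spatial pattern.
regional = global_warming[:, None, None] * pattern[None, :, :]
print(regional[2])   # regional warming when global-mean dT reaches 2.0 C
```

In the framework described above, the global-mean trajectory comes from the (cheap, zonal-mean) IGSM, while the patterns come from the climate-model ensemble, so structural uncertainty enters through the choice of pattern.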
Probabilistic projections of 21st century climate change over Northern Eurasia
NASA Astrophysics Data System (ADS)
Monier, Erwan; Sokolov, Andrei; Schlosser, Adam; Scott, Jeffery; Gao, Xiang
2013-12-01
We present probabilistic projections of 21st century climate change over Northern Eurasia using the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity with a two-dimensional zonal-mean atmosphere to a human activity model. Regional climate change is obtained by two downscaling methods: a dynamical downscaling, where the IGSM is linked to a three-dimensional atmospheric model, and a statistical downscaling, where a pattern scaling algorithm uses climate change patterns from 17 climate models. This framework allows for four major sources of uncertainty in future projections of regional climate change to be accounted for: emissions projections, climate system parameters (climate sensitivity, strength of aerosol forcing and ocean heat uptake rate), natural variability, and structural uncertainty. The results show that the choice of climate policy and the climate parameters are the largest drivers of uncertainty. We also find that different initial conditions lead to differences in patterns of change as large as when using different climate models. Finally, this analysis reveals the wide range of possible climate change over Northern Eurasia, emphasizing the need to consider these sources of uncertainty when modeling climate impacts over Northern Eurasia.
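The statistical downscaling step rests on pattern scaling: a fixed spatial pattern of local change per degree of global-mean change, taken from a climate model, is scaled by a global-mean warming trajectory. A minimal sketch with an invented pattern and pathway (not IGSM output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized pattern: deg C of local change per deg C of
# global-mean change, on a coarse lat-lon grid.
nlat, nlon = 10, 20
pattern = 1.0 + 0.8 * rng.random((nlat, nlon))

years = np.arange(2000, 2100)
global_dT = 0.03 * (years - 2000)        # made-up 3 C/century pathway

# Regional projection: (year, lat, lon) = global trajectory times pattern
regional = global_dT[:, None, None] * pattern[None, :, :]

print(regional.shape)
```

Uncertainty in emissions or climate sensitivity enters through the global trajectory, while structural uncertainty enters through the choice of pattern, which is why the study draws patterns from 17 different climate models.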
Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model
ERIC Educational Resources Information Center
Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim
2017-01-01
We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
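The per-cow least-squares parameterization can be sketched at toy scale. The hormone curve, parameter ranges, and grid-search optimizer below are illustrative stand-ins for the mechanistic ODE model and the optimization procedure used in the study:

```python
import numpy as np

# Hypothetical progesterone profile over one estrous cycle (toy model)
def p4_model(t, amp, cycle_len):
    return amp * np.sin(np.pi * t / cycle_len) ** 2

# Synthetic 'measurements' for one cow (true amp = 6 ng/mL, cycle = 21 d)
rng = np.random.default_rng(1)
t_obs = np.arange(0, 21, 1.0)
y_obs = p4_model(t_obs, 6.0, 21.0) + rng.normal(0, 0.3, t_obs.size)

# Coarse grid search for the least-squares optimum (stand-in for the
# optimization procedure applied per cow in the study)
amps = np.linspace(3, 9, 61)
lens = np.linspace(18, 24, 61)
sse = np.array([[np.sum((y_obs - p4_model(t_obs, a, L)) ** 2)
                 for L in lens] for a in amps])
i, j = np.unravel_index(sse.argmin(), sse.shape)
print(f"fitted amp = {amps[i]:.2f} ng/mL, cycle length = {lens[j]:.2f} d")
```

The study's second validation criterion maps onto keeping the search within a physiologically plausible parameter box, as the grid bounds do here.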
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One approach to estimating ground-water recharge is to use process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations of wetlands and other landscape features to runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands rather than to the ground-water system, resulting in little sensitivity of simulated recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX).
Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
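The kind of diagnostic statistic such a nonlinear-regression code produces can be illustrated with composite scaled sensitivities (CSS) computed by finite differences. The two-term recharge function and parameter values below are invented for the sketch:

```python
import numpy as np

def recharge(params, precip, temp):
    """Hypothetical watershed recharge: precipitation scaling minus a
    temperature-driven evapotranspiration loss."""
    p1, p2, p3 = params
    return p1 * precip - p2 * np.exp(temp / p3)

precip = np.array([50.0, 80.0, 120.0, 60.0, 90.0])   # mm
temp = np.array([2.0, 5.0, 8.0, 4.0, 6.0])           # deg C
params = np.array([0.3, 1.0, 10.0])

def composite_scaled_sensitivity(params, eps=1e-6):
    """CSS_j = sqrt(mean((dy/db_j * b_j)^2)), via finite differences."""
    y0 = recharge(params, precip, temp)
    css = []
    for j, pj in enumerate(params):
        perturbed = params.copy()
        perturbed[j] += eps * pj
        dy_db = (recharge(perturbed, precip, temp) - y0) / (eps * pj)
        css.append(np.sqrt(np.mean((dy_db * pj) ** 2)))
    return np.array(css)

for name, c in zip(("p1", "p2", "p3"), composite_scaled_sensitivity(params)):
    print(f"{name}: CSS = {c:.2f}")
```

Scaling the derivative by the parameter value makes sensitivities comparable across parameters with different units, which is what allows ranking the dominant controls on simulated recharge.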
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. Generalized likelihood uncertainty estimation (GLUE) was applied to quantify the uncertainty in the fluxes, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the model parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the outcome was unequivocal: Kd, a measure of how much light penetrates the lake, dominates the sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
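The GLUE step can be sketched as follows. The one-parameter stand-in model, informal likelihood, and behavioral cutoff are assumptions for illustration, not CSLM or the study's actual choices:

```python
import numpy as np

rng = np.random.default_rng(7)

obs = 120.0                                  # observed daily flux, W m^-2

def toy_model(kd):
    """Stand-in for CSLM: latent heat flux as a made-up function of Kd."""
    return 150.0 * np.exp(-kd) + 40.0

kd_samples = rng.uniform(0.1, 2.0, 5000)     # prior range for Kd, m^-1
sim = toy_model(kd_samples)
likelihood = np.exp(-0.5 * ((sim - obs) / 10.0) ** 2)   # informal likelihood

behavioral = likelihood > 0.1 * likelihood.max()        # GLUE behavioral cutoff
lo, hi = np.quantile(sim[behavioral], [0.05, 0.95])
print(f"kept {behavioral.sum()} of {kd_samples.size} parameter sets; "
      f"90% prediction bounds: [{lo:.1f}, {hi:.1f}] W m^-2")
```

The width of the resulting prediction bounds is what the abstract ties back to how accurately Kd can be measured: a narrower prior on Kd shrinks the behavioral set and the flux uncertainty with it.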
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
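The global variance-based approach can be illustrated with a pick-freeze estimator of first-order Sobol' indices, run on a cheap toy function standing in for the fast emulator (the function and its coefficients are invented; the real analysis spans 39 parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def emulator(x):
    """Hypothetical cheap surrogate: a 'sea ice volume' response to three
    parameters in [0, 1); the first dominates by construction."""
    return 4.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 2]

n, k = 20000, 3
A = rng.random((n, k))          # two independent sample matrices
B = rng.random((n, k))
fA, fB = emulator(A), emulator(B)
total_var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]          # vary only the i-th input between runs
    S.append(np.mean(fB * (emulator(AB) - fA)) / total_var)

print("first-order Sobol' indices:", np.round(S, 3))
```

Because every index comes from the same joint sample of the full parameter space, interactions are captured in a way that one-at-a-time perturbations cannot, which is the contrast the abstract draws with previous studies.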
A Path Model for Evaluating Dosing Parameters for Children With Cerebral Palsy
Christy, Jennifer B.; Heathcock, Jill C.; Kolobe, Thubi H.A.
2014-01-01
Dosing of pediatric rehabilitation services for children with cerebral palsy (CP) has been identified as a national priority. Establishing dosing parameters for pediatric physical therapy interventions is critical for informing clinical decision making, health policy, and guidelines for reimbursement. The purpose of this perspective article is to describe a path model for evaluating dosing parameters of interventions for children with CP. The model is intended for dose-related and effectiveness studies of pediatric physical therapy interventions. The premise of the model is: Intervention type (focus on body structures, activity, or the environment) acts on a child first through the family, then through the dose (frequency, intensity, time), to yield structural and behavioral changes. As a result, these changes are linked to improvements in functional independence. Community factors affect dose as well as functional independence (performance and capacity), influencing the relationships between type of intervention and intervention responses. The constructs of family characteristics; child characteristics (eg, age, level of severity, comorbidities, readiness to change, preferences); plastic changes in bone, muscle, and brain; motor skill acquisition; and community access warrant consideration from researchers who are designing intervention studies. Multiple knowledge gaps are identified, and a framework is provided for conceptualizing dosing parameters for children with CP. PMID:24231231
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
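A minimal random-walk version of a sequential sampling model shows how the Threshold and Criterion parameters shape accuracy, response time, and bias. The parameter values are illustrative, not the fitted ones from the study:

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_trial(drift, threshold, criterion, noise=1.0, dt=0.01):
    """Accumulate noisy evidence from the criterion (start point) until a
    +/- threshold is crossed; returns (responded 'conflict', response time)."""
    x, t = criterion, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= threshold, t

def summarize(threshold, criterion, n=500, drift=0.5):
    trials = [simulate_trial(drift, threshold, criterion) for _ in range(n)]
    p_conflict = np.mean([c for c, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    return p_conflict, mean_rt

for th, cr in [(1.0, 0.0), (2.0, 0.0), (1.0, 0.5)]:
    p, rt = summarize(th, cr)
    print(f"threshold={th}, criterion={cr}: P(conflict)={p:.2f}, RT={rt:.2f}s")
```

Raising the threshold trades speed for accuracy (longer response times, more consistent choices), while shifting the starting criterion biases responses without the same cost in time, which parallels the dissociation the experiment manipulated through instructions.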
Yousefzadeh, Behrooz; Hodgson, Murray
2012-09-01
A beam-tracing model was used to study the acoustical responses of three empty, rectangular rooms with different boundary conditions. The model is wave-based (accounting for sound phase) and can be applied to rooms with extended-reaction surfaces that are made of multiple layers of solid, fluid, or poroelastic materials; the acoustical properties of these surfaces are calculated using Biot theory. Three room-acoustical parameters were studied in various room configurations: sound strength, reverberation time, and RApid Speech Transmission Index. The main objective was to investigate the effects of modeling surfaces as either local or extended reaction on predicted values of these three parameters. Moreover, the significance of modeling interference effects was investigated, including the study of sound phase change on surface reflection. Modeling surfaces as being of local or extended reaction was found to be significant for surfaces consisting of multiple layers, specifically when one of the layers is air. For multilayers of solid materials with an air cavity, this was most significant around their mass-air-mass resonance frequencies. Accounting for interference effects made significant changes in the predicted values of all parameters. Modeling phase change on reflection, on the other hand, was found to be relatively much less significant.
NASA Astrophysics Data System (ADS)
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainties limit its capacity to inform adaptation strategies. Increasing attention has been given to uncertainties arising from climate scenarios, input data, and model structure, but uncertainties due to model parameters and calibration remain largely unquantified. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) to simulate global gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five calibration strategies, each shifting one specific parameter per simulation, to understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. Nevertheless, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different parameter values result in modest uncertainty at the global level, with differences in global yield change of less than 30% by the 2080s. This uncertainty decreases when model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, indicating possible types and locations for adaptation.
Ferizi, Uran; Rossi, Ignacio; Lee, Youjin; Lendhey, Matin; Teplensky, Jason; Kennedy, Oran D; Kirsch, Thorsten; Bencardino, Jenny; Raya, José G
2017-07-01
We establish a mechanical injury model for articular cartilage to assess the sensitivity of diffusion tensor imaging (DTI) in detecting cartilage damage early in time. Mechanical injury provides a more realistic model of cartilage degradation compared with commonly used enzymatic degradation. Nine cartilage-on-bone samples were obtained from patients undergoing knee replacement. DTI at 3 Tesla (0.18 × 0.18 × 1 mm³) was performed before, and 1 week and 2 weeks after, injury (zero, mild, or severe), with the clinical radial spin-echo DTI (RAISED) sequence used in our hospital. We performed stress-relaxation tests and used a quasilinear-viscoelastic (QLV) model to characterize cartilage mechanical properties. Serial histology sections were stained with Safranin-O and given an OARSI grade. We then correlated the changes in DTI parameters with the changes in QLV parameters and OARSI grades. After severe injury, the mean diffusivity increased after 1 and 2 weeks, whereas the fractional anisotropy decreased after 2 weeks (P < 0.05). The QLV parameters and OARSI grades of the severe injury group differed from baseline with statistical significance. The changes in mean diffusivity across all samples correlated with the changes in OARSI grade (r = 0.72) and QLV parameters (r = -0.75). DTI is sensitive in tracking early changes after mechanical injury, and its changes correlate with changes in biomechanics and histology. Magn Reson Med 78:69-78, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
Variation of the Mn I 539.4 nm line with the solar cycle
NASA Astrophysics Data System (ADS)
Danilovic, S.; Solanki, S. K.; Livingston, W.; Krivova, N.; Vince, I.
2016-03-01
Context. As a part of the long-term program at Kitt Peak National Observatory (KPNO), the Mn I 539.4 nm line has been observed for nearly three solar cycles using the McMath telescope and the 13.5 m spectrograph in double-pass mode. These full-disk spectrophotometric observations revealed an unusually strong change of this line's parameters over the solar cycle. Aims: Optical pumping by the Mg II k line was originally proposed to explain these variations. More recent studies have proposed that this is not required and that magnetic variability (i.e., the changes in solar atmospheric structure due to faculae) might explain these changes. Magnetic variability is also the mechanism that drives variations in total solar irradiance (TSI). With this work we investigate this proposition quantitatively by using the same model that was earlier successfully employed to reconstruct the irradiance. Methods: We reconstructed the changes in the line parameters using the model SATIRE-S, which takes only variations of the daily surface distribution of the magnetic field into account. We applied exactly the same model atmospheres and value of the free parameter as were used in previous solar irradiance reconstructions to now model the variation in the Mn I 539.4 nm line profile and in neighboring Fe I lines. We compared the results of the theoretical model with KPNO observations. Results: The changes in the Mn I 539.4 nm line and a neighboring Fe I 539.52 nm line over approximately three solar cycles are reproduced well by the model without additionally tweaking the model parameters, if changes made to the instrument setup are taken into account. The model slightly overestimates the change for the strong Fe I 539.32 nm line. Conclusions: Our result confirms that optical pumping of the Mn I 539.4 nm line by Mg II k is not the main cause of its solar cycle change.
It also provides independent confirmation of solar irradiance models which are based on the assumption that irradiance variations are caused by the evolution of the solar surface magnetic flux. The result obtained here also supports the spectral irradiance variations computed by these models.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to the passage of the solute front.
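The late-time sensitivity to the decay parameter (statement 3 above) can be illustrated with a finite-difference sensitivity computed on a simplified decaying-front solution. The solution below is a rough approximation chosen only for illustration, not the paper's full analytical model:

```python
import math

def conc(x, t, v=1.0, D=0.1, lam=0.02):
    """Simplified decaying advective front: decay factor times an
    advection-dispersion front (illustrative approximation only)."""
    front = 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))
    return math.exp(-lam * t) * front

def sensitivity_to_decay(x, t, lam=0.02, eps=1e-6):
    """Finite-difference sensitivity dC/d(lambda)."""
    return (conc(x, t, lam=lam + eps) - conc(x, t, lam=lam)) / eps

x = 10.0   # observation point; the front arrives near t = x/v = 10
for t in (6.0, 10.0, 20.0, 30.0):
    print(f"t={t:5.1f}: C={conc(x, t):.3f}, dC/dlam={sensitivity_to_decay(x, t):+.3f}")
```

Before the front arrives the concentration, and hence the sensitivity, is near zero; after arrival the magnitude of dC/dlam keeps growing with time, which is why observations late in the passage of the front carry the most information about the decay parameter.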
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research, a further-developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated with the same procedure regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data).
In our research, different likelihood function formulations were used in order to examine the effect of the model goodness metric on calibration. The different likelihoods are different functions of the RMSE (root mean squared error) weighted by measurement uncertainty: exponential, linear, quadratic, and linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found to be responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
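The likelihood formulations can be sketched as functions of the uncertainty-weighted RMSE. The exact functional forms below are assumptions for illustration, not necessarily the study's definitions:

```python
import numpy as np

def weighted_rmse(sim, obs, sigma):
    """RMSE of residuals weighted by measurement uncertainty."""
    return np.sqrt(np.mean(((sim - obs) / sigma) ** 2))

def likelihoods(sim, obs, sigma):
    r = weighted_rmse(sim, obs, sigma)
    corr = np.corrcoef(sim, obs)[0, 1]
    return {
        "exponential": np.exp(-r),
        "linear": max(0.0, 1.0 - r),
        "quadratic": max(0.0, 1.0 - r ** 2),
        "linear/corr": max(0.0, 1.0 - r) * max(0.0, corr),
    }

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma = np.full(5, 0.5)       # measurement uncertainty for each observation
good = obs + 0.1              # simulation close to the data
poor = obs + 1.0              # simulation with a large bias
for name, sim in [("good fit", good), ("poor fit", poor)]:
    print(name, {k: round(float(v), 3) for k, v in likelihoods(sim, obs, sigma).items()})
```

The choice matters because the linear and quadratic forms clip to zero once the weighted RMSE exceeds their cutoff, discarding poor-but-informative runs, while the exponential form degrades smoothly, which is one plausible reason it behaved most robustly.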
NASA Astrophysics Data System (ADS)
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
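Fitting a curve whose parameters are directly the biologically meaningful quantities (Topt, Pmax) can be sketched as follows. The Gaussian form, synthetic data, and grid-search fit are illustrative stand-ins, not one of the twelve published models evaluated in the study:

```python
import numpy as np

def photosynthesis(T, p_max, t_opt, width):
    """Hypothetical thermal performance curve parameterized directly by
    the maximum rate (p_max) and the thermal optimum (t_opt)."""
    return p_max * np.exp(-((T - t_opt) / width) ** 2)

# Synthetic observations (true Pmax = 8, Topt = 31 C) with noise
rng = np.random.default_rng(5)
T_obs = np.arange(15, 41, 2.5)
y_obs = photosynthesis(T_obs, 8.0, 31.0, 7.0) + rng.normal(0, 0.3, T_obs.size)

# Grid-search least squares: the best fit directly yields Topt and Pmax,
# the quantities needed to assess vulnerability to warming.
best = min(
    ((p, t, w) for p in np.linspace(5, 11, 31)
               for t in np.linspace(25, 37, 49)
               for w in np.linspace(4, 10, 25)),
    key=lambda th: np.sum((y_obs - photosynthesis(T_obs, *th)) ** 2),
)
print(f"Pmax = {best[0]:.1f}, Topt = {best[1]:.2f} C, width = {best[2]:.2f} C")
```

Because Topt and Pmax are fitted directly rather than derived from opaque coefficients, they remain stable and transferable, the second selection criterion the abstract argues for.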
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos sea ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol' sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
Technique for predicting high-frequency stability characteristics of gaseous-propellant combustors
NASA Technical Reports Server (NTRS)
Priem, R. J.; Jefferson, Y. S. Y.
1973-01-01
A technique for predicting the stability characteristics of a gaseous-propellant rocket combustion system is developed based on a model that assumes coupling between the flow through the injector and the oscillating chamber pressure. The theoretical model uses a lumped parameter approach for the flow elements in the injection system plus wave dynamics in the combustion chamber. The injector flow oscillations are coupled to the chamber pressure oscillations with a delay time. Frequency and decay (or growth) rates are calculated for various combustor design and operating parameters to demonstrate the influence of various parameters on stability. Changes in oxidizer design parameters had a much larger influence on stability than a similar change in fuel parameters. A complete description of the computer program used to make these calculations is given in an appendix.
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To exploit this opportunity, it is important to identify, prior to satellite-based hydrologic model calibration, the model parameters that can change simulated spatial patterns. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow improves only the streamflow simulations and does not reduce the spatial errors in AET. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.
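As a sketch of the Latin hypercube ingredient mentioned above: each parameter's range is split into N equal strata, exactly one point is drawn per stratum, and the strata are shuffled independently per parameter. The parameter bounds and sample count below are invented for illustration, not mHM's actual ranges.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: for each parameter, one draw per stratum,
    with stratum order shuffled independently per parameter."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (hi - lo) / n_samples
        columns.append([lo + (s + rng.random()) * width for s in strata])
    # transpose columns into sample points
    return [list(point) for point in zip(*columns)]

# 10 samples of two hypothetical parameters on invented bounds
samples = latin_hypercube(10, [(0.0, 1.0), (5.0, 50.0)])
```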
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
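To make the drift concrete: under a two-parameter logistic (2PL) IRT model, a shift in an anchor item's difficulty between administrations changes the response probability at every fixed ability level. The item parameters below are hypothetical illustrations, not TIMSS calibrations.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT probability of a correct response for ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# hypothetical anchor item whose difficulty drifts by +0.5 between
# administrations: the item becomes harder at every ability level
p_1999 = p_correct(0.0, a=1.2, b=0.0)
p_2007 = p_correct(0.0, a=1.2, b=0.5)
```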
Taylor, Brian A.; Elliott, Andrew M.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason
2011-01-01
In order to investigate simultaneous MR temperature imaging and direct validation of tissue damage during thermal therapy, temperature-dependent signal changes in proton resonance frequency (PRF) shifts, R2* values, and T1-weighted amplitudes are measured using a single technique in ex vivo tissue heated with a 980-nm laser at 1.5T and 3.0T. Using a multi-gradient echo acquisition and signal modeling with the Steiglitz-McBride algorithm, the temperature sensitivity coefficient (TSC) values of these parameters are measured in each tissue at high spatiotemporal resolutions (1.6 × 1.6 × 4 mm³, ≤5 s) over the range of 25-61 °C. Non-linear changes in MR parameters are examined and correlated with an Arrhenius rate dose model of thermal damage. Using logistic regression, the probability of changes in these parameters is calculated as a function of thermal dose to determine if changes correspond to thermal damage. Temperature calibrations demonstrate TSC values that are consistent with previous studies. Temperature sensitivity of R2* and, in some cases, T1-weighted amplitudes are statistically different before and after thermal damage occurred. Significant changes in the slopes of R2* as a function of temperature are observed. Logistic regression analysis shows that these changes could be accurately predicted using the Arrhenius rate dose model (Ω=1.01±0.03), thereby showing that the changes in R2* could be direct markers of protein denaturation. Overall, by using a chemical shift imaging technique with simultaneous temperature estimation, R2* mapping and T1-W imaging, it is shown that changes in the sensitivity of R2* and, to a lesser degree, T1-W amplitudes are measured in ex vivo tissue when thermal damage is expected to occur according to Arrhenius rate dose models. These changes could possibly be used for direct validation of thermal damage in contrast to model-based predictions.
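An Arrhenius rate dose of the kind used for the damage correlation accumulates Ω = Σ A·exp(-Ea/(R·T))·Δt over the sampled temperature history, with Ω = 1 conventionally marking the damage threshold. A minimal sketch, assuming a frequency factor and activation energy of the order often quoted for thermal protein denaturation (assumptions, not the paper's calibration):

```python
import math

# ASSUMED kinetic constants: frequency factor A (1/s) and activation
# energy EA (J/mol); R is the gas constant.
A, EA, R = 3.1e98, 6.28e5, 8.314

def arrhenius_dose(temps_c, dt):
    """Accumulate Omega = sum A*exp(-EA/(R*T))*dt over a sampled
    temperature history (temps in deg C, dt in seconds)."""
    return sum(A * math.exp(-EA / (R * (t + 273.15))) * dt for t in temps_c)

# 60 s at body temperature accrues negligible dose; 60 s at 60 C
# far exceeds the Omega = 1 damage threshold
dose_body = arrhenius_dose([37.0] * 12, dt=5.0)
dose_hot = arrhenius_dose([60.0] * 12, dt=5.0)
```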
NASA Astrophysics Data System (ADS)
Kilb, Debi
2003-01-01
The 1992 M7.3 Landers earthquake may have played a role in triggering the 1999 M7.1 Hector Mine earthquake as suggested by their close spatial (~20 km) proximity. Current investigations of triggering by static stress changes produce differing conclusions when small variations in parameter values are employed. Here I test the hypothesis that large-amplitude dynamic stress changes, induced by the Landers rupture, acted to promote the Hector Mine earthquake. I use a flat layer reflectivity method to model the Landers earthquake displacement seismograms. By requiring agreement between the model seismograms and data, I can constrain the Landers main shock parameters and velocity model. A similar reflectivity method is used to compute the evolution of stress changes. I find a strong positive correlation between the Hector Mine hypocenter and regions of large (>4 MPa) dynamic Coulomb stress changes (peak Δσf(t)) induced by the Landers main shock. A positive correlation is also found with large dynamic normal and shear stress changes. Uncertainties in peak Δσf(t) (1.3 MPa) are only 28% of the median value (4.6 MPa) determined from an extensive set (160) of model parameters. Therefore the correlation with dynamic stresses is robust to a range of Hector Mine main shock parameters, as well as to variations in the friction and Skempton's coefficients used in the calculations. These results imply dynamic stress changes may be an important part of earthquake triggering, such that large-amplitude stress changes alter the properties of an existing fault in a way that promotes fault failure.
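For reference, the static analogue of the Coulomb stress change used here is commonly written Δσf = Δτ + μ′Δσn (tension positive), with an effective friction coefficient μ′ = μ(1 − B) folding in Skempton's coefficient B. A minimal sketch with invented stress values; the convention and coefficients are illustrative assumptions, not the paper's calculation.

```python
def coulomb_stress_change(d_shear, d_normal, mu=0.6, skempton=0.5):
    """Coulomb failure stress change, tension-positive convention:
    d_sigma_f = d_tau + mu' * d_sigma_n with mu' = mu * (1 - B).
    Positive values bring the receiver fault closer to failure."""
    mu_eff = mu * (1.0 - skempton)
    return d_shear + mu_eff * d_normal

# hypothetical stresses in MPa: shear increase plus unclamping
delta_cfs = coulomb_stress_change(d_shear=3.0, d_normal=2.0)
```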
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for an adaptive change-point search that divides the travel time series into several sections of similar state. A linear weight function is then used to fit the travel time sequence and forecast travel time. The results show that the model achieves high accuracy in travel time forecasting.
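A minimal stand-in for the detection step: flag the points where the first-order difference of the travel time series exceeds a threshold, splitting the series into quasi-stationary sections. The series and threshold below are invented; the paper's adaptive search and ARIMA fitting are not reproduced.

```python
def change_points(series, threshold):
    """Return indices where the first-order difference of a series
    exceeds `threshold`, marking candidate change points."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# synthetic travel times (minutes): free flow, then congestion onset
times = [10, 11, 10, 10, 25, 26, 25, 27]
cps = change_points(times, threshold=5)  # congestion begins at index 4
```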
Steil, Garry M; Hipszer, Brian; Reifman, Jaques
2010-05-01
One year after its initial meeting, the Glycemia Modeling Working Group reconvened during the 2009 Diabetes Technology Meeting in San Francisco, CA. The discussion, involving 39 scientists, again focused on the need for individual investigators to have access to the clinical data required to develop and refine models of glucose metabolism, the need to understand the differences among the distinct models and control algorithms, and the significance of day-to-day subject variability. The key conclusion was that model-based comparisons of different control algorithms, or the models themselves, are limited by the inability to access individual model-patient parameters. It was widely agreed that these parameters, as opposed to the average parameters that are typically reported, are necessary to perform such comparisons. However, the prevailing view was that, if investigators were to make the parameters available, it would limit their ability (and that of their institution) to benefit from the invested work in developing their models. A general agreement was reached regarding the importance of each model having an insulin pharmacokinetic/pharmacodynamic profile that is not different from profiles reported in the literature (88% of the respondents agreed that the model should have similar curves or be analyzed separately) and the importance of capturing intraday variance in insulin sensitivity (91% of the respondents indicated that this could result in changes in fasting glucose of ≥15%, with 52% of the respondents believing that the variability could effect changes of ≥30%). Seventy-six percent of the participants indicated that high-fat meals were thought to effect changes in other model parameters in addition to gastric emptying.
There was also widespread consensus as to how a closed-loop controller should respond to day-to-day changes in model parameters (with 76% of the participants indicating that fasting glucose should be within 15% of target, with 30% of the participants believing that it should be at target). The group was evenly divided as to whether the glucose sensor per se continues to be the major obstacle in achieving closed-loop control. Finally, virtually all participants agreed that a future two-day workshop should be organized to compare, contrast, and understand the differences among the different models and control algorithms. (c) 2010 Diabetes Technology Society.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
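The quantities under discussion can be sketched numerically: λ as the dominant eigenvalue of the projection matrix, and the elasticity e_ij = (a_ij/λ)·∂λ/∂a_ij, the proportional response of λ to a proportional change in entry a_ij. The 2×2 matrix below is a toy example, not the killer whale model; the known result that elasticities of λ sum to 1 serves as a check.

```python
def growth_rate(matrix, iters=200):
    """Dominant eigenvalue (finite rate of increase, lambda) of a
    nonnegative projection matrix, by power iteration."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def elasticity(matrix, i, j, h=1e-6):
    """Elasticity e_ij = (a_ij / lambda) * d(lambda)/d(a_ij), with the
    sensitivity d(lambda)/d(a_ij) taken by finite difference."""
    lam = growth_rate(matrix)
    bumped = [row[:] for row in matrix]
    bumped[i][j] += h
    sens = (growth_rate(bumped) - lam) / h
    return matrix[i][j] / lam * sens

# toy 2-stage matrix (invented rates): adult fecundity 2.0,
# juvenile survival 0.3, adult survival 0.5
L = [[0.5, 2.0], [0.3, 0.0]]
lam = growth_rate(L)
e_total = sum(elasticity(L, i, j) for i in range(2) for j in range(2))
```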
Control and Diagnostic Model of Brushless Dc Motor
NASA Astrophysics Data System (ADS)
Abramov, Ivan V.; Nikitin, Yury R.; Abramov, Andrei I.; Sosnovich, Ella V.; Božek, Pavol
2014-09-01
A simulation model of brushless DC motor (BLDC) control and diagnostics is considered. The model has been developed using the freeware package "Modeling in Technical Devices". Faults and diagnostic parameters of the BLDC are analyzed. A logical-linguistic diagnostic model of the BLDC has been developed on the basis of fuzzy logic. The calculated rules determine the dependence of technical condition on diagnostic parameters, their trends, and the utilized lifetime of the BLDC. Experimental results of BLDC technical condition diagnostics are discussed. It is shown that, in the course of BLDC degradation, changes in motor condition depend on the diagnostic parameter values.
NASA Astrophysics Data System (ADS)
Harpold, R. E.; Urban, T. J.; Schutz, B. E.
2008-12-01
Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
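Repeat-track analysis ultimately reduces, per grid cell, to a small least-squares fit of elevation against time. The sketch below fits only an intercept and a linear rate to synthetic along-track elevations (the years, elevations, and rate are invented); seasonal sine/cosine or quadratic terms would enter the same way via a larger normal-equations solve.

```python
def fit_trend(t, h):
    """Closed-form least-squares fit of h = a + b*t; returns (a, b),
    where b is the elevation change rate."""
    n = len(t)
    tm = sum(t) / n
    hm = sum(h) / n
    b = (sum((ti - tm) * (hi - hm) for ti, hi in zip(t, h))
         / sum((ti - tm) ** 2 for ti in t))
    return hm - b * tm, b

# synthetic, noise-free elevations thinning at -0.25 m/yr (hypothetical)
years = [2003.2, 2004.1, 2005.0, 2005.9, 2006.8]
elev = [102.0 - 0.25 * (y - 2003.0) for y in years]
intercept, rate = fit_trend(years, elev)
```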
Trybula, Elizabeth M.; Cibin, Raj; Burks, Jennifer L.; ...
2014-06-13
The Soil and Water Assessment Tool (SWAT) is increasingly used to quantify hydrologic and water quality impacts of bioenergy production, but crop-growth parameters for candidate perennial rhizomatous grasses (PRG) Miscanthus × giganteus and upland ecotypes of Panicum virgatum (switchgrass) are limited by the availability of field data. Crop-growth parameter ranges and suggested values were developed in this study using agronomic and weather data collected at the Purdue University Water Quality Field Station in northwestern Indiana. During the process of parameterization, the comparison of measured data with conceptual representation of PRG growth in the model led to three changes in the SWAT 2009 code: the harvest algorithm was modified to maintain belowground biomass over winter, plant respiration was extended via modified-DLAI to better reflect maturity and leaf senescence, and nutrient uptake algorithms were revised to respond to temperature, water, and nutrient stress. Parameter values and changes to the model resulted in simulated biomass yield and leaf area index consistent with reported values for the region. Code changes in the SWAT model improved nutrient storage during dormancy period and nitrogen and phosphorus uptake by both switchgrass and Miscanthus.
Symons, J E; Hawkins, D A; Fyhrie, D P; Upadhyaya, S K; Stover, S M
2017-09-01
The metacarpophalangeal joint (fetlock) is the most commonly affected site of racehorse injury, with multiple observed pathologies consistent with extreme fetlock dorsiflexion. Race surface mechanics affect musculoskeletal structure loading and injury risk because surface forces applied to the hoof affect limb motions. Race surface mechanics are a function of controllable factors. Thus, race surface design has the potential to reduce the incidence of musculoskeletal injury through modulation of limb motions. However, the relationship between race surface mechanics and racehorse limb motions is unknown. The objective was to determine the effect of changing race surface and racehorse limb model parameters on distal limb motions. A sensitivity analysis of in silico fetlock motion to changes in race surface and racehorse limb parameters was performed using a validated, integrated racehorse and race surface computational model. Fetlock motions were determined during gallop stance from simulations on virtual surfaces with differing average vertical stiffness, upper layer (e.g. cushion) depth and linear stiffness, horizontal friction, tendon and ligament mechanics, as well as fetlock position at heel strike. Upper layer depth produced the greatest change in fetlock motion, with lesser depths yielding greater fetlock dorsiflexion. Lesser fetlock changes were observed for changes in lower layer (e.g. base or pad) mechanics (nonlinear), as well as palmar ligament and tendon stiffness. Horizontal friction and fetlock position contributed less than 1° change in fetlock motion. Simulated fetlock motions are specific to one horse's anatomy reflected in the computational model. Anatomical differences among horses may affect the magnitude of limb flexion, but will likely have similar limb motion responses to varied surface mechanics. Race surface parameters affected by maintenance produced greater changes in fetlock motion than other parameters studied.
Simulations can provide evidence to inform race surface design and management to reduce the incidence of injury. © 2017 EVJ Ltd.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
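The FOD equation itself is compact: in a given year, each earlier year's waste mass M contributes k·L0·M·e^(-k·age) to generation. A minimal LandGEM-style sketch with invented weighted-average inputs (the k, L0, and tonnages are assumptions for illustration); summed over a long horizon, cumulative generation approaches total mass × L0, which serves as a check.

```python
import math

def fod_methane(annual_mass, k, L0, year):
    """First-order decay methane generation (m^3 CH4/yr) in `year`:
    each prior year's mass M (Mg) contributes k*L0*M*exp(-k*age)."""
    q = 0.0
    for i, m in enumerate(annual_mass):
        age = year - i
        if age > 0:
            q += k * L0 * m * math.exp(-k * age)
    return q

# hypothetical weighted-average single-phase inputs: k = 0.05 1/yr,
# L0 = 100 m^3 CH4/Mg, ten years of 1000 Mg/yr waste acceptance
masses = [1000.0] * 10
q_year10 = fod_methane(masses, k=0.05, L0=100.0, year=10)
```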
Numerical modeling techniques for flood analysis
NASA Astrophysics Data System (ADS)
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and possible improvements over them through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grids, were found, which could be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open-channel flows but not for floodplains. Hence, it is suggested that a 3D model for floodplains be developed, considering all the hydrological and high-resolution topographic parameters discussed in this review, to better establish the causes and effects of flooding.
Simulations of an epidemic model with parameter variation analysis for the dengue fever
NASA Astrophysics Data System (ADS)
Jardim, C. L. T. F.; Prates, D. B.; Silva, J. M.; Ferreira, L. A. F.; Kritz, M. V.
2015-09-01
Mathematical models for describing and analyzing epidemics are widely found in the literature. Models that represent such descriptions with differential equations are especially sensitive to the parameters involved in the modelling. In this work, the well-established SIR model is analyzed when applied to a dengue fever epidemic scenario. This choice is motivated by useful extensions of the original model that allow the inclusion of different aspects of dengue fever, such as its seasonal characteristics, the presence of more than one strain of the vector, and the biological factor of cross-immunity. The analysis and interpretation of results are performed through numerical solutions of the model, with special attention given to the different solutions generated by different parameter values. Slight variations are applied to those parameters, either dynamically or statically, mimicking hypothesized changes in the biological scenario of the simulation and providing a means of evaluating how those changes would affect the outcomes of the epidemic in a population.
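The parameter-variation experiment can be sketched with the basic SIR equations and forward-Euler integration (the transmission and recovery rates below are invented, and the dengue-specific extensions such as seasonality, multiple strains, and cross-immunity are not included): a higher transmission rate should yield a larger epidemic peak and final size.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR model on population fractions:
    ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

def simulate(beta, gamma, days=200, dt=0.1, i0=1e-3):
    """Run the epidemic and return (peak infected fraction, final size)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return peak, r

# hypothetical parameter variation in the transmission rate beta
peak_low, final_low = simulate(beta=0.3, gamma=0.1)
peak_high, final_high = simulate(beta=0.5, gamma=0.1)
```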
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
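One common regression technique for GSA is the standardized regression coefficient (SRC), sketched below on a toy two-parameter growth function (the model, parameter ranges, and impact multiplier are all invented). Reusing the same parameter draws under the baseline and impact scenarios keeps the scenario effect from being confounded with parameter uncertainty, in the spirit of the separation the study describes.

```python
import random

def src(xs, ys):
    """Standardized regression coefficient of y on a single input x
    (equivalent to the correlation; valid for independent inputs)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / vx * (vx ** 0.5) / (vy ** 0.5)

# toy "population model": growth = survival * fecundity, with invented
# parameter ranges; the same draws are reused under an impact scenario
rng = random.Random(1)
s = [rng.uniform(0.6, 0.9) for _ in range(2000)]
f = [rng.uniform(1.0, 2.0) for _ in range(2000)]
base = [si * fi for si, fi in zip(s, f)]
impact = [0.8 * si * fi for si, fi in zip(s, f)]  # hypothetical 20% impact
delta = [b - i for b, i in zip(base, impact)]     # scenario effect per draw
```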
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. 
The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is no marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs sharing the same common structure of the whole-body PBPK model and having similar model parameters. Results also show that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea of the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
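The second approach above, the normalized sensitivity function (the relative change in a tissue concentration per relative change in a parameter), can be evaluated by central finite differences. The one-tissue perfusion model below is a deliberately minimal stand-in with invented flows, volumes and clearances, not the 14-tissue PBPK model of the study:

```python
def tissue_conc(t, Q, V, Kp, CL, dose=1.0):
    """Hypothetical one-tissue perfusion-limited model: blood + tissue,
    elimination from blood, integrated by forward Euler."""
    Vb, Vt = V
    Cb, Ct = dose / Vb, 0.0
    dt = 0.001
    for _ in range(int(t / dt)):
        dCb = (Q * Ct / Kp - Q * Cb - CL * Cb) / Vb
        dCt = Q * (Cb - Ct / Kp) / Vt
        Cb += dt * dCb
        Ct += dt * dCt
    return Ct

def normalized_sensitivity(f, p, dp=1e-4):
    """S = (dC/dp) * (p / C): relative change in a state per relative
    change in a parameter, by central finite differences."""
    c0 = f(p)
    up, down = f(p * (1 + dp)), f(p * (1 - dp))
    return (up - down) / (2 * p * dp) * (p / c0)

# Sensitivity of the tissue concentration at t = 1 to the tissue-to-unbound
# plasma partition coefficient Kp (all numbers illustrative)
S_Kp = normalized_sensitivity(
    lambda kp: tissue_conc(1.0, Q=1.0, V=(1.0, 2.0), Kp=kp, CL=0.5), 4.0)
print(round(S_Kp, 3))
```

Before equilibrium, a larger partition coefficient retains more drug in the tissue, so the autosensitivity to Kp comes out positive in this sketch.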
NASA Astrophysics Data System (ADS)
Goswami, B. B.; Khouider, B.; Krishna, R. P. M.; Mukhopadhyay, P.; Majda, A.
2017-12-01
A stochastic multicloud (SMCM) cumulus parameterization is implemented in the National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) model; the resulting model is named CFSsmcm. We present results from a systematic attempt to understand the CFSsmcm model's sensitivity to the SMCM parameters. To assess this sensitivity, we analyzed a set of 14 five-year-long climate simulations produced by the CFSsmcm model. The model is found to be resilient to minor changes in the parameter values. The middle-tropospheric dryness (MTD) and the stratiform cloud decay timescale are found to be the most crucial parameters in the SMCM formulation in the CFSsmcm model.
The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!
NASA Astrophysics Data System (ADS)
Thomas, M. E.; Neuberg, J. W.
2012-04-01
Many conduit flow models now exist and some of these models are becoming extremely complicated: conducted in three dimensions and incorporating the physics of compressible three-phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is possible, for instance, to propose the involvement of sub-surface dwelling magma trolls as an explanation for a change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions unless they were absolutely necessary. While the understanding of individual, often small-scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. Several viscosity models exist: how do they differ, and can flow models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well constrained are these variables? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exert a strong control on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models, a 20 K decrease in the temperature of the melt results in a greater than 100% increase in the melt viscosity.
With changes of this magnitude resulting from small alterations in the basic governing parameters, are changes in individual conduit processes rendered of secondary importance? As important as the melt viscosity is to any conduit flow model, it is a meaningless parameter unless there is a conduit through which to flow. The shape and size of a volcanic conduit are even less well constrained than the magma's temperature and water content, but have an equally important role to play. Rudimentary changes, such as simply increasing or decreasing the radius of a perfectly cylindrical conduit, can have large effects, and when coupled with the range of magma viscosities that may be flowing through them can completely change interpretations. Although we present results specifically concerning the variables of magma temperature and water content and the radius of a cylindrical conduit, this is just the start: by systematically identifying the effect each parameter has on conduit flow models, it will be possible to identify which areas are most in need of future attention.
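The quoted temperature effect is easy to reproduce with a generic Vogel-Fulcher-Tammann (VFT) viscosity law; the constants below are hypothetical placeholders rather than any of the published melt viscosity models:

```python
def vft_viscosity(T, A=-4.55, B=5000.0, C=600.0):
    """Vogel-Fulcher-Tammann melt viscosity (Pa*s), log10(eta) = A + B/(T - C).
    A, B, C are illustrative placeholders; real fits depend strongly on magma
    composition and water content, which is the abstract's point."""
    return 10.0 ** (A + B / (T - C))

eta_hot = vft_viscosity(1100.0)
eta_cold = vft_viscosity(1080.0)
increase = (eta_cold / eta_hot - 1.0) * 100.0
print(round(increase, 1))
```

With these placeholder constants, a 20 K cooling from 1100 K raises the viscosity by well over 100%, illustrating why poorly constrained "ball-park" temperatures dominate the model response.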
Guan, Zheng; Zhang, Guan-min; Ma, Ping; Liu, Li-hong; Zhou, Tian-yan; Lu, Wei
2010-07-01
In this study, we evaluated the influence of the variance of each parameter on the output of a tacrolimus population pharmacokinetic (PopPK) model in Chinese healthy volunteers, using the Fourier amplitude sensitivity test (FAST). In addition, we estimated the sensitivity index over the whole course of blood sampling, designed different sampling schedules, and evaluated the quality of parameter estimation and the efficiency of prediction. We observed that, apart from CL1/F, the sensitivity indices of the other four parameters (V1/F, V2/F, CL2/F and k(a)) in the tacrolimus PopPK model were relatively high and changed rapidly over time. As the variance of k(a) increased, its sensitivity index increased markedly, associated with a significant decrease in the sensitivity indices of the other parameters and an obvious shift in peak time. According to NONMEM simulations and comparison of the fitting results, the sampling time points designed according to FAST outperformed the alternatives. This suggests that FAST can assess the sensitivity of model parameters effectively and assist in the design of clinical sampling times and the construction of PopPK models.
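The core of FAST — driving each parameter along a search curve at its own frequency and reading its variance share off the Fourier spectrum of the output — can be sketched as follows. The two-input model and the chosen frequencies are hypothetical, not the five-parameter tacrolimus PopPK model:

```python
import numpy as np

def fast_first_order(model, omegas, n=10001):
    """Minimal Fourier Amplitude Sensitivity Test: each input oscillates at
    its own integer frequency; the output-power share at that frequency and
    its first few harmonics is the input's first-order sensitivity index."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi  # search curves in [0, 1]
    y = model(X)
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2
    total = power[1:].sum()
    return [power[[h * w for h in (1, 2, 3, 4)]].sum() / total for w in omegas]

# Hypothetical two-parameter model dominated by its first input
model = lambda X: 4.0 * X[0] + 0.5 * X[1]
S = fast_first_order(model, omegas=[11, 35])
print([round(float(v), 3) for v in S])
```

The frequencies must be chosen so that low-order harmonics do not collide (11 and 35 are safe here); for the linear toy model the indices come out close to the analytic variance shares 16/16.25 and 0.25/16.25.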
Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference
NASA Astrophysics Data System (ADS)
Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias
2010-05-01
Climate variability of Northern Africa is of high interest due to the climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible through proxy data from deep sea drilling cores. Concentrating on published data sets, we try to decipher rhythms and trends and to detect correlations between different proxy time series by advanced mathematical methods. Our preliminary data is dust concentration, an indicator for climatic changes such as humidity, from the ODP sites 659, 721 and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid rescaling the time axis to obtain equidistant time steps for filtering methods. Additionally, we demand a plausible description of the errors of the estimated parameters, in terms of confidence intervals. Finally, depending on which model we restrict ourselves to, we also want insight into the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference, formulating the problem as a linear mixed model so that the expectation and deviation are of linear structure. Using the Bayesian method, we can formulate the posterior density as a function of the model parameters and calculate this probability density in the parameter space. Depending on which parameters are of interest, we analytically and numerically marginalize the posterior with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posterior densities of the ODP sites 659 and 721 concerning at most the last 5 Ma. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]).
We obtain a significant change point around (1.63 - 1.82) Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore we detect another significant change point around (2.7 - 3.2) Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion on the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004
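A cheap surrogate for the change-point machinery described above is to profile a two-segment (continuous piecewise-linear) fit over candidate break positions; with flat priors, the posterior mode coincides with the profile-likelihood maximum. The synthetic "dust" series and the break at 1.8 Ma below are illustrative only, not the ODP records:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic proxy record with a trend break at t = 1.8 (arbitrary units)
t = np.linspace(0.0, 5.0, 200)
y = (np.where(t < 1.8, 1.0 + 0.1 * t, 1.18 + 0.8 * (t - 1.8))
     + 0.05 * rng.standard_normal(t.size))

def log_marginal(tau):
    """Profile log-likelihood of a continuous two-segment linear model with a
    break at tau (flat priors; a cheap stand-in for full marginalization)."""
    X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0.0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * t.size * np.log(rss)

taus = np.linspace(0.5, 4.5, 81)
post = np.array([log_marginal(tau) for tau in taus])
tau_hat = taus[post.argmax()]
print(round(tau_hat, 2))
```

Exponentiating and normalizing `post` over the grid would give the (approximate) posterior density over the change-point location, from which credible intervals like the quoted (1.63 - 1.82) Ma ranges can be read off.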
Model-based recovery of histological parameters from multispectral images of the colon
NASA Astrophysics Data System (ADS)
Hidovic-Rowe, Dzena; Claridge, Ela
2005-04-01
Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by the molar concentration and absorption coefficients of haemoglobins, the size and density of collagen fibres, the thickness of the layer and the refractive indices of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples, and histological parameters were extracted. The model produced spectra which match the measured data well, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed an increase in blood volume fraction and the changes in collagen pattern characteristic of colon cancer. The spectra extracted from multi-spectral images of ex vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from multi-spectral images.
Design study of the geometry of the blanking tool to predict the burr formation of Zircaloy-4 sheet
NASA Astrophysics Data System (ADS)
Ha, Jisun; Lee, Hyungyil; Kim, Dongchul; Kim, Naksoo
2013-12-01
In this work, we investigated factors that influence burr formation in zircaloy-4 sheet used for the spacer grids of nuclear fuel rods. We considered the failure parameters of the GTN model and the geometric factors of the punch: clearance and velocity were varied to study the failure parameters, and the shearing angle and corner radius of an L-shaped punch were varied to study the geometric factors. First, we carried out blanking tests with the GTN failure parameters using the L-shaped punch and, by analyzing the sheared edges, investigated how the failure parameters and geometric factors affect burr formation. The influence of the geometric factors on burr formation proved to be as high as that of the failure parameters. We then investigated the sheared edges and burr formation as functions of the failure parameters and geometric factors using an FE analysis model. Analyzing the sheared edges with these variables showed that the geometric factors affect burr formation more than the failure parameters. To check the reliability of the FE model, the blanking force and the sheared edges obtained from experiments were compared with computations that account for heat transfer.
Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie-Eve; Snow, Debra; Mettetal, Jerome T; Bialecki, Russell A
2016-10-01
While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen-mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular-relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Han-Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg(-1) , p.o.) at 4 h intervals and baclofen-mediated changes in parameters recorded. A pharmacokinetic-pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. The systems pharmacology model developed fits baclofen-mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. © 2016 The British Pharmacological Society.
Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes
NASA Astrophysics Data System (ADS)
Pietsch, Stephan
2017-04-01
DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-functional-type or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong, since: (i) no given ecosystem is ever at steady state; and (ii) ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general failures within current DGVMs may be addressed.
Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes.
NASA Astrophysics Data System (ADS)
Pietsch, S.
2016-12-01
DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-functional-type or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong: no given ecosystem is ever at steady state, and ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns! This presentation will give examples of how these general failures within current DGVMs may be addressed.
New model for colour kinetics of plum under infrared vacuum condition and microwave drying.
Chayjan, Reza Amiri; Alaei, Behnam
2016-01-01
Quality of dried foods is affected by the drying method and physiochemical changes in tissue. The drying method affects properties such as colour. The colour of processed food is one of the most important quality indices and plays a determinant role in consumer acceptability of food materials and the processing method. The colour of food materials can be used as an indirect factor to determine changes in quality, since it is simpler and faster than chemical methods. This study focused on the kinetics of colour changes of plum slices under infrared vacuum and microwave conditions. Drying of the samples was carried out at absolute pressures of 20 and 60 kPa, drying temperatures of 50 and 60°C and microwave powers of 90, 270, 450 and 630 W. Colour changes were quantified by the tri-stimulus L* (whiteness/darkness), a* (redness/greenness) and b* (yellowness/blueness) model, an international standard for colour measurement developed by the Commission Internationale de l'Eclairage (CIE). These values were also used to calculate the total colour change (∆E), chroma, hue angle and browning index (BI). A new model was used for mathematical modelling of colour change kinetics. The drying process changed the colour parameters L*, a* and b*, causing a colour shift toward the darker region. The values of L* and hue angle decreased, whereas the values of a*, b*, ∆E, chroma and browning index increased during exposure to infrared vacuum conditions and microwave drying. Comparing the results obtained using the new model with the two conventional models of zero-order and first-order kinetics indicated that the new model was more compatible with the colour kinetics data for all colour parameters and drying conditions. All kinetic changes in colour parameters could be explained by the new model presented in this study.
The hybrid drying system, combining infrared vacuum conditions for the initial slow drying of plum slices with microwave power, provided the desired results for colour change.
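The zero-order versus first-order comparison mentioned above can be sketched by fitting both kinetic forms to a synthetic lightness trace. The data, equilibrium value and rate constant below are invented, and the paper's new model itself is not reproduced here:

```python
import numpy as np

# Hypothetical L* readings during drying: first-order-like decay toward an
# equilibrium value (not the paper's measured data)
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
L = 20.0 + 18.0 * np.exp(-0.04 * t)

def fit_zero_order(t, c):
    """Zero-order kinetics c(t) = c0 + k*t, by linear least squares."""
    k, c0 = np.polyfit(t, c, 1)
    return c0 + k * t

def fit_first_order(t, c, c_eq=20.0):
    """First-order kinetics c(t) = c_eq + (c0 - c_eq)*exp(-k*t), linearized
    via the log residual; c_eq is assumed known to keep the fit linear."""
    slope, intercept = np.polyfit(t, np.log(c - c_eq), 1)
    return c_eq + np.exp(intercept) * np.exp(slope * t)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
e0 = rmse(fit_zero_order(t, L), L)
e1 = rmse(fit_first_order(t, L), L)
print(round(e0, 3), round(e1, 6))
```

Because the synthetic trace is exactly first-order, the first-order fit wins here by construction; on real data the paper's new model is the one reported as most compatible across all colour parameters.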
NASA Astrophysics Data System (ADS)
Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.
2017-12-01
Coronal Mass Ejections (CMEs) are among the main drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial in order to determine their impact on the Earth and society. One of the key parameters that determine the geo-effectiveness of a coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model. We focus on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model has been implemented in the inner heliosphere model EUHFORIA, a magnetohydrodynamics forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope. One of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints in constraining the parameters.
Design Change Model for Effective Scheduling Change Propagation Paths
NASA Astrophysics Data System (ADS)
Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin
2017-09-01
Changes in requirements may increase product development cost and lead time; it is therefore important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Most current approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe requirement changes causing design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary-oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough-set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed design change model and its associated tools have been demonstrated by scheduling the change propagation paths of a high-speed train's bogie, showing their feasibility and effectiveness. The model not only supports quick responses to diversified market requirements, but also helps satisfy customer requirements and reduce product development lead time, and can be applied to a wide range of engineering systems design with improved efficiency.
The time-lapse AVO difference inversion for changes in reservoir parameters
NASA Astrophysics Data System (ADS)
Longxiao, Zhi; Hanming, Gu; Yan, Li
2016-12-01
The result of conventional time-lapse seismic processing is the amplitude difference of post-stack seismic data. Although stack processing can improve the signal-to-noise ratio (SNR) of seismic data, it also causes a considerable loss of important information about the amplitude changes and only yields a qualitative interpretation. To predict the changes in reservoir fluid more precisely and accurately, we also need quantitative information about the reservoir. To achieve this aim, we develop a method of time-lapse AVO (amplitude versus offset) difference inversion. For the inversion of reservoir changes in elastic parameters, we apply the Gardner equation as a constraint and convert the three-parameter inversion of elastic parameter changes into a two-parameter inversion to make the inversion more stable. For the inversion of variations in the reservoir parameters, we derive the relation between the difference of the reflection coefficient and the variations in the reservoir parameters, and then invert for reservoir parameter changes directly. The results of theoretical modeling and practical application show that our method can estimate the relative variations in reservoir density and P-wave and S-wave velocity, calculate reservoir changes in water saturation and effective pressure accurately, and thereby provide a reference for the rational exploitation of the reservoir.
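The Gardner-constrained reduction from three parameters to two can be illustrated with a linearized (Aki-Richards) reflectivity: the density term is folded into the P-wave term via Δρ/ρ ≈ g·ΔVp/Vp, leaving a two-column least-squares problem. The contrasts, angle range and background Vs/Vp below are synthetic, not the paper's field data:

```python
import numpy as np

def aki_richards(theta, dvp, dvs, drho, k=0.5):
    """Linearized P-wave reflectivity for fractional contrasts
    dvp = dVp/Vp, dvs = dVs/Vs, drho = drho/rho; k is background Vs/Vp."""
    s2 = np.sin(theta) ** 2
    return (0.5 * (1 + np.tan(theta) ** 2) * dvp
            - 4 * k ** 2 * s2 * dvs
            + 0.5 * (1 - 4 * k ** 2 * s2) * drho)

# Gardner: rho = a*Vp^g, so drho/rho ~= g*dVp/Vp with g ~ 0.25; only
# (dvp, dvs) remain unknown -- the two-parameter inversion of the abstract
g, k = 0.25, 0.5
angles = np.radians(np.arange(0, 41, 5))

true_dvp, true_dvs = 0.04, 0.02  # synthetic time-lapse contrast changes
d_obs = aki_richards(angles, true_dvp, true_dvs, g * true_dvp)

s2 = np.sin(angles) ** 2
A = np.column_stack([
    0.5 * (1 + np.tan(angles) ** 2) + 0.5 * g * (1 - 4 * k ** 2 * s2),  # dvp column
    -4 * k ** 2 * s2,                                                    # dvs column
])
est, *_ = np.linalg.lstsq(A, d_obs, rcond=None)
print(np.round(est, 4))
```

With noise-free synthetic data the two-parameter system is solved exactly; the stabilizing benefit of the Gardner constraint shows up once noise makes the full three-parameter problem ill-conditioned.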
A global food demand model for the assessment of complex human-earth systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
EDMONDS, JAMES A.; LINK, ROBERT; WALDHOFF, STEPHANIE T.
Demand for agricultural products is an important problem in climate change economics. Food consumption will shape, and be shaped by, climate change and emissions mitigation policies through interactions with bioenergy and afforestation, two critical issues in meeting international climate goals such as the two-degree target. We develop a model of food demand for staple and nonstaple commodities that evolves with changing incomes and prices. The model addresses a long-standing issue in estimating food demand: the evolution of demand relationships across large changes in income and prices. We discuss the model and some of its properties and limitations. We estimate parameter values using pooled cross-sectional time-series observations and the Metropolis Monte Carlo method, and cross-validate the model by estimating parameters on a subset of the observations and testing its ability to project into the unused observations. Finally, we apply bias correction techniques borrowed from the climate-modeling community and report results.
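The Metropolis Monte Carlo estimation step can be sketched on a toy demand relationship. The functional form, noise level and parameter values below are invented, not the paper's staple/nonstaple demand system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical demand curve: budget share falls as income rises
def demand(income, alpha, beta):
    return alpha * income ** (-beta)

incomes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
obs = demand(incomes, 0.5, 0.3) * np.exp(0.02 * rng.standard_normal(5))

def log_post(theta):
    """Log posterior with flat priors on a bounded box and lognormal noise."""
    alpha, beta = theta
    if alpha <= 0 or not (0 < beta < 1):
        return -np.inf
    resid = np.log(obs) - np.log(demand(incomes, alpha, beta))
    return -0.5 * np.sum(resid ** 2) / 0.02 ** 2

# Metropolis random walk over (alpha, beta)
theta = np.array([0.4, 0.5])
lp = log_post(theta)
samples = []
for _ in range(4000):
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[1000:])  # discard burn-in
print(np.round(post.mean(axis=0), 2))
```

The posterior mean recovers the generating values; the cross-validation step in the abstract would repeat this fit on a data subset and score projections onto the held-out observations.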
Identification of time-varying structural dynamic systems - An artificial intelligence approach
NASA Technical Reports Server (NTRS)
Glass, B. J.; Hanagud, S.
1992-01-01
An application of the artificial intelligence-derived methodologies of heuristic search and object-oriented programming to the problem of identifying the form of the model and the associated parameters of a time-varying structural dynamic system is presented in this paper. Possible model variations due to changes in boundary conditions or configurations of a structure are organized into a taxonomy of models, and a variant of best-first search is used to identify the model whose simulated response best matches that of the current physical structure. Simulated model responses are verified experimentally. An output-error approach is used in a discontinuous model space, and an equation-error approach is used in the parameter space. The advantages of the AI methods used, compared with conventional programming techniques for implementing knowledge structuring and inheritance, are discussed. Convergence conditions and example problems have been discussed. In the example problem, both the time-varying model and its new parameters have been identified when changes occur.
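The output-error identification over a taxonomy of candidate model structures reduces, in the simplest flat case, to best-first expansion of the lowest-error model. The candidate responses below are hypothetical stand-ins for boundary-condition/configuration variants:

```python
import heapq

# Toy taxonomy of candidate structural models, each with a simulated response
candidates = {
    "baseline": [1.0, 0.9, 0.8],
    "loosened_joint": [1.0, 0.7, 0.5],
    "added_mass": [0.8, 0.7, 0.6],
}

measured = [0.82, 0.71, 0.59]  # response of the "current physical structure"

def output_error(sim, obs):
    """Sum of squared output errors between simulated and observed response."""
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

# Best-first search: expand the lowest-error model first; with a flat
# taxonomy this is a ranking, but the frontier generalizes to model trees
frontier = [(output_error(sim, measured), name) for name, sim in candidates.items()]
heapq.heapify(frontier)
err, best = heapq.heappop(frontier)
print(best, round(err, 4))
```

In the full method, the winning structure would then be refined by an equation-error fit in its parameter space, the second stage the abstract describes.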
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Aspen succession in the Intermountain West: A deterministic model
Dale L. Bartos; Frederick R. Ward; George S. Innis
1983-01-01
A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...
NASA Astrophysics Data System (ADS)
Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh
2018-03-01
Underground water is regarded as a considerable water source, relied upon mainly in arid and semi-arid regions with deficient surface water sources. Forecasting of hydrological variables is a suitable tool in water resources management, and time series methods are efficient means for such forecasting. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) from 17 underground water wells in the Mehran Plain were used to model the trend of parameter changes over time. Using the selected model, the qualitative parameters of groundwater were predicted for the next seven years. Data from 2003 to 2016 were collected and fitted with AR, MA, ARMA, ARIMA and SARIMA models, and the best model was determined using the Akaike information criterion (AIC) and the correlation coefficient. After modeling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the average predicted SAR (sodium adsorption ratio) in all wells will increase by 2023 compared to 2016, while the average EC (electrical conductivity) will increase in the ninth and fifteenth wells and decrease in the other wells. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.
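The fit-compare-forecast loop (restricted here to AR models selected by AIC, rather than the full AR/MA/ARMA/ARIMA/SARIMA family) can be sketched on a synthetic well series; the data and the seven-step horizon mirror the study's setup only loosely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual EC-like series with an upward drift (a stand-in for the
# 2003-2016 well records)
n = 14
series = 1.5 + 0.05 * np.arange(n) + 0.02 * rng.standard_normal(n)

def fit_ar(y, p):
    """Least-squares AR(p) with intercept; returns (coefficients, AIC)."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))]
                        + [y[p - i - 1:len(y) - i - 1] for i in range(p)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ beta) ** 2)
    aic = len(Y) * np.log(rss / len(Y)) + 2 * (p + 1)
    return beta, aic

fits = {p: fit_ar(series, p) for p in (1, 2, 3)}
best_p = min(fits, key=lambda p: fits[p][1])  # lowest AIC wins

# Forecast seven years ahead with the selected model
beta = fits[best_p][0]
y = list(series)
for _ in range(7):
    lags = [y[-i - 1] for i in range(best_p)]
    y.append(beta[0] + float(np.dot(beta[1:], lags)))
print(best_p, round(y[-1], 3))
```

A production analysis would difference the series (the "I" in ARIMA) and add seasonal terms where the data warrant them, exactly the model family the study compares.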
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of land-based nutrient input) pressures. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The variability of the model results is used to convey efficiently and comprehensibly the uncertainty/reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can effectively contribute to EBM.
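The ensemble design — crossing forcing scenarios with jointly varied, poorly constrained parameters and reporting the spread as reliability information — can be sketched as a product over the two axes. The stand-in model, scenario values and parameter names below are all hypothetical, not BFM-POM1D:

```python
import itertools
import numpy as np

def run_model(warming, nutrient_cut, remin_rate, grazing):
    """Hypothetical stand-in for one BFM-POM1D run: a scalar 'mean
    chlorophyll' response to the forcing scenario and parameter choice."""
    return 2.0 - 1.5 * nutrient_cut + 0.1 * warming + 0.5 * remin_rate - 0.3 * grazing

# Scenario axis: climatic warming (degC) and land-nutrient input reduction
scenarios = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.4)]
# Poorly constrained biogeochemical parameters, varied jointly
remin_rates = [0.05, 0.10, 0.15]
grazing_rates = [0.2, 0.4]

results = {}
for (dT, cut), rem, grz in itertools.product(scenarios, remin_rates, grazing_rates):
    results.setdefault((dT, cut), []).append(run_model(dT, cut, rem, grz))

# Report each scenario as a predicted mean change plus a parameter-driven
# spread -- the reliability information intended for EBM planners
for key, vals in results.items():
    print(key, round(float(np.mean(vals)), 3), round(float(np.ptp(vals)), 3))
```

When the parameter-driven spread within a scenario approaches the difference between scenario means, the predicted change cannot be considered reliable, which is precisely the message such an ensemble is meant to convey.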
Bursting endemic bubbles in an adaptive network
NASA Astrophysics Data System (ADS)
Sherborne, N.; Blyuss, K. B.; Kiss, I. Z.
2018-04-01
The spread of an infectious disease is known to change people's behavior, which in turn affects the spread of disease. Adaptive network models that account for both epidemic and behavioral change have found oscillations, but in an extremely narrow region of the parameter space, which contrasts with intuition and available data. In this paper we propose a simple susceptible-infected-susceptible epidemic model on an adaptive network with time-delayed rewiring, and show that oscillatory solutions are now present in a wide region of the parameter space. Altering the transmission or rewiring rates reveals the presence of an endemic bubble—an enclosed region of the parameter space where oscillations are observed.
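The oscillation mechanism described above can be sketched with a toy mean-field SIS model in which the effective contact rate responds to the prevalence a fixed delay ago, a simplified stand-in for time-delayed link rewiring. The mean-field reduction and all parameter values here are illustrative assumptions, not the paper's network model:

```python
import numpy as np

def sis_delayed_rewiring(beta=0.6, gamma=0.2, w=4.0, tau=15.0,
                         i0=0.01, t_max=400.0, dt=0.01):
    """Euler integration of a toy SIS model whose effective contact rate
    is reduced in response to the prevalence a delay tau ago."""
    n_steps = int(t_max / dt)
    lag = int(tau / dt)
    i = np.empty(n_steps + 1)
    i[0] = i0
    for k in range(n_steps):
        i_lag = i[k - lag] if k >= lag else i0   # constant history before t = 0
        contact = 1.0 / (1.0 + w * i_lag)        # rewiring away from infecteds
        di = beta * contact * i[k] * (1.0 - i[k]) - gamma * i[k]
        i[k + 1] = i[k] + dt * di
    return i

traj = sis_delayed_rewiring()
```

With the delay set to zero the feedback is instantaneous and the prevalence settles monotonically toward the endemic level; a sufficiently long delay can produce oscillations around it, the qualitative behaviour behind the endemic bubble.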
Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1990-01-01
The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for the analysis of geodetic Global Positioning System (GPS) measurements, are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document, which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.
Stirling Engine Dynamic System Modeling
NASA Technical Reports Server (NTRS)
Nakis, Christopher G.
2004-01-01
The Thermo-Mechanical Systems Branch at the Glenn Research Center devotes a large amount of time to Stirling engines. These engines will be used on missions where solar power is inefficient, especially in deep space. I work with Tim Regan and Ed Lewandowski, who are currently developing and validating a mathematical model for the Stirling engines. This model incorporates all aspects of the system, including mechanical, electrical, and thermodynamic components. Modeling is done through Simplorer, a program capable of running simulations of the model. Once created and proven to be accurate, a model is used for developing new ideas for engine design. My largest specific project involves varying key parameters in the model and quantifying the results. This can all be done relatively trouble-free with the help of Simplorer; once the model is complete, Simplorer will do all the necessary calculations. The more complicated part of this project is determining which parameters to vary. Finding key parameters depends on the potential for a value to be independently altered in the design. For example, a change in one dimension may lead to a proportional change in the rest of the model, so that no real progress is made. Also important is the ability of a changed value to have a substantial impact on the outputs of the system. Results will be condensed into graphs and tables for better communication and understanding of the data. By changing these parameters, a more optimal design can be created without having to purchase or build any models, and hours and hours of results can be simulated in minutes. In the long run, using mathematical models can save time and money. Along with this project, I have many other smaller assignments throughout the summer. My main goal is to assist in the processes of model development, validation, and testing.
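A parameter study of this kind reduces, in skeleton form, to a one-at-a-time sweep: hold a baseline design, perturb one value, and record the outputs. The sketch below uses a hypothetical algebraic surrogate in place of the actual Simplorer model; the parameter names and formula are invented for illustration:

```python
def engine_power(params):
    """Toy surrogate: output rises with hot-end temperature, falls with
    dead volume (hypothetical relationship, not the Simplorer model)."""
    return 100.0 * (params["t_hot"] / 900.0) * (1.0 - params["v_dead"])

BASELINE = {"t_hot": 900.0, "v_dead": 0.2}

def sweep(model, baseline, name, rel_changes):
    """Vary one parameter relative to baseline, hold the rest fixed,
    and record the model output for each relative change."""
    results = {}
    for d in rel_changes:
        p = dict(baseline)
        p[name] = baseline[name] * (1.0 + d)
        results[d] = model(p)
    return results

r = sweep(engine_power, BASELINE, "t_hot", [-0.1, 0.0, 0.1])
```

Repeating the sweep for each candidate parameter and comparing the spread of outputs is one simple way to rank which parameters matter most.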
Glassy dynamics in three-dimensional embryonic tissues
Schötz, Eva-Maria; Lanio, Marcos; Talbot, Jared A.; Manning, M. Lisa
2013-01-01
Many biological tissues are viscoelastic, behaving as elastic solids on short timescales and fluids on long timescales. This collective mechanical behaviour enables and helps to guide pattern formation and tissue layering. Here, we investigate the mechanical properties of three-dimensional tissue explants from zebrafish embryos by analysing individual cell tracks and macroscopic mechanical response. We find that the cell dynamics inside the tissue exhibit features of supercooled fluids, including subdiffusive trajectories and signatures of caging behaviour. We develop a minimal, three-parameter mechanical model for these dynamics, which we calibrate using only information about cell tracks. This model generates predictions about the macroscopic bulk response of the tissue (with no fit parameters) that are verified experimentally, providing a strong validation of the model. The best-fit model parameters indicate that although the tissue is fluid-like, it is close to a glass transition, suggesting that small changes to single-cell parameters could generate a significant change in the viscoelastic properties of the tissue. These results provide a robust framework for quantifying and modelling mechanically driven pattern formation in tissues. PMID:24068179
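Subdiffusion of the kind reported here is typically diagnosed from the slope of the mean squared displacement (MSD) on log-log axes. The sketch below computes a time-averaged MSD from synthetic tracks; the data are an ordinary random walk (slope near 1), whereas caged, glassy dynamics would show a slope below 1 at intermediate lags:

```python
import numpy as np

def msd(tracks, max_lag):
    """Time-averaged mean squared displacement for tracks of shape
    (n_cells, n_times, n_dims)."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
        out[lag - 1] = np.mean(np.sum(disp ** 2, axis=-1))
    return out

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(50, 200, 3)), axis=1)   # free diffusion
lags = np.arange(1, 51)
alpha = np.polyfit(np.log(lags), np.log(msd(walk, 50)), 1)[0]
# alpha near 1 indicates diffusive motion; alpha < 1 would indicate caging
```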
NASA Astrophysics Data System (ADS)
Davis, S. L.; Moran, E.
2015-12-01
Many predictions about how trees will respond to climate change have been made, but these often rely on extrapolating into the future one of two extremes: purely correlative factors like climate, or purely physiological factors unique to a particular species or plant functional group. We are working towards a model that combines both phenotypic and genotypic traits to better predict responses of trees to climate change. We have worked to parameterize a neighborhood dynamics, individual tree forest-gap model called SORTIE-ND, using open data from both the USDA Forest Service Forest Inventory & Analysis (FIA) datasets in California and 30-yr old permanent plots established by the USGS. We generated individual species factors including stage-specific mortality and growth rates, and species-specific allometric equations for ten species, including Abies concolor, A. magnifica, Calocedrus decurrens, Pinus contorta, P. jeffreyi, P. lambertiana, P. monticola, P. ponderosa, and the two hardwoods Quercus chrysolepis and Q. kelloggii. During this process, we also developed two R packages to aid in parameter development for SORTIE-ND in other ecological systems. MakeMyForests is an R package that parses FIA datasets and calculates parameters based on the state averages of growth, light, and allometric parameters. disperseR is an R package that uses extensive plot data, with individual tree, sapling, and seedling measurements, to calculate finely tuned mortality and growth parameters for SORTIE-ND. Both are freely available on GitHub, and future updates will be available on CRAN. To validate the model, we withheld several plots from the 30-yr USGS data while calculating parameters. We tested for differences between the actual withheld data and the simulated forest data, in basal area, seedling density, seed dispersal, and species composition. 
The similarity of our model to the real system suggests that the model parameters we generated with our R packages accurately represent the system, and that our model can be extended to include changes in precipitation, temperature, and disturbance with very little manipulation. We hope that our examples, R package development, and SORTIE-ND module development will enable other ecologists to utilize SORTIE-ND to predict changes in local and important ecosystems around the world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation including meteorologic forcings, soil, land class, vegetation, and elevation were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
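The calibration loop behind such a parameter dataset can be sketched as follows, with a one-parameter toy in place of VIC and a Nash-Sutcliffe efficiency (NSE) objective. The runoff formula, parameter name, and forcing are hypothetical stand-ins:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_runoff(precip, infilt):
    """Hypothetical one-parameter stand-in for a VIC run."""
    return np.maximum(precip - infilt, 0.0)

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 5.0, size=120)               # 120 "months" of forcing
obs = toy_runoff(precip, 4.0) + rng.normal(0, 0.1, 120)

# brute-force calibration of the single parameter against "observed" runoff
best_nse, best_infilt = max(
    (nse(toy_runoff(precip, x), obs), x) for x in np.linspace(0.0, 10.0, 101))
```

In the actual effort the inner evaluation is a full VIC run per HUC8, which is why high-performance computing was needed.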
NASA Astrophysics Data System (ADS)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. 
Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
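The surrogate idea is to replace the expensive forward model with a cheap approximation of the low-dimensional quantity of interest, then push parameter samples through the surrogate instead of the model. A minimal sketch under invented stand-ins (a quadratic "model" and a Gaussian "posterior"):

```python
import numpy as np

def expensive_model(theta):
    """Stand-in for a flowline run mapping a climate parameter to ice volume."""
    return 10.0 - 2.0 * theta + 0.3 * theta ** 2     # hypothetical response

# build a cheap polynomial surrogate from a handful of model evaluations
design = np.linspace(-2.0, 2.0, 7)
surrogate = np.poly1d(np.polyfit(design,
                                 [expensive_model(t) for t in design], deg=2))

# push scenario/posterior samples through the surrogate, not the model
rng = np.random.default_rng(2)
samples = rng.normal(0.5, 0.3, size=10_000)          # hypothetical posterior
volumes = surrogate(samples)
```

Seven model runs here stand in for the few carefully chosen evaluations that make characterizing the full predictive distribution affordable; shifting the sampling distribution mimics changing the climate hyperparameters.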
Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan
2015-05-13
We present an adaptive algorithm for a system integrating micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence of the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model with changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperature. The heading error is less than 0.6° in the static temperature experiment, and remains within the range of -2° to 5° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm adapts strongly to changing temperature, and performs significantly better than KF and MLR in compensating the temperature drift of a gyroscope and eliminating the influence of temperature variation.
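The core of such a scheme, estimating the drift-model coefficients as filter states against a reference heading, can be sketched with an ordinary linear Kalman filter. The strong tracking extension is omitted, and the linear drift model and all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true = 0.5, 0.02                 # hypothetical drift model d = a + b*T
temps = 20.0 + 15.0 * np.sin(np.linspace(0.0, 6.0, 500))
drift = a_true + b_true * temps + rng.normal(0.0, 0.05, 500)  # compass-derived

# Kalman filter with the drift-model coefficients [a, b] as the state vector
x = np.zeros(2)                            # coefficient estimates
P = np.eye(2) * 10.0                       # estimate covariance
Q, R = np.eye(2) * 1e-8, 0.05 ** 2         # process / measurement noise
for T, z in zip(temps, drift):
    H = np.array([1.0, T])                 # measurement model: z = a + b*T
    P = P + Q
    S = H @ P @ H + R                      # innovation variance
    K = P @ H / S                          # Kalman gain
    x = x + K * (z - H @ x)                # coefficient update
    P = P - np.outer(K, H @ P)
```

Because the coefficients are filter states, they can keep adapting as the thermal environment changes, which is the point of the STKF formulation.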
Advance strategy for climate change adaptation and mitigation in cities
NASA Astrophysics Data System (ADS)
Varquez, A. C. G.; Kanda, M.; Darmanto, N. S.; Sueishi, T.; Kawano, N.
2017-12-01
An on-going 5-yr project financially supported by the Ministry of Environment, Japan, has been carried out to specifically address the issue of prescribing appropriate adaptation and mitigation measures to climate change in cities. Entitled "Case Study on Mitigation and Local Adaptation to Climate Change in an Asian Megacity, Jakarta", the project's relevant objectives is to develop a research framework that can consider both urbanization and climate change with the main advantage of being readily implementable for all cities around the world. The test location is the benchmark city, Jakarta, Indonesia, with the end focus of evaluating the benefits of various mitigation and adaptation strategies in Jakarta and other megacities. The framework was designed to improve representation of urban areas when conducting climate change investigations in cities; and to be able to quantify separately the impacts of urbanization and climate change to all cities globally. It is comprised of a sophisticated, top-down, multi-downscaling approach utilizing a regional model (numerical weather model) and a microscale model (energy balance model and CFD model), with global circulation models (GCM) as input. The models, except the GCM, were configured to reasonably consider land cover, urban morphology, and anthropogenic heating (AH). Equally as important, methodologies that can collect and estimate global distribution of urban parametric and AH datasets are continually being developed. Urban growth models, climate scenario matrices that match representative concentration pathways with shared socio-economic pathways, present distribution of socio-demographic indicators such as population and GDP, existing GIS datasets of urban parameters, are utilized. From these tools, future urbanization (urban morphological parameters and AH) can be introduced into the models. Sensitivity using various combinations of GCM and urbanization can be conducted. 
Furthermore, since the models utilize parameters that can be readily modified to suit certain countermeasures, adaptation and mitigation strategies can be evaluated using thermal comfort and other social indicators. With the approaches introduced through this project, a deeper understanding of urban-climate interactions in the changing global climate can be achieved.
Practical identifiability analysis of a minimal cardiovascular system model.
Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas
2017-01-17
Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume, vessel elastance, and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess the practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data; the remaining three were practically non-identifiable. Of these non-identifiable parameters, one could simply be fixed at the lowest feasible value. The other two were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction, and the use of larger values for the reference range of venous pressure, made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
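Profile likelihood works by fixing one parameter on a grid and re-optimising the remaining parameters at each grid point; a sharply curved profile indicates practical identifiability, a flat one does not. A minimal sketch on an invented two-parameter decay model (not the cardiovascular model itself):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 60)
y = 2.0 * np.exp(-0.5 * t) + rng.normal(0, 0.05, t.size)  # toy decay data

def profile_sse(rate_grid):
    """Profile objective for the decay rate: at each fixed rate the
    amplitude enters linearly, so it can be re-optimised in closed form.
    A flat profile across the grid would signal non-identifiability."""
    out = []
    for r in rate_grid:
        f = np.exp(-r * t)
        amp = (f @ y) / (f @ f)            # least-squares amplitude given rate
        out.append(float(np.sum((amp * f - y) ** 2)))
    return out

grid = np.linspace(0.2, 1.0, 17)
prof = profile_sse(grid)
best_rate = grid[int(np.argmin(prof))]     # profile dips near the true 0.5
```

For a nonlinear model the inner step is a numerical re-optimisation rather than a closed-form solve, but the shape of the resulting curve is read the same way.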
Geographically explicit urban land use change scenarios for Mega cities: a case study in Tokyo
NASA Astrophysics Data System (ADS)
Yamagata, Y.; Bagan, H.; Seya, H.; Nakamichi, K.
2010-12-01
In preparation for the IPCC Fifth Assessment Report, the international modeling community is developing four Representative Concentration Pathways (RCPs) employing the scenarios developed by four different Integrated Assessment Models. These RCPs will be employed as input to climate models, such as Earth System Models. In recent years, the importance of assessing not only global but also local (city/zone-level) impacts of global change has gradually been recognized; downscaling climate models is therefore one of the urgent problems to be solved. Needless to say, reliable downscaling requires spatially high-resolution land-use change scenarios. Many methods that account for human economic behavior have been proposed for constructing land-use change scenarios, such as agent-based models (e.g., Parker et al., 2001) and land use transport (LUT) models (e.g., Anas and Liu, 2007). The latter approach in particular has been widely applied to actual urban/transport policy; hence modeling the interaction between land use and transport is very important for creating reliable land-use change scenarios. However, LUT models are usually built on the zones of cities/municipalities, whose spatial resolutions are too low to derive sensible parameters for the climate models. Moreover, almost all studies that attempt to build spatially high-resolution LUT models use very small regions as the study area. The objective of this research is to derive various input parameters for climate models, such as population density, fractional green vegetation cover, and anthropogenic heat emission, from spatially high-resolution land-use change scenarios constructed with a LUT model. The study area is the Tokyo metropolitan area, the largest urban area in the world (United Nations, 2010). Firstly, this study employs very high ground-resolution zones composed of micro-districts of around 1 km².
Secondly, the research attempts to combine remote sensing techniques and LUT models to derive the future distribution of fractional green vegetation cover. The study has created two extreme land-use scenarios, urban concentration (compact city) and dispersion, in order to show the possible range of future land-use change, and derives the input parameters for the climate models. The authors are planning to make the scenarios and derived parameters available to related research. Anas, A. and Liu, Y. (2007). A Regional Economy, Land Use, and Transportation Model (RELU-TRAN): Formulation, Algorithm Design, and Testing. Journal of Regional Science, 47, 415-455. Parker, D.C., Berger, T., and Manson, S.M. (Eds.) (2001). Agent-Based Models of Land-Use and Land-Cover Change. LUCC Report Series No. 6. (Accessed: 27 Aug. 2009; http://www.globallandproject.org/Documents/LUCC_No_6.pdf) United Nations (2010). World Urbanization Prospects: City Population.
A new ODE tumor growth modeling based on tumor population dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oroji, Amin; Omar, Mohd bin; Yarahmadian, Shantia
2015-10-22
In this paper, a new mathematical model for the growth of a tumor cell population treated by radiation is proposed. The population dynamics of the cells in each state, and the dynamics of the whole tumor population, are studied. Furthermore, a new definition of tumor lifespan is presented. Finally, the effects of the two main parameters, the treatment parameter (q) and the repair mechanism parameter (r), on tumor lifespan are probed, and it is shown that changes in the treatment parameter (q) strongly affect the tumor lifespan.
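The qualitative effect of a treatment parameter q and a repair parameter r on a tumor lifespan can be illustrated with a toy two-compartment ODE integrated by the Euler method. This is an invented stand-in, not the model proposed in the paper:

```python
def tumor_lifespan(q, r, growth=0.1, dt=0.01, horizon=2000.0):
    """Toy radiation-treated growth model (an invented stand-in): healthy
    cells n grow logistically and are damaged at rate q; damaged cells d
    are repaired at rate r or die at a fixed rate. The lifespan is the
    time until the total population falls below a cure threshold."""
    n, d = 1.0, 0.0
    for k in range(int(horizon / dt)):
        dn = growth * n * (1.0 - n) - q * n + r * d
        dd = q * n - (r + 0.05) * d
        n, d = n + dt * dn, d + dt * dd
        if n + d < 1e-3:
            return k * dt
    return horizon

weak_repair = tumor_lifespan(q=0.3, r=0.01)
strong_repair = tumor_lifespan(q=0.3, r=0.2)
```

In this toy, stronger repair keeps the population above the eradication threshold for longer while treatment pushes it down, the kind of q-versus-r trade-off the paper probes.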
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion used to measure the goodness of fit (a likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, along with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations, with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points.
The slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is then attributed only to the remaining parameter sets. By interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function can be made, and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
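The threshold step can be sketched offline as a GLUE-style behavioural split: score each sampled parameter set with an objective function and keep those below the slider's threshold. The model, parameter ranges, and threshold below are hypothetical:

```python
import numpy as np

x_axis = np.linspace(0.0, 6.0, 50)
obs = np.sin(x_axis) + 1.5                    # synthetic "observed" discharge

def simulate(k1, k2):
    """Hypothetical two-parameter model of the same variable."""
    return k1 * np.sin(x_axis) + k2

# Monte Carlo sample of the two-dimensional parameter space
rng = np.random.default_rng(5)
pars = rng.uniform([0.0, 0.0], [2.0, 3.0], size=(500, 2))
rmse = np.array([np.sqrt(np.mean((simulate(k1, k2) - obs) ** 2))
                 for k1, k2 in pars])

# slider-style threshold: keep only the behavioural parameter sets
behavioural = pars[rmse < 0.3]
```

Plotting `pars` coloured by `rmse`, and re-applying the mask as the threshold moves, reproduces the response-surface-plus-slider view the notebook offers interactively.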
NASA Astrophysics Data System (ADS)
Steinman, B. A.; Rosenmeier, M.; Abbott, M.
2008-12-01
The economy of the Pacific Northwest relies heavily on water resources from the drought-prone Columbia River and its tributaries, as well as the many lakes and reservoirs of the region. Proper management of these water resources requires a thorough understanding of local drought histories that extends well beyond the instrumental record of the twentieth century, a time frame too short to capture the full range of drought variability in the Pacific Northwest. Here we present a lumped parameter, mass-balance model that provides insight into the influence of hydroclimatological changes on two small, closed-basin systems located in north-central Washington. Steady state model simulations of lake water oxygen isotope ratios using modern climate and catchment parameter datasets demonstrate a strong sensitivity to both the amount and timing of precipitation, and to changes in summertime relative humidity, particularly at annual and decadal time scales. Model tests also suggest that basin hypsography can have a significant impact on lake water oxygen isotope variations, largely through surface area to volume and consequent evaporative flux to volume ratio changes in response to drought and pluvial sequences. Additional simulations using input parameters derived from both on-site and National Climatic Data Center historical climate datasets accurately approximate three years of continuous lake observations (seasonal water sampling and continuous lake level monitoring) and twentieth century oxygen isotope ratios in sediment core authigenic carbonate recovered from the lakes. Results from these model simulations suggest that small, closed-basin lakes in north-central Washington are highly sensitive to changes in the drought-related climate variables, and that long (8000 year), high resolution records of quantitative changes in precipitation and evaporation are obtainable from sediment cores recovered from water bodies of the Pacific Northwest.
NASA Astrophysics Data System (ADS)
Švajdlenková, H.; Ruff, A.; Lunkenheimer, P.; Loidl, A.; Bartoš, J.
2017-08-01
We report a broadband dielectric spectroscopy (BDS) study of the clustering fragile glass-former meta-toluidine (m-TOL) from 187 K up to 289 K over a wide frequency range of 10⁻³-10⁹ Hz, with focus on the primary α relaxation and the secondary β relaxation above the glass temperature Tg. The broadband dielectric spectra were fitted using the Havriliak-Negami (HN) and Cole-Cole (CC) models. The β process, disappearing at Tβ,disap = 1.12Tg, exhibits a non-Arrhenius dependence fitted by the Vogel-Fulcher-Tammann-Hesse equation with T0βVFTH in accord with the characteristic differential scanning calorimetry (DSC) limiting temperature of the glassy state. The essential feature of the α process lies in the distinct changes of its spectral shape parameter βHN, marked by the characteristic BDS temperatures TB1βHN and TB2βHN. The primary α relaxation times were fitted over the entire temperature and frequency range by several current three-parameter up to six-parameter dynamic models. This analysis reveals that the crossover temperatures of the idealized mode coupling theory model (TcMCT), the extended free volume model (T0EFV), and the two-order-parameter (TOP) model (Tmc) are close to TB1βHN, which provides a consistent physical rationalization for the first change of the shape parameter. In addition, the other two characteristic TOP temperatures, T0TOP and TA, coincide with the thermodynamic Kauzmann temperature TK and with the second change of the shape parameter at around TB2βHN, respectively. These can be related to the onset of liquid-like domains in the glassy state or the disappearance of solid-like domains in the normal liquid state.
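For reference, the Havriliak-Negami (HN) function used for such fits is ε*(ω) = ε∞ + Δε/(1 + (iωτ)^α)^β, with Cole-Cole as the β = 1 special case and Debye as α = β = 1. The sketch below evaluates the dielectric loss for a Debye and a broadened spectrum; the parameter values are arbitrary illustrations, not the m-TOL fit results:

```python
import numpy as np

def hn_loss(freq, d_eps, tau, alpha, beta):
    """Dielectric loss eps'' of the Havriliak-Negami function
    eps*(w) = eps_inf + d_eps / (1 + (i*w*tau)**alpha)**beta."""
    w = 2.0 * np.pi * freq
    return -np.imag(d_eps / (1.0 + (1j * w * tau) ** alpha) ** beta)

freq = np.logspace(-3, 9, 400)                 # Hz, as in the measured range
debye = hn_loss(freq, d_eps=5.0, tau=1e-3, alpha=1.0, beta=1.0)
broad = hn_loss(freq, d_eps=5.0, tau=1e-3, alpha=0.7, beta=1.0)  # Cole-Cole
# the smaller shape parameter broadens and lowers the loss peak
```

Tracking how the fitted α (and β) change with temperature is precisely what defines the characteristic shape-parameter temperatures discussed above.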
Herrmann, Frank; Baghdadi, Nicolas; Blaschek, Michael; Deidda, Roberto; Duttmann, Rainer; La Jeunesse, Isabelle; Sellami, Haykel; Vereecken, Harry; Wendland, Frank
2016-02-01
We used observed climate data, an ensemble of four GCM-RCM combinations (global and regional climate models) and the water balance model mGROWA to estimate present and future groundwater recharge for the intensively-used Thau lagoon catchment in southern France. In addition to a highly resolved soil map, soil moisture distributions obtained from SAR-images (Synthetic Aperture Radar) were used to derive the spatial distribution of soil parameters covering the full simulation domain. Doing so helped us to assess the impact of different soil parameter sources on the modelled groundwater recharge levels. Groundwater recharge was simulated in monthly time steps using the ensemble approach and analysed in its spatial and temporal variability. The soil parameters originating from both sources led to very similar groundwater recharge rates, proving that soil parameters derived from SAR images may replace traditionally used soil maps in regions where soil maps are sparse or missing. Additionally, we showed that the variance in different GCM-RCMs influences the projected magnitude of future groundwater recharge change significantly more than the variance in the soil parameter distributions derived from the two different sources. For the period between 1950 and 2100, climate change impacts based on the climate model ensemble indicated that overall groundwater recharge will possibly show a low to moderate decrease in the Thau catchment. However, as no clear trend resulted from the ensemble simulations, reliable recommendations for adapting the regional groundwater management to changed available groundwater volumes could not be derived. Copyright © 2015 Elsevier B.V. All rights reserved.
Ho Hoang, Khai-Long; Mombaur, Katja
2015-10-15
Dynamic modeling of the human body is an important tool for investigating the fundamentals of the biomechanics of human movement. To model the human body as a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used to realistically simulate the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to adjust for the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments, as these are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.
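In spirit, such regression equations rescale a young-adult reference parameter by an age-dependent factor. The sketch below shows the form only; the coefficient and reference values are invented placeholders, not the study's fitted regressions or de Leva's numbers:

```python
def aged_segment_length(young_length, age, slope=-0.0015):
    """Illustrative regression adjustment (hypothetical coefficient, not the
    study's fitted value): scale a young-adult segment length linearly with
    years past a reference age of 30."""
    return young_length * (1.0 + slope * max(age - 30, 0))

trunk_young = 0.288   # hypothetical segment length as a fraction of stature
trunk_old = aged_segment_length(trunk_young, 70)
```

Analogous age- and sex-dependent adjustments would apply to segment masses, center-of-mass positions, and moments of inertia before assembling the multi-body model.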
Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.
Hussain, S A; Perrier, M; Tartakovsky, B
2018-04-01
Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
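The parameter-estimation step rests on the fact that, after a power-supply disconnection, a simple R-C equivalent circuit relaxes exponentially with time constant RC. The sketch below recovers that time constant from a simulated "off" transient by log-linear regression; the circuit, values, and known open-circuit voltage are illustrative assumptions, not the paper's exact EEC or procedure:

```python
import numpy as np

# simulated "off" transient of a simple R-C branch (illustrative values):
# V(t) = V_oc + (V0 - V_oc) * exp(-t / (R * C))
R_true, C_true = 20.0, 50.0                    # ohm, farad -> tau = 1000 s
v0, v_oc = 0.9, 0.3                            # initial / open-circuit voltage
t = np.linspace(0.0, 3000.0, 300)
rng = np.random.default_rng(6)
v = v_oc + (v0 - v_oc) * np.exp(-t / (R_true * C_true)) \
    + rng.normal(0.0, 1e-3, t.size)

# recover the time constant from the log-linear decay (V_oc assumed known)
slope, _ = np.polyfit(t, np.log(v - v_oc), 1)
tau_est = -1.0 / slope                         # should be close to 1000 s
```

Repeating the fit on each on/off cycle gives the time series of internal resistance and capacitance whose shifts, as the abstract notes, track changes in influent composition.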
Quarter-Wave buncher for NICA project
NASA Astrophysics Data System (ADS)
Trushin, M.; Fatkullin, R.; Sitnikov, A.; Seleznev, D.; Koshelev, V. A.; Plastun, A. S.; Barabin, S. V.; Kozlov, A. V.; Kuzmichev, V. G.; Kropachev, G. N.; Kulevoy, T.
2017-12-01
This paper presents the results of modeling the electrodynamic characteristics (EDC) of a quarter-wave coaxial beam buncher, simulation of the thermal loads of the buncher, modeling of the mechanical changes in its geometric parameters caused by the thermal load, and modeling of the resulting EDC, which depend on these changes.
The Predicted Influence of Climate Change on Lesser Prairie-Chicken Reproductive Parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, Dawn M.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001–2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter’s linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival. PMID:23874549
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, since it requires neither eigensolutions nor the inversion of a large matrix.
The Mathematics of Psychotherapy: A Nonlinear Model of Change Dynamics.
Schiepek, Gunter; Aas, Benjamin; Viol, Kathrin
2016-07-01
Psychotherapy is a dynamic process produced by a complex system of interacting variables. Even though there are qualitative models of such systems, the link between structure and function, between the network and its dynamics, is still missing. The aim of this study is to realize these links. The proposed model is composed of five state variables (P: problem severity; S: success and therapeutic progress; M: motivation to change; E: emotions; I: insight and new perspectives) interconnected by 16 functions. The shape of each function is modified by four parameters (a: capability to form a trustful working alliance; c: mentalization and emotion regulation; r: behavioral resources and skills; m: self-efficacy and reward expectation). Psychologically, the parameters play the role of competencies or traits, which translate into the concept of control parameters in synergetics. The qualitative model was transferred into five coupled, deterministic, nonlinear difference equations generating the dynamics of each variable as a function of the other variables. The mathematical model is able to reproduce important features of psychotherapy processes. Examples of parameter-dependent bifurcation diagrams are given. Beyond the illustrated similarities between simulated and empirical dynamics, the model has to be further developed, systematically tested in simulated experiments, and compared with empirical data.
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
NASA Astrophysics Data System (ADS)
Gronewold, A.; Alameddine, I.; Anderson, R. M.
2009-12-01
Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
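One of the joint-likelihood measures named above, the Nash-Sutcliffe efficiency, is simple enough to state in code. The function below is a generic illustration of the standard NSE formula, not the paper's implementation: NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², so a perfect simulation scores 1 and a model no better than the observed mean scores 0.

```python
# Generic Nash-Sutcliffe efficiency, the skill score used (among others) as a
# measure of rainfall-runoff parameter likelihood. Illustrative data only.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
print(nse(obs, obs))                       # perfect simulation
print(nse(obs, np.full(5, obs.mean())))    # mean-only "model"
```

Samples drawn in proportion to such a score form the correlated parameter sets that feed the hierarchical regression on land use.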
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and studies of alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model of a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Segurado, Pedro; Branco, Paulo; Jauch, Eduardo; Neves, Ramiro; Ferreira, M Teresa
2016-08-15
Climate change will predictably alter hydrological patterns and processes at the catchment scale, with impacts on habitat conditions for fish. The main goal of this study is to assess how shifts in fish habitat favourability under climate change scenarios are affected by hydrological stressors. The interplay between climate and hydrological stressors has important implications for river management under climate change, because management actions to control hydrological parameters are more feasible than controlling climate. This study was carried out in the Tamega catchment of the Douro basin. A set of hydrological stressor variables was generated through process-based modelling based on current climate data (2008-2014) and on a high-end future climate change scenario. The resulting parameters, along with climatic and site-descriptor variables, were used as explanatory variables in empirical habitat models for nine fish species using boosted regression trees. Models were calibrated for the whole Douro basin using 254 fish sampling sites, and predictions under future climate change scenarios were made for the Tamega catchment. Results show that models using climatic variables but not hydrological stressors produce more pessimistic predictions of future favourability, predicting more distribution contractions or stronger range shifts. The use of hydrological stressors strongly influences projections of habitat favourability shifts; integrating these stressors into the models attenuated the predicted climate-driven range shifts. Hydrological stressors were retained in the models for most species and had high importance, demonstrating that it is important to integrate hydrology in studies of the impacts of climate change on freshwater fishes.
This is a relevant result because it means that management actions to control hydrological parameters in rivers will have an impact on the effects of climate change and may potentially be helpful to mitigate its negative effects on fish populations and assemblages. Copyright © 2016 Elsevier B.V. All rights reserved.
What Drives the Variability of the Mid-Latitude Ionosphere?
NASA Astrophysics Data System (ADS)
Goncharenko, L. P.; Zhang, S.; Erickson, P. J.; Harvey, L.; Spraggs, M. E.; Maute, A. I.
2016-12-01
The state of the ionosphere is determined by the superposition of regular changes and stochastic variations of the ionospheric parameters. Regular variations are represented by diurnal, seasonal and solar-cycle changes, and can be well described by empirical models. Short-term perturbations lasting from a few seconds to a few hours or days can be induced in the ionosphere by solar flares, changes in the solar wind, coronal mass ejections, travelling ionospheric disturbances, or meteorological influences. We use over 40 years of observations by the Millstone Hill incoherent scatter radar (42.6°N, 288.5°E) to develop an updated empirical model of ionospheric parameters, and wintertime data collected in 2004-2016 to study variability in ionospheric parameters. We also use NASA MERRA-2 atmospheric reanalysis data to examine possible connections between the state of the stratosphere and mesosphere and the upper atmosphere (250-400 km). The major sudden stratospheric warming (SSW) of January 2013 is selected for in-depth study and reveals large anomalies in ionospheric parameters. Modeling with the NCAR Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM), nudged by a WACCM-GEOS5 simulation, indicates that during the 2013 SSW the neutral and ion temperatures in the polar through mid-latitude region deviate from their seasonal behavior.
The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry
NASA Astrophysics Data System (ADS)
Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang
2018-05-01
Over the last decade, satellite gravimetry, as a new class of geodetic sensor, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial, because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one modelling deficiency resulting from the upward continuation of gravity change considering the Earth's oblateness, which is ignored in contemporary studies. For the low degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and about 6 per cent for the 2010 Maule earthquake. For high degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, causing the seismic moment to be systematically underestimated by 30 per cent.
Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada
Phelps, Geoffrey A.; Graham, Scott E.
2002-01-01
The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well; a final model was then proposed based on new information from the second well. The preferred model indicates that a northeast-trending, oval-shaped basin at least 2,100 m deep underlies Frenchman Flat, with a maximum depth of 2,400 m at its northeast end. No major horst-and-graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes a change of the same magnitude to the model, up to 30 meters of change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.
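The quoted sensitivity (tens of meters of depth per 1% density change) can be rationalized with a back-of-envelope calculation. The sketch below is not the authors' inversion; it uses the infinite Bouguer slab approximation, Δg = 2πGΔρh, with an invented density contrast, to show that a 1% error in the density contrast maps into a depth error of the same relative size, i.e. roughly 24 m at 2,400 m depth.

```python
# Back-of-envelope slab inversion: depth h = delta_g / (2*pi*G*delta_rho).
# delta_rho = 400 kg/m^3 is a hypothetical basin-fill density contrast.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

def slab_depth(delta_g, delta_rho):
    """Invert slab thickness (basin depth) from a gravity anomaly."""
    return delta_g / (2.0 * math.pi * G * delta_rho)

delta_rho = 400.0        # kg/m^3 (assumed)
depth = 2400.0           # m, the basin's maximum modelled depth
delta_g = 2.0 * math.pi * G * delta_rho * depth     # implied anomaly

d_nominal = slab_depth(delta_g, delta_rho)
d_perturbed = slab_depth(delta_g, delta_rho * 1.01)  # 1% denser contrast
print(f"depth change for 1% density change: {d_nominal - d_perturbed:.1f} m")
```

The real inversion is 3-D and station-dependent, which is why some parameters affect a broader area than this 1-D estimate suggests.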
Standardization of End-to-End Performance of Digital Video Teleconferencing/Video Telephony Systems
1991-12-01
... end-to-end video transmission system including both firmly specified and peripheral flexible functions. The format converter changes either ... which manifests itself in both subjective evaluations and objective tests. The relative importance of performance parameters is likely to change with ... conventional analog performance parameters to be largely independent of bit rate, and only slightly changed between different codec models.
Paddock, Susan M.; Ebener, Patricia
2010-01-01
Substance abuse treatment research is complicated by the pervasive problem of non-ignorable missing data – i.e., the occurrence of the missing data is related to the unobserved outcomes. Missing data frequently arise due to early client departure from treatment. Pattern-mixture models (PMMs) are often employed in such situations to jointly model the outcome and the missing data mechanism. PMMs require non-testable assumptions to identify model parameters. Several approaches to parameter identification have therefore been explored for longitudinal modeling of continuous outcomes, and informative priors have been developed in other contexts. In this paper, we describe expert interviews conducted with five substance abuse treatment clinical experts who are familiar with the Therapeutic Community modality of substance abuse treatment and with treatment process scores collected using the Dimensions of Change Instrument. The goal of the interviews was to obtain expert opinion about the rate of change in continuous client-level treatment process scores for clients who leave before completing two assessments and whose rate of change (slope) in treatment process scores is unidentified by the data. We find that the experts’ opinions differed dramatically from widely-utilized assumptions used to identify parameters in the PMM. Further, subjective prior assessment allows one to properly address the uncertainty inherent in the subjective decisions required to identify parameters in the PMM and to measure their effect on conclusions drawn from the analysis. PMID:19012279
Piezoceramic devices and artificial intelligence time varying concepts in smart structures
NASA Technical Reports Server (NTRS)
Hanagud, S.; Calise, A. J.; Glass, B. J.
1990-01-01
The development of smart structures and their vibration control using piezoceramic sensors and actuators is discussed. In particular, these structures are assumed to have a time-varying model form and parameters. The model form may change significantly and suddenly. Combined identification of the model form and parameters of these structures, and model-adaptive control of these structures, are discussed in this paper.
Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis.
Adolf, Janne K; Voelkle, Manuel C; Brose, Annette; Schmiedek, Florian
2017-01-01
Much of recent affect research relies on intensive longitudinal studies to assess daily emotional experiences. The resulting data are analyzed with dynamic models to capture regulatory processes involved in emotional functioning. Daily contexts, however, are commonly ignored. This not only risks biased parameter estimates and wrong conclusions, but also forgoes the opportunity to investigate contextual effects on emotional dynamics. With fixed moderated time series analysis, we present an approach that resolves this problem by estimating context-dependent change in dynamic parameters in single-subject time series models. The approach examines parameter changes of known shape and thus addresses the problem of observed intra-individual heterogeneity (e.g., changes in emotional dynamics due to observed changes in daily stress). In comparison to existing approaches to unobserved heterogeneity, model estimation is facilitated and different forms of change can readily be accommodated. We demonstrate the approach's viability given relatively short time series by means of a simulation study. In addition, we present an empirical application, targeting the joint dynamics of affect and stress and how these co-vary with daily events. We discuss potentials and limitations of the approach and close with an outlook on the broader implications for understanding emotional adaption and development.
Color changes kinetics during deep fat frying of carrot slice
NASA Astrophysics Data System (ADS)
Salehi, Fakhreddin
2018-05-01
Heat and mass transfer phenomena that take place during frying cause physicochemical changes which affect the colour and surface of the fried product. The effect of frying temperature on colour changes and heat transfer during deep-fat frying of carrot has been investigated. The colour-scale parameters redness (a*), yellowness (b*) and lightness (L*), together with the colour change intensity (ΔE), were used to estimate colour changes during frying as a function of oil temperature. The L* value of fried carrot decreased during frying. The redness of fried carrot decreased during the early stages of frying, then increased (the slices became redder). A first-order kinetic equation was used for each of the three colour parameters, in which the rate constant is a function of oil temperature. The results showed that oil temperature has a significant effect on the colour parameters. Different kinetic models were fitted to the experimental data, and the results revealed that the quadratic model was the most suitable for describing the colour change intensity (ΔE) (R > 0.96). The center temperature of the carrot slices increased with oil temperature and time during frying.
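The first-order kinetic equation mentioned above has the closed form C(t) = C∞ + (C0 − C∞)·exp(−kt) for each colour parameter. The sketch below fits this model to an invented lightness (L*) decay curve; the numbers are illustrative, not the paper's data.

```python
# Hedged sketch: fit a first-order colour kinetics model to a synthetic
# lightness curve. C_inf = 45, C0 = 62, k = 0.012 1/s are invented values.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_inf, c0, k):
    """First-order approach of a colour parameter toward its final value."""
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

t = np.linspace(0.0, 300.0, 30)                 # frying time, s
L_star = first_order(t, 45.0, 62.0, 0.012)       # lightness decreasing

popt, _ = curve_fit(first_order, t, L_star, p0=[40.0, 60.0, 0.01])
k = popt[2]
print(f"fitted rate constant k = {k:.4f} 1/s")
```

Repeating the fit at each oil temperature and regressing k on temperature (e.g. via an Arrhenius plot) is the usual way such temperature-dependent rate constants are reported.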
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
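The Izhikevich model named in the abstract is the simplest member of this two-dimensional integrate-and-fire class. The sketch below simulates a single such neuron with the standard regular-spiking parameters (a=0.02, b=0.2, c=−65, d=8) under constant drive; it is a generic textbook illustration, not the paper's network or mean-field code.

```python
# Single Izhikevich neuron, forward-Euler integration.
# v' = 0.04 v^2 + 5 v + 140 - u + I;  u' = a (b v - u);
# on v >= 30 mV: v <- c, u <- u + d  (the "fire" part of integrate-and-fire).
def izhikevich(i_ext=10.0, t_max=500.0, dt=0.1,
               a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u, spikes = -65.0, -65.0 * b, 0
    for _ in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # threshold crossing: spike and reset
            v, u, spikes = c, u + d, spikes + 1
    return spikes

print("spike count:", izhikevich())
```

Adding heterogeneity means drawing parameters such as d or the drive from a distribution across the population; the paper's mean-field systems describe the population density of (v, u) rather than tracking each neuron.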
PMID:24416013
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
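The core step described above, relating changes in physical parameters to changes in measured responses through a linear sensitivity matrix and solving with singular value decomposition, can be illustrated with a toy problem. The matrix below is invented, not from the airframe model; `numpy.linalg.pinv` computes the SVD-based pseudo-inverse, giving the minimum-norm least-squares parameter update.

```python
# Toy illustration of a sensitivity-matrix parameter update via SVD.
# d_response = S @ d_params; recover d_params from measured residuals.
import numpy as np

# Hypothetical sensitivity of 4 measured modal responses to 3 physical
# parameters (e.g. stiffnesses). Values are invented for the example.
S = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.5, 0.3],
              [1.0, 0.0, 2.0],
              [0.5, 0.5, 0.5]])

d_true = np.array([0.10, -0.05, 0.02])   # "true" parameter changes
d_resp = S @ d_true                      # simulated test-minus-model residual

d_est = np.linalg.pinv(S) @ d_resp       # pinv performs the SVD internally
print(np.allclose(d_est, d_true))
```

In the real method the update is additionally constrained (e.g. bounds keeping parameters physical), which turns this one-shot solve into a constrained optimization.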
A simple approach for the modeling of an ODS steel mechanical behavior in pilgering conditions
NASA Astrophysics Data System (ADS)
Vanegas-Márquez, E.; Mocellin, K.; Toualbi, L.; de Carlan, Y.; Logé, R. E.
2012-01-01
The optimization of the forming of ODS tubes is linked to the choice of an appropriate constitutive model for simulating the metal forming process. In the framework of a unified plastic constitutive theory, the strain-controlled cyclic characteristics of a ferritic ODS steel were analyzed and modeled with two different tests. The first test is a classical tension-compression test, and leads to cyclic softening at low to intermediate strain amplitudes. The second test consists of alternating uniaxial compressions along two perpendicular axes, and was selected because of its similarities with the loading path induced by the pilgering process for Fe-14Cr-1W-Ti ODS cladding tubes. This second test exhibits cyclic hardening at all tested strain amplitudes. Since variable strain amplitudes prevail in pilgering conditions, the parameters of the considered constitutive law were identified using a loading sequence that includes strain amplitude changes. A proposed semi-automated inverse analysis methodology is shown to efficiently provide optimal sets of parameters for the considered loading sequences. Compared to classical approaches, the model involves a reduced number of parameters while retaining a good ability to capture the stress changes induced by strain amplitude changes. Furthermore, the methodology requires only one test, which is an advantage when the amount of available material is limited. As two distinct sets of parameters were identified for the two considered tests, it is recommended to take the loading path into account when modeling cold forming of this ODS steel.
NASA Astrophysics Data System (ADS)
Csáki, Péter; Kalicz, Péter; Gribovszki, Zoltán
2016-04-01
The water balance of the sand regions of Hungary was analysed using remote-sensing-based evapotranspiration (ET) maps (1×1 km spatial resolution) produced by the CREMAP model over the 2000-2008 period. The mean annual (2000-2008) net groundwater recharge (R) was estimated as the difference between mean annual precipitation (P) and ET, taking advantage of the fact that surface runoff is commonly negligible in sand regions. For the examined nine-year period (2000-2008), ET and R were about 90 percent and 10 percent of P, respectively. The mean annual ET and R were analysed in the context of land cover types. A Budyko model was used in spatially distributed mode for the climate change impact analysis. The parameter of the Budyko model (α) was calculated for pixels without surplus water. For the pixels affected by extra water, a linear model with β-parameters (actual evapotranspiration / pan evapotranspiration) was used. These parameter maps can be used for evaluating future ET and R in spatially distributed mode (1×1 km resolution). Using the two parameter maps (α and β) and data from regional climate models (mean annual temperature and precipitation), evapotranspiration and net groundwater recharge projections have been made for three future periods (2011-2040, 2041-2070, 2071-2100). The expected ET and R changes have been determined relative to a reference period (1981-2010). According to the projections, by the end of the 21st century ET may increase, while a strong decrease in R is expected for the sand regions of Hungary. This research has been supported by the Agroclimate.2 VKSZ_12-1-2013-0034 project. Keywords: evapotranspiration, net groundwater recharge, climate change, Budyko model
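A Budyko-type water-balance estimate of the kind used above can be sketched per pixel. The snippet uses the classic Budyko (1974) curve rather than the study's fitted α map, and the P and PET values are illustrative, not Hungarian data; R = P − ET follows from the negligible-runoff assumption for sand regions.

```python
# Hedged sketch: classic Budyko curve for mean annual actual ET, with net
# groundwater recharge R = P - ET (surface runoff assumed negligible).
import math

def budyko_et(p, pet):
    """Mean annual actual ET (mm) from precipitation P and potential ET."""
    ai = pet / p        # aridity index
    return p * math.sqrt(ai * math.tanh(1.0 / ai) * (1.0 - math.exp(-ai)))

p, pet = 560.0, 900.0   # mm/yr, invented values for one pixel
et = budyko_et(p, pet)
r = p - et              # net groundwater recharge
print(f"ET = {et:.0f} mm/yr ({100 * et / p:.0f}% of P), R = {r:.0f} mm/yr")
```

A climate projection changes P and PET per pixel; re-evaluating the curve (with the locally calibrated parameter) then gives the future ET and R maps described in the abstract.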
NASA Astrophysics Data System (ADS)
Tagaris, Efthimios; Sotiropoulou, Rafaella-Eleni; Sotiropoulos, Andreas; Spanos, Ioannis; Milonas, Panayiotis; Michaelakis, Antonios
2017-04-01
The establishment and seasonal abundance of Invasive Mosquito Species (IMS) in a region are related to climatic parameters such as temperature and precipitation. In this work the current state is assessed using data from the European Climate Assessment and Dataset (ECA&D) project over Greece and Italy for the development of current spatial risk databases of IMS. Results are validated against a prototype IMS monitoring device that has been designed and developed in the framework of the LIFE CONOPS project and installed at key points across the two countries. Since climate models suggest changes in future temperature and precipitation rates, the future potential for IMS establishment and spread over Greece and Italy is assessed using the climatic parameters for the 2050s provided by the NASA GISS GCM ModelE under the IPCC A1B emissions scenario. The need for regional climate projections on a finer grid is addressed using the Weather Research and Forecasting (WRF) model to dynamically downscale the GCM simulations. The estimated changes in future meteorological parameters are combined with the observational data to estimate the future levels of the climatic parameters of interest. The final product comprises spatial distribution maps presenting the future suitability of a region for the establishment and seasonal abundance of IMS over Greece and Italy. Acknowledgement: LIFE CONOPS project "Development & demonstration of management plans against - the climate change enhanced - invasive mosquitoes in S. Europe" (LIFE12 ENV/GR/000466).
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd
2018-04-01
Remanufacturing is a strategy for sustainability that transforms end-of-life products to as-new performance, with a warranty equal to or better than that of the original product. To quantify the advantages of this strategy, every process must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, taking the critical parameters into account. By implementing the optimization, the remanufacturer produced lower cost and waste during production, with a higher potential for profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method that revealed the criticality of the parameters. When the method was applied to the MAN engine model, 5 of the 6 crankpins were found critical and required grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR 1,401.29 to MYR 1,251.29, while the cost for the Caterpillar engine model did not change, since no parameters were deemed critical. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after assessing the potential profit. The significance of this output is that it promotes sustainability by reducing the re-melting of damaged parts and ensuring a consistent benefit from returned cores.
Variations in embodied energy and carbon emission intensities of construction materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Omar, Wan-Mohd-Sabki; School of Environmental Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis; Doh, Jeung-Hwan, E-mail: j.doh@griffith.edu.au
2014-11-15
Identification of parameter variation allows us to conduct a more detailed life cycle assessment (LCA) of the embodied energy and carbon emissions of materials over their life cycle. Previous research has demonstrated that hybrid LCA (HLCA) can generally overcome the problems of incompleteness and accuracy in embodied energy (EE) and embodied carbon (EC) emission assessment. Unfortunately, the current interpretation and quantification procedure has not been extensively and empirically studied, especially the hybridisation of process LCA and I-O LCA. To address this weakness, this study empirically demonstrates the changes in EE and EC intensities caused by variations in key parameters of material production. Using Australia and Malaysia as case studies, the results are compared with previous hybrid models to identify key parameters and issues. The parameters considered in this study are technological changes, energy tariffs, primary energy factors, disaggregation constants, emission factors, and material price fluctuation. It was found that changes in technological efficiency, energy tariffs and material prices caused significant variations in the model. Finally, the comparison of hybrid models revealed that non-energy-intensive materials greatly influence the variations, owing to high indirect energy and carbon emissions in the upstream boundary of material production; as such, any decision related to these materials should be considered carefully. - Highlights: • We investigate the EE and EC intensity variation in Australia and Malaysia. • The influences of parameter variations on the hybrid LCA model were evaluated. • Key contributions to the EE and EC intensity variation were identified. • High indirect EE and EC content caused significant variation in hybrid LCA models. • Non-energy-intensive materials caused variation between hybrid LCA models.
Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model
Carozza, David A.; Bianchi, Daniele; Galbraith, Eric D.
2017-01-01
Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change. PMID:28103280
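The coupled biology-economics feedback described above (biomass limits harvest, profit drives effort, equilibrium biomass set by price, cost, and catchability) can be illustrated in miniature with the classic Gordon-Schaefer open-access model. This is a deliberately minimal stand-in, not the BOATS size-spectrum model, and every constant is invented:

```python
# Minimal Gordon-Schaefer open-access sketch (all constants invented):
# biomass follows logistic growth minus harvest, while fishing effort
# grows or shrinks in proportion to profit.

r, K = 0.5, 1.0          # intrinsic growth rate, carrying capacity
q = 0.5                  # catchability
price, cost = 2.0, 0.4   # ex-vessel price, cost per unit effort
k_E = 0.2                # responsiveness of effort to profit
dt = 0.1

B, E = 0.9, 0.1          # initial biomass and effort
for _ in range(20000):
    harvest = q * E * B
    profit = price * harvest - cost * E
    B += dt * (r * B * (1.0 - B / K) - harvest)
    E = max(E + dt * k_E * profit, 0.0)

# At open-access equilibrium profit is zero, so B* = cost / (price * q)
print(round(B, 3))   # -> 0.4
```

At the open-access equilibrium profit vanishes, so equilibrium biomass depends only on the economic forcings, B* = cost/(price × q), mirroring the abstract's statement that equilibrium fish biomass is determined by catchability, ex-vessel price, and cost per unit effort; effort settles at E* = r(1 − B*/K)/q = 0.6 here.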
ERIC Educational Resources Information Center
Boutis, Kathy; Pecaric, Martin; Seeto, Brian; Pusic, Martin
2010-01-01
Signal detection theory (SDT) parameters can describe a learner's ability to discriminate (d′) normal from abnormal and the learner's criterion (λ) to under- or over-call abnormalities. The objective was to examine the serial changes in SDT parameters with serial exposure to radiological cases. 46 participants were recruited for this study: 20…
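For readers unfamiliar with the SDT quantities, a small illustration of how d′ and the criterion are computed from hit and false-alarm rates (the rates below are invented, not from the study):

```python
from statistics import NormalDist

# Illustrative signal-detection computation (hypothetical rates):
# d' measures discrimination of abnormal from normal radiographs;
# the criterion measures the tendency to over- or under-call abnormality.

z = NormalDist().inv_cdf

hit_rate = 0.80          # abnormal cases correctly called abnormal
false_alarm_rate = 0.20  # normal cases wrongly called abnormal

d_prime = z(hit_rate) - z(false_alarm_rate)
criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))

print(round(d_prime, 2))    # -> 1.68
print(round(criterion, 2))  # -> 0.0 (no bias toward over- or under-calling)
```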
Study of the method of water-injected meat identifying based on low-field nuclear magnetic resonance
NASA Astrophysics Data System (ADS)
Xu, Jianmei; Lin, Qing; Yang, Fang; Zheng, Zheng; Ai, Zhujun
2018-01-01
The aim of this study was to apply low-field nuclear magnetic resonance to characterize the regular variation of the transverse relaxation (T2) spectrum parameters of water-injected meat with the proportion of injected water. On this basis, one-way ANOVA and discriminant analysis were used to analyse how well these parameters distinguish the water-injected proportion, and a model for identifying water-injected meat was established. The results show that, except for T21b, T22e and T23b, the parameters of the T2 relaxation spectrum changed regularly with the water-injected proportion. The ability of different parameters to distinguish the water-injected proportion differed. Using S, P22 and T23m as prediction variables, Fisher and Bayes models were established by discriminant analysis, enabling qualitative and quantitative classification of water-injected meat. The rates of correct discrimination in both validation and cross-validation were 88%, and the model was stable.
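A toy sketch of the Fisher-discriminant step, reduced to one invented feature (the study used S, P22 and T23m; in one dimension Fisher's rule reduces to a midpoint threshold between class means):

```python
# Hypothetical 1-D sketch of the Fisher discriminant step: classify meat as
# normal vs water-injected from a single relaxation-derived feature (the
# values below are invented; the study used S, P22 and T23m).

normal   = [4.1, 4.3, 4.0, 4.2, 4.4]   # e.g. total signal amplitude S
injected = [5.1, 5.3, 5.0, 5.2, 5.4]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

m0, m1 = mean(normal), mean(injected)
# pooled within-class variance; in one dimension the Fisher direction is
# just a sign, so classification reduces to a midpoint threshold
s2 = ((len(normal) - 1) * var(normal) + (len(injected) - 1) * var(injected)) \
     / (len(normal) + len(injected) - 2)
threshold = (m0 + m1) / 2
fisher_J = (m1 - m0) ** 2 / s2   # between-class vs within-class separability

def classify(x):
    return "water-injected" if x > threshold else "normal"

print(classify(5.15))        # -> water-injected
print(classify(4.05))        # -> normal
print(round(fisher_J, 1))    # -> 40.0
```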
NASA Astrophysics Data System (ADS)
Doummar, J.; Kassem, A.; Gurdak, J. J.
2017-12-01
In the framework of a three-year USAID/NSF-funded PEER Science project, flow in a karst system in Lebanon (Assal Spring; discharge 0.2-2.5 m3/s; yearly volume of 22-30 Mm3) dominated by snow and semi-arid conditions was simulated using an integrated numerical model (MIKE SHE 2016). The calibrated model (Nash-Sutcliffe coefficient of 0.77) is based on high-resolution input data (2014-2017) and detailed catchment characterization. The approach is to assess the influence of various model parameters on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone, and saturated zone) based on an integrated numerical model. These parameters include precipitation intensity and magnitude, temperature, and snow-melt parameters, in addition to karst-specific spatially distributed features such as fast infiltration points, soil properties and thickness, topographical slopes, Epikarst and thickness of the unsaturated zone, and hydraulic conductivity, among others. Moreover, the model is currently run forward using various scenarios for future climate (Global Climate Models GCM; daily downscaled temperature and precipitation time series for Lebanon 2020-2045) in order to depict the flow rates expected in the future and the effect of climate change on hydrograph recession coefficients, discharge maxima and minima, and total spring discharge volume. Additionally, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge or, indirectly, on the vulnerability of the system (soil thickness and soil and rock hydraulic conductivity appear to be amongst the most sensitive parameters).
This study particularly unravels the normalized single effect of rain magnitude and intensity, snow, and temperature change on the flow rate (e.g., a temperature change of 3 °C over the catchment yields a root mean square error (RMSE) of 0.15 m3/s in the spring discharge and a 16% error in the total annual volume with respect to the calibrated model). Finally, such a study can allow decision makers to implement best-informed management practices, especially in complex karst systems, to overcome impacts of climate change on water resources.
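A small sketch of the two error metrics quoted above, RMSE against the calibrated baseline and relative annual-volume error, computed on invented discharge series:

```python
import math

# Hedged sketch of the reported error metrics: compare a perturbed-scenario
# discharge series against the calibrated baseline via RMSE and relative
# annual-volume error (both series below are invented).

baseline  = [0.3, 0.8, 1.9, 2.4, 1.2, 0.5]   # m3/s, e.g. bimonthly means
perturbed = [0.4, 1.0, 1.6, 2.0, 1.0, 0.5]

rmse = math.sqrt(sum((p - b) ** 2 for p, b in zip(perturbed, baseline))
                 / len(baseline))

vol_base = sum(baseline)    # proportional to annual volume for equal steps
vol_pert = sum(perturbed)
vol_err = abs(vol_pert - vol_base) / vol_base

print(round(rmse, 3))           # -> 0.238
print(round(100 * vol_err, 1))  # -> 8.5 (percent volume error)
```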
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact by replacing multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods are introduced to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study illustrates the effectiveness of the newly developed robust adaptable design method.
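A toy version of the two-level search, with an invented performance model: the outer loop ranges over discrete configurations, the inner loop over a design parameter, and the robust objective penalizes both mean performance error and its spread across uncertain operating deviations:

```python
import statistics

# Toy two-level robust-design search (performance model invented):
# choose the configuration/parameter pair whose performance is least
# sensitive to an uncertain operating deviation `delta`.

def performance(config, x, delta):
    # hypothetical performance error (lower is better); each configuration
    # responds differently to the uncertain operating deviation
    if config == "A":
        return (x - 2.0) ** 2 + 3.0 * abs(delta)
    # config "B": slightly worse nominal optimum, far less sensitive
    return (x - 2.0) ** 2 + 0.1 + 0.2 * abs(delta)

deltas = [-0.5, -0.25, 0.0, 0.25, 0.5]   # assumed uncertainty scenarios

best = None
for config in ("A", "B"):                            # outer: configurations
    for x in [1.8 + 0.05 * i for i in range(9)]:     # inner: parameter grid
        vals = [performance(config, x, d) for d in deltas]
        # robust objective: nominal quality plus sensitivity to uncertainty
        score = statistics.mean(vals) + statistics.pstdev(vals)
        if best is None or score < best[0]:
            best = (score, config, x)

print(best[1])  # -> B (the less sensitive configuration wins)
```

Configuration B wins despite a slightly worse nominal optimum because its performance is far less sensitive to the uncertain deviation, which is exactly the robust-design trade-off.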
Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration
NASA Astrophysics Data System (ADS)
Wells, B.; Toniolo, H. A.; Stuefer, S. L.
2015-12-01
Understanding hydrologic changes is key to preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet as well as a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in predicting future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers event-based modeling package HEC-HMS. Using historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and can help guide communities and officials to make better-educated decisions regarding the changing hydrology in the area and the economic drivers tied to it.
Giessler, Mathias; Tränckner, Jens
2018-02-01
The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed from data collected by a survey of stakeholders and ministries that determined resulting effects and adaptation measures. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, considering process parameters of wastewater treatment plants (WWTPs); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared with measured parameters and actual expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level.
Evaluating growth of the Porcupine Caribou Herd using a stochastic model
Walsh, Noreen E.; Griffith, Brad; McCabe, Thomas R.
1995-01-01
Estimates of the relative effects of demographic parameters on population rates of change, and of the level of natural variation in these parameters, are necessary to address potential effects of perturbations on populations. We used a stochastic model, based on survival and reproduction estimates of the Porcupine Caribou (Rangifer tarandus granti) Herd (PCH), during 1983-89 and 1989-92 to obtain distributions of potential population rates of change (r). The distribution of r produced by 1,000 trajectories of our simulation model (1983-89, r̄ = 0.013; 1989-92, r̄ = 0.003) encompassed the rate of increase calculated from an independent series of photo-survey data over the same years (1983-89, r = 0.048; 1989-92, r = -0.035). Changes in adult female survival had the largest effect on r, followed by changes in calf survival. We hypothesized that petroleum development on calving grounds, or changes in calving and post-calving habitats due to global climate change, would affect model input parameters. A decline in annual adult female survival from 0.871 to 0.847, or a decline in annual calf survival from 0.518 to 0.472, would be sufficient to cause a declining population, if all other input estimates remained the same. We then used these lower survival rates, in conjunction with our estimated amount of among-year variation, to determine a range of resulting population trajectories. Stochastic models can be used to better understand dynamics of populations, optimize sampling investment, and evaluate potential effects of various factors on population growth.
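The survival-driven growth-rate logic above can be sketched with a scalar, female-only Monte Carlo model. The adult and calf survival values come from the abstract; the recruitment constant and the among-year variation level are invented so the example is self-contained:

```python
import math, random

# Scalar female-only sketch of the stochastic growth-rate calculation:
# lambda_t = adult survival + recruitment * calf survival, with among-year
# variation. Survival values are from the abstract; the recruitment
# constant (0.274) and the variation level (sd) are invented.

random.seed(1)

def mean_r(s_adult, s_calf, recruit=0.274, sd=0.02, years=50, trials=1000):
    rs = []
    for _ in range(trials):
        log_lam = 0.0
        for _ in range(years):
            sa = min(max(random.gauss(s_adult, sd), 0.0), 1.0)
            sc = min(max(random.gauss(s_calf, sd), 0.0), 1.0)
            log_lam += math.log(sa + recruit * sc)
        rs.append(log_lam / years)
    return sum(rs) / len(rs)

r_base = mean_r(0.871, 0.518)   # survival estimates from the abstract
r_low  = mean_r(0.847, 0.518)   # reduced adult female survival

print(r_base > 0)   # growing population
print(r_low < 0)    # declining population
```

Consistent with the abstract, dropping adult female survival from 0.871 to 0.847 is enough (under these assumed constants) to flip the mean growth rate r from positive to negative.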
Modelling uncertainties in the climate of the last millennium: the ASTER project
NASA Astrophysics Data System (ADS)
Loutre, M. F.; Mouchet, A.; Fichefet, T.; Goosse, H.; Goelzer, H.; Huybrechts, P.
2009-04-01
The LOVECLIM model (Driesschaert et al., 2007; Goosse et al., 2007) is used to simulate the climate of the last millennium with several 'climate' parameter sets yielding different sensitivities of the climate and the carbon cycle model. The purpose of these simulations is twofold: to assess, first, the role of the carbon cycle in the climate and, second, the ability of the different selected parameter sets to keep the model within the range of the observed climate, and further to assess the uncertainty related to these parameters. The high-frequency variability of the forcings is taken into account. For each set of parameters, LOVECLIM is driven by the natural evolution of insolation, solar irradiance and stratospheric aerosol concentrations due to volcanic activity, as well as by changes caused by human activities such as deforestation, CO2 emission or concentration changes, changes in concentrations of greenhouse gases other than CO2 (including ozone), and changes in sulphate aerosol load. Several transient experiments are conducted for each parameter set. A first transient simulation (Conc) is forced with reconstructed atmospheric CO2 concentration. In the next two simulations, carbon emissions are taken into account and the model computes the corresponding atmospheric CO2 concentration: in the first (EMIS), the emissions due both to land-use changes and to fossil-fuel burning are provided; in the second (Efor), only the emissions from fossil-fuel burning are provided, in addition to the vegetation change related to deforestation. The Northern Hemisphere annual mean temperatures simulated by the model for the different parameter sets, carbon cycle sensitivities and experimental setups do not show striking differences compared to the NH temperature reconstructions (IPCC, 2007). However, the simulated values are generally in the lower range of the reconstructions in the interval 900-1200 AD.
Moreover, some experiments display too large a warming during the last century, as well as large variability occasionally outside the range of observations. The increase in atmospheric CO2 concentration over the last century depends strongly on how the anthropogenic emissions and the land-use scenario are taken into account. Differences in atmospheric CO2 concentration can reach up to 50 ppmv. None of the parameter sets is able to reproduce the decreasing trend of Arctic summer sea ice recorded over the last decades. Parameter sets corresponding to the largest climate sensitivity lead to a strong reduction of the summer sea ice. However, different scenarios for deforestation lead to significantly different time evolutions of the NH summer sea ice area for the same parameter set. The ocean carbon storage remains within the range of estimates when CO2 is prescribed. However, values are much larger when both fossil-fuel and land-cover-change emissions are prescribed. The deforestation emissions as computed by the model lead to intermediate cumulative CO2 fluxes to the atmosphere. Driesschaert E., Fichefet T., Goosse H., Huybrechts P., Janssens I., Mouchet A., Munhoven G., Brovkin V., and Weber S. L., 2007. Modelling the influence of Greenland ice sheet melting on the Atlantic meridional overturning circulation during the next millennia. Geophys. Res. Lett. 34, L1070. Goosse H., Driesschaert E., Fichefet T., and Loutre M.F., 2007. Information on the early Holocene climate constrains the summer sea ice projections for the 21st century. Clim. Past 3, 683-692. IPCC (2007). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp.
Tracking slow modulations in synaptic gain using dynamic causal modelling: validation in epilepsy.
Papadopoulou, Margarita; Leite, Marco; van Mierlo, Pieter; Vonck, Kristl; Lemieux, Louis; Friston, Karl; Marinazzo, Daniele
2015-02-15
In this work we propose a proof of principle that dynamic causal modelling can identify plausible mechanisms at the synaptic level underlying brain state changes over a timescale of seconds. As a benchmark example for validation we used intracranial electroencephalographic signals in a human subject. These data were used to infer the (effective connectivity) architecture of synaptic connections among neural populations assumed to generate seizure activity. Dynamic causal modelling allowed us to quantify empirical changes in spectral activity in terms of a trajectory in parameter space - identifying key synaptic parameters or connections that cause observed signals. Using recordings from three seizures in one patient, we considered a network of two sources (within and just outside the putative ictal zone). Bayesian model selection was used to identify the intrinsic (within-source) and extrinsic (between-source) connectivity. Having established the underlying architecture, we were able to track the evolution of key connectivity parameters (e.g., inhibitory connections to superficial pyramidal cells) and test specific hypotheses about the synaptic mechanisms involved in ictogenesis. Our key finding was that intrinsic synaptic changes were sufficient to explain seizure onset, where these changes showed dissociable time courses over several seconds. Crucially, these changes spoke to an increase in the sensitivity of principal cells to intrinsic inhibitory afferents and a transient loss of excitatory-inhibitory balance.
Formal Techniques for Organization Analysis: Task and Resource Management
1984-06-01
typical approach has been to base new entities on stereotypical structures and make changes as problems are recognized. Clearly, this is not an...human resources; and provide the means to change and track all these parameters as they interact with each other and respond to...functioning under internal and external change. 3. Data gathering techniques to allow one to efficiently select reliable modeling parameters from
Miner, Nadine E.; Caudell, Thomas P.
2004-06-08
A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.
NASA Technical Reports Server (NTRS)
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. 
The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
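A self-contained, pure-Python sketch of the LHS-PRCC machinery: Latin hypercube sampling of a unit cube, rank transformation, and partial rank correlation computed via residuals from linear regressions on the other inputs. The three-parameter "model" is invented and merely stands in for the LPM/FEM runs:

```python
import random

# Pure-Python sketch of LHS + partial rank correlation coefficients (PRCC).
# The three-parameter "model" is invented: parameter a dominates the output,
# b matters less, and c is inert.

random.seed(0)
N, K = 200, 3

def lhs(n, k):
    """Latin hypercube sample on [0,1]^k: one stratified, shuffled
    column per parameter, so each parameter range is covered evenly."""
    cols = []
    for _ in range(k):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return [list(row) for row in zip(*cols)]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    for rank, i in enumerate(order):
        out[i] = float(rank)
    return out

def solve(A, b):
    """Gauss-Jordan elimination for the small normal-equation systems."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [u - f * v for u, v in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def residuals(y, X):
    """Residuals of y after linear regression (with intercept) on rows of X."""
    Z = [[1.0] + row for row in X]
    k = len(Z[0])
    A = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    b = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    beta = solve(A, b)
    return [yi - sum(bi * zi for bi, zi in zip(beta, z))
            for z, yi in zip(Z, y)]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((x - mu) ** 2 for x in u) ** 0.5
    sv = sum((x - mv) ** 2 for x in v) ** 0.5
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

X = lhs(N, K)
y = [a ** 2 + 0.5 * b + random.gauss(0.0, 0.1) for a, b, c in X]

ry = ranks(y)
rX = [ranks(col) for col in zip(*X)]   # ranked parameter columns
results = {}
for j, name in enumerate("abc"):
    others = [[rX[m][i] for m in range(K) if m != j] for i in range(N)]
    results[name] = corr(residuals(rX[j], others), residuals(ry, others))
    print(name, round(results[name], 2))
```

Parameter a should emerge with the largest PRCC, b with a smaller one, and c near zero, matching its lack of influence on the output.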
Hands-on parameter search for neural simulations by a MIDI-controller.
Eichner, Hubert; Borst, Alexander
2011-01-01
Computational neuroscientists frequently encounter the challenge of parameter fitting: exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is to use automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings stemming from their lack of insight into the underlying question, such as the difficulty of defining a suitable error function, or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to its use in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
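The core of the approach is just a mapping from a 7-bit controller value onto a parameter range, with the model re-run on every knob movement. A sketch (not the authors' software; the conductance range is illustrative):

```python
# Sketch of the core idea (not the authors' software): map a 7-bit MIDI
# control-change value (0..127) onto a model parameter range, linearly or
# logarithmically, and re-run the model on every knob movement.

def knob_to_param(cc_value, lo, hi, log=False):
    t = cc_value / 127.0
    if log:   # log mapping suits parameters spanning orders of magnitude
        return lo * (hi / lo) ** t
    return lo + t * (hi - lo)

# e.g. a knob controlling the maximal sodium conductance of a
# Hodgkin-Huxley-style model (range in mS/cm^2, purely illustrative)
print(knob_to_param(0, 36.0, 360.0))     # -> 36.0
print(knob_to_param(127, 36.0, 360.0))   # -> 360.0
print(round(knob_to_param(64, 0.01, 100.0, log=True), 3))  # mid-knob, log scale
```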
The use of generalized estimating equations in the analysis of motor vehicle crash data.
Hutchings, Caroline B; Knight, Stacey; Reading, James C
2003-01-01
The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub-grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed-physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected.
This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
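Perturbed-physics ensembles of this kind are typically built on a space-filling design. The sketch below implements basic Latin hypercube sampling in numpy; the three parameter ranges are invented placeholders, not actual MOSES2/TRIFFID values.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Space-filling sample: one point per equal-probability stratum in
    each dimension, with strata randomly permuted across dimensions."""
    bounds = np.asarray(bounds, dtype=float)          # shape (n_params, 2)
    d = len(bounds)
    # Stratified uniform draws in [0, 1): one per interval [i/n, (i+1)/n).
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):                                # decouple the dimensions
        rng.shuffle(u[:, j])
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

rng = np.random.default_rng(42)
# Hypothetical land-surface parameter ranges (illustrative only).
samples = latin_hypercube(100, [(0.05, 0.3), (0.5, 6.0), (0.1, 0.9)], rng)
print(samples.shape)  # (100, 3)
```

Each column visits every stratum of its range exactly once, so even a modest ensemble covers the parameter space far more evenly than independent random draws.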
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya
2016-11-01
We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
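The snapshot POD step can be sketched with a plain SVD in numpy. The synthetic low-rank "snapshots" below stand in for CFD flow fields and are not patient data; the 99% energy threshold is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshots: each column is a "flow field" at one time instant,
# built from two spatial modes plus small noise.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 60)
snapshots = (np.outer(np.sin(x), np.cos(t))
             + 0.5 * np.outer(np.sin(2 * x), np.sin(3 * t))
             + 0.01 * rng.standard_normal((200, 60)))

# Snapshot POD: subtract the temporal mean, then the SVD of the snapshot
# matrix gives POD modes (columns of U) ranked by energy (singular values).
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s**2 / np.sum(s**2)
r = np.searchsorted(np.cumsum(energy), 0.99) + 1   # modes for 99% energy
reconstruction = mean_field + U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(r, err)  # a few modes reconstruct the field to within a few percent
```

This is the sense in which "a relatively small number of modes" suffices: the singular-value spectrum decays quickly for coherent flows, so a low-rank basis captures almost all of the energy.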
Influence of parameter changes to stability behavior of rotors
NASA Technical Reports Server (NTRS)
Fritzen, C. P.; Nordmann, R.
1982-01-01
The occurrence of unstable vibrations in rotating machinery requires corrective measures to improve the stability behavior. A simple approximate method is presented to determine the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of system parameters. Influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models and was then verified for an unsymmetric rotor on a test rig.
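The eigenvalue expansion described here can be illustrated with first-order perturbation theory: for a diagonalizable system matrix A(p) with A X = X diag(λ), the sensitivity dλ_k/dp is the (k, k) element of X⁻¹ (∂A/∂p) X. The rotor matrix and damping parameter below are invented for illustration, not taken from the paper's test rig.

```python
import numpy as np

def eigenvalue_sensitivity(A, dA, k):
    """First-order sensitivity of the k-th eigenvalue: with A X = X diag(lam),
    dlam_k/dp is the (k, k) element of inv(X) @ (dA/dp) @ X."""
    lam, X = np.linalg.eig(A)
    W = np.linalg.inv(X)
    return lam[k], W[k] @ dA @ X[:, k]

def A_of(c):
    # Hypothetical nonconservative rotor state matrix; c plays the role of
    # a bearing damping coefficient (illustrative values, not a real machine).
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [-4.0, -c, 1.0, 0.2],
                     [0.0, 0.0, 0.0, 1.0],
                     [0.5, 0.1, -9.0, -0.3]])

c0 = 0.2
dA = np.zeros((4, 4))
dA[1, 1] = -1.0                     # exact dA/dc for this linear dependence
lam0, sens = eigenvalue_sensitivity(A_of(c0), dA, 0)

# Finite-difference check, tracking the nearest eigenvalue after a small change.
eps = 1e-6
lam_pert = np.linalg.eigvals(A_of(c0 + eps))
fd = (lam_pert[np.argmin(np.abs(lam_pert - lam0))] - lam0) / eps
print(sens, fd)  # the two derivatives agree to first order
```

The real part of such a sensitivity is exactly the "influence coefficient" for stability: it tells the designer whether increasing a parameter moves an eigenvalue toward or away from the unstable half-plane.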
Elik, Aysel; Yanık, Derya Koçak; Maskan, Medeni; Göğüş, Fahrettin
2016-05-01
The present study was undertaken to assess the effects of three different concentration processes (open-pan, rotary vacuum evaporator and microwave heating) on the evaporation rate and the color and phenolics content of blueberry juice. A kinetic model study of the changes in soluble solids content (°Brix), color parameters and phenolics content during evaporation was also performed. The final juice concentration of 65 °Brix was achieved in 12, 15, 45 and 77 min for microwave at 250 and 200 W, rotary vacuum and open-pan evaporation processes, respectively. Color changes associated with heat treatment were monitored using a Hunter colorimeter (L*, a* and b*). All Hunter color parameters decreased with time, and all of the studied concentration techniques caused color degradation. The severity of color loss was higher with the open-pan technique than with the others. Evaporation also affected the total phenolics content of the blueberry juice. Total phenolics loss during concentration was highest for the open-pan technique (36.54 %) and lowest for microwave heating at 200 W (34.20 %). Thus, the microwave technique could be advantageous in the food industry, producing blueberry juice concentrate of better quality in a shorter operation time. A first-order kinetic model was applied to model the changes in soluble solids content, and a zero-order kinetic model was used to model the changes in color parameters and phenolics content.
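A first-order kinetic fit of the kind described reduces to a straight-line regression on ln C, since C(t) = C0·exp(-kt). The sketch below uses invented phenolics concentrations, not the study's measurements.

```python
import numpy as np

# Hypothetical phenolics concentrations during evaporation (illustrative).
t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0, 77.0])          # min
C = np.array([820.0, 774.0, 730.0, 690.0, 633.0, 582.0, 522.0])  # mg/L

# First-order model: ln C is linear in t, slope = -k.
slope, intercept = np.polyfit(t, np.log(C), 1)
k, C0 = -slope, np.exp(intercept)
half_life = np.log(2) / k
print(k, half_life)
```

A zero-order model, by contrast, would be fitted the same way but on C itself rather than ln C; comparing residuals between the two fits is how the appropriate order is chosen.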
Image analysis and green tea color change kinetics during thin-layer drying.
Shahabi, Mohammad; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinpour, Soleiman
2014-09-01
This study was conducted to investigate the effect of air temperature and air flow velocity on the kinetics of color parameter changes during hot-air drying of green tea, to obtain the best model for hot-air drying of green tea, to apply a computer vision system, and to study the color changes during drying. In the proposed computer vision system, RGB values of the images were first converted into XYZ values and then to Commission Internationale de l'Eclairage L*a*b* color coordinates. The obtained color parameters L*, a* and b* were calibrated against a Hunter-Lab colorimeter. These values were also used for calculation of the color difference, chroma, hue angle and browning index. The values of L* and b* decreased, while the values of a* and the color difference (ΔE*ab) increased during hot-air drying. Drying data were fitted to three kinetic models: zero-order, first-order and fractional conversion models were utilized to describe the color changes of green tea. The suitability of fit was determined using the coefficient of determination (R²) and the root-mean-square error. Results showed that the fractional conversion model fitted most of the color parameters better than the other two models.
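The RGB → XYZ → L*a*b* pipeline can be sketched with the standard sRGB (D65) conversion constants. This is a generic colorimetric conversion as a minimal sketch, not the paper's calibrated transform against the Hunter-Lab colorimeter.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert 8-bit sRGB to CIE L*a*b* (D65 white) via XYZ."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    # XYZ -> L*a*b*, normalised by the D65 reference white.
    white = np.array([0.95047, 1.0, 1.08883])
    ratio = xyz / white
    f = np.where(ratio > 0.008856, ratio ** (1 / 3), 7.787 * ratio + 16 / 116)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

print(srgb_to_lab([255, 255, 255]))  # ≈ [100, 0, 0] for the reference white
```

The color difference between two images then follows directly as ΔE*ab, the Euclidean distance between their L*a*b* vectors.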
Estimation of population trajectories from count data
Link, W.A.; Sauer, J.R.
1997-01-01
Monitoring of changes in animal population size is rarely possible through complete censuses; frequently, the only feasible means of monitoring changes in population size is to use counts of animals obtained by skilled observers as indices to abundance. Analysis of changes in population size can be severely biased if factors related to the acquisition of data are not adequately controlled for. In particular we identify two types of observer effects: these correspond to baseline differences in observer competence, and to changes through time in the ability of individual observers. We present a family of models for count data in which the first of these observer effects is treated as a nuisance parameter. Conditioning on totals of negative binomial counts yields a Dirichlet compound multinomial vector for each observer. Quasi-likelihood is used to estimate parameters related to population trajectory and other parameters of interest; model selection is carried out on the basis of Akaike's information criterion. An example is presented using data on Wood thrush from the North American Breeding Bird Survey.
The human as a detector of changes in variance and bandwidth
NASA Technical Reports Server (NTRS)
Curry, R. E.; Govindaraj, T.
1977-01-01
The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
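An adaptive staircase of the kind used to measure these thresholds can be sketched with a simulated observer. The 2-down/1-up rule below (which converges near the 70.7%-correct point) and the observer's threshold and slope are illustrative assumptions, not the experiment's exact protocol.

```python
import math
import random

random.seed(3)

def observer_detects(delta, threshold=0.17, slope=8.0):
    """Simulated observer: detection probability rises sigmoidally with
    the fractional parameter change delta (threshold/slope are assumed)."""
    p = 1 / (1 + math.exp(-slope * (delta - threshold) / threshold))
    return random.random() < p

# 2-down/1-up staircase: halve the streak and step down after two
# consecutive detections, step up after each miss.
delta, step = 0.50, 1.25
reversals, direction, streak = [], -1, 0
while len(reversals) < 12:
    if observer_detects(delta):
        streak += 1
        if streak == 2:
            streak = 0
            if direction == +1:
                reversals.append(delta)
            direction = -1
            delta /= step
    else:
        streak = 0
        if direction == -1:
            reversals.append(delta)
        direction = +1
        delta *= step

# Threshold estimate: geometric mean of the last 8 reversal levels.
estimate = math.exp(sum(math.log(d) for d in reversals[-8:]) / 8)
print(round(estimate, 3))
```

Averaging reversal levels is the standard way to read a threshold off a staircase; with a multiplicative step the geometric mean is the natural choice.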
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2017-12-01
Data assimilation (DA) is a widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g. less than 10 years). Using limited records within a DA framework imposes non-stationarity on the estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, serves as a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to uptake water, transpire, and assimilate carbon under favorable conditions and to quickly shut down at the onset of water stress.
Hydraulic modeling analysis of the Middle Rio Grande - Escondida Reach, New Mexico
Amanda K. Larsen
2007-01-01
Human influence on the Middle Rio Grande has resulted in major changes throughout the Middle Rio Grande region in central New Mexico, including problems with erosion and sedimentation. Hydraulic modeling analyses have been performed on the Middle Rio Grande to determine changes in channel morphology and other important parameters. Important changes occurring in the...
NASA Technical Reports Server (NTRS)
Moghaddam, Mahta
1995-01-01
In this work, the application of an inversion algorithm based on a nonlinear optimization technique to retrieve forest parameters from multifrequency polarimetric SAR data is discussed. The approach discussed here allows for retrieving and monitoring changes in forest parameters in a quantitative and systematic fashion using SAR data. The parameters to be inverted directly from the data are the electromagnetic scattering properties of the forest components, such as their dielectric constants and size characteristics. Once these are known, attributes such as canopy moisture content can be obtained, which are useful in ecosystem models.
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Fanselow, J. L.
1987-01-01
This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models, applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search for behavioural parameter sets is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
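The core computation, relating changes in selected physical parameters to changes in the system matrices through a linear sensitivity matrix and solving by SVD-based least squares, can be sketched on a toy spring-mass model. The "airframe" here is an invented 3-DOF stiffness matrix, not the paper's finite element model.

```python
import numpy as np

def stiffness(p):
    # Toy structure: stiffness matrix depending linearly on two physical
    # parameters (illustrative stand-ins for member stiffnesses).
    k1, k2 = p
    return np.array([[k1 + k2, -k2, 0.0],
                     [-k2, k2 + 4.0, -4.0],
                     [0.0, -4.0, 9.0]])

p0 = np.array([5.0, 2.0])        # analyst's initial model
p_true = np.array([5.6, 1.7])    # parameters of the "measured" structure

# Linear sensitivity matrix: d(vec K)/dp, one column per parameter,
# built here by finite differences.
S = np.column_stack([
    (stiffness(p0 + dp) - stiffness(p0)).ravel() / 1e-6
    for dp in 1e-6 * np.eye(2)])

# Mismatch between the "measured" and modelled system matrices.
residual = (stiffness(p_true) - stiffness(p0)).ravel()

# Least-squares parameter update via the SVD-based pseudo-inverse.
delta_p = np.linalg.pinv(S) @ residual
print(p0 + delta_p)  # recovers the "measured" parameters
```

Because the matrix depends linearly on the parameters here, one update recovers them exactly; in a real airframe model the same step is iterated, with constraints keeping the updated parameters physically meaningful.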
Part-to-itself model inversion in process compensated resonance testing
NASA Astrophysics Data System (ADS)
Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Aldrin, John C.; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
Process Compensated Resonance Testing (PCRT) is a non-destructive evaluation (NDE) method involving the collection and analysis of a part's resonance spectrum to characterize its material or damage state. Prior work used the finite element method (FEM) to develop forward modeling and model inversion techniques. In many cases, the inversion problem can become confounded by multiple parameters having similar effects on a part's resonance frequencies. To reduce the influence of confounding parameters and isolate the change in a part (e.g., creep), a part-to-itself (PTI) approach can be taken. A PTI approach involves inverting only the change in resonance frequencies from the before and after states of a part. This approach reduces the possible inversion parameters to only those that change in response to in-service loads and damage mechanisms. To evaluate the effectiveness of using a PTI inversion approach, creep strain and material properties were estimated in virtual and real samples using FEM inversion. Virtual and real dog bone samples composed of nickel-based superalloy Mar-M-247 were examined. Virtual samples were modeled with typically observed variations in material properties and dimensions. Creep modeling was verified with the collected resonance spectra from an incrementally crept physical sample. All samples were inverted against a model space that allowed for change in the creep damage state and the material properties but was blind to initial part dimensions. Results quantified the capabilities of PTI inversion in evaluating creep strain and material properties, as well as its sensitivity to confounding initial dimensions.
Multi-nutrient, multi-group model of present and future oceanic phytoplankton communities
NASA Astrophysics Data System (ADS)
Litchman, E.; Klausmeier, C. A.; Miller, J. R.; Schofield, O. M.; Falkowski, P. G.
2006-11-01
Phytoplankton community composition profoundly affects patterns of nutrient cycling and the dynamics of marine food webs; therefore predicting present and future phytoplankton community structure is crucial to understand how ocean ecosystems respond to physical forcing and nutrient limitations. We develop a mechanistic model of phytoplankton communities that includes multiple taxonomic groups (diatoms, coccolithophores and prasinophytes), nutrients (nitrate, ammonium, phosphate, silicate and iron), light, and a generalist zooplankton grazer. Each taxonomic group was parameterized based on an extensive literature survey. We test the model at two contrasting sites in the modern ocean, the North Atlantic (North Atlantic Bloom Experiment, NABE) and subarctic North Pacific (ocean station Papa, OSP). The model successfully predicts general patterns of community composition and succession at both sites: In the North Atlantic, the model predicts a spring diatom bloom, followed by coccolithophore and prasinophyte blooms later in the season. In the North Pacific, the model reproduces the low chlorophyll community dominated by prasinophytes and coccolithophores, with low total biomass variability and high nutrient concentrations throughout the year. Sensitivity analysis revealed that the identity of the most sensitive parameters and the range of acceptable parameters differed between the two sites. We then use the model to predict community reorganization under different global change scenarios: a later onset and extended duration of stratification, with shallower mixed layer depths due to increased greenhouse gas concentrations; increase in deep water nitrogen; decrease in deep water phosphorus and increase or decrease in iron concentration. 
To estimate uncertainty in our predictions, we used a Monte Carlo sampling of the parameter space where future scenarios were run using parameter combinations that produced acceptable modern day outcomes and the robustness of the predictions was determined. Change in the onset and duration of stratification altered the timing and the magnitude of the spring diatom bloom in the North Atlantic and increased total phytoplankton and zooplankton biomass in the North Pacific. Changes in nutrient concentrations in some cases changed dominance patterns of major groups, as well as total chlorophyll and zooplankton biomass. Based on these scenarios, our model suggests that global environmental change will inevitably alter phytoplankton community structure and potentially impact global biogeochemical cycles.
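The Monte Carlo acceptance step, keeping only parameter combinations that reproduce modern-day observations and propagating them to a future scenario, can be sketched with a deliberately trivial stand-in model. All names, ranges, and numbers below are invented; the real model has many more parameters and outputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Trivial stand-in for the plankton model: "modern" biomass depends on a
# growth and a loss parameter; the future scenario perturbs the forcing.
def biomass(growth, loss, forcing=1.0):
    return forcing * growth / loss

# Monte Carlo sampling of the parameter space.
growth = rng.uniform(0.2, 1.2, 20000)
loss = rng.uniform(0.1, 0.6, 20000)
modern = biomass(growth, loss)

# Keep only parameter sets producing acceptable modern-day outcomes.
observed, tolerance = 2.0, 0.2
ok = np.abs(modern - observed) < tolerance

# Propagate the accepted ensemble to the future scenario; its spread is
# the parameter-induced uncertainty in the prediction.
future = biomass(growth[ok], loss[ok], forcing=1.3)
print(ok.sum(), future.mean(), future.std())
```

The spread of the accepted ensemble under the future forcing is exactly the robustness measure the abstract describes: predictions that persist across all acceptable parameter combinations are considered reliable.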
NASA Astrophysics Data System (ADS)
Sampath, D. M. R.; Boski, T.
2018-05-01
Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), the current velocity power of the erosion rate function (N), and the sea-level rise rate. An assessment of the sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted spatial differentiation between accretion and erosion: erosion was especially marked in the mudflats between mean sea level and the low-tide level, while accretion occurred mainly in a subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the states of the intertidal and subtidal compartments of the estuarine system vary differently according to the dominant processes (erosion and accretion). As the Guadiana estuarine system shows dominantly erosional behaviour in the context of sea-level rise and the reduction of sediment supply after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy is acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation.
Therefore, the idealized but cost-effective HESM model will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro
Integral projection models for finite populations in a stochastic environment.
Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik
2011-05-01
Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
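A diffusion approximation of this kind is driven by just three population parameters. The Euler-type simulation below is a generic illustration of the idea, with demographic variance scaling with N and environmental variance with N², and the parameter values invented; it is not the paper's model or its numerical methods.

```python
import numpy as np

def simulate(N0, r, s2e, s2d, years, n_paths, rng):
    """Euler simulation of the diffusion approximation: mean growth
    rate r, environmental variance s2e (term scales with N^2) and
    demographic variance s2d (term scales with N)."""
    N = np.full(n_paths, float(N0))
    for _ in range(years):
        noise = rng.standard_normal(n_paths)
        N = N + r * N + np.sqrt(s2e * N**2 + s2d * N) * noise
        N = np.maximum(N, 0.0)          # extinction is absorbing
    return N

rng = np.random.default_rng(11)
final = simulate(N0=50, r=0.01, s2e=0.005, s2d=0.5, years=100,
                 n_paths=5000, rng=rng)
extinct = np.mean(final == 0.0)
print(final.mean(), extinct)
```

Because demographic noise scales with N rather than N², it dominates at small population sizes, which is why the demographic variance matters most for the small populations of conservation concern mentioned in the abstract.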
Huber, Heinrich J; Connolly, Niamh M C; Dussmann, Heiko; Prehn, Jochen H M
2012-03-01
We devised an approach to extract control principles of cellular bioenergetics for intact and impaired mitochondria from ODE-based models and applied it to a recently established bioenergetic model of cancer cells. The approach used two methods for varying ODE model parameters to determine those model components that, either alone or in combination with other components, most decisively regulated bioenergetic state variables. We found that, while polarisation of the mitochondrial membrane potential (ΔΨ(m)) and, therefore, the protomotive force were critically determined by respiratory complex I activity in healthy mitochondria, complex III activity was dominant for ΔΨ(m) during conditions of cytochrome-c deficiency. As a further important result, cellular bioenergetics in healthy, ATP-producing mitochondria was regulated by three parameter clusters that describe (1) mitochondrial respiration, (2) ATP production and consumption and (3) coupling of ATP-production and respiration. These parameter clusters resembled metabolic blocks and their intermediaries from top-down control analyses. However, parameter clusters changed significantly when cells changed from low to high ATP levels or when mitochondria were considered to be impaired by loss of cytochrome-c. This change suggests that the assumption of static metabolic blocks by conventional top-down control analyses is not valid under these conditions. Our approach is complementary to both ODE and top-down control analysis approaches and allows a better insight into cellular bioenergetics and its pathological alterations.
NASA Astrophysics Data System (ADS)
Germer, S.; Bens, O.; Hüttl, R. F.
2008-12-01
The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier to comprehend for non-scientific stakeholders. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.
NASA Astrophysics Data System (ADS)
Yang, Bo; Li, Xiao-Teng; Chen, Wei; Liu, Jian; Chen, Xiao-Song
2016-10-01
A self-questioning mechanism, similar to the single spin-flip of the Ising model in statistical physics, is introduced into a spatial evolutionary game model. We propose a game model with altruistic-to-spiteful preferences via weighted sums of one's own and the opponent's payoffs. This game model can be transformed into an Ising model with an external field. Both the interaction between spins and the external field are determined by the elements of the payoff matrix and the preference parameter. In the case of perfect rationality at zero social temperature, this game model has three different phases: an entirely cooperative phase, an entirely non-cooperative phase and a mixed phase. In Monte Carlo investigations of the game model, two paths through the payoff and preference parameters are taken. Along one path, the system undergoes a discontinuous transition from the cooperative phase to the non-cooperative phase as the preference parameter changes. Along the other path, two continuous transitions appear one after another as the system changes from the cooperative phase to the non-cooperative phase with the preference parameter. The critical exponents ν, β, and γ of the two continuous phase transitions are estimated by finite-size scaling analysis. Both continuous phase transitions have the same critical exponents and belong to the same universality class as the two-dimensional Ising model. Supported by the National Natural Science Foundation of China under Grant Nos. 11121403 and 11504384.
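The mapping to an Ising model with an external field can be illustrated with a standard single-spin-flip Metropolis simulation, where the coupling J and field h stand in for combinations of payoff-matrix elements and the preference parameter. The lattice size, temperature, and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# 2D Ising model with external field h: E = -J sum s_i s_j - h sum s_i.
L, J, h, T = 16, 1.0, 0.5, 1.5          # T below the zero-field critical point
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, J, h, T, rng):
    """One Metropolis sweep: L*L random single spin-flip attempts."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * (J * nb + h)   # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

for _ in range(300):
    sweep(spins, J, h, T, rng)
m = spins.mean()
print(m)  # ordered phase aligned with the field ("cooperative" state)
```

In the game interpretation, the magnetization plays the role of the cooperation level, and sweeping J and h along different paths in parameter space is what produces the discontinuous and continuous transitions described in the abstract.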
NASA Astrophysics Data System (ADS)
Ahmadalipour, A.; Rana, A.; Qin, Y.; Moradkhani, H.
2014-12-01
Trends and changes in future climatic parameters, such as precipitation and temperature, have been a central part of climate change studies. In the present work, we have analyzed the seasonal and yearly trends and prediction uncertainties in all 10 sub-basins of the Columbia River Basin (CRB) for the future period 2010-2099. The work is carried out using two different sets of statistically downscaled Global Climate Model (GCM) projection datasets: Bias Correction and Statistical Downscaling (BCSD), generated at Portland State University, and the Multivariate Adaptive Constructed Analogs (MACA), generated at the University of Idaho. The analysis is done with 10 GCM downscaled products each from the CMIP5 daily dataset, totaling 40 different downscaled products for robust analysis. Summer, winter and yearly trend analysis is performed for all 10 sub-basins using linear regression (significance tested by Student's t test) and the Mann-Kendall test (0.05 significance level) for precipitation (P), maximum temperature (Tmax) and minimum temperature (Tmin). Thereafter, all the parameters are modelled for uncertainty, across all models, in all 10 sub-basins and across the CRB for future scenario periods. Results indicate varied degrees of trends for all the sub-basins, mostly pointing towards a significant increase in all three climatic parameters, for all seasons and yearly considerations. Uncertainty analysis has revealed very high changes in all the parameters across models and sub-basins under consideration. Basin-wide uncertainty analysis is performed to corroborate results from the smaller, sub-basin scale. Similar trends and uncertainties are reported on the larger scale as well. Interestingly, both trends and uncertainties are higher during the winter period than during summer, contributing to a large part of the yearly change.
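The Mann-Kendall test used for the trend analysis can be sketched in a few lines (no tie correction; the study's implementation may differ in details):

```python
import math

def mann_kendall(x, alpha=0.05):
    """Mann-Kendall trend test without tie correction: returns the S
    statistic, the standard normal score Z, and whether the trend is
    significant at the two-sided level alpha."""
    n = len(x)
    s = sum((x[k] > x[j]) - (x[k] < x[j])
            for j in range(n - 1) for k in range(j + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal CDF via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, z, p < alpha

# A strictly increasing series is flagged as a significant upward trend.
s, z, sig = mann_kendall(list(range(20)))
```

For a strictly monotone series of length n, S reaches its extreme value n(n-1)/2 and the trend is detected with high confidence.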
USDA-ARS?s Scientific Manuscript database
Quantifying magnitudes and frequencies of rainless times between storms (TBS), or storm occurrence, is required for generating continuous sequences of precipitation for modeling inputs to small watershed models for conservation studies. Two parameters characterize TBS, minimum TBS (MTBS) and averag...
Mapping (dis)agreement in hydrologic projections
NASA Astrophysics Data System (ADS)
Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.
2018-03-01
Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.
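Per basin, the (dis)agreement being mapped reduces to counting signs of projected change across the ensemble of climate model x hydrologic model x parameter combinations; a minimal sketch with invented model names and change values:

```python
from itertools import product

def sign_agreement(changes):
    """Fraction of ensemble members agreeing with the majority sign of
    change. `changes` maps (climate model, hydrologic model, parameter
    set) tuples to a projected change in, e.g., mean annual runoff."""
    signs = [1 if v > 0 else -1 for v in changes.values()]
    pos = signs.count(1)
    return max(pos, len(signs) - pos) / len(signs)

# Hypothetical ensemble for one basin: 2 climate models x 2 hydrologic
# models x 2 parameter sets (all names and numbers are illustrative).
ensemble = {
    key: dv
    for key, dv in zip(
        product(["GCM-A", "GCM-B"], ["HM-1", "HM-2"], ["p1", "p2"]),
        [0.12, 0.08, -0.03, 0.05, 0.11, -0.02, 0.07, 0.04],
    )
}
agreement = sign_agreement(ensemble)  # 6 of 8 members project an increase
```

Mapping this agreement fraction over the 605 basins would reproduce the kind of (dis)agreement map the study presents.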
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
A one dimensional moving bed biofilm reactor model for nitrification of municipal wastewaters.
Barry, Ugo; Choubert, Jean-Marc; Canler, Jean-Pierre; Pétrimaux, Olivier; Héduit, Alain; Lessard, Paul
2017-08-01
This work presents a one-dimensional model of a moving bed biofilm reactor (MBBR) process designed for the removal of nitrogen from raw wastewaters. A comprehensive experimental strategy was deployed at a semi-industrial pilot-scale plant fed with a municipal wastewater operated at 10-12 °C and surface loading rates of 1-2 g filtered COD/m²·d and 0.4-0.55 g NH4-N/m²·d. Data were collected on influent/effluent composition and on key variables or parameters (biofilm mass and maximal thickness, thickness of the limit liquid layer, maximal nitrification rate, oxygen mass transfer coefficient). Based on time-course variations in these variables, the MBBR model was calibrated at two time-scales and magnitudes of dynamic conditions, i.e., short-term (4 days) calibration under dynamic conditions and long-term (33 days) calibration, and for three types of carriers. A set of parameters suitable for the conditions was proposed, and the calibrated parameter set is able to simulate the time-course change of nitrogen forms in the effluent of the MBBR tanks under the tested operating conditions. Parameters linked to diffusion had a strong influence on how robustly the model reproduces time-course changes in effluent quality. The model was then used to optimize the operation of the MBBR layout; the main optimization track consists of limiting the aeration supply without changing the overall performance of the process. Further work should investigate the influence of hydrodynamic conditions on the thickness of the limit liquid layer and on the "apparent" diffusion coefficient in the biofilm.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Estimating varying coefficients for partial differential equation models.
Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J
2017-09-01
Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.
Förster-type energy transfer as a probe for changes in local fluctuations of the protein matrix.
Somogyi, B; Matkó, J; Papp, S; Hevessy, J; Welch, G R; Damjanovich, S
1984-07-17
Much evidence, on both theoretical and experimental sides, indicates the importance of local fluctuations (in energy levels, conformational substates, etc.) of the macromolecular matrix in the biological activity of proteins. We describe here a novel application of the Förster-type energy-transfer process capable of monitoring changes both in local fluctuations and in conformational states of macromolecules. A new energy-transfer parameter, f, is defined as the average transfer efficiency, ⟨E⟩, normalized by the actual average quantum efficiency of the donor fluorescence, ⟨Φ_D⟩. A simple oscillator model (for a one donor-one acceptor system) is presented to show the sensitivity of this parameter to changes in the amplitudes of local fluctuations. The different modes of averaging (static, dynamic, and intermediate cases) occurring for a given value of the average transfer rate, ⟨k_t⟩, and the experimental requirements as well as limitations of the method are also discussed. The experimental tests were performed on the ribonuclease T1-pyridoxamine 5'-phosphate conjugate (a one donor-one acceptor system) by studying the change of the f parameter with temperature, an environmental parameter expected to perturb local fluctuations of proteins. The parameter f increased with increasing temperature, as expected on the basis of the oscillator model, suggesting that it really reflects changes in fluctuation amplitudes (significant changes in the orientation factor, κ², as well as in the spectral properties of the fluorophores, can be excluded by anisotropy measurements and spectral investigations). Possibilities for the general applicability of the method are also discussed.
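The distance dependence underlying the transfer efficiency and the normalisation defining f can be sketched as follows; the donor quantum-efficiency value is illustrative, and a real analysis would use ensemble-averaged E and Φ_D as the abstract describes:

```python
def forster_efficiency(r, r0):
    """Standard Forster transfer efficiency for a donor-acceptor
    distance r and Forster radius r0: E = 1 / (1 + (r/r0)^6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def f_parameter(e_avg, phi_d_avg):
    """The paper's parameter f: the average transfer efficiency
    normalised by the actual average donor quantum efficiency. Both
    quantities are supplied here as already-averaged measurements."""
    return e_avg / phi_d_avg

# At r = r0 the transfer efficiency is exactly 0.5 by construction;
# phi_D = 0.4 is an illustrative value, not from the paper.
e = forster_efficiency(1.0, 1.0)
f = f_parameter(e, 0.4)
```

Because fluctuations of the matrix modulate r (and hence E) while also affecting the donor quantum yield, tracking f rather than E alone isolates the fluctuation-sensitive part of the signal.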
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis is performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) rate of livestock reproduction, iv) maximum crop yield, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indices ranging between 0.22 and 1.28. These results reveal high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
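A One-Factor-At-A-Time screening can be sketched as follows; the toy gross-margin model and the normalised elasticity-style index are assumptions for illustration, not the study's disaggregated farm model:

```python
def oat_sensitivity(model, base, deltas):
    """One-Factor-At-A-Time screening: perturb each parameter in turn
    by a relative step and report a normalised sensitivity index
    |dY/Y| / |dX/X| (an illustrative definition; the paper's exact
    index may differ)."""
    y0 = model(base)
    indices = {}
    for name, rel in deltas.items():
        perturbed = dict(base)
        perturbed[name] = base[name] * (1.0 + rel)
        y1 = model(perturbed)
        indices[name] = abs((y1 - y0) / y0) / abs(rel)
    return indices

# Toy gross-margin model (purely illustrative): margin responds to
# crop yield, price, and the cost of irrigation water.
def margin(p):
    return p["yield"] * p["price"] - 0.5 * p["water"]

base = {"yield": 10.0, "price": 2.0, "water": 4.0}
idx = oat_sensitivity(margin, base,
                      {"yield": 0.1, "price": 0.1, "water": 0.1})
```

Ranking the resulting indices reproduces the kind of ordered influence list reported in the abstract.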
Fractional blood flow in oscillatory arteries with thermal radiation and magnetic field effects
NASA Astrophysics Data System (ADS)
Bansi, C. D. K.; Tabi, C. B.; Motsumi, T. G.; Mohamadou, A.
2018-06-01
A fractional model is proposed to study the effect of heat transfer and magnetic field on blood flowing inside oscillatory arteries. The flow is due to a periodic pressure gradient, and the fractional model equations include body acceleration. The proposed velocity and temperature distribution equations are solved using the Laplace and Hankel transforms. The effect of the fluid parameters, such as the Reynolds number (Re), the magnetic parameter (M) and the radiation parameter (N), is studied graphically while varying the fractional-order parameter. It is found that the fractional derivative is a valuable tool to control both the temperature and velocity of blood when flow parameters change, for example under treatment. Besides, this work highlights the fact that in the presence of a strong magnetic field, blood velocity and temperature decrease. A reversed effect is observed when the applied thermal radiation increases: the velocity and temperature of blood increase. However, the temperature remains high around the artery centerline, which is appropriate during treatment to avoid tissue damage.
NASA Astrophysics Data System (ADS)
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Vionnet, Vincent; Guyomarc'h, Gilbert; Heiser, Micha; Nishimura, Kouichi
2015-04-01
Wind and the associated snow drift are dominating factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns. It does not, however, provide a quantitative estimate of changes in snow depths. The objective of our research was to introduce a new parameter to quantify changes in snow depths in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its consistently bi-modal wind directions. Our work focused on two pronounced, approximately 10 m high terrain breaks, and we worked with 1 m resolution digital snow surface models (DSM). The DSM and measured changes in snow depths were obtained with high-accuracy terrestrial laser scan (TLS) measurements. First we calculated the terrain-based parameter Sx on a digital snow surface model and correlated Sx with measured changes in snow depths (ΔSH). Results showed that ΔSH can be approximated by ΔSH_estimated = α · Sx, where α is a newly introduced parameter. The parameter α has been shown to be linked to the amount of snow deposited, as influenced by blowing snow flux. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per time and area. Hence, the SPCs provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter α.
To simulate the development of the snow surface as a function of Sx, SPC flux and time, we apply a simple cellular automaton. The system consists of raster cells that develop through discrete time steps according to a set of rules based on the states of neighbouring cells. Our model assumes snow transport driven by Sx gradients between neighbouring cells; the cells evolve based on difference quotients between neighbouring cells. Our analyses and results are steps towards using the terrain-based parameter Sx, coupled with SPC data, to quantitatively estimate changes in snow depths at high raster resolutions of 1 m.
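The estimate ΔSH ≈ α · Sx can be embedded in a toy cellular automaton; this sketch applies α · Sx per cell directly rather than the neighbour-gradient rules the study uses, and all numbers are illustrative:

```python
def step_snow_ca(depth, sx, alpha):
    """One discrete time step of a 1-D cellular-automaton sketch of snow
    redistribution: each cell's depth changes by Delta_SH = alpha * Sx.
    In the study, alpha is tied to the measured blowing-snow flux; here
    it is a plain constant (an assumption of this sketch)."""
    return [d + alpha * s for d, s in zip(depth, sx)]

def run_ca(depth, sx, alpha, steps):
    """Iterate the automaton, clamping depths at zero (no negative snow)."""
    for _ in range(steps):
        depth = [max(0.0, d) for d in step_snow_ca(depth, sx, alpha)]
    return depth

# Sheltered cells (positive Sx) accumulate while exposed cells
# (negative Sx) erode; values are illustrative, not from the
# Col du Lac Blanc data.
sx = [-0.5, -0.2, 0.0, 0.3, 0.6]
final = run_ca([1.0] * 5, sx, alpha=0.1, steps=3)
```

Coupling alpha to the SPC flux record would make the deposition rate event-dependent, which is the extension the study proposes.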
Optimal structure of metaplasticity for adaptive learning
2017-01-01
Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using mean-field and Monte Carlo simulations, we identified 'superior' metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models can achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning, since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that the ubiquitous unreliability of synaptic changes evinces metaplasticity, which can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus for adaptive learning. PMID:28658247
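The buffer/reservoir idea rests on a simpler baseline: a population of plain binary synapses estimating a reward probability, where the transition probability q sets the adaptability-precision tradeoff. A minimal sketch of that baseline (all values illustrative; the paper's metaplastic models add meta-states on top of this):

```python
import random

def estimate_reward_prob(p_reward, q, n_syn=500, trials=1000, seed=0):
    """Population of plain binary synapses tracking a reward
    probability: on a rewarded trial each 'down' synapse potentiates
    with probability q; on an unrewarded trial each 'up' synapse
    depresses with probability q. The readout is the fraction of 'up'
    synapses, whose steady-state mean equals the reward probability.
    Larger q adapts faster but fluctuates more: the
    adaptability-precision tradeoff."""
    rng = random.Random(seed)
    up = n_syn // 2
    readings = []
    for _ in range(trials):
        if rng.random() < p_reward:
            up += sum(rng.random() < q for _ in range(n_syn - up))
        else:
            up -= sum(rng.random() < q for _ in range(up))
        readings.append(up / n_syn)
    # average the second half of the run, after the initial transient
    tail = readings[trials // 2:]
    return sum(tail) / len(tail)

est = estimate_reward_prob(p_reward=0.7, q=0.05)
```

At steady state the expected fraction of 'up' synapses solves p·q·(1-f) = (1-p)·q·f, giving f = p, so the readout is unbiased while its variance grows with q.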
An impact of environmental changes on flows in the reach scale under a range of climatic conditions
NASA Astrophysics Data System (ADS)
Karamuz, Emilia; Romanowicz, Renata J.
2016-04-01
The present paper combines detection and adequate identification of causes of changes in flow regime at cross-sections along the Middle River Vistula reach using different methods. Two main experimental set-ups (designs) have been applied to study the changes: a moving three-year window, and a low- and high-flow event-based approach. In the first experiment, a Stochastic Transfer Function (STF) model and a quantile-based statistical analysis of flow patterns were compared. These two methods are based on the analysis of changes in the STF model parameters and on standardised differences of flow quantile values. In the second experiment, in addition to the STF-based model, a 1-D distributed model, MIKE11, was applied. The first step of the procedure used in the study is to define the river reaches that have recorded information on land use and water management changes. The second task is to perform the moving-window analysis of standardised differences of flow quantiles and the moving-window optimisation of the STF model for flow routing. The third step consists of an optimisation of the STF and MIKE11 models for high- and low-flow events. The final step is to analyse the results and relate the standardised quantile changes and model parameter changes to historical land use changes and water management practices. Results indicate that both models give a consistent assessment of changes in the channel for medium and high flows. ACKNOWLEDGEMENTS This research was supported by the Institute of Geophysics Polish Academy of Sciences through the Young Scientist Grant no. 3b/IGF PAN/2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, according to their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models.
Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
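The classification step can be illustrated with a clustering sketch; here plain k-means stands in for the paper's PCA-plus-EM pipeline, and the two-dimensional "sensitivity signatures" are invented for illustration:

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means as a stand-in for the paper's PCA + EM clustering:
    basins, represented here by parameter-sensitivity vectors, are
    grouped so that basins in one cluster share sensitivity patterns
    and could share an inversion/calibration setup."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # recompute each center as its group's mean; keep old center
        # if a group goes empty
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g
                   else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Two synthetic sensitivity signatures (illustrative numbers): basins
# dominated by a soil parameter vs. basins dominated by a snow parameter.
pts = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15),
       (0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
centers, groups = kmeans(pts, k=2)
```

In the study's workflow, each resulting cluster (S-Class) would then share a single inverse-modeling setup, which is what makes the calibration transferable.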
NASA Astrophysics Data System (ADS)
Jensen, Robert K.; Fletcher, P.; Abraham, C.
1991-04-01
The segment mass proportions and moments of inertia of a sample of twelve females and seven males, with mean ages of 67.4 and 69.5 years respectively, were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model, the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The comparisons show that for some segments female parameters are quite different from male parameters and are inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory, although the common variance for some segments was moderately high. The use of non-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However, the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution that occur with aging, and on the changes which occur in cadaver samples prior to and following death.
Singh, Jyotsna; Singh, Phool; Malik, Vikas
2017-01-01
Parkinson disease alters the information patterns in movement-related pathways in the brain. Experimental results on rats show that the activity pattern changes from single-spike activity to a mixed burst mode in Parkinson disease. However, the cause of this change in activity pattern is not yet completely understood. The subthalamic nucleus is one of the main nuclei involved in the origin of motor dysfunction in Parkinson disease. In this paper, a single-compartment conductance-based model is considered which focuses on the subthalamic nucleus and synaptic input from the globus pallidus (external). This model shows highly nonlinear behavior with respect to various intrinsic parameters. The behavior of the model is presented with the help of activity patterns generated in healthy and Parkinsonian conditions. These patterns have been compared by calculating their correlation coefficient for different values of intrinsic parameters. Results show that the activity patterns are very sensitive to various intrinsic parameters, and calcium shows some promising results which provide insights into the motor dysfunction.
Combinatorial influence of environmental parameters on transcription factor activity.
Knijnenburg, T A; Wessels, L F A; Reinders, M J T
2008-07-01
Cells receive a wide variety of environmental signals, which are often processed combinatorially to generate specific genetic responses. Changes in transcript levels, as observed across different environmental conditions, can, to a large extent, be attributed to changes in the activity of transcription factors (TFs). However, in unraveling these transcription regulation networks, the actual environmental signals are often not incorporated into the model, simply because they have not been measured. The unquantified heterogeneity of the environmental parameters across microarray experiments frustrates regulatory network inference. We propose an inference algorithm that models the influence of environmental parameters on gene expression. The approach is based on a yeast microarray compendium of chemostat steady-state experiments. Chemostat cultivation enables the accurate control and measurement of many of the key cultivation parameters, such as nutrient concentrations, growth rate and temperature. The observed transcript levels are explained by inferring the activity of TFs in response to combinations of cultivation parameters. The interplay between activated enhancers and repressors that bind a gene promoter determine the possible up- or downregulation of the gene. The model is translated into a linear integer optimization problem. The resulting regulatory network identifies the combinatorial effects of environmental parameters on TF activity and gene expression. The Matlab code is available from the authors upon request. Supplementary data are available at Bioinformatics online.
Crystalline lens paradoxes revisited: significance of age-related restructuring of the GRIN.
Sheil, Conor J; Goncharov, Alexander V
2017-09-01
The accommodating volume-constant age-dependent optical (AVOCADO) model of the crystalline lens is used to explore the age-related changes in ocular power and spherical aberration. The additional parameter m in the GRIN lens model allows decoupling of the axial and radial GRIN profiles, and is used to stabilise the age-related change in ocular power. Data for age-related changes in ocular geometry and lens parameter P in the axial GRIN profile were taken from published experimental data. In our age-dependent eye model, the ocular refractive power shows behaviour similar to the previously unexplained "lens paradox". Furthermore, ocular spherical aberration agrees with the data average, in contrast to the proposed "spherical aberration paradox". The additional flexibility afforded by parameter m, which controls the ratio of the axial and radial GRIN profile exponents, has allowed us to study the restructuring of the lens GRIN medium with age, resulting in a new interpretation of the origin of the power and spherical aberration paradoxes. Our findings also contradict the conceptual idea that the ageing eye is similar to the accommodating eye.
Precipitation-runoff modeling system; user's manual
Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.
1983-01-01
The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)
Fletcher, Patrick; Bertram, Richard; Tabak, Joel
2016-06-01
Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K+ conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K+ conductances were captured across a wide variety of contexts of model parameters. For each type of K+ conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20) parameter spaces of excitable cell and signaling models.
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), process parameterizations and parameters, embody a high degree of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that can vary, while the remaining modeling elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is the extent to which the process parameterization and the system architecture (model structure) can support each other. In other words: "Does a specific form of process parameterization emerge for a given model from its system architecture and data, when little or no assumption is made about the process parameterization itself?" In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to what the data can actually support.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
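The two-step EPPES cycle described above — (i) draw ensemble parameter values from a proposal distribution, (ii) feed verification skill back into that distribution — can be sketched in miniature. Everything below (the Gaussian proposal, the toy skill function, the "true" parameter value) is an illustrative assumption, not the operational algorithm:

```python
import math, random

random.seed(0)

TRUE_PARAM = 2.5  # hypothetical "correct" tuning value (assumption)

def skill(theta):
    """Toy stand-in for forecast verification: higher is better."""
    return math.exp(-0.5 * (theta - TRUE_PARAM) ** 2)

# Proposal distribution for the tunable parameter, deliberately mis-specified.
mean, std = 0.0, 2.0

for _ in range(30):
    # (i) draw an ensemble of parameter values from the proposal
    ensemble = [random.gauss(mean, std) for _ in range(50)]
    # (ii) weight members by verification skill and refresh the proposal
    weights = [skill(t) for t in ensemble]
    total = sum(weights)
    mean = sum(w * t for w, t in zip(weights, ensemble)) / total
    var = sum(w * (t - mean) ** 2 for w, t in zip(weights, ensemble)) / total
    std = max(math.sqrt(var), 0.05)  # floor keeps the proposal from collapsing

print(round(mean, 2))
```

Because only the ensemble members themselves are re-weighted, the update adds essentially no computation beyond the forecasts already being run, which is the cost argument made in the abstract.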
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this much is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition of model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? And how can we use a quantified complexity to the benefit of modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as the "degrees of freedom" of a model, e.g. the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g. in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
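Two of the three complexity categories above lend themselves to a quick numerical illustration. The compressed-size proxy for code length and the toy parameter distributions are assumptions for demonstration only (Kolmogorov complexity itself is uncomputable):

```python
import math, zlib

# Category (2): code length as a complexity proxy. Compressed size is a
# crude, illustrative stand-in for the (uncomputable) shortest description.
simple_model = b"y = a*x"
complex_model = b"y = a*x + b*x**2 + c*sin(d*x) + e*exp(-f*x) + g*log(h + x)"
c_simple = len(zlib.compress(simple_model))
c_complex = len(zlib.compress(complex_model))
print(c_simple, c_complex)

# Category (3): Shannon entropy (bits) of a discretized parameter
# distribution -- a flatter (more uncertain) distribution carries more entropy.
def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

broad_prior = [0.25, 0.25, 0.25, 0.25]
sharp_posterior = [0.05, 0.85, 0.05, 0.05]
print(round(entropy(broad_prior), 3), round(entropy(sharp_posterior), 3))
```

The entropy drop from prior to posterior illustrates the non-static point in the abstract: as data sharpen the parameter distribution, the entropy-based complexity measure changes even though the model equations do not.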
Švajdlenková, H; Ruff, A; Lunkenheimer, P; Loidl, A; Bartoš, J
2017-08-28
We report a broadband dielectric spectroscopy (BDS) study on the clustering fragile glass-former meta-toluidine (m-TOL) from 187 K up to 289 K over a wide frequency range of 10^-3 to 10^9 Hz, with focus on the primary α relaxation and the secondary β relaxation above the glass temperature T_g. The broadband dielectric spectra were fitted using the Havriliak-Negami (HN) and Cole-Cole (CC) models. The β process, disappearing at T_β,disap = 1.12 T_g, exhibits non-Arrhenius dependence fitted by the Vogel-Fulcher-Tammann-Hesse (VFTH) equation, with T_0β^VFTH in accord with the characteristic differential scanning calorimetry (DSC) limiting temperature of the glassy state. The essential feature of the α process consists in the distinct changes of its spectral shape parameter β_HN, marked by the characteristic BDS temperatures T_B1^βHN and T_B2^βHN. The primary α relaxation times were fitted over the entire temperature and frequency range by several current three- to six-parameter dynamic models. This analysis reveals that the crossover temperatures of the idealized mode coupling theory model (T_c^MCT), the extended free volume model (T_0^EFV), and the two-order-parameter (TOP) model (T_mc) are close to T_B1^βHN, which provides a consistent physical rationalization for the first change of the shape parameter. In addition, the other two characteristic TOP temperatures, T_0^TOP and T_A, coincide with the thermodynamic Kauzmann temperature T_K and the second change of the shape parameter at around T_B2^βHN, respectively. These can be related to the onset of liquid-like domains in the glassy state and the disappearance of solid-like domains in the normal liquid state.
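The non-Arrhenius VFTH dependence mentioned above has the form τ(T) = τ0 exp(B / (T − T0)); a minimal sketch with illustrative (not fitted) parameter values:

```python
import math

# Vogel-Fulcher-Tammann-Hesse relaxation time: tau(T) = tau0 * exp(B / (T - T0)).
# Parameter values are illustrative, not the fitted m-TOL values.
tau0, B, T0 = 1e-14, 2000.0, 150.0

def tau_vft(T):
    return tau0 * math.exp(B / (T - T0))

# Non-Arrhenius signature: relaxation slows super-exponentially on cooling
# and diverges as T approaches T0.
for T in (280.0, 230.0, 190.0):
    print(T, f"{tau_vft(T):.3e}")
```

An Arrhenius process would give a straight line of log τ against 1/T; here the effective activation energy B·T²/(T − T0)² grows on cooling, which is the behavior the VFTH fit captures.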
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
To address the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with existing empirical models, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the strength of the optical scattering of the different materials and demonstrating the refined model's ability to characterize materials.
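The parameter-inversion step can be sketched in a heavily simplified form. A crude random search stands in for the genetic algorithm, and a one-lobe Gaussian stands in for the six-parameter model; both are illustrative assumptions, not the paper's method:

```python
import math, random

random.seed(7)

def toy_brdf(theta, sigma, f0):
    """Greatly simplified specular lobe: sigma plays the role of surface
    roughness, f0 of the Fresnel-like strength. A stand-in only, not the
    six-parameter model of the paper."""
    return f0 * math.exp(-(theta ** 2) / (2.0 * sigma ** 2))

# Synthetic "measurements" generated from known parameters.
true_sigma, true_f0 = 0.2, 0.8
angles = [0.05 * i for i in range(20)]
data = [toy_brdf(a, true_sigma, true_f0) for a in angles]

def fit_error(params):
    s, f = params
    return sum((toy_brdf(a, s, f) - d) ** 2 for a, d in zip(angles, data))

# Crude evolutionary-style random search in place of a full genetic algorithm.
best = (random.uniform(0.05, 1.0), random.uniform(0.1, 2.0))
for _ in range(5000):
    cand = (best[0] + random.gauss(0.0, 0.02), best[1] + random.gauss(0.0, 0.02))
    if cand[0] > 0.01 and fit_error(cand) < fit_error(best):
        best = cand
print(round(best[0], 2), round(best[1], 2))
```

A real genetic algorithm would maintain a population with crossover and mutation rather than a single hill-climbing candidate, but the inversion structure — minimize fitting error between the parametric BRDF and measured data — is the same.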
Retrospective forecast of ETAS model with daily parameters estimate
NASA Astrophysics Data System (ADS)
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic-Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of failure was underestimation of the forecast events, due to model parameters being kept fixed during the test. Moreover, the absence from the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters kept fixed during the test period.
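The ETAS conditional intensity underlying such forecasts is λ(t) = μ + Σ_{t_i < t} K exp(α(M_i − M_c)) (t − t_i + c)^−p. A minimal evaluation with illustrative fixed parameter values (the study instead re-estimates them daily):

```python
import math

# Hypothetical ETAS parameters (mu, K, alpha, c, p); the study re-estimates
# such parameters daily, whereas the values below are fixed and illustrative.
mu, K, alpha, c, p = 0.2, 0.05, 1.1, 0.01, 1.1
M_c = 3.0          # completeness magnitude (assumption)

def etas_rate(t, catalog):
    """Conditional intensity lambda(t) given past events (t_i, M_i)."""
    rate = mu      # background rate
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - M_c)) * (t - t_i + c) ** (-p)
    return rate

catalog = [(0.0, 6.3), (0.5, 4.8), (1.2, 4.1)]   # toy sequence (days, magnitude)
print(round(etas_rate(2.0, catalog), 3), round(etas_rate(10.0, catalog), 3))
```

The Omori-type power-law decay makes the rate fall off between the two evaluation times, which is why holding the parameters fixed after a mainshock far larger than anything in the learning catalog can badly underestimate the ongoing rate.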
An iterative hyperelastic parameters reconstruction for breast cancer assessment
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Samani, Abbas
2008-03-01
In breast elastography, breast tissues usually undergo large compressions, resulting in significant geometric and structural changes and, consequently, nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing tissue nonlinear behavior are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied Yeoh and polynomial models to describe the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom comprising a hemisphere connected to a cylinder. This phantom consists of two types of soft tissue, mimicking adipose and fibroglandular tissue, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.
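For reference, the incompressible three-term Yeoh model mentioned above has strain energy W = Σ_i c_i (I1 − 3)^i. A sketch of the resulting uniaxial Cauchy stress, with hypothetical soft-tissue-like coefficients (the values are not from the study):

```python
def yeoh_uniaxial_stress(lam, c1, c2, c3):
    """Cauchy stress for an incompressible material under uniaxial stretch
    lam, using the 3-term Yeoh strain energy W = sum_i c_i * (I1 - 3)**i."""
    i1 = lam ** 2 + 2.0 / lam                       # first invariant
    dW_dI1 = c1 + 2.0 * c2 * (i1 - 3.0) + 3.0 * c3 * (i1 - 3.0) ** 2
    return 2.0 * (lam ** 2 - 1.0 / lam) * dW_dI1

# Hypothetical soft-tissue-like coefficients (kPa); not fitted values.
c1, c2, c3 = 5.0, 1.0, 0.2
for lam in (1.0, 1.2, 1.5):
    print(lam, round(yeoh_uniaxial_stress(lam, c1, c2, c3), 3))
```

The higher-order terms make the stress stiffen progressively with stretch, which is the nonlinear behavior (and the classification signal) the iterative inversion aims to recover from large-compression data.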
Sensitivity analysis of the space shuttle to ascent wind profiles
NASA Technical Reports Server (NTRS)
Smith, O. E.; Austin, L. D., Jr.
1982-01-01
A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses, insight is gained into the contribution of winds to ascent subsystems flight parameters.
The Effect of Roughness Model on Scattering Properties of Ice Crystals.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Van Diedenhoven, Bastiaan
2016-01-01
We compare stochastic models of microscale surface roughness assuming uniform and Weibull distributions of crystal facet tilt angles to calculate scattering by roughened hexagonal ice crystals using the geometric optics (GO) approximation. Both distributions are determined by similar roughness parameters, while the Weibull model depends on an additional shape parameter. Calculations were performed for two visible wavelengths (864 nm and 410 nm), for roughness values between 0.2 and 0.7, and for Weibull shape parameters between 0 and 1.0, for crystals with aspect ratios of 0.21, 1 and 4.8. For this range of parameters we find that, for a given roughness level, varying the Weibull shape parameter can change the asymmetry parameter by up to about 0.05. The largest effect of the shape parameter variation on the phase function is found in the backscattering region, while the degree of linear polarization is most affected at side-scattering angles. For high roughness, scattering properties calculated using the uniform and Weibull models are in relatively close agreement for a given roughness parameter, especially when a Weibull shape parameter of 0.75 is used. For smaller roughness values, a shape parameter close to unity provides better agreement. Notable differences are observed in the phase function over the scattering angle range from 5° to 20°, where the uniform roughness model produces a plateau while the Weibull model does not.
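A toy comparison of the two tilt-angle distributions can be sampled directly. The exact link between the roughness parameter and each distribution is an illustrative assumption here, not the paper's parameterization:

```python
import random

random.seed(1)

# Toy comparison of the two facet tilt-angle models. The mapping from the
# roughness parameter sigma to each distribution is an illustrative
# assumption, not the exact parameterization of the paper.
def sample_tilts_uniform(sigma, n):
    """Uniform model: tilt angles spread evenly up to a roughness cutoff."""
    return [random.uniform(0.0, sigma) for _ in range(n)]

def sample_tilts_weibull(sigma, shape, n):
    """Weibull model: same roughness scale plus an extra shape parameter."""
    return [random.weibullvariate(sigma, shape) for _ in range(n)]

n = 20000
sigma = 0.5                # roughness parameter (radians, illustrative)
u = sample_tilts_uniform(sigma, n)
w = sample_tilts_weibull(sigma, 0.75, n)   # shape 0.75, as favored in the text
mean_u = sum(u) / n
mean_w = sum(w) / n
print(round(mean_u, 3), round(mean_w, 3))
```

Even with the same roughness scale, the Weibull shape parameter reshapes the tilt distribution (here producing a heavier tail and larger mean tilt), which is why the two models yield different phase functions at a matched roughness value.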
Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J
2013-10-28
Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time, a perturbed-parameter ensemble approach, varying climate-sensitive model parameters, has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than in many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations, and how they differ from the remaining simulations, in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the Last Glacial Maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate very different palaeoclimates and the present-day climate well. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not require a strong Charney climate sensitivity for future climate change. It produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Huiping; Qian, Yun; Zhao, Chun
2015-09-09
In this study, we adopt a parametric sensitivity analysis framework that integrates a quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors, and their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors (strength) depends on the evaluation metrics or the model fields of focus, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on the East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosol do not always respond monotonically to changes in the emission factors. The spatial patterns of both the sign and magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation can differ, depending on the aerosol type, when parameters are sampled in different ranges of values. We also identify the cloud microphysical parameters that show the most significant impact on the climatic effects induced by sulfate, BC and dust, respectively, in East Asia.
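A space-filling design like the study's 256-member ensemble can be sketched with a low-discrepancy sequence. The Halton construction below is a generic stand-in for the quasi-Monte Carlo sampler actually used, and the two-parameter space is illustrative:

```python
def halton(index, base):
    """One coordinate of the Halton low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# A 256-point space-filling design over a normalized 2-D parameter space,
# e.g. one cloud-microphysics parameter and one emission scaling factor
# (the pairing and the Halton construction are illustrative assumptions).
samples = [(halton(i, 2), halton(i, 3)) for i in range(1, 257)]
mean_x = sum(x for x, _ in samples) / len(samples)
print(len(samples), round(mean_x, 3))
```

Each sample would then drive one CAM5 run, and a surrogate (emulator) fitted to the 256 outputs stands in for the full model when computing sensitivities and interaction terms.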
Unified phenology model with Bayesian calibration for several European species in Belgium
NASA Astrophysics Data System (ADS)
Fu, Y. S. H.; Demarée, G.; Hamdi, R.; Deckmyn, A.; Deckmyn, G.; Janssens, I. A.
2009-04-01
Plant phenology is a good bio-indicator for climate change, and this has brought about a significant increase in interest. Many kinds of phenology models have been developed to analyze and predict the phenological response to climate change, and these models have been synthesized into a unified model that can be applied to different species and environments. In our study, we selected seven European woody plant species (Betula verrucosa, Quercus robur pedunculata, Fagus sylvatica, Fraxinus excelsior, Symphoricarpus racemosus, Aesculus hippocastanum, Robinia pseudoacacia) occurring at five sites distributed across Belgium. For these sites and tree species, phenological observations such as bud burst were available for the period 1956-2002. We also obtained regionally downscaled climatic data for each of these sites, and combined both data sets to test the unified model. We used a Bayesian approach to generate distributions of model parameters from the observation data. In this poster presentation, we compare parameter distributions between different species and between different sites for individual species. The results of the unified model show good agreement with the observations, except for Fagus sylvatica. The failure to reproduce the bud burst data for Fagus sylvatica suggests that factors not included in the unified model affect the phenology of this species. The parameter series show differences among species, as we expected. However, they also differed strongly for the same species among sites. Further work should elucidate the mechanisms that explain why model parameters differ among species and sites.
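A Bayesian calibration of the kind described can be sketched with a one-parameter Metropolis sampler on a toy degree-day model. The model form, threshold, likelihood and temperature series are all illustrative assumptions, not the unified model of the study:

```python
import math, random

random.seed(11)

# Toy thermal-time model: bud burst on the first day accumulated degree-days
# above 0 C exceed F_crit. A Metropolis sampler calibrates F_crit against a
# pseudo-observation (everything here is illustrative).
def burst_day(f_crit, daily_temp):
    acc = 0.0
    for day, temp in enumerate(daily_temp, start=1):
        acc += max(temp, 0.0)
        if acc >= f_crit:
            return day
    return len(daily_temp)

temps = [2.0 + 0.15 * d for d in range(120)]   # idealized spring warming
obs_day = burst_day(150.0, temps)              # pseudo-observation (F_crit = 150)

def log_post(f):
    if f <= 0.0:
        return -1e9                            # flat prior on f > 0
    return -0.5 * ((burst_day(f, temps) - obs_day) / 2.0) ** 2

chain, f = [], 100.0
for _ in range(4000):
    prop = f + random.gauss(0.0, 10.0)
    if math.log(random.random()) < log_post(prop) - log_post(f):
        f = prop
    chain.append(f)

post_mean = sum(chain[1000:]) / len(chain[1000:])
print(obs_day, round(post_mean, 1))
```

The resulting chain approximates a posterior distribution for the parameter, and comparing such posteriors across species or sites is exactly the kind of analysis the abstract describes.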
Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters
NASA Astrophysics Data System (ADS)
Wetterer, C.; Sheppard, D.; Hunt, B.
The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g., east-west offsets of solar panels from direct Sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all those parameters simultaneously, because changes in light curve features can now be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single, specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example of this in which a complex model is used to create simulated light curves, and a simple model is then used in an estimation filter to extract a particular feature of interest. For this to be successful, however, the simple model must first be constructed using training data in which the feature of interest is known, or at least known to be constant.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.
2017-07-01
Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration; the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models can be evaluated.
Numerical experiments on short-term meteorological effects on solar variability
NASA Technical Reports Server (NTRS)
Somerville, R. C. J.; Hansen, J. E.; Stone, P. H.; Quirk, W. J.; Lacis, A. A.
1975-01-01
A set of numerical experiments was conducted to test the short-range sensitivity of a large atmospheric general circulation model to changes in solar constant and ozone amount. On the basis of the results of 12-day sets of integrations with very large variations in these parameters, it is concluded that realistic variations would produce insignificant meteorological effects. Any causal relationships between solar variability and weather, for time scales of two weeks or less, rely upon changes in parameters other than solar constant or ozone amounts, or upon mechanisms not yet incorporated in the model.
Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.
2015-01-01
Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important for understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is well suited to models with many parameters or that take a significant length of time to solve. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
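The Morris screening idea — one-at-a-time elementary effects averaged over random base points — can be sketched as follows. This uses a simplified radial design and a toy three-parameter model, not the full Morris trajectory design or the RFA simulator:

```python
import random

random.seed(3)

def model(x):
    """Toy stand-in for the ablation simulator: the output depends strongly
    on x[0] (think 'perfusion'), moderately on x[1], weakly on x[2]."""
    return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]

k, delta, n_base = 3, 0.25, 40
effects = [[] for _ in range(k)]
for _ in range(n_base):
    x = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
    base = model(x)
    for i in range(k):          # perturb one factor at a time
        xp = list(x)
        xp[i] += delta
        effects[i].append(abs(model(xp) - base) / delta)

mu_star = [sum(e) / len(e) for e in effects]  # mu*: mean absolute elementary effect
print([round(m, 2) for m in mu_star])
```

Because each base point costs one run plus one run per factor, the method scales to expensive simulators with many parameters, which is the reason the abstract gives for choosing it.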
Clinical profiles associated with influenza disease in the ferret model.
Stark, Gregory V; Long, James P; Ortiz, Diana I; Gainey, Melicia; Carper, Benjamin A; Feng, Jingyu; Miller, Stephen M; Bigger, John E; Vela, Eric M
2013-01-01
Influenza A viruses continue to pose a threat to human health; thus, various vaccines and prophylaxes continue to be developed. Testing of these products requires various animal models, including mice, guinea pigs, and ferrets. However, because ferrets are naturally susceptible to infection with human influenza viruses and because the disease state resembles that of human influenza, these animals have been widely used as a model to study influenza virus pathogenesis. In this report, a statistical analysis was performed to evaluate data involving 269 ferrets infected with seasonal influenza, swine influenza, and highly pathogenic avian influenza (HPAI) from 16 different studies over a five-year period. The aim of the analyses was to better qualify the ferret model by identifying relationships among important animal model parameters (endpoints) and variables of interest, which include survival, time-to-death, changes in body temperature and weight, and nasal wash samples containing virus, in addition to significant changes from baseline in selected hematology and clinical chemistry parameters. The results demonstrate that a disease clinical profile, consisting of various changes in the biological parameters tested, is associated with the various influenza A infections in ferrets. Additionally, the analysis yielded correlates of protection associated with HPAI disease in ferrets. In all, the results from this study further validate the use of the ferret as a model to study influenza A pathology and to evaluate product efficacy.
Evaluation of MRI Models in the Measurement of CMRO2 and Its Relationship With CBF
Lin, Ai-Ling; Fox, Peter T.; Yang, Yihong; Lu, Hanzhang; Tan, Li-Hai; Gao, Jia-Hong
2008-01-01
The aim of this study was to investigate the various MRI biophysical models in the measurements of local cerebral metabolic rate of oxygen (CMRO2) and the corresponding relationship with cerebral blood flow (CBF) during brain activation. This aim was addressed by simultaneously measuring the relative changes in CBF, cerebral blood volume (CBV), and blood oxygen level dependent (BOLD) MRI signals in the human visual cortex during visual stimulation. A radial checkerboard delivered flash stimulation at five different frequencies. Two MRI models, the single-compartment model (SCM) and the multi-compartment model (MCM), were used to determine the relative changes in CMRO2 using three methods: [1] SCM with parameters identical to those used in a prior MRI study (M = 0.22; α = 0.38); [2] SCM with directly measured parameters (M from hypercapnia and α from measured δCBV and δCBF); and [3] MCM. The magnitude of relative changes in CMRO2 and the nonlinear relationship between CBF and CMRO2 obtained with Methods [2] and [3] were not in agreement with those obtained using Method [1]. However, the results of Methods [2] and [3] were aligned with positron emission tomography findings from the literature. Our results indicate that if appropriate parameters are used, the SCM and MCM models are equivalent for quantifying the values of CMRO2 and determining the flow-metabolism relationship. PMID:18666102
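The single-compartment (Davis-type) model referred to in Method [1] relates BOLD, CBF and CMRO2 as ΔS/S = M(1 − f^(α−β) r^β), with f the CBF ratio and r the CMRO2 ratio. Inverting it for r with the quoted M = 0.22 and α = 0.38 (β = 1.5 is a conventional value assumed here, not given in the abstract):

```python
# Single-compartment (Davis-type) BOLD model:
#   dS/S = M * (1 - f**(alpha - beta) * r**beta),
# with f the CBF ratio and r the CMRO2 ratio. M and alpha are the Method [1]
# values quoted above; beta = 1.5 is a conventional assumption, not stated here.
M, alpha, beta = 0.22, 0.38, 1.5

def cmro2_ratio(bold_change, cbf_ratio):
    """Invert the Davis model for the relative CMRO2 change r."""
    return ((1.0 - bold_change / M) / cbf_ratio ** (alpha - beta)) ** (1.0 / beta)

# Illustrative activation: a 2% BOLD change with a 50% CBF increase.
r = cmro2_ratio(0.02, 1.5)
print(round(r, 3))
```

The inversion makes the abstract's point concrete: the recovered CMRO2 change depends directly on the assumed M and α, so swapping literature values for directly measured ones (Method [2]) changes both the magnitude of ΔCMRO2 and the inferred flow-metabolism coupling.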
Barnes, Samuel R; Ng, Thomas S C; Montagne, Axel; Law, Meng; Zlokovic, Berislav V; Jacobs, Russell E
2016-05-01
To determine optimal parameters for acquisition and processing of dynamic contrast-enhanced MRI (DCE-MRI) to detect small changes in near-normal, low blood-brain barrier (BBB) permeability. Using a contrast-to-noise ratio metric (K-CNR) for Ktrans precision and accuracy, the effects of kinetic model selection, scan duration, temporal resolution, signal drift, and length of baseline on the estimation of low permeability values were evaluated with simulations. The Patlak model was shown to give the highest K-CNR at low Ktrans. The Ktrans transition point, above which other models yielded superior results, was highly dependent on scan duration and tissue extravascular extracellular volume fraction (ve). The highest K-CNR for low Ktrans was obtained when Patlak model analysis was combined with long scan times (10-30 min), modest temporal resolution (<60 s/image), and long baseline scans (1-4 min). Signal drift as low as 3% was shown to affect the accuracy of Ktrans estimation with Patlak analysis. DCE acquisition and modeling parameters are interdependent and should be optimized together for the tissue being imaged. Appropriately optimized protocols can detect even the subtlest changes in BBB integrity and may be used to probe the earliest changes in neurodegenerative diseases such as Alzheimer's disease and multiple sclerosis. © 2015 Wiley Periodicals, Inc.
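The Patlak model favored here for low Ktrans is linear in its two parameters, C_t(t) = Ktrans ∫ C_p dτ + v_p C_p(t), so it can be fitted by ordinary least squares. A synthetic, noise-free sketch (units, curves and parameter values are illustrative, not from the paper):

```python
import math

# Patlak model: C_t(t) = Ktrans * integral_0^t C_p(tau) dtau + v_p * C_p(t).
# Fit by ordinary least squares on synthetic, noise-free curves
# (units and values are illustrative, not taken from the paper).
dt = 0.5                                        # minutes per sample
t = [dt * i for i in range(60)]                 # 30-minute acquisition
cp = [5.0 * math.exp(-0.3 * ti) for ti in t]    # toy plasma input function

# cumulative trapezoidal integral of cp
icp, acc = [0.0], 0.0
for i in range(1, len(t)):
    acc += 0.5 * (cp[i] + cp[i - 1]) * dt
    icp.append(acc)

ktrans_true, vp_true = 0.002, 0.02              # low-permeability tissue
ct = [ktrans_true * icp[i] + vp_true * cp[i] for i in range(len(t))]

# normal equations for the two-parameter linear fit
sxx = sum(a * a for a in icp)
sxy = sum(a * b for a, b in zip(icp, cp))
syy = sum(b * b for b in cp)
bx = sum(a * c for a, c in zip(icp, ct))
by = sum(b * c for b, c in zip(cp, ct))
det = sxx * syy - sxy * sxy
ktrans_fit = (bx * syy - by * sxy) / det
vp_fit = (by * sxx - bx * sxy) / det
print(round(ktrans_fit, 4), round(vp_fit, 4))
```

Because the fit is linear, longer scans simply add points along the Patlak line, which is consistent with the finding that long scan times and long baselines maximize K-CNR at low Ktrans; a slow multiplicative signal drift tilts that line and biases the slope (Ktrans) directly.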
Robustness of a cellular automata model for the HIV infection
NASA Astrophysics Data System (ADS)
Figueirêdo, P. H.; Coutinho, S.; Zorzenon dos Santos, R. M.
2008-11-01
An investigation was conducted to study the robustness of the results obtained from the cellular automata model which describes the spread of the HIV infection within lymphoid tissues [R.M. Zorzenon dos Santos, S. Coutinho, Phys. Rev. Lett. 87 (2001) 168102]. The analysis focused on the dynamic behavior of the model when defined on lattices with different symmetries and dimensionalities. The results showed that the three-phase dynamics of the planar models suffered only minor changes under variations of lattice symmetry and, while differences were observed with changes in dimensionality, the qualitative behavior was preserved. A further investigation was conducted into the primary infection and the sensitivity of the latency period to variations of the model's stochastic parameters over wide-ranging values. The variables characterizing the primary infection and the latency period exhibited power-law behavior when the stochastic parameters varied over a few orders of magnitude. The power-law exponents were approximately the same when lattice symmetry varied, but there was a significant variation when dimensionality changed from two to three. The dynamics of the three-dimensional model was also shown to be insensitive to variations of the deterministic parameters related to cell resistance to the infection and to the time lag necessary to mount the specific immune response to HIV variants. The robustness of the model demonstrated in this work reinforces that its basic hypotheses are consistent with the three-stage dynamics of the HIV infection observed in patients.
Runoff projection under climate change over Yarlung Zangbo River, Southwest China
NASA Astrophysics Data System (ADS)
Xuan, Weidong; Xu, Yue-Ping
2017-04-01
The Yarlung Zangbo River, located in southwest China, is one of the major sources of the "Asian water tower". The river has great hydropower potential and provides vital water resources for local and downstream agricultural production and livestock husbandry. Relative to its drainage area, gauge observations are often too sparse for good hydrological modeling aimed at projecting future runoff. In this study, we employ the semi-distributed hydrologic model SWAT to simulate the hydrological processes of the river with rain gauge observations and TRMM 3B4V7 data respectively, and the hydrological model performance is evaluated based not only on total runoff but also on the snowmelt, precipitation, and groundwater components. First, calibration and validation of the hydrological model are carried out to find behavioral parameter sets for both gauge observations and TRMM data. Then, behavioral parameter sets with diverse Nash-Sutcliffe efficiency (NS) values are selected and the corresponding runoff components are analyzed. Robust parameter sets are further employed in SWAT coupled with CMIP5 GCMs to project future runoff. The final results show that precipitation is the dominant contributor nearly all year round, while snowmelt and groundwater are important in summer and winter respectively. Sufficient robust parameter sets also help reduce uncertainty in hydrological modeling. Finally, the projected changes in future runoff would have major consequences for water and flood security.
Modular Aero-Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2006-01-01
The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.
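The two-rate structure described above (a component-level model stepped at 2,500 Hz, driven by a digital controller updating at 50 Hz) can be sketched as a simple multi-rate loop. The first-order plant and proportional controller below are illustrative stand-ins, not MAPSS components:

```python
import numpy as np

f_plant, f_ctrl = 2500, 50                  # Hz, as in MAPSS
dt, ratio = 1.0 / f_plant, f_plant // f_ctrl

x, u = 0.0, 0.0                             # plant state, held control command
setpoint, kp = 1.0, 2.0
history = []
for step in range(f_plant):                 # one second of simulated time
    if step % ratio == 0:                   # controller tick, once every 20 ms
        u = kp * (setpoint - x)             # proportional law with zero-order hold
    x += dt * (-x + u)                      # first-order plant stepped at 2500 Hz
    history.append(x)

# closed-loop equilibrium of this toy plant is kp/(1+kp) of the setpoint
print(abs(history[-1] - setpoint * kp / (1 + kp)) < 0.1)   # True
```

Holding the command constant between controller ticks (zero-order hold) is what lets the fast plant and the slow controller coexist in one loop, mirroring the CLM/CAD split.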
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials. However, random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of the physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes, and frequency response function of AMs, are studied in the uncertainty model, all of which are of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using a first-order Taylor series expansion and perturbation technique. Then, based on the linear relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are validated successfully against the Monte Carlo method.
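The core of the CVPS-FEM idea, linearizing a response in the random parameter and then applying the change-of-variable rule to obtain the response PDF, can be sketched for a scalar case. The resonator response function and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def linearize(f, x0, h=1e-6):
    """First-order Taylor expansion f(x) ~= a + b*(x - x0), the perturbation
    step of the method; b is estimated by central differences."""
    return f(x0), (f(x0 + h) - f(x0 - h)) / (2.0 * h)

def response_pdf(y, a, b, x0, mu, sigma):
    """PDF of Y = a + b*(X - x0) for X ~ N(mu, sigma^2) via the change-of-
    variable rule p_Y(y) = p_X(x(y)) * |dx/dy|, with x(y) = x0 + (y - a)/b."""
    x = x0 + (y - a) / b
    px = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return px / abs(b)

# Toy response: natural frequency of a unit-mass resonator vs. stiffness k
f = lambda k: np.sqrt(k) / (2.0 * np.pi)
a, b = linearize(f, x0=100.0)
ys = np.linspace(a - 0.2, a + 0.2, 2001)
pdf = response_pdf(ys, a, b, x0=100.0, mu=100.0, sigma=5.0)
mass = float(np.sum(pdf) * (ys[1] - ys[0]))
print(round(mass, 2))   # 1.0: the transformed density integrates to one
```

Because the linearized response is an invertible map of the random parameter, the change-of-variable rule gives the response density in closed form, which is why the method avoids Monte Carlo sampling.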
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess both the propagation of uncertainty from the inputs and structure of the HSI models to the model outputs (uncertainty analysis: UA) and the relative importance of uncertain model inputs and their interactions for model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing a means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty.
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
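A variance-based GSA of the kind used here can be sketched with a pick-and-freeze Monte Carlo estimator of first-order Sobol indices. The toy linear model and Gaussian sampler below are assumptions for illustration, not the SAV habitat models of the study:

```python
import numpy as np

def sobol_first_order(model, sampler, n=200_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y), using two independent sample matrices and
    the Saltelli pick-and-freeze scheme."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n), sampler(rng, n)
    yA, yB = model(A), model(B)
    var = yA.var()
    S = []
    for i in range(A.shape[1]):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                       # freeze column i at the A values
        S.append(np.mean(yA * (model(ABi) - yB)) / var)
    return np.array(S)

# Linear toy model Y = X1 + 2*X2 with unit-variance inputs: analytically
# S = (1, 4) / 5 = (0.2, 0.8)
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
sampler = lambda rng, n: rng.standard_normal((n, 2))
S = sobol_first_order(model, sampler)
print(S.round(2))   # ~ [0.2 0.8]
```

The same estimator applies unchanged to an HSI model: only the `model` callable and the `sampler` describing the input distributions need to be swapped in.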
Endogenous Business Cycle Dynamics within Metzler's Inventory Model: Adding an Inventory Floor.
Sushko, Irina; Wegener, Michael; Westerhoff, Frank; Zaklan, Georg
2009-04-01
Metzler's inventory model may produce dampened fluctuations in economic activity, thus contributing to our understanding of business cycle dynamics. For some parameter combinations, however, the model generates oscillations with increasing amplitude, implying that the inventory stock of firms eventually turns negative. Taking this observation into account, we reformulate Metzler's model by simply putting a floor on the inventory level. Within the new piecewise linear model, endogenous business cycle dynamics may now be triggered via a center bifurcation, i.e. for certain parameter combinations production changes are (quasi-)periodic.
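The floored inventory map can be sketched as follows; the extrapolative sales forecast, adjustment rule, and parameter values are a minimal Metzler-type illustration, not the exact piecewise linear specification of the paper:

```python
import numpy as np

def simulate(T=300, c=0.8, alpha=1.0, q_star=1.0, floor=0.0):
    """Metzler-type inventory cycle with extrapolative sales expectations and a
    non-negativity floor on the inventory stock. With these parameter values the
    unconstrained linear dynamics spiral outward; the floor is what makes the
    map piecewise linear. Values are illustrative, not those of the paper."""
    y = np.zeros(T); q = np.zeros(T)
    y[0] = y[1] = 1.0
    q[0] = q[1] = q_star
    for t in range(2, T):
        s1, s2 = c * y[t - 1], c * y[t - 2]              # past realized sales
        expected_sales = 2.0 * s1 - s2                   # extrapolative forecast
        y[t] = expected_sales + alpha * (q_star - q[t - 1])   # production plan
        q[t] = max(floor, q[t - 1] + y[t] - c * y[t])    # stock update, floored
    return y, q

y, q = simulate()
print(q.min() >= 0.0)   # True: the floor keeps inventories non-negative
```

Without the `max(floor, ...)` clamp the inventory series would eventually turn negative, which is exactly the inconsistency the reformulated model removes.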
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance that leads to filter divergence by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
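The variance-control step of the SEnKF, kernel smoothing of the parameter ensemble, can be sketched with a Liu-West style shrink-and-jitter update. The shrinkage factor and Gaussian ensemble below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def kernel_smooth(theta, a=0.98, rng=None):
    """Liu-West style kernel smoothing of an ensemble of parameter draws:
    shrink each member toward the ensemble mean, then add jitter with variance
    (1 - a**2) * Var(theta), so the ensemble spread is preserved on average but
    can never collapse to a point (the filter-divergence guard of the SEnKF)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, v = theta.mean(axis=0), theta.var(axis=0)
    jitter = rng.normal(0.0, np.sqrt((1.0 - a ** 2) * v), size=theta.shape)
    return a * theta + (1.0 - a) * m + jitter

# 1000-member ensemble of two hypothetical parameters (e.g. light use
# efficiency and optimum temperature); values are made up for illustration.
rng = np.random.default_rng(1)
theta = rng.normal(loc=[1.5, 25.0], scale=[0.2, 3.0], size=(1000, 2))
smoothed = kernel_smooth(theta, rng=rng)
print(np.allclose(smoothed.var(axis=0), theta.var(axis=0), rtol=0.2))  # True
```

The smoothing factor `a` trades off memory of each member's value against shrinkage toward the mean; values near 1 make only gentle corrections per assimilation step.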
Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei
2007-10-01
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
NASA Astrophysics Data System (ADS)
Zhang, Benfeng; Han, Tao; Li, Xinyi; Huang, Yulin; Omori, Tatsuya; Hashimoto, Ken-ya
2018-07-01
This paper investigates how the lateral propagation of Rayleigh and shear horizontal (SH) surface acoustic waves (SAWs) changes with rotation angle θ and with the SiO2 and electrode thicknesses, hSiO2 and hCu, respectively. The extended thin plate model is used for this purpose. First, the extraction method is presented for determining the parameters appearing in the extended thin plate model. Then, the model parameters are expressed as polynomials in hSiO2, hCu, and θ. Finally, a piston mode structure without phase shifters is designed using the extracted parameters. Possible piston mode structures can be searched automatically by using the polynomial expressions. The resonance characteristics are analyzed by both the extended thin plate model and the three-dimensional (3D) finite element method (FEM). Agreement between the results of both methods confirms the validity and effectiveness of the parameter extraction process and the design technique.
NASA Astrophysics Data System (ADS)
Zhang, Yu-Xia; Liao, Hao; Medo, Matus; Shang, Ming-Sheng; Yeung, Chi Ho
2016-05-01
In this paper we analyze the contrary behaviors of informed and uninformed investors, and then construct a competition model with two groups of agents, namely agents who intend to stay in the minority and those who intend to stay in the majority. We find two kinds of competition, inter-group and intra-group. The model shows a periodic fluctuation feature. The average distribution of strategies exhibits a prominent central peak, which is relevant to the peaked, fat-tailed character of the price change distribution in stock markets. Furthermore, in the modified model a tolerance time parameter makes the agents diversified. Finally, we compare the strategy distribution with the price change distribution in a real stock market, and we conclude that contrary behavior rules and the tolerance time parameter are indeed valid in the description of the market model.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs to station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations of two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters.
The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
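Method (1), the simple change factor applied to observed return levels, can be sketched with a stationary GEV fit. The synthetic Gumbel samples below stand in for observed and modeled annual maxima; they are illustrative, not NA-CORDEX data:

```python
import numpy as np
from scipy import stats

def return_level(maxima, T):
    """T-year return level from a stationary GEV fit to annual maxima: the
    intensity exceeded with probability 1/T in any given year."""
    c, loc, scale = stats.genextreme.fit(maxima)
    return stats.genextreme.isf(1.0 / T, c, loc=loc, scale=scale)

def change_factor_update(obs, rcm_hist, rcm_fut, T):
    """Simple change factor method: scale the observed return level by the
    ratio of the climate model's future to historical return levels."""
    return return_level(obs, T) * return_level(rcm_fut, T) / return_level(rcm_hist, T)

# Synthetic annual-maximum rainfall intensities (mm/h), values made up
rng = np.random.default_rng(42)
obs = stats.gumbel_r.rvs(loc=30, scale=8, size=200, random_state=rng)
hist = stats.gumbel_r.rvs(loc=28, scale=8, size=200, random_state=rng)
fut = stats.gumbel_r.rvs(loc=33, scale=9, size=200, random_state=rng)   # wetter future
updated = change_factor_update(obs, hist, fut, T=10)
print(round(float(updated), 1), "mm/h for the updated 10-year event")
```

Because the change factor is a ratio, systematic biases shared by the model's historical and future runs partially cancel, which is part of the method's appeal despite its simplicity.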
Trend analysis for daily rainfall series of Barcelona
NASA Astrophysics Data System (ADS)
Ortego, M. I.; Gibergans-Báguena, J.; Tolosana-Delgado, R.; Egozcue, J. J.; Llasat, M. C.
2009-09-01
Frequency analysis of hydrological series is a key point to acquire an in-depth understanding of the behaviour of hydrologic events. The occurrence of extreme hydrologic events in an area may imply great social and economic impacts. A good understanding of hazardous events improves the planning of human activities. A useful model for hazard assessment of extreme hydrologic events in an area is the peaks-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude X of each event is modeled as an arbitrary random variable, whose excesses over the threshold x0, Y = X - x0, given X > x0, have a Generalized Pareto Distribution (GPD), F_Y(y|β,ξ) = 1 - (1 + ξy/β)^(-1/ξ), 0 ≤ y < y_sup, where y_sup = +∞ if ξ ≥ 0, and y_sup = -β/ξ if ξ < 0. The limiting distribution for ξ = 0 is an exponential one. Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. In order to account for the uncertainty of the estimation of the GPD parameters, a Bayesian approach is chosen. This approach allows necessary conditions on the parameters of the distribution for our particular phenomena to be included, and the uncertainty of estimations to be propagated adequately to the hazard parameters, such as return periods. A common concern is to know whether magnitudes of hazardous events have changed in the last decades. Long data series are much appreciated in order to properly study these issues. The series of daily rainfall in Barcelona (1854-2006) has been selected. This is one of the longest European daily rainfall series available. Daily rainfall is better described using a relative scale and therefore it is suitably treated in a log-scale. Accordingly, log-precipitation is identified with X. Excesses over a threshold are modeled by a GPD with a limited maximum value. An additional assumption is that the distribution of the excesses Y has a limited upper tail and, therefore, ξ < 0, y_sup = -β/ξ.
Such a long data series provides valuable information about the phenomenon at hand, and therefore a first step is to assess its reliability. The first part of the work focuses on the possible existence of abrupt changes in the parameters of the GPD. These abrupt changes may be due to changes in the location of the observatories and/or technological advances introduced in the measuring instruments. The second part of the work examines the possible existence of trends. The parameters of the model are considered as a function of time. A new parameterisation of the GPD is suggested in order to deal parsimoniously with this climate variation, μ = ln(-β/ξ) and ν = ln(-ξβ). The classical scale and shape parameters of the GPD (β,ξ) are reformulated as a location parameter μ, linked to the upper limit of the distribution, and a shape parameter ν. In this reparameterisation, the parsimonious choice is to consider shape as a linear function of time, ν(t) = ν0 + tν1, while keeping location fixed, μ(t) = μ0. The climate change is then assessed by checking the hypothesis ν1 = 0. Results show no significant abrupt changes in the excess distribution of the Barcelona daily rainfall series, but suggest a significant change in the parameters, and therefore the existence of a trend in daily rainfall over this period.
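The bounded-tail GPD used for the excesses (negative shape, finite upper limit) can be checked numerically; scipy's `genpareto` shares the sign convention of the formula above, and the β and ξ values below are illustrative:

```python
import numpy as np
from scipy import stats

# Excesses over a threshold modelled by a GPD; for shape xi < 0 the
# distribution has the finite upper limit y_sup = -beta/xi.
beta, xi = 2.0, -0.25
y_sup = -beta / xi                      # = 8.0, the largest possible excess

rng = np.random.default_rng(0)
y = stats.genpareto.rvs(c=xi, scale=beta, size=5000, random_state=rng)
print(y.max() < y_sup)                  # True: simulated excesses respect the bound

# scipy's CDF matches F(y) = 1 - (1 + xi*y/beta)^(-1/xi) term by term
yy = 3.0
manual = 1.0 - (1.0 + xi * yy / beta) ** (-1.0 / xi)
print(np.isclose(stats.genpareto.cdf(yy, c=xi, scale=beta), manual))   # True
```

For log-precipitation a bounded upper tail is the physically sensible choice, which is why the abstract restricts attention to ξ < 0.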
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values.
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
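The calibration idea, adjusting parameters until simulated and observed heads agree in a least-squares sense, can be sketched with a toy forward model. The model and values below are illustrative stand-ins, not the USGS two-dimensional flow code:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model standing in for the flow simulator: the "head" response to
# recharge scales inversely with transmissivity T (purely illustrative physics).
def forward(log_T, recharge):
    return recharge / (10.0 ** log_T)

rng = np.random.default_rng(0)
recharge = np.linspace(0.5, 5.0, 20)
true_log_T = -1.0                                   # T = 0.1 ft^2/s
heads = forward(true_log_T, recharge) + rng.normal(0.0, 0.5, 20)

# Nonlinear least-squares regression of log-transmissivity against the heads
fit = least_squares(lambda p: forward(p[0], recharge) - heads, x0=[0.0])
print(round(float(fit.x[0]), 1))   # ≈ -1.0: the regression recovers log T
```

Working in log-transmissivity, as is common in groundwater calibration, keeps the parameter positive and makes its coefficient of variation meaningful, mirroring the statistics reported in the abstract.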
ERIC Educational Resources Information Center
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet
2012-01-01
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Analyzing Strategic Business Rules through Simulation Modeling
NASA Astrophysics Data System (ADS)
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping to find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Kulkarni, Chetan S.
2016-01-01
As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
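The idea of describing aging through key parameters and then predicting end-of-discharge can be sketched with simple linear fade laws. The capacity and resistance trends below are illustrative assumptions, not the electrochemistry model of the paper:

```python
def eod_time(age_cycles, load_current=2.0):
    """Sketch of age-dependent end-of-discharge (EOD) prediction: capacity fades
    and internal resistance grows linearly with cycle count (illustrative aging
    laws and constants, purely for demonstration)."""
    q0, r0 = 2.2, 0.05                       # fresh capacity (Ah), resistance (ohm)
    q = q0 * (1.0 - 2e-4 * age_cycles)       # capacity fade per cycle
    r = r0 * (1.0 + 1e-3 * age_cycles)       # resistance growth per cycle
    v_nominal, v_cutoff = 3.7, 3.0
    # usable charge before the loaded terminal voltage v_nominal - i*r sags to
    # the cutoff, via simple coulomb counting
    usable = q * (v_nominal - load_current * r - v_cutoff) / (v_nominal - v_cutoff)
    return 3600.0 * usable / load_current    # seconds at constant current

print(eod_time(0) > eod_time(500) > eod_time(1000))   # True: aged cells die sooner
```

Feeding an anticipated cycling schedule through such age-parameter laws is what turns an end-of-discharge predictor into an end-of-life predictor, the second capability claimed in the abstract.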
NASA Astrophysics Data System (ADS)
Smith, M. M.; Hao, Y.; Carroll, S.
2017-12-01
Improving our ability to forecast the extent and impact of changes in porosity and permeability due to CO2-brine-carbonate reservoir interactions should lower uncertainty in long-term geologic CO2 storage capacity estimates. We have developed a continuum-scale reactive transport model that simulates spatial and temporal changes to porosity, permeability, mineralogy, and fluid composition within carbonate rocks exposed to CO2 and brine at storage reservoir conditions. The model relies on two primary parameters to simulate brine-CO2-carbonate mineral reaction: kinetic rate constant(s), kmineral, for carbonate dissolution, and an exponential parameter, n, relating porosity change to the resulting permeability. Experimental data collected from fifteen core-flooding experiments conducted on samples from the Weyburn (Saskatchewan, Canada) and Arbuckle (Kansas, USA) carbonate reservoirs were used to calibrate the reactive-transport model and constrain the useful range of kmineral and n values. Here we present the results of our current efforts to validate this model and the use of these parameter values by comparing predictions of the extent and location of dissolution, and the evolution of fluid permeability, against results from new core-flood experiments conducted on samples from the Duperow Formation (Montana, USA). Agreement between model predictions and experimental data increases our confidence that these parameter ranges need not be considered site-specific but may be applied (within reason) at various locations and reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
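The exponential parameter n enters such models through a power-law porosity-permeability update. A minimal version, with illustrative values (the calibrated n of the study is not stated here):

```python
def permeability(phi, phi0, k0, n):
    """Power-law porosity-permeability relation used in many continuum-scale
    reactive transport models: k/k0 = (phi/phi0)**n. The exponent n is the
    calibrated parameter of the abstract; values below are illustrative."""
    return k0 * (phi / phi0) ** n

k0, phi0, n = 1e-15, 0.10, 3.0           # reference permeability (m^2), porosity, exponent
phi = 0.13                                # porosity after carbonate dissolution
print(round(permeability(phi, phi0, k0, n) / k0, 3))   # 2.197: (1.3)**3-fold increase
```

Because permeability scales with porosity raised to n, even modest dissolution-driven porosity gains can multiply permeability severalfold, which is why constraining n tightly matters for storage forecasts.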
NASA Astrophysics Data System (ADS)
Taisne, B.; Pansino, S.; Manta, F.; Tay Wen Jing, C.
2017-12-01
Have you ever dreamed about continuous, high resolution InSAR data? Have you ever dreamed about a transparent earth allowing you to see what is actually going on under a volcano? Well, you likely dreamed about an analogue facility that allows you to scale down the natural system to fit into a room, with a controlled environment and a complex visualisation system. Analogue modeling has been widely used to understand magmatic processes, and thanks to a transparent analogue for the elastic Earth's crust we can watch, as it evolves with time, the migration of a dyke, the volume change of a chamber, or the rise of a bubble in a conduit. All those phenomena are modeled theoretically or numerically, each with its own simplifications. Therefore, how well are we really constraining the physical parameters describing the evolution of a dyke or a chamber? Getting access to those parameters in real time and with a high level of confidence is of paramount importance when dealing with unrest at volcanoes. The aim of this research is to estimate the uncertainties of the widely used Okada and Mogi models. To do so, we designed a set of analogue experiments that allow us to explore different elastic properties of the medium and characteristics of the fluid injected into the medium, as well as the depth, size, and volume change of a reservoir. The associated surface deformation is extracted using an array of synchronised cameras, with digital image correlation and structure from motion used for horizontal and vertical deformation, respectively. The surface deformations are then inverted to retrieve the controlling parameters (e.g. location and volume change of a chamber, or orientation, position, length, breadth, and opening of a dyke). By comparing those results with the known parameters, which we can see and measure independently, we estimate the uncertainties of the models themselves and the associated level of confidence for each of the inverted parameters.
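The Mogi point-source model being tested has a closed-form vertical surface displacement, which is what makes it so widely used in such inversions. A minimal implementation, with illustrative source values:

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source of volume change dV
    at depth d in an elastic half-space with Poisson's ratio nu:
        u_z(r) = (1 - nu) * dV * d / (pi * (r**2 + d**2)**1.5)"""
    return (1.0 - nu) * dV * depth / (np.pi * (r ** 2 + depth ** 2) ** 1.5)

r = np.linspace(0.0, 5000.0, 200)            # radial distance from source axis (m)
uz = mogi_uz(r, depth=2000.0, dV=1e6)        # 10^6 m^3 of inflation, illustrative
print(round(100.0 * uz[0], 1), "cm peak uplift directly above the source")  # 6.0 cm
```

Inverting observed uplift for the source depth and volume change of this formula, and comparing against the independently measured analogue source, is exactly the uncertainty test the abstract describes.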
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase the flexibility of model parameters. I collected known fates data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e. prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e.
integrated AAH model) about survival outside the hunting season from known fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
DOT National Transportation Integrated Search
1971-05-01
The report describes a dynamic model of a traffic circle which has been implemented on a CRT display terminal. The model includes sufficient parameters to allow changes in the structure of the traffic circle, the frequency of traffic introduced to th...
Dealing with Non-stationarity in Intensity-Frequency-Duration Curve
NASA Astrophysics Data System (ADS)
Rengaraju, S.; Rajendran, V.; C T, D.
2017-12-01
Extremes such as floods and droughts are becoming more frequent and more severe, a change generally attributed to climate change. One of the main concerns is whether present infrastructure such as dams and storm-water drainage networks, which were designed under the so-called `stationary' assumption, can withstand the expected severe extremes. The stationary assumption holds that the statistics of extremes do not change over time; however, recent studies have shown that climate change has altered climate extremes both temporally and spatially. Traditionally, observed non-stationarity in extreme precipitation is incorporated into extreme value distributions through time-varying parameters. Nevertheless, this raises the question of which parameter should vary, i.e. location, scale, or shape, since one or more of these parameters may change at a given location. Hence, this study aims to detect the changing parameters, to reduce the complexity involved in developing non-stationary IDF curves, and to provide the uncertainty bound of the estimated return level using the Bayesian Differential Evolution Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peak Over Threshold. Then, the time-varying parameter(s) are detected for the extracted series using Generalized Additive Models for Location, Scale and Shape (GAMLSS). The IDF curve is then constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) change with respect to time; otherwise the IDF curve follows the stationary assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach, and the non-stationary IDF curve is compared with the stationary IDF curve.
The results of this study emphasize that the time-varying parameters also change spatially, and that IDF curves should incorporate non-stationarity only if the parameters are changing, even though there may be significant change in the extreme rainfall series itself. Our results underscore the importance of updating infrastructure design strategies for the changing climate by adopting non-stationary IDF curves.
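The Peak Over Threshold step described above can be sketched as follows. This is a minimal stationary illustration only: excesses over an empirical threshold are fitted with an exponential tail (a Generalized Pareto Distribution with shape parameter 0), whereas the study lets the GPD parameters vary in time via GAMLSS and quantifies uncertainty with DE-MC. The function name and default values are illustrative, not from the paper.

```python
import math

def pot_return_level(series, threshold_q=0.95, return_period_yrs=50, obs_per_year=365):
    """Stationary T-year return level from a Peak Over Threshold sample.

    Sketch under simplifying assumptions: the excesses over an empirical
    threshold are modelled with an exponential tail (GPD, shape = 0), whose
    scale is estimated by the mean excess.
    """
    s = sorted(series)
    u = s[int(threshold_q * (len(s) - 1))]              # empirical threshold
    excesses = [x - u for x in series if x > u]
    sigma = sum(excesses) / len(excesses)               # exponential scale via mean excess
    lam = len(excesses) / (len(series) / obs_per_year)  # exceedance rate per year
    # Return level exceeded on average once every return_period_yrs years
    return u + sigma * math.log(lam * return_period_yrs)
```

A non-stationary variant would replace the constant sigma (and, for a general GPD, the shape) with functions of time fitted to the exceedance series.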
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
Gradient parameter and axial and field rays in the gradient-index crystalline lens model
NASA Astrophysics Data System (ADS)
Pérez, M. V.; Bao, C.; Flores-Arias, M. T.; Rama, M. A.; Gómez-Reino, C.
2003-09-01
Gradient-index models of the human lens have received wide attention in optometry and vision sciences for considering how changes in the refractive index profile with age and accommodation may affect refractive power. This paper uses the continuous asymmetric bi-elliptical model to determine gradient parameter and axial and field rays of the human lens in order to study the paraxial propagation of light through the crystalline lens of the eye.
NASA Astrophysics Data System (ADS)
Stinziano, J. R.; Way, D.; Bauerle, W.
2017-12-01
Photosynthetic temperature acclimation could strongly affect coupled vegetation-atmosphere feedbacks in the global carbon cycle, especially as the climate warms. Thermal acclimation of photosynthesis can be modelled as changes in the parameters describing the direct effect of temperature on photosynthetic capacity (activation energy, Ea; deactivation energy, Hd; entropy parameter, ΔS) or in the basal value of photosynthetic capacity (i.e. photosynthetic capacity measured at 25 °C); however, the impact of acclimating these parameters (individually or in combination) on vegetative carbon gain is relatively unexplored. Here we compare the ability of 66 photosynthetic temperature acclimation scenarios to improve predictions of a spatially explicit canopy carbon flux model, MAESTRA, against eddy covariance data from a loblolly pine forest. We show that: 1) incorporating seasonal temperature acclimation of basal photosynthetic capacity improves the model's ability to capture seasonal changes in carbon fluxes; 2) multifactor scenarios of photosynthetic temperature acclimation provide minimal (if any) improvement in model performance over single-factor acclimation scenarios; 3) acclimation of enzyme activation energies should be restricted to the temperature ranges of the data from which the equations are derived; and 4) model performance is strongly affected by the choice of deactivation energy. We suggest that a renewed effort be made to understand the thermal acclimation of enzyme activation and deactivation energies across broad temperature ranges, to better understand the mechanisms underlying thermal photosynthetic acclimation.
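The temperature response parameterized by Ea, Hd, and ΔS is commonly written as a peaked Arrhenius function of photosynthetic capacity; a sketch of that standard form is below. The parameter values in the test are generic illustrative magnitudes, not values from the study, and the exact formulation used in MAESTRA may differ.

```python
import math

R = 8.314  # universal gas constant, J mol-1 K-1

def peaked_arrhenius(T, V25, Ea, Hd, dS, Tref=298.15):
    """Photosynthetic capacity at leaf temperature T (K).

    Peaked Arrhenius form: an Arrhenius rise governed by the activation
    energy Ea (J mol-1), damped at high temperature by the deactivation
    energy Hd (J mol-1) and entropy parameter dS (J mol-1 K-1).
    V25 is the basal capacity at the reference temperature (25 deg C);
    by construction the function returns exactly V25 at T = Tref.
    """
    arr = math.exp(Ea * (T - Tref) / (Tref * R * T))
    num = 1.0 + math.exp((Tref * dS - Hd) / (Tref * R))
    den = 1.0 + math.exp((T * dS - Hd) / (T * R))
    return V25 * arr * num / den
```

Acclimation scenarios like those compared in the abstract amount to letting V25, Ea, Hd, or dS shift with growth temperature rather than holding them fixed.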
Future possible crop yield scenarios under multiple SSP and RCP scenarios.
NASA Astrophysics Data System (ADS)
Sakurai, G.; Yokozawa, M.; Nishimori, M.; Okada, M.
2016-12-01
Understanding the effect of future climate change on global crop yields is one of the most important tasks for global food security. Future crop yields will be influenced by climatic factors such as changes in temperature, precipitation, and atmospheric carbon dioxide concentration. On the other hand, the effect of changes in agricultural technologies, such as crop varieties and pesticide and fertilizer inputs, on crop yields has large uncertainty, and little information is available on the contribution of each factor under future climate change scenarios. We estimated the future global yields of four major crops (maize, soybean, rice and wheat) under three Shared Socioeconomic Pathways (SSPs) and four Representative Concentration Pathways (RCPs). For this purpose, we first estimated a parameter of a process-based model (PRYSBI2) using a Bayesian method for each 1.125-degree spatial grid. This model parameter is related to agricultural technology (we call it the "technological parameter" hereafter). We then analyzed the relationship between the values of the technological parameter and GDP, and found that the estimated values of the technological parameter were positively correlated with GDP. Using the estimated relationship, we predicted future crop yields between 2020 and 2100 under the SSP1, SSP2 and SSP3 scenarios and RCP 2.6, 4.5, 6.0 and 8.5. The estimated crop yields differed among SSP scenarios; however, the yield differences attributable to the SSPs were smaller than those attributable to CO2 fertilization effects and climate change. In particular, the estimated effect of the change in atmospheric carbon dioxide concentration on global yields was more than four times larger than that of GDP for C3 crops.
Mathematical modelling of flow distribution in the human cardiovascular system
NASA Technical Reports Server (NTRS)
Sud, V. K.; Srinivasan, R. S.; Charles, J. B.; Bungo, M. W.
1992-01-01
The paper presents a detailed model of the entire human cardiovascular system which aims to study the changes in flow distribution caused by external stimuli, changes in internal parameters, or other factors. The arterial-venous network is represented by 325 interconnected elastic segments. The mathematical description of each segment is based on equations of hydrodynamics and those of stress/strain relationships in elastic materials. Appropriate input functions provide for the pumping of blood by the heart through the system. The analysis employs the finite-element technique which can accommodate any prescribed boundary conditions. Values of model parameters are from available data on physical and rheological properties of blood and blood vessels. As a representative example, simulation results on changes in flow distribution with changes in the elastic properties of blood vessels are discussed. They indicate that the errors in the calculated overall flow rates are not significant even in the extreme case of arteries and veins behaving as rigid tubes.
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land-surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model consistently overestimates evapotranspiration in arid regions, likely because water limitation and energy partitioning are misrepresented in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that improve model performance. The model with optimized parameters performed better (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The optimization results indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. The optimized parameter Topt also correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve model performance.
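The radiation-based core of the model family above is the Priestley-Taylor potential evaporation equation, which PT-JPL then scales down with ecophysiological constraints such as the soil-water multiplier β discussed in the abstract. The sketch below shows that structure under stated assumptions (Tetens saturation-vapour-pressure curve, an approximate sea-level psychrometric constant); function names and constants are illustrative, not the PT-JPL source code.

```python
import math

def priestley_taylor_pet(Rn, G, T_c, alpha=1.26):
    """Priestley-Taylor potential evaporation (same units as Rn - G).

    Rn: net radiation, G: ground heat flux, T_c: air temperature in deg C.
    Assumes the Tetens form for saturation vapour pressure and an
    approximate sea-level psychrometric constant.
    """
    es = 0.6108 * math.exp(17.27 * T_c / (T_c + 237.3))  # sat. vapour pressure, kPa
    delta = 4098.0 * es / (T_c + 237.3) ** 2             # slope of the curve, kPa K-1
    gamma = 0.066                                        # psychrometric constant, kPa K-1
    return alpha * delta / (delta + gamma) * (Rn - G)

def constrained_soil_evaporation(Rn, G, T_c, beta):
    """Soil evaporation limited by a water-availability multiplier beta in [0, 1],
    a hedged stand-in for the soil-water control parameter discussed above."""
    return beta * priestley_taylor_pet(Rn, G, T_c)
```

Lowering beta in arid regions, as the optimization in the study found, directly reduces the modelled soil-evaporation share of total evapotranspiration.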
Dudley, Robert W.; Nielsen, Martha G.
2011-01-01
The U.S. Geological Survey (USGS) began a study in 2008 to investigate anticipated changes in summer streamflows and stream temperatures in four coastal Maine river basins and the potential effects of those changes on populations of endangered Atlantic salmon. To achieve this purpose, it was necessary to characterize the quantity and timing of streamflow in these rivers by developing and evaluating a distributed-parameter watershed model for a part of each river basin by using the USGS Precipitation-Runoff Modeling System (PRMS). The GIS (geographic information system) Weasel, a USGS software application, was used to delineate the four study basins and their many subbasins, and to derive parameters for their geographic features. The models were calibrated using a four-step optimization procedure in which model output was evaluated against four datasets for calibrating solar radiation, potential evapotranspiration, annual and seasonal water balances, and daily streamflows. The calibration procedure involved thousands of model runs that used the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The calibrated watershed models performed satisfactorily, in that Nash-Sutcliffe efficiency (NSE) statistic values for the calibration periods ranged from 0.59 to 0.75 (on a scale of negative infinity to 1) and NSE statistic values for the evaluation periods ranged from 0.55 to 0.73. The calibrated watershed models simulate daily streamflow at many locations in each study basin. 
These models enable natural resources managers to characterize the timing and amount of streamflow in order to support a variety of water-resources efforts including water-quality calculations, assessments of water use, modeling of population dynamics and migration of Atlantic salmon, modeling and assessment of habitat, and simulation of anticipated changes to streamflow and water temperature resulting from changes forecast for air temperature and precipitation.
Anthropometric parameters problem solving of shoe lasts by deforming membranes with medium weight
NASA Astrophysics Data System (ADS)
Albu, A. V.; Anghel Drugarin, C. V.; Barla, E. M.; Porav, V.
2018-01-01
The paper presents research results on obtaining a virtual shoe-last model and on changing its anthropometric parameters. The most important change occurs in the toe region. The alternative next-generation CAD-CAM technology is based on DELCAM software for the CAM procedure and on simulation in MATLAB. This research produced virtual changes of the last's anthropometric parameters, the width at the toes (ld) and the shoe-last length (Lp), and sectional images of the modified last were generated from the original last using the finite element method (FEM) in the MATLAB environment. The results are applicable in the textile industry, in estimating lining consumption, and in developing leather substitutes on fabric, knitted, or woven backings.
NASA Technical Reports Server (NTRS)
Scholtz, P.; Smyth, P.
1992-01-01
This article describes an investigation of a statistical hypothesis testing method for detecting changes in the characteristics of an observed time series. The work is motivated by the need for practical automated methods for on-line monitoring of Deep Space Network (DSN) equipment to detect failures and changes in behavior. In particular, on-line monitoring of the motor current in a DSN 34-m beam-waveguide (BWG) antenna is used as an example. The algorithm is based on a measure of the information-theoretic distance between two autoregressive models: one estimated with data from a dynamic reference window and one estimated with data from a sliding reference window. The Hinkley cumulative sum stopping rule is used to detect a change in the mean of this distance measure, corresponding to the detection of a change in the underlying process. The basic theory behind this two-model test is presented, and the problem of practical implementation is addressed, examining windowing methods, model estimation, and detection-parameter assignment. Results from five fault-transition simulations are presented to show possible limitations of the detection method, and suggestions for future implementation are given.
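The Hinkley cumulative sum stopping rule referenced above can be sketched in a few lines. This is the generic Gaussian unit-variance likelihood-ratio form applied to an arbitrary monitored sequence; in the article the monitored statistic is the distance between the two autoregressive models, and the noise model and thresholds would differ in practice.

```python
def hinkley_cusum(xs, mu0, delta, threshold):
    """Hinkley's CUSUM test for an increase in mean from mu0 to mu0 + delta.

    Accumulates the log-likelihood-ratio increments (Gaussian, unit
    variance) and declares a change when the cumulative sum rises more
    than `threshold` above its running minimum. Returns the index at
    which the change is declared, or None if no change is detected.
    """
    S, S_min = 0.0, 0.0
    for k, x in enumerate(xs):
        S += x - mu0 - delta / 2.0  # positive drift only after the mean shifts
        S_min = min(S_min, S)
        if S - S_min > threshold:
            return k
    return None
```

The threshold trades detection delay against false-alarm rate, which is exactly the detection-parameter assignment issue the article examines.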
Bai, Yu; Katahira, Kentaro; Ohira, Hideki
2014-01-01
Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value updating based on reward-prediction error and learning-rate modulation based on a surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulation to address the functional significance of the hybrid model in a probabilistic reversal learning task. The hybrid model was found to perform better than the standard RL model over a wide range of parameter settings. These results suggest that the hybrid model is more robust to mistuned parameters than the standard RL model when decision-makers continue to learn stimulus-reward contingencies that can change abruptly. The parameter-fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, suggesting that the hybrid model has more explanatory power for the behavioral data. PMID:25161635
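The contrast between the two update rules above can be sketched as follows. The standard model uses a fixed learning rate; the hybrid-style rule lets the learning rate grow with surprise, here proxied by the absolute reward-prediction error. This is an illustrative form, not the exact parameterization of the cited model.

```python
def rl_update(value, reward, alpha):
    """Standard delta-rule value update with a fixed learning rate."""
    return value + alpha * (reward - value)

def hybrid_rl_update(value, reward, base_alpha, eta):
    """Hybrid-style update: the effective learning rate increases with
    surprise (absolute reward-prediction error), capped at 1.0.
    An illustrative sketch, with eta a hypothetical surprise weight."""
    rpe = reward - value
    alpha = min(base_alpha + eta * abs(rpe), 1.0)
    return value + alpha * rpe
```

After a contingency reversal the prediction error is large, so the hybrid learner takes a bigger corrective step than the fixed-rate learner, which is the intuition behind its robustness in the simulations.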
TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, H; Gordon, J; Chetty, I
2014-06-15
Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response, and modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells during radiation treatment was solved analytically. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr, and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was constructed analytically from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment, and the Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
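The three parameters Td, Tr, and SFD can be illustrated with a toy birth-and-death iteration: at each fraction a proportion SFD of viable cells survives and keeps doubling with time Td, while killed cells move to a dying pool cleared with half-life Tr. This is a hedged sketch of the model class, not the analytically derived Z-model of the abstract.

```python
def tumor_regression(n0, n_fractions, sf_d, td, tr, dt=1.0):
    """Toy fractionated-treatment iteration.

    n0: initial viable cell count, sf_d: survival fraction per fraction,
    td: doubling time and tr: dying-cell half-life (in units of dt, the
    inter-fraction interval). Returns the total (viable + dying) cell
    count after each fraction. Illustrative only.
    """
    viable, dying, sizes = float(n0), 0.0, []
    for _ in range(n_fractions):
        killed = (1.0 - sf_d) * viable              # cells lethally damaged this fraction
        viable = sf_d * viable * 2.0 ** (dt / td)   # survivors keep proliferating
        dying = dying * 0.5 ** (dt / tr) + killed   # dying pool decays, gains new kills
        sizes.append(viable + dying)
    return sizes
```

With survival fraction and clearance half-life of the order reported in the abstract (SFD near 0.43, Tr near 17.5 days), the total population declines over a treatment course even though dying cells linger, which is why gross tumor volume lags cell kill.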
NASA Astrophysics Data System (ADS)
Eisner, Stephanie; Huang, Shaochun; Majasalmi, Titta; Bright, Ryan; Astrup, Rasmus; Beldring, Stein
2017-04-01
Forests are recognized for their decisive effect on landscape water balance, with structural forest characteristics such as stand density or species composition determining energy partitioning and dominant flow paths. However, spatial and temporal variability in forest structure is often poorly represented in hydrological modeling frameworks, in particular in regional- to large-scale hydrological modeling and impact analysis. As a common practice, prescribed land-cover classes (including different generic forest types) are linked to parameter values derived from the literature, or parameters are determined by calibration. While national forest inventory (NFI) data provide comprehensive, detailed information on hydrologically relevant forest characteristics, their potential to inform hydrological simulation over larger spatial domains is rarely exploited. In this study we present a modeling framework that couples the distributed hydrological model HBV with forest structural information derived from the Norwegian NFI and multi-source remote sensing data. The modeling framework, set up for all of continental Norway at 1 km spatial resolution, is explicitly designed to study the combined and isolated impacts of climate change, forest management, and land-use change on hydrological fluxes. We use a forest classification system based on forest structure rather than biomes, which implicitly accounts for the impacts of forest management on forest structural attributes. In the hydrological model, different forest classes are represented by three parameters: leaf area index (LAI), mean tree height, and surface albedo. Seasonal cycles of LAI and surface albedo are simulated dynamically to make the framework applicable under climate change conditions. Based on a hindcast for the pilot regions Nord-Trøndelag and Sør-Trøndelag, we show how forest management has affected regional hydrological fluxes during the second half of the 20th century, as contrasted with climate variability.
An Investigation on the Sensitivity of the Parameters of Urban Flood Model
NASA Astrophysics Data System (ADS)
M, A. B.; Lohani, B.; Jain, A.
2015-12-01
Global climatic change has triggered weather patterns that lead to heavy and sudden rainfall in different parts of the world. The impact of heavy rainfall is especially severe in urban areas, in the form of urban flooding. In order to understand the effect of flooding induced by heavy rainfall, it is necessary to model the entire flooding scenario accurately, which is now becoming possible with the availability of high-resolution airborne LiDAR data and other real-time observations. However, there is little understanding of the optimal use of these data or of the effect of other parameters on the performance of a flood model. This study aims at developing understanding of these issues, namely to (i) understand how the use of high-resolution LiDAR data improves the performance of an urban flood model, and (ii) understand the sensitivity of various hydrological parameters in urban flood modelling. Modelling of flooding in urban areas due to heavy rainfall is carried out with the Indian Institute of Technology (IIT) Kanpur, India as the study site. The existing model MIKE FLOOD, which is accepted by the Federal Emergency Management Agency (FEMA), is used along with high-resolution airborne LiDAR data. Once the model is set up, it is run repeatedly while changing parameters such as the resolution of the Digital Surface Model (DSM), Manning's roughness, initial losses, catchment description, concentration time, and runoff reduction factor, and the results obtained from the model are compared with field observations. The parametric study carried out in this work demonstrates that the selection of catchment description plays a very important role in urban flood modelling. Results also show the significant impact of DSM resolution, initial losses, and concentration time on the urban flood model. 
This study will help in understanding the effect of the various parameters that should be part of a flood model for accurate performance.
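One reason Manning's roughness is such an influential parameter is visible directly in Manning's equation, sketched below: modelled flow velocity is inversely proportional to the roughness coefficient, so doubling n halves the velocity for the same channel geometry and slope. This is the standard textbook relation, not code from MIKE FLOOD.

```python
def manning_velocity(n, R_h, S):
    """Manning's equation (SI units): mean flow velocity in m/s for
    roughness coefficient n, hydraulic radius R_h in metres, and
    dimensionless channel slope S."""
    return (1.0 / n) * R_h ** (2.0 / 3.0) * S ** 0.5
```

In a parametric study, sweeping n over its plausible range for a land-cover class and re-running the flood model bounds the velocity (and hence inundation timing) uncertainty attributable to roughness alone.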
Latent Growth and Dynamic Structural Equation Models.
Grimm, Kevin J; Ram, Nilam
2018-05-07
Latent growth models make up a class of methods to study within-person change: how it progresses, how it differs across individuals, what its determinants are, and what its consequences are. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examining sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed, including inherently nonlinear growth models, derivative specification of growth models, and latent change score models for studying stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.
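The deterministic core of the dual change score model mentioned above (and studied in the simulation work summarized earlier in this collection) is simple: the change at each wave is a constant-change component (the slope) plus a proportional-change component (the autoproportion coefficient times the previous score). The sketch below generates the implied mean trajectory; an actual LCS analysis works with latent variables and is fit in SEM software, which this toy function does not attempt.

```python
def dual_change_trajectory(y0, slope, beta, n_waves):
    """Mean trajectory implied by a dual change score model.

    Change between waves: delta_y = slope + beta * y_previous, where
    `slope` is the constant-change component and `beta` the
    autoproportion coefficient. Returns scores at all n_waves waves.
    """
    ys = [y0]
    for _ in range(n_waves - 1):
        ys.append(ys[-1] + slope + beta * ys[-1])
    return ys
```

With beta = 0 the trajectory is linear; with negative beta it bends toward an asymptote at -slope/beta, which is how the model produces the decelerating growth curves typical of developmental data.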
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In biogeochemistry, numerical models have been widely used for investigating carbon dynamics under global change from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux-tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME performed well and comparably in deriving the optimal parameter set, with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on the variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
Kolodny, Oren; Creanza, Nicole; Feldman, Marcus W
2016-12-01
One of the most puzzling features of the prehistoric record of hominid stone tools is its apparent punctuation: it consists of abrupt bursts of dramatic change that separate long periods of largely unchanging technology. Within each such period, small punctuated cultural modifications take place. Punctuation on multiple timescales and magnitudes is also found in cultural trajectories from historical times. To explain these sharp cultural bursts, researchers invoke such external factors as sudden environmental change, rapid cognitive or morphological change in the hominids that created the tools, or replacement of one species or population by another. Here we propose a dynamic model of cultural evolution that accommodates empirical observations: without invoking external factors, it gives rise to a pattern of rare, dramatic cultural bursts, interspersed by more frequent, smaller, punctuated cultural modifications. Our model includes interdependent innovation processes that occur at different rates. It also incorporates a realistic aspect of cultural evolution: cultural innovations, such as those that increase food availability or that affect cultural transmission, can change the parameters that affect cultural evolution, thereby altering the population's cultural dynamics and steady state. This steady state can be regarded as a cultural carrying capacity. These parameter-changing cultural innovations occur very rarely, but whenever one occurs, it triggers a dramatic shift towards a new cultural steady state. The smaller and more frequent punctuated cultural changes, on the other hand, are brought about by innovations that spur the invention of further, related, technology, and which occur regardless of whether the population is near its cultural steady state. Our model suggests that common interpretations of cultural shifts as evidence of biological change, for example the appearance of behaviorally modern humans, may be unwarranted.
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving-window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in the perturbation. Variations in forcing-response sensitivities are evident between types (lake water level, groundwater level, or spring flow), spatially (among sites of the same type), and temporally. Two characteristics generally common among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivity to groundwater use during significant drought periods.
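The two building blocks of the procedure above, moving-window averaging of forcing series and finite-difference perturbation sensitivity, can be sketched generically. The model here is an arbitrary callable standing in for the trained MWA-ANN; both function names are illustrative.

```python
def moving_window_average(xs, w):
    """Trailing moving-window average of length w (shorter at the start)."""
    return [sum(xs[max(0, i - w + 1): i + 1]) / min(w, i + 1) for i in range(len(xs))]

def perturbation_sensitivity(model, inputs, eps=1e-3):
    """One-at-a-time finite-difference sensitivity of a scalar model output
    to each input: (f(x + eps*e_i) - f(x)) / eps for every component i."""
    base = model(inputs)
    sens = []
    for i in range(len(inputs)):
        bumped = list(inputs)
        bumped[i] += eps
        sens.append((model(bumped) - base) / eps)
    return sens
```

Applying the perturbation at each time step of a forcing series, rather than to a static input vector, yields the time series of sensitivities described in the abstract.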
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained by fitting transient-storage models (TSMs) to experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to two case studies and compare our results to output obtained from a more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
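A minimal sketch of Monte-Carlo parameter sampling for uncertainty analysis, using a toy exponential tracer model in place of OTIS; the parameter names, bounds, and acceptance tolerance are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, t):
    """Toy stand-in for a transient-storage forward model (not OTIS):
    exponential tracer decay with rate k and storage fraction f."""
    k, f = params
    return (1.0 - f) * np.exp(-k * t)

# Monte-Carlo sampling of the parameter space
n = 5000
k_samples = rng.uniform(0.1, 1.0, n)
f_samples = rng.uniform(0.0, 0.5, n)
t = np.linspace(0.0, 5.0, 50)
sims = np.array([simulate((k, f), t) for k, f in zip(k_samples, f_samples)])

# 'Observed' tracer data generated from known parameters plus noise
obs = simulate((0.5, 0.2), t) + rng.normal(0.0, 0.01, t.size)

# Keep behavioural parameter sets (RMSE below a tolerance); their spread
# is a simple measure of parameter certainty
rmse = np.sqrt(((sims - obs) ** 2).mean(axis=1))
behavioural = rmse < 0.02
k_lo, k_hi = np.percentile(k_samples[behavioural], [5, 95])
```

A wide `[k_lo, k_hi]` interval signals exactly the situation the abstract warns about: parameter values that fit the data well but cannot be interpreted reliably.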
Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian
2018-06-19
Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models assume that changes in internal concentrations occur much more quickly than alterations in cell physiology. Thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the same optimal model, and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, higher-quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics.
The presented pipeline automates the modelling process for large metabolic networks. From this, users can simulate their pathway of interest and obtain a better understanding of how altering conditions influences cellular dynamics. By testing the effects of different parameterisations we are also able to provide suggestions to help construct more accurate models of complete metabolic systems in the future.
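How parameter uncertainty propagates to simulated metabolite dynamics can be illustrated with a toy two-step mass-action pathway; this is a stand-in for the idea, not DMPy's kinetics:

```python
import numpy as np

def simulate_pathway(k1, k2, s0=1.0, dt=0.01, t_end=10.0):
    """Toy linear pathway S -> M -> P with mass-action rates k1, k2,
    integrated with forward Euler (stand-in for a kinetic model)."""
    steps = int(t_end / dt)
    s, m = s0, 0.0
    for _ in range(steps):
        ds = -k1 * s
        dm = k1 * s - k2 * m
        s += ds * dt
        m += dm * dt
    return m  # intermediate metabolite level at t_end

# Propagate +/-20% uncertainty in the rates to the model output
rng = np.random.default_rng(1)
outputs = [simulate_pathway(k1, k2)
           for k1, k2 in zip(rng.uniform(0.8, 1.2, 200),
                             rng.uniform(0.8, 1.2, 200))]
spread = np.std(outputs) / np.mean(outputs)  # relative output uncertainty
```

Repeating this with progressively more rates fixed at their true values is the kind of experiment behind the 80% figure quoted above.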
Lyu, Jian; Liu, Xuan; Bi, Jinfeng; Wu, Xinye; Zhou, Linyan; Ruan, Weihong; Zhou, Mo; Jiao, Yi
2018-03-01
Kinetics of non-enzymatic browning and loss of free amino acids during storage at different temperatures (4, 25, 37 °C) were investigated. Changes in browning degree (A420), color parameters, vitamin C (Vc), free amino acids and 5-hydroxymethylfurfural (5-HMF) were analyzed to evaluate the non-enzymatic browning reactions, which were significantly affected by storage temperature. The lower temperature (4 °C) decreased the loss of Vc and the generation of 5-HMF, but induced the highest loss of serine. By the end of storage, serine, alanine and aspartic acid were the main amino acids lost. Results showed that the zero-order kinetic model (R² > 0.859), the first-order model (R² > 0.926) and the combined kinetic model (R² > 0.916) were the most appropriate to describe, respectively, the changes of a* and b* values, the degradation of Vc, and the changes of A420, L* and 5-HMF at the different storage temperatures. These kinetic models can be applied for predicting and minimizing the non-enzymatic browning of fresh peach juice during storage.
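A first-order kinetic fit of the kind reported above can be sketched as follows; the degradation data are synthetic, not the paper's measurements:

```python
import numpy as np

# Synthetic vitamin C degradation data (illustrative, not measured):
# first-order kinetics C(t) = C0 * exp(-k t)
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # storage time, days
c = 50.0 * np.exp(-0.08 * t)                        # concentration

# First-order fit: ln C is linear in t with slope -k
slope, intercept = np.polyfit(t, np.log(c), 1)
k_est, c0_est = -slope, np.exp(intercept)

# Coefficient of determination of the linearized fit
pred = intercept + slope * t
ss_res = np.sum((np.log(c) - pred) ** 2)
ss_tot = np.sum((np.log(c) - np.log(c).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The zero-order model is fit the same way but directly on C rather than ln C; comparing the R² values of the candidate models is how the appropriateness rankings above are obtained.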
Benke, Timothy A; Lüthi, Andreas; Palmer, Mary J; Wikström, Martin A; Anderson, William W; Isaac, John T R; Collingridge, Graham L
2001-01-01
The molecular properties of synaptic α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) receptors are an important factor determining excitatory synaptic transmission in the brain. Changes in the number (N) or single-channel conductance (γ) of functional AMPA receptors may underlie synaptic plasticity, such as long-term potentiation (LTP) and long-term depression (LTD). These parameters have been estimated using non-stationary fluctuation analysis (NSFA). The validity of NSFA for studying the channel properties of synaptic AMPA receptors was assessed using a cable model with dendritic spines and a microscopic kinetic description of AMPA receptors. Electrotonic, geometric and kinetic parameters were altered in order to determine their effects on estimates of the underlying γ. Estimates of γ were very sensitive to the access resistance of the recording (RA) and the mean open time of AMPA channels. Estimates of γ were less sensitive to the distance between the electrode and the synaptic site, the electrotonic properties of dendritic structures, recording electrode capacitance and background noise. Estimates of γ were insensitive to changes in spine morphology, synaptic glutamate concentration and the peak open probability (Po) of AMPA receptors. The results obtained using the model agree with biological data, obtained from 91 dendritic recordings from rat CA1 pyramidal cells. A correlation analysis showed that RA resulted in a slowing of the decay time constant of excitatory postsynaptic currents (EPSCs) by approximately 150 %, from an estimated value of 3.1 ms. RA also greatly attenuated the absolute estimate of γ by approximately 50-70 %. When other parameters remain constant, the model demonstrates that NSFA of dendritic recordings can readily discriminate between changes in γ vs. changes in N or Po. Neither background noise nor asynchronous activation of multiple synapses prevented reliable discrimination between changes in γ and changes in either N or Po.
The model (available online) can be used to predict how changes in the different properties of AMPA receptors may influence synaptic transmission and plasticity. PMID:11731574
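The NSFA procedure evaluated in the study rests on a parabolic relation between the variance of the current fluctuations and the mean current; a sketch with noise-free synthetic data, where the channel count and unitary current are assumed values:

```python
import numpy as np

# NSFA rests on the parabolic mean-variance relation for N independent
# channels with single-channel current i:
#     var(I) = i*I - I**2 / N
# Fitting it to (mean, variance) pairs collected along the EPSC decay
# recovers i (and hence gamma = i / driving force) and N.
i_true, n_true = 1.0, 100               # pA and channel count (assumed)
p_open = np.linspace(0.05, 0.6, 30)     # open probabilities along the decay
mean_i = n_true * i_true * p_open
var_i = i_true * mean_i - mean_i ** 2 / n_true

# Least-squares fit of var = a*I + b*I**2  ->  i = a, N = -1/b
A = np.column_stack([mean_i, mean_i ** 2])
(a, b), *_ = np.linalg.lstsq(A, var_i, rcond=None)
i_est, n_est = a, -1.0 / b
```

In real recordings the fitted i is distorted by access resistance and channel kinetics, which is precisely the sensitivity the modeling study quantifies.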
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
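The ridge-style use of prior information can be sketched on a toy ill-conditioned regression; the estimator form and all numbers are illustrative, not the paper's groundwater model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative ridge-style estimator with prior information: parameters
# are shrunk toward a prior guess, with the ridge parameter lam setting
# the influence of the prior (lam = 0 recovers ordinary least squares).
def ridge_with_prior(X, y, prior, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p),
                           X.T @ y + lam * prior)

# Toy ill-conditioned regression (not the Truckee Meadows model)
X = rng.normal(size=(30, 3))
X[:, 2] = X[:, 1] + 0.001 * rng.normal(size=30)  # near-collinear columns
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=30)
prior = np.array([1.0, 2.0, -1.0])               # an unbiased prior here

mse = {lam: np.mean((ridge_with_prior(X, y, prior, lam) - beta_true) ** 2)
       for lam in [0.0, 0.1, 1.0, 10.0]}
```

Scanning `mse` over the ridge parameter mirrors the paper's plots of estimated MSE versus the auxiliary parameter: with reliable prior information, shrinkage sharply reduces MSE relative to ordinary least squares on an ill-conditioned problem.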
NASA Astrophysics Data System (ADS)
Kruijt, B.; Jans, W.; Vasconcelos, S.; Tribuzy, E. S.; Felsemburgh, C.; Eliane, M.; Rowland, L.; da Costa, A. C. L.; Meir, P.
2014-12-01
In many dynamic vegetation models, degradation of the tropical forests is induced because they assume that productivity falls rapidly when temperatures rise in the region of 30-40°C. Apart from plant respiration, this is due to the assumptions on the temperature optima of photosynthetic capacity, which are low and can differ widely between models, even though hardly any empirical information is available for tropical forests. Even less is known about the possibility that photosynthesis will acclimate to changing temperatures. The objective of this study is to provide better estimates for these optima, as well as to determine whether any acclimation to temperature change is to be expected. We present new and hitherto unpublished data on the temperature response of photosynthesis of Amazon rainforest trees, encompassing three sites, several species and five field campaigns. Leaf photosynthesis and its parameters were determined at a range of temperatures. To study the long-term (seasonal) acclimation of this response, this was combined with an artificial, in situ, multi-season leaf heating experiment. The data show that, on average for all non-heated cases, the photosynthetic parameter Vcmax peaks weakly between 35 and 40 ˚C, while heating does not have a clearly significant effect. Results for Jmax are slightly different, with sharper peaks. Scatter was relatively high, which could indicate weak overall temperature dependence. The combined results were used to fit new parameters to the various temperature response curve functions in a range of DGVMs. The figure shows a typical example: while the default JULES model assumes a temperature optimum for Vcmax at around 33 ˚C, the data suggest that Vcmax keeps rising up to at least 40 ˚C. Of course, calculated photosynthesis, obtained by applying this Vcmax in the Farquhar model, peaks at a lower temperature.
Finally, the implication of these new model parameters for modelled climate change impact on modelled Amazon forests will be assessed, where it is expected that predicted die-back will be less.
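The kind of peaked temperature response fitted to such data is often written as an Arrhenius term damped by a deactivation term; the parameter values below are illustrative, chosen so the optimum falls near the default ~33 ˚C mentioned above, not fitted to the campaign data:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def vcmax_peaked(t_c, k25=50.0, ha=70e3, hd=200e3, ds=0.65e3):
    """Peaked Arrhenius temperature response commonly used for Vcmax
    in DGVMs; parameter values here are illustrative, not fitted."""
    t_k, t_ref = t_c + 273.15, 298.15
    arrh = np.exp(ha * (t_k - t_ref) / (t_ref * R * t_k))
    num = 1.0 + np.exp((t_ref * ds - hd) / (t_ref * R))
    den = 1.0 + np.exp((t_k * ds - hd) / (t_k * R))
    return k25 * arrh * num / den

# Locate the temperature optimum on a fine grid
t = np.linspace(10.0, 50.0, 4001)
t_opt = t[np.argmax(vcmax_peaked(t))]   # shifts with ha, hd and ds
```

Refitting the activation and deactivation parameters to data that keep rising toward 40 ˚C pushes `t_opt` upward, which is the adjustment the study proposes for the DGVM response curves.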
NASA Astrophysics Data System (ADS)
Chegwidden, O.; Nijssen, B.; Mao, Y.; Rupp, D. E.
2016-12-01
The Columbia River Basin (CRB) in the United States' Pacific Northwest (PNW) is highly regulated for hydropower generation, flood control, fish survival, irrigation and navigation. Historically it has had a hydrologic regime characterized by winter precipitation in the form of snow, followed by a spring peak in streamflow from snowmelt. Anthropogenic climate change is expected to significantly alter this regime, causing changes to streamflow timing and volume. While numerous hydrologic studies have been conducted across the CRB, the impact of methodological choices in hydrologic modeling has not been as heavily investigated. To better understand their impact on the spread in modeled projections of hydrological change, we ran simulations involving permutations of a variety of methodological choices. We used outputs from ten global climate models (GCMs) and two representative concentration pathways from the Intergovernmental Panel on Climate Change's Fifth Assessment Report. After downscaling the GCM output using three different techniques we forced the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS), both implemented at 1/16th degree (~5 km) for the period 1950-2099. For the VIC model, we used three independently-derived parameter sets. We will show results from the range of simulations, both in the form of basin-wide spatial analyses of hydrologic variables and through analyses of changes in streamflow at selected sites throughout the CRB. We will then discuss the differences in sensitivities to climate change seen among the projections, paying particular attention to differences in projections from the hydrologic models and different parameter sets.
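The size of such a factorial experiment follows directly from the choices listed above; the downscaling-method, pathway, and parameter-set labels here are assumed for illustration, as the abstract does not name them:

```python
from itertools import product

gcms = [f"GCM{i}" for i in range(1, 11)]            # 10 global climate models
rcps = ["RCP_low", "RCP_high"]                      # 2 concentration pathways
downscaling = ["method_A", "method_B", "method_C"]  # 3 techniques (names assumed)
hydro = [("VIC", p) for p in ("params1", "params2", "params3")] \
        + [("PRMS", "default")]                     # 3 VIC parameter sets + PRMS

ensemble = list(product(gcms, rcps, downscaling, hydro))
n_runs = len(ensemble)   # full factorial over the methodological choices
```

Enumerating the design this way makes explicit how quickly methodological permutations multiply: 10 × 2 × 3 × 4 = 240 climate-hydrology simulations if every combination is run.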
Mathematical modelling of the maternal cardiovascular system in the three stages of pregnancy.
Corsini, Chiara; Cervi, Elena; Migliavacca, Francesco; Schievano, Silvia; Hsia, Tain-Yen; Pennati, Giancarlo
2017-09-01
In this study, a mathematical model of the female circulation during pregnancy is presented in order to investigate the hemodynamic response to the cardiovascular changes associated with each trimester of pregnancy. First, a preliminary lumped parameter model of the circulation of a non-pregnant female was developed, including the heart, the systemic circulation with a specific block for the uterine district and the pulmonary circulation. The model was first tested at rest; then heart rate and vascular resistances were individually varied to verify the correct response to parameter alterations characterising pregnancy. In order to simulate hemodynamics during pregnancy at each trimester, the main changes applied to the model consisted in reducing vascular resistances, and simultaneously increasing heart rate and ventricular wall volumes. Overall, reasonable agreement was found between model outputs and in vivo data, with the trends of the cardiac hemodynamic quantities suggesting correct response of the heart model throughout pregnancy. Results were reported for uterine hemodynamics, with flow tracings resembling typical Doppler velocity waveforms at each stage, including pulsatility indexes. Such a model may be used to explore the changes that happen during pregnancy in females with cardiovascular disease. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
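The logic of a lumped-parameter pressure response can be sketched with the simplest possible relation, mean pressure ≈ cardiac output × systemic resistance; the values are illustrative, not the paper's:

```python
def mean_pressure(cardiac_output, resistance):
    """Steady-state mean arterial pressure from the simplest lumped
    relation MAP ~ CO x SVR (venous pressure neglected)."""
    return cardiac_output * resistance

# Illustrative values (not the paper's): pregnancy lowers systemic
# vascular resistance while cardiac output rises, so mean pressure
# changes far less than either parameter alone would suggest.
map_nonpregnant = mean_pressure(5.0, 18.0)   # L/min x mmHg.min/L
map_trimester3 = mean_pressure(7.0, 12.5)
```

The full model replaces this single product with a network of compliances and resistances per vascular district, but the compensating-parameter behavior is the same.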
Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D
2011-11-01
There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
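A congruency-based refinement loop of the sort described can be sketched with a minimal evolutionary search; this is a simplified stand-in for the study's genetic algorithm, and the two "models" and bounds are toys:

```python
import numpy as np

rng = np.random.default_rng(4)

def congruency_loss(params):
    """Discrepancy between the outputs of two toy 'models' of the same
    quantity (stand-ins for the continuum and agent-based models)."""
    continuum_output = params[0] * 2.0 + params[1]
    agent_based_output = 3.0           # common output both should match
    return (continuum_output - agent_based_output) ** 2

# Minimal evolutionary search over bounded parameters: selection of the
# fittest half, then mutation (crossover omitted for brevity)
bounds = np.array([[0.0, 2.0], [0.0, 2.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for _ in range(60):
    loss = np.array([congruency_loss(p) for p in pop])
    parents = pop[np.argsort(loss)[:20]]
    pop = np.clip(parents[rng.integers(0, 20, 40)]
                  + rng.normal(0.0, 0.05, (40, 2)),
                  bounds[:, 0], bounds[:, 1])
best = min(pop, key=congruency_loss)
```

In the study's setting the loss would compare the aortic responses predicted at the two scales, and the search would run over the 16 objectively bounded parameters.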
Construction of Gridded Daily Weather Data and its Use in Central-European Agroclimatic Study
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Trnka, M.; Skalak, P.
2013-12-01
The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high-resolution meteorological inputs in terms of the time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of the surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as an input to the follow-up simulation models. One of the downscaling approaches, which is employed also here, is based on a weather generator (WG), which is calibrated using the observed weather series, interpolated, and then modified according to the GCM- or RCM-based climate change scenarios. The present contribution, in which the parametric daily weather generator M&Rfi is linked to the high-resolution RCM output (ALADIN-Climate/CZ model) and GCM-based climate change scenarios, consists of two parts. The first part focuses on the methodology. Firstly, the gridded WG representing the baseline climate is created by merging information from observations and high-resolution RCM outputs. In this procedure, the WG is calibrated with RCM-simulated multi-variate weather series, and the grid-specific WG parameters are then de-biased by spatially interpolated correction factors based on a comparison of WG parameters calibrated with RCM-simulated weather series vs. spatially sparser observations. To represent the future climate, the WG parameters are modified according to the 'WG-friendly' climate change scenarios.
These scenarios are defined in terms of changes in WG parameters and include, apart from changes in the means, changes in parameters representing additional characteristics of the weather series (e.g. probability of wet-day occurrence and lag-1 autocorrelation of daily mean temperature). The WG-friendly scenarios for the present experiment are based on a comparison of future vs. baseline surface weather series simulated by GCMs from the CMIP3 database. The second part will present results of a climate change impact study based on the above methodology applied to Central Europe. The changes in selected climatic characteristics (focusing on the extreme precipitation and temperature characteristics) and agroclimatic characteristics (including the number of days during the vegetation season with heat and drought stresses) will be analysed. In discussing the results, the emphasis will be put on the 'added value' of various aspects of the above methodology (e.g. the inclusion of changes in 'advanced' WG parameters in the climate change scenarios). Acknowledgements: The present experiment is made within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), and VALUE (COST ES 1102 action).
Thermodynamic description of Hofmeister effects on the LCST of thermosensitive polymers.
Heyda, Jan; Dzubiella, Joachim
2014-09-18
Cosolvent effects on protein or polymer collapse transitions are typically discussed in terms of a two-state free energy change that is strictly linear in cosolute concentration. Here we investigate in detail the nonlinear thermodynamic changes of the collapse transition occurring at the lower critical solution temperature (LCST) of the role-model polymer poly(N-isopropylacrylamide) [PNIPAM] induced by Hofmeister salts. First, we establish an equation, based on the second-order expansion of the two-state free energy in concentration and temperature space, which excellently fits the experimental LCST curves and enables us to directly extract the corresponding thermodynamic parameters. Linear free energy changes, grounded on generic excluded-volume mechanisms, are indeed found for strongly hydrated kosmotropes. In contrast, for weakly hydrated chaotropes, we find significant nonlinear changes related to higher-order thermodynamic derivatives of the preferential interaction parameter between salts and polymer. The observed non-monotonic behavior of the LCST can then be understood from a previously unrecognized sign change of the preferential interaction parameter with salt concentration. Finally, we find that solute partitioning models can possibly predict the linear free energy changes for the kosmotropes, but fail for chaotropes. Our findings cast strong doubt on their general applicability to protein unfolding transitions induced by chaotropes.
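A generic form of the second-order expansion described above might read as follows; the symbols are illustrative placeholders, not the paper's notation:

```latex
% Second-order expansion of the two-state (collapse) free energy in
% salt concentration c and temperature T around (0, T_0):
\Delta G(c,T) \simeq \Delta G_0 + \Delta S_0\,(T - T_0)
    + m_1\, c + \tfrac{1}{2}\, m_2\, c^2 + \delta\, c\,(T - T_0)

% Setting \Delta G(c, T^*) = 0 gives the salt-dependent LCST:
T^*(c) = T_0 - \frac{m_1\, c + \tfrac{1}{2}\, m_2\, c^2}{\Delta S_0 + \delta\, c}
```

With $m_2 = \delta = 0$ the LCST shifts linearly in $c$ (the kosmotrope case); nonzero higher-order coefficients, tied to derivatives of the preferential interaction parameter, produce the nonlinear and non-monotonic shifts reported for chaotropes.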
Colorado River basin sensitivity to disturbance impacts
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that consider climate impacts occurring as a result of this warming are often provided using modeling tools which rely on uncertain inputs; to fully understand impacts on streamflow, sensitivity analysis can help determine how models respond under changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and for the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. New parameter sensitivities identified include runoff/evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, the streamflow impact of 1 °C of warming was equivalent to that of a 30% loss of forest cover, while a 10% loss of precipitation was equivalent to a 90% decline in forest cover.
Scenarios utilizing multiple disturbances led to unexpected results where changes could either magnify or diminish extremes, such as low and peak flows and streamflow timing, dependent on the strength and direction of the forcing. These results indicate the importance of understanding model sensitivities under disturbance impacts to manage these shifts; plan for future water resource changes and determine how the impacts will affect the sustainability and adaptability of food-energy-water systems.
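A space-filling Latin hypercube design of the kind used above can be sketched by stratified sampling of the unit hypercube; the dimensions match the 46-parameter setting, but the bounds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_params):
    """Space-filling Latin hypercube sample on the unit hypercube:
    each parameter's range is split into n_samples strata, each stratum
    is sampled exactly once, and columns are shuffled independently."""
    u = (rng.random((n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u

# e.g. 100 design points over 46 model parameters, scaled to bounds
design = latin_hypercube(100, 46)
lo, hi = 0.1, 0.9                      # illustrative uniform bounds
params = lo + design * (hi - lo)
```

Each row of `params` is one VIC-style parameter set; running the model (or a statistical emulator trained on such runs) at every row supplies the samples for the variance-based sensitivity analysis.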
Combinatorial influence of environmental parameters on transcription factor activity
Knijnenburg, T.A.; Wessels, L.F.A.; Reinders, M.J.T.
2008-01-01
Motivation: Cells receive a wide variety of environmental signals, which are often processed combinatorially to generate specific genetic responses. Changes in transcript levels, as observed across different environmental conditions, can, to a large extent, be attributed to changes in the activity of transcription factors (TFs). However, in unraveling these transcription regulation networks, the actual environmental signals are often not incorporated into the model, simply because they have not been measured. The unquantified heterogeneity of the environmental parameters across microarray experiments frustrates regulatory network inference. Results: We propose an inference algorithm that models the influence of environmental parameters on gene expression. The approach is based on a yeast microarray compendium of chemostat steady-state experiments. Chemostat cultivation enables the accurate control and measurement of many of the key cultivation parameters, such as nutrient concentrations, growth rate and temperature. The observed transcript levels are explained by inferring the activity of TFs in response to combinations of cultivation parameters. The interplay between activated enhancers and repressors that bind a gene promoter determines the possible up- or downregulation of the gene. The model is translated into a linear integer optimization problem. The resulting regulatory network identifies the combinatorial effects of environmental parameters on TF activity and gene expression. Availability: The Matlab code is available from the authors upon request. Contact: t.a.knijnenburg@tudelft.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18586711
Effects of climate change on evapotranspiration over the Okavango Delta water resources
NASA Astrophysics Data System (ADS)
Moses, Oliver; Hambira, Wame L.
2018-06-01
In semi-arid developing countries, most poor people depend on contaminated surface or groundwater resources since they do not have access to safe and centrally supplied water. These water resources are threatened by several factors, including high evapotranspiration rates. In the Okavango Delta region of north-western Botswana, communities facing insufficient centrally supplied water rely mainly on the surface water resources of the Delta. The Delta loses about 98% of its water through evapotranspiration. However, the remaining 2% sustains the communities that lack sufficient water from the main supply. To understand the effects of climate change on evapotranspiration over the Okavango Delta water resources, this study analysed trends in the main climatic parameters needed as input variables in evapotranspiration models. The Mann-Kendall test was used in the analysis. Trend analysis is crucial since it reveals the direction of trends in the climatic parameters, which helps determine the effects of climate change on evapotranspiration. The main climatic parameters required as input variables in evapotranspiration models that were of interest in this study were wind speed, solar radiation and relative humidity. Very little research has been conducted on these climatic parameters in the Okavango Delta region. The trend analysis focused mainly on wind speed, which had longer data records than the other two climatic parameters of interest. Generally, statistically significant increasing trends were found, which suggests that climate change is likely to further increase evapotranspiration over the Okavango Delta water resources.
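The Mann-Kendall test used here reduces to the sum of the signs of all pairwise differences; a hand-rolled sketch on a synthetic wind-speed series (the data are invented for illustration):

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): S is the sum of
    signs of all pairwise differences; z is its normal-approximation
    score with continuity correction. Positive z = increasing trend."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z

# Illustrative annual-mean wind-speed series with an upward drift
rng = np.random.default_rng(6)
years = np.arange(1980, 2015)
wind = 3.0 + 0.02 * (years - 1980) + rng.normal(0.0, 0.1, years.size)
s, z = mann_kendall(wind)   # |z| > 1.96 -> significant trend at the 5% level
```

Because the test is rank-based, it detects monotonic trends without assuming a linear or normally distributed series, which suits noisy climatic records.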
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-06-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km upstream of the 2008 grounding line and no significant changes farther upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
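The Tikhonov/L-curve machinery can be sketched on a toy linear inverse problem; the operator, true model, and noise level are illustrative stand-ins, not the ice-flow inversion:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear inverse problem G m = d (stand-in for the basal-condition
# inversion): Tikhonov regularization trades data misfit against the
# model norm, and the L-curve corner guides the parameter choice.
G = rng.normal(size=(40, 20))
m_true = np.zeros(20)
m_true[5:10] = 1.0
d = G @ m_true + 0.5 * rng.normal(size=40)

def tikhonov(G, d, alpha):
    p = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(p), G.T @ d)

alphas = 10.0 ** np.arange(-3, 4)
misfit = [np.linalg.norm(G @ tikhonov(G, d, a) - d) for a in alphas]
m_norm = [np.linalg.norm(tikhonov(G, d, a)) for a in alphas]
# The L-curve plots log(misfit) against log(m_norm); its corner marks
# a regularization parameter that balances the two terms.
```

As the regularization parameter grows, the misfit can only increase and the model norm can only decrease; the corner of the resulting curve is the balance point the study's parameter-choice analysis relies on.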
NASA Astrophysics Data System (ADS)
Kim, S.; Seo, D. J.
2017-12-01
When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions change in the aquatic system. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, it is necessary for the watershed water quality models to be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following TW increase. The study area is located in the Republic of Korea, where waterway change due to weir construction and drought concurrently occurred around 2012. We use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.
High-throughput cardiac science on the Grid.
Abramson, David; Bernabeu, Miguel O; Bethwaite, Blair; Burrage, Kevin; Corrias, Alberto; Enticott, Colin; Garic, Slavisa; Gavaghan, David; Peachey, Tom; Pitt-Francis, J; Pueyo, E; Rodriguez, Blanca; Sher, Anna; Tan, Jefferson
2010-08-28
Cardiac electrophysiology is a mature discipline, with the first model of a cardiac cell action potential having been developed in 1962. Current models range from single ion channels, through very complex models of individual cardiac cells, to geometrically and anatomically detailed models of the electrical activity in whole ventricles. A critical issue for model developers is how to choose parameters that allow the model to faithfully reproduce observed physiological effects without over-fitting. In this paper, we discuss the use of a parametric modelling toolkit, called Nimrod, that makes it possible both to explore model behaviour as parameters are changed and also to tune parameters by optimizing model output. Importantly, Nimrod leverages computers on the Grid, accelerating experiments by using available high-performance platforms. We illustrate the use of Nimrod with two case studies, one at the cardiac tissue level and one at the cellular level.
Henriksson, Mikael; Corino, Valentina D A; Sornmo, Leif; Sandberg, Frida
2016-09-01
The atrioventricular (AV) node plays a central role in atrial fibrillation (AF), as it influences the conduction of impulses from the atria into the ventricles. In this paper, the statistical dual pathway AV node model, previously introduced by us, is modified so that it accounts for atrial impulse pathway switching even if the preceding impulse did not cause a ventricular activation. The proposed change in model structure implies that the number of model parameters subjected to maximum likelihood estimation is reduced from five to four. The model is evaluated using the data acquired in the RATe control in Atrial Fibrillation (RATAF) study, involving 24-h ECG recordings from 60 patients with permanent AF. When fitting the models to the RATAF database, similar results were obtained for both the present and the previous model, with a median fit of 86%. The results show that the parameter estimates characterizing refractory period prolongation exhibit considerably lower variation when using the present model, a finding that may be ascribed to fewer model parameters. The new model maintains the capability to model RR intervals, while providing more reliable parameter estimates. The model parameters are expected to convey novel clinical information, and may be useful for predicting the effect of rate control drugs.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parameterization scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parameterization in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parameterization. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Prognostic characteristics of the lowest-mode internal waves in the Sea of Okhotsk
NASA Astrophysics Data System (ADS)
Kurkin, Andrey; Kurkina, Oxana; Zaytsev, Andrey; Rybin, Artem; Talipova, Tatiana
2017-04-01
The nonlinear dynamics of short-period internal waves on ocean shelves is well described by generalized nonlinear evolutionary models of the Korteweg-de Vries type. Parameters of these models, such as the long-wave propagation speed and the nonlinear and dispersive coefficients, can be calculated from hydrological data (sea water density stratification), and therefore have geographical and seasonal variations. The internal wave parameters for the basin of the Sea of Okhotsk are computed on the basis of the recent version of the hydrological data source GDEM V3.0. Geographical and seasonal variability of internal wave characteristics is investigated. It is shown that annually or seasonally averaged data can be used for the linear parameters. The nonlinear parameters are more sensitive to temporal averaging of hydrological data, and more detailed data are preferable. The zones where the nonlinear parameters change sign (so-called "turning points") are identified. Possible internal waveforms appearing in the process of internal tide transformation, including solitary waves changing polarity, are simulated for the hydrological conditions of the Sea of Okhotsk shelf to demonstrate different scenarios of internal wave adjustment, transformation, refraction and cylindrical divergence.
Rheological constraints on ridge formation on Icy Satellites
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Manga, M.
2010-12-01
The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable $D$, $\dot{D} = B\langle\sigma\rangle^{r}(1-D)^{-k} - \alpha D\,p/\mu$, and in the equation relating damage accumulation to volumetric changes, $J\rho_0 = \delta(1-D)$. Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants $B$, $\alpha$, and $\delta$ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
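A minimal sketch of integrating a damage-evolution law of this type, with a stress-driven source term $B\langle\sigma\rangle^{r}(1-D)^{-k}$ and a healing term proportional to $D$. All parameter values below are hypothetical placeholders; the paper itself searches this parameter space numerically rather than fixing values:

```python
# Hypothetical parameter values for illustration only.
B, r, k, alpha = 1e-3, 2.0, 1.0, 1e-4
p_over_mu = 0.5   # the ratio p/mu in the healing term

def step_damage(D, sigma, dt):
    """One forward-Euler step of dD/dt = B<sigma>^r (1-D)^{-k} - alpha*D*p/mu,
    where <sigma> = max(sigma, 0) keeps only tensile stress."""
    s = max(sigma, 0.0)
    dDdt = B * s**r * (1.0 - D)**(-k) - alpha * D * p_over_mu
    return min(max(D + dt * dDdt, 0.0), 1.0)   # clamp D to [0, 1]

# Damage grows monotonically under sustained tensile stress.
D = 0.0
for _ in range(1000):
    D = step_damage(D, sigma=1.0, dt=0.01)
```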
Jump-Diffusion models and structural changes for asset forecasting in hydrology
NASA Astrophysics Data System (ADS)
Tranquille Temgoua, André Guy; Martel, Richard; Chang, Philippe J. J.; Rivera, Alfonso
2017-04-01
Impacts of climate change on surface water and groundwater are of concern in many regions of the world, since water is an essential natural resource. Jump-Diffusion models are generally used in economics and related fields but not in hydrology, where they could potentially be applied to the analysis and forecasting of hydrologic data series. The present study uses Jump-Diffusion models augmented with structural changes to detect fluctuations in hydrologic processes in relation to climate change. The model implicitly assumes that modifications in river flow rates can be divided into three categories: (a) normal changes due to irregular precipitation events, especially in tropical regions, causing major disturbance in hydrologic processes (this component is modelled by a discrete Brownian motion); (b) abnormal, sudden and non-persistent modifications, which are handled by Poisson processes; and (c) the persistence of hydrologic fluctuations, characterized by structural changes in hydrological data related to climate variability. The objective of this paper is to add structural changes to diffusion models with jumps, in order to capture the persistence of hydrologic fluctuations. Indirectly, the idea is to observe whether there are structural changes of discharge/recharge over the study area, and to find an efficient and flexible model capable of capturing a wide variety of hydrologic processes. Structural changes in hydrological data are estimated using the method of nonlinear discrete filters via the Method of Simulated Moments (MSM). An application is given using sensitive parameters such as the baseflow index and the recession coefficient to capture discharge/recharge. Historical datasets are examined by Volume Spread Analysis (VSA) to detect real-time and random perturbations in hydrologic processes. The application of the method allows more accurate hydrologic parameters to be established.
This study has implications for forecasting floods and groundwater recession. Keywords: hydrologic processes, Jump-Diffusion models, structural changes, forecast, climate change
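A minimal sketch of simulating one discretized jump-diffusion path of the kind described, with a Brownian diffusion component plus Poisson-driven jumps. The drift, volatility, jump intensity and jump-size parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: drift, volatility, jump intensity (per unit
# time), and jump-size standard deviation.
mu, sigma, lam, jump_sd = 0.01, 0.2, 0.05, 1.0
dt, n = 1.0, 500

x = np.empty(n)
x[0] = 10.0
for t in range(1, n):
    # Euler step of the diffusion part: mu*dt + sigma*dW
    diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # A jump occurs with probability lam*dt in each step
    jump = rng.normal(0.0, jump_sd) if rng.random() < lam * dt else 0.0
    x[t] = x[t - 1] + diffusion + jump
```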
Bustamante, P; Pena, M A; Barra, J
2000-01-20
Sodium salts are often used in drug formulation but their partial solubility parameters are not available. Sodium alters the physical properties of the drug, and knowledge of these parameters would help to predict adhesion properties that cannot be estimated using the solubility parameters of the parent acid. This work tests the applicability of the modified extended Hansen method to determine partial solubility parameters of sodium salts of acidic drugs containing a single hydrogen bonding group (ibuprofen, sodium ibuprofen, benzoic acid and sodium benzoate). The method uses a regression analysis of the logarithm of the experimental mole fraction solubility of the drug against the partial solubility parameters of the solvents, using models with three and four parameters. The solubility of the drugs was determined in a set of solvents representative of several chemical classes, ranging from low to high solubility parameter values. The best results were obtained with the four-parameter model for the acidic drugs and with the three-parameter model for the sodium derivatives. The four-parameter model includes both a Lewis-acid and a Lewis-base term. Since the Lewis acid properties of the sodium derivatives are blocked by sodium, the three-parameter model is recommended for this kind of compound. Comparison of the parameters obtained shows that sodium greatly changes the polar parameters whereas the dispersion parameter is not much affected. Consequently the total solubility parameters of the salts are larger than for the parent acids, in good agreement with the larger hydrophilicity expected from the introduction of sodium. The results indicate that the modified extended Hansen method can be applied to determine the partial solubility parameters of acidic drugs and their sodium salts.
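A sketch of the regression step of such a method: the logarithm of the mole-fraction solubility is fitted by ordinary least squares against the solvents' partial solubility parameters (here a three-parameter form with dispersion, polar and hydrogen-bonding terms). All solvent data and the exact functional form are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical solvent data: partial solubility parameters
# (dispersion, polar, hydrogen-bonding) and measured ln(mole-fraction
# solubility) of the drug in each solvent.
delta = np.array([[15.5,  3.1,  5.0],
                  [16.0,  8.0, 10.0],
                  [17.5, 11.0, 26.0],
                  [15.8,  5.7,  7.0],
                  [18.0,  1.4,  2.0]])
ln_x = np.array([-4.2, -3.1, -2.5, -3.8, -4.9])

# Three-parameter linear model: ln x = c0 + c1*dd + c2*dp + c3*dh,
# fitted by ordinary least squares.
A = np.column_stack([np.ones(len(ln_x)), delta])
coef, *_ = np.linalg.lstsq(A, ln_x, rcond=None)
```

The fitted coefficients are what would then be interpreted in terms of the drug's own partial solubility parameters.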
Cai, Chuang; Li, Gang; Yang, Hailong; Yang, Jiaheng; Liu, Hong; Struik, Paul C; Luo, Weihong; Yin, Xinyou; Di, Lijun; Guo, Xuanhe; Jiang, Wenyu; Si, Chuanfei; Pan, Genxing; Zhu, Jianguo
2018-04-01
Leaf photosynthesis of crops acclimates to elevated CO2 and temperature, but studies quantifying responses of leaf photosynthetic parameters to combined CO2 and temperature increases under field conditions are scarce. We measured leaf photosynthesis of rice cultivars Changyou 5 and Nanjing 9108 grown in two free-air CO2 enrichment (FACE) systems, respectively, installed in paddy fields. Each FACE system had four combinations of two levels of CO2 (ambient and enriched) and two levels of canopy temperature (no warming and warmed by 1.0-2.0°C). Parameters of the C3 photosynthesis model of Farquhar, von Caemmerer and Berry (the FvCB model), and of a stomatal conductance (gs) model were estimated for the four conditions. Most photosynthetic parameters acclimated to elevated CO2, elevated temperature, and their combination. The combination of elevated CO2 and temperature changed the functional relationships between biochemical parameters and leaf nitrogen content for Changyou 5. The gs model significantly underestimated gs under the combination of elevated CO2 and temperature by 19% for Changyou 5 and by 10% for Nanjing 9108 if no acclimation was assumed. However, our further analysis applying the coupled gs-FvCB model to an independent, previously published FACE experiment showed that including such an acclimation response of gs hardly improved prediction of leaf photosynthesis under the four combinations of CO2 and temperature. Therefore, the typical procedure whereby crop models using the FvCB and gs models are parameterized from plants grown under current ambient conditions may not result in critical errors in projecting productivity of paddy rice under future global change. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.
NASA Astrophysics Data System (ADS)
Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei
2017-04-01
The stock price fluctuation is studied in this paper from an intrinsic time perspective. Events, either directional changes (DC) or overshoots, are taken as the time scale of the price time series. Under this directional change law, the corresponding statistical properties and parameter estimation are tested on the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
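The directional-change event detection underlying such intrinsic-time methods can be sketched as follows: track the running extreme since the last event, and confirm an event whenever the price reverses by a relative threshold theta. This is a generic sketch of the standard DC algorithm, not the paper's specific estimation procedure:

```python
def directional_changes(prices, theta):
    """Return the indices at which a directional-change event of
    relative size theta is confirmed."""
    events = []
    ext = prices[0]          # running extreme since the last event
    up = True                # current search direction
    for i, p in enumerate(prices):
        if up:
            if p > ext:
                ext = p
            elif (ext - p) / ext >= theta:   # downturn confirmed
                events.append(i)
                ext, up = p, False
        else:
            if p < ext:
                ext = p
            elif (p - ext) / ext >= theta:   # upturn confirmed
                events.append(i)
                ext, up = p, True
    return events

# A ~2% drop then a 3% rise, with theta = 1%: two events are confirmed.
print(directional_changes([100, 102, 100, 103], 0.01))
```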
Jackson, Charlotte; Mangtani, Punam; Fine, Paul; Vynnycky, Emilia
2014-01-01
Background: Changes in children's contact patterns between termtime and school holidays affect the transmission of several respiratory-spread infections. Transmission of varicella zoster virus (VZV), the causative agent of chickenpox, has also been linked to the school calendar in several settings, but temporal changes in the proportion of young children attending childcare centres may have influenced this relationship. Methods: We used two modelling methods (a simple difference-equations model and a Time-series Susceptible-Infectious-Recovered (TSIR) model) to estimate fortnightly values of a contact parameter (the per capita rate of effective contact between two specific individuals), using GP consultation data for chickenpox in England and Wales from 1967-2008. Results: The estimated contact parameters were 22-31% lower during the summer holiday than during termtime. The relationship between the contact parameter and the school calendar did not change markedly over the years analysed. Conclusions: In England and Wales, reductions in contact between children during the school summer holiday lead to a reduction in the transmission of VZV. These estimates are relevant for predicting how closing schools and nurseries may affect an outbreak of an emerging respiratory-spread pathogen. PMID:24932994
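In a simple difference-equation transmission model of the form I[t+1] = beta[t] * S[t] * I[t], the fortnightly contact parameter can be recovered directly from consecutive case counts. A minimal sketch with hypothetical case and susceptible series standing in for the GP consultation data:

```python
import numpy as np

# Hypothetical fortnightly incident cases I and susceptible pool S.
I = np.array([120.0, 150.0, 180.0, 140.0, 90.0])
S = np.array([5.0e5, 4.9e5, 4.8e5, 4.7e5, 4.6e5])

# Rearranging I[t+1] = beta[t] * S[t] * I[t] gives the per-capita
# effective contact rate for each fortnight.
beta = I[1:] / (S[:-1] * I[:-1])
```

Comparing such beta estimates between termtime and holiday fortnights is the kind of contrast the study reports.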
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution using clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests were analyzed with four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated using the thermodynamic equilibrium coefficients. PMID:22690177
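The D-R fit reduces to a linear regression: ln qe = ln qm - K*eps², with the Polanyi potential eps = RT ln(1 + 1/Ce), and the mean adsorption energy E = 1/sqrt(2K). A sketch with hypothetical equilibrium data (the units and magnitudes are illustrative, not the paper's measurements):

```python
import numpy as np

R, T = 8.314e-3, 298.0    # gas constant in kJ/(mol K), assumed temperature

# Hypothetical equilibrium data: Ce (mmol/L), qe (mmol/g).
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
qe = np.array([0.40, 0.55, 0.70, 0.82, 0.90])

# Dubinin-Radushkevich: ln qe = ln qm - K * eps^2,
# with Polanyi potential eps = R*T*ln(1 + 1/Ce).
eps = R * T * np.log(1.0 + 1.0 / Ce)
slope, ln_qm = np.polyfit(eps**2, np.log(qe), 1)
K = -slope                   # the fitted slope is -K
E = 1.0 / np.sqrt(2.0 * K)   # mean adsorption energy, kJ/mol
```

An E below about 8 kJ/mol is conventionally read as physisorption, which is the criterion the abstract invokes.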
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
NASA Astrophysics Data System (ADS)
Mahadevan, S.; Manojkumar, R.; Jayakumar, T.; Das, C. R.; Rao, B. P. C.
2016-06-01
17-4 PH (precipitation hardening) stainless steel is a soft martensitic stainless steel strengthened by aging at an appropriate temperature for a sufficient duration. Precipitation of copper particles in the martensitic matrix during aging causes coherency strains, which improve the mechanical properties, namely the hardness and strength, of the matrix. The contributions to X-ray diffraction (XRD) profile broadening due to coherency strains caused by precipitation and crystallite size changes due to aging are separated and quantified using the modified Williamson-Hall approach. The estimated normalized mean square strain and crystallite size are used to explain the observed changes in hardness. Microstructural changes observed in secondary electron images are in qualitative agreement with crystallite size changes estimated from XRD profile analysis. The precipitation kinetics in the age-hardening regime and the overaged regime are studied from hardness changes; they follow Avrami kinetics and Wilson's model, respectively. In the overaged condition, the hardness changes are linearly correlated with the tempering parameter (also known as the Larson-Miller parameter). A similar linear variation is observed between the normalized mean square strain (determined from XRD line profile analysis) and the tempering parameter in the incoherent regime, which is beyond peak microstrain conditions.
Social judgment theory based model on opinion formation, polarization and evolution
NASA Astrophysics Data System (ADS)
Chau, H. F.; Wong, C. Y.; Chow, F. K.; Fung, Chi-Hang Fred
2014-12-01
The dynamical origin of opinion polarization in the real world is an interesting topic that physical scientists may help to understand. To properly model the dynamics, the theory must be fully compatible with findings by social psychologists on microscopic opinion change. Here we introduce a generic model of opinion formation with homogeneous agents based on the well-known social judgment theory in social psychology by extending a similar model proposed by Jager and Amblard. The agents’ opinions will eventually cluster around extreme and/or moderate opinions forming three phases in a two-dimensional parameter space that describes the microscopic opinion response of the agents. The dynamics of this model can be qualitatively understood by mean-field analysis. More importantly, first-order phase transition in opinion distribution is observed by evolving the system under a slow change in the system parameters, showing that punctuated equilibria in public opinion can occur even in a fully connected social network.
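The microscopic opinion response in social-judgment-theory models of this kind can be sketched as follows: a nearby opinion is assimilated, a distant one is repelled, and intermediate differences leave the agent unchanged. This is a generic one-sided sketch with hypothetical latitudes of acceptance and rejection, not the paper's exact update rule:

```python
import random

def update(op_i, op_j, u, t, mu=0.1):
    """One microscopic opinion update: assimilate if |op_j - op_i| < u
    (latitude of acceptance), repel if it exceeds t (latitude of
    rejection), otherwise no change. Opinions are confined to [-1, 1]."""
    d = op_j - op_i
    if abs(d) < u:        # assimilation: move toward op_j
        op_i += mu * d
    elif abs(d) > t:      # contrast: move away from op_j
        op_i -= mu * d
    return max(-1.0, min(1.0, op_i))

random.seed(0)
ops = [random.uniform(-1, 1) for _ in range(200)]
for _ in range(10000):
    i, j = random.randrange(200), random.randrange(200)
    if i != j:
        ops[i] = update(ops[i], ops[j], u=0.4, t=1.2)
```

Varying u and t sweeps the two-dimensional parameter space in which the clustered (extreme and/or moderate) phases appear.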
NASA Astrophysics Data System (ADS)
Szabó, Zsuzsanna; Edit Gál, Nóra; Kun, Éva; Szőcs, Teodóra; Falus, György
2017-04-01
Carbon Capture and Storage is a transitional technology to reduce greenhouse gas emissions and mitigate climate change. Following the implementation and enforcement of the 2009/31/EC Directive in Hungarian legislation, the Geological and Geophysical Institute of Hungary is required to evaluate the potential CO2 geological storage structures of the country. A basic assessment of these saline water formations has already been performed, and the present goal is to extend the studies to the whole of the storage complex and to consider the protection of fresh water aquifers of the neighbouring area, even in unlikely scenarios in which CO2 injection has a much more regional effect than planned. In this work, worst-case scenarios are modelled to understand the effects of CO2 or saline water leaks into drinking water aquifers. The dissolution of CO2 may significantly change the pH of fresh water, which induces mineral dissolution and precipitation in the aquifer and, therefore, changes in solution composition and even rock porosity. Mobilization of heavy metals may also be of concern. Brine migration from the CO2 reservoir and replacement of fresh water in the shallower aquifer may happen due to the pressure increase caused by CO2 injection. The saline water causes changes in solution composition which may also induce mineral reactions. The above scenarios were modelled at several methodological levels: equilibrium batch, kinetic batch and kinetic reactive transport simulations. All of these were performed with PHREEQC using the PHREEQC.DAT thermodynamic database. Kinetic models use equations and kinetic rate parameters from the USGS report of Palandri and Kharaka (2004). Reactive transport modelling also considers the estimated fluid flow and dispersivity of the studied formation. Further input parameters are the rock and original groundwater compositions of the aquifers and a range of gas-phase CO2 or brine replacement ratios.
Worst-case scenarios at seven potential CO2-storage areas have been modelled. The visualization of results was automated with R. The three types of models (equilibrium batch, kinetic batch and reactive transport) provide different but overlapping information. All modelling outputs of both scenarios (CO2/brine) indicate an increase of ion concentrations in the fresh water, which might exceed drinking water limit values. Transport models make it possible to identify the chemical parameter in the fresh water most suitable for leakage monitoring. This indicator parameter may show detectable and early changes even far away from the contamination source. In the CO2 models the increase in potassium concentration is significant and runs ahead of the other parameters. In the rock, the models indicate feldspar, montmorillonite, dolomite and illite dissolution, whereas calcite, chlorite, kaolinite and silica precipitate; in the CO2-inflow models, dawsonite traps a part of the leaking gas.
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
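Once MCMC draws are available, the derived statistics the abstract describes (e.g. the probability that a species' trend lies within a stability interval, or that it is increasing) are simple posterior summaries. A sketch using synthetic normal draws standing in for the hierarchical model's MCMC output; the trend values and interval are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior samples of one species' trend (% per year),
# standing in for draws from the fitted hierarchical model.
samples = rng.normal(loc=-0.8, scale=0.5, size=10000)

# "Stability": posterior probability that the trend lies within a
# small interval around zero, here (-1, 1) % per year.
p_stable = np.mean((samples > -1.0) & (samples < 1.0))
# Posterior probability that the population is increasing.
p_increasing = np.mean(samples > 0.0)
```

Repeating this over all 28 species, and ranking species by posterior summaries rather than point estimates, gives the precision-adjusted rankings mentioned above.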
On the theory of multi-pulse vibro-impact mechanisms
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Metrikin, V. S.; Nikiforova, I. V.; Ipatov, A. A.
2017-11-01
This paper presents a mathematical model of a new multi-striker eccentric shock-vibration mechanism with a crank-sliding bar vibration exciter and an arbitrary number of pistons. Analytical solutions for the parameters of the model are obtained to determine the regions of existence of stable periodic motions. Under the assumption of an absolutely inelastic collision of the piston, we derive equations that single out a bifurcational unattainable boundary in the parameter space, which has a countable number of arbitrarily complex stable periodic motions in its neighbourhood. We present results of numerical simulations, which illustrate the existence of periodic and stochastic motions. The methods proposed in this paper for investigating the dynamical characteristics of the new crank-type connecting-rod mechanisms allow practitioners to identify regions in the parameter space where these mechanisms can be tuned into their most efficient periodic mode of operation, and to effectively analyze the main changes in their operational regimes when the system parameters are changed.
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to run the model successfully, detailed attention is paid to the calibration procedure. The study deals with the following calibration issues: (i) the lack of reliable calibration/validation data, (ii) the difficulty of accurately modelling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area using SEBAL (Surface Energy Balance Algorithm for Land) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the parameters most sensitive to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE.
The model improvement after the calibration procedure is finally evaluated based on the previously chosen evaluation criteria for the time period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid and human controlled system and the potential of calibrating those parameters using satellite derived ET data.
A sequence-dependent rigid-base model of DNA
NASA Astrophysics Data System (ADS)
Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.
2013-02-01
A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.
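For the Gaussian version of the model, the Kullback-Leibler comparisons above reduce to a closed-form expression between multivariate normal densities. A minimal sketch (illustrative names, not the authors' code):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL divergence D(N0 || N1) between multivariate Gaussians.

    Illustrates the kind of metric used to compare configuration-space
    probability densities predicted by two coarse-grain models.
    """
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + d @ S1_inv @ d
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

The divergence vanishes only when the two densities coincide, which is what makes it usable as a quantitative assessment of model hierarchy members against one another and against all-atom MD.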
A sequence-dependent rigid-base model of DNA.
Gonzalez, O; Petkevičiūtė, D; Maddocks, J H
2013-02-07
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographic and biological data; thus it naturally reacts to environmental changes. We first formulate an initial-boundary value problem and prove the existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code, which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length-frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
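A one-dimensional explicit advection-diffusion update illustrates the kind of scheme such a model integrates; this is a generic sketch under simple assumptions (upwind advection, clamped boundaries), not the paper's size-structured scheme:

```python
import numpy as np

def step(u, v, D, dx, dt):
    """One explicit step of du/dt = -v du/dx + D d2u/dx2.

    Upwind advection (assumes v >= 0) and centered diffusion; crude
    no-flux boundaries by clamping the end cells. Illustrative only.
    """
    un = u.copy()
    un[1:-1] = (u[1:-1]
                - v * dt / dx * (u[1:-1] - u[:-2])                 # upwind advection
                + D * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2]))   # diffusion
    un[0], un[-1] = un[1], un[-2]
    return un
```

A constant field is a fixed point of the update, a quick sanity check that the discrete operators are consistent.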
NASA Technical Reports Server (NTRS)
Nunes, A. C., Jr.
1983-01-01
A tentative mathematical computer model of the microfissuring process during electron beam welding of Inconel 718 has been constructed. Predictions of the model are compatible with microfissuring tests on eight 0.25-in. thick test plates. The model takes into account weld power and speed, weld loss (efficiency) parameters, and material characteristics. Besides the usual material characteristics (thermal and strength properties), a temperature- and grain-size-dependent critical fracture strain is required by the model. The model is based upon fundamental physical theory (i.e., it is not a mere data interpolation system) and can be extended to other metals by suitable parameter changes.
Three-dimensional FEM model of FBGs in PANDA fibers with experimentally determined model parameters
NASA Astrophysics Data System (ADS)
Lindner, Markus; Hopf, Barbara; Koch, Alexander W.; Roths, Johannes
2017-04-01
A 3D-FEM model has been developed to improve the understanding of multi-parameter sensing with Bragg gratings in attached or embedded polarization-maintaining fibers. The material properties of the fiber, especially the Young's modulus and Poisson's ratio of the fiber's stress-applying parts, are crucial for accurate simulations but are usually not provided by the manufacturers. A methodology is presented to determine the unknown parameters by using experimental characterizations of the fiber and iterative FEM simulations. The resulting 3D-FEM model is capable of describing the change in birefringence of the free fiber when exposed to longitudinal strain. In future studies the 3D-FEM model will be employed to study the interaction of PANDA fibers with the surrounding materials in which they are embedded.
Viscoelastic flow modeling in the extrusion of a dough-like fluid
NASA Technical Reports Server (NTRS)
Dhanasekharan, M.; Kokini, J. L.; Janes, H. W. (Principal Investigator)
2000-01-01
This work investigates the effect of viscoelasticity and three-dimensional geometry in screw channels. The Phan-Thien Tanner (PTT) constitutive equation with simplified model parameters was solved in conjunction with the flow equations. Polyflow, a commercially available finite element code, was used to solve the resulting nonlinear partial differential equations. The PTT model predicted pressure buildup one log scale lower than the equivalent Newtonian results. However, the velocity profile did not show significant changes for the chosen PTT model parameters. Past researchers neglected viscoelastic effects and also the three-dimensional nature of the flow in extruder channels. The results of this paper provide a starting point for further simulations using more realistic model parameters, which may enable the food engineer to more accurately scale up and design extrusion processes.
A study of the kinetics of isothermal nicotine desorption from silicon dioxide
NASA Astrophysics Data System (ADS)
Adnadjevic, Borivoj; Lazarevic, Natasa; Jovanovic, Jelena
2010-12-01
The isothermal kinetics of nicotine desorption from silicon dioxide (SiO2) was investigated. The isothermal thermogravimetric curves of nicotine at temperatures of 115 °C, 130 °C and 152 °C were recorded. The kinetic parameters (Ea, ln A) of nicotine desorption were calculated using various methods (stationary point, model constants and differential isoconversion method). By applying the model-fitting method, it was found that the kinetic model of nicotine desorption from silicon dioxide is a phase-boundary-controlled reaction (contracting volume). The values of the kinetic parameters Ea,α and ln Aα change in a complex manner with the degree of desorption, and a compensation effect exists. A new activation mechanism for the desorption of the adsorbed nicotine molecules is suggested, in agreement with the model of selective energy transfer.
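The contracting-volume (phase-boundary, R3) model and the Arrhenius analysis behind Ea and ln A can be sketched as follows; the data in the usage below are synthetic and the function names are illustrative, not the authors' code:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def g_R3(alpha):
    """Integral form of the contracting-volume (phase-boundary, R3) model:
    g(alpha) = 1 - (1 - alpha)^(1/3) = k*t."""
    return 1.0 - (1.0 - alpha) ** (1.0 / 3.0)

def arrhenius_fit(T, k):
    """Fit ln k = ln A - Ea/(R*T) by linear regression on 1/T.

    Returns (Ea in J/mol, ln A).
    """
    slope, lnA = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return -slope * R, lnA
```

With rate constants extracted at the three temperatures reported above (388, 403 and 425 K), a straight-line fit of ln k against 1/T yields Ea from the slope and ln A from the intercept.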
The MIT IGSM-CAM framework for uncertainty studies in global and regional climate change
NASA Astrophysics Data System (ADS)
Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.
2011-12-01
The MIT Integrated Global System Model (IGSM) version 2.3 is an intermediate-complexity fully coupled earth system model that allows simulation of critical feedbacks among its various components, including the atmosphere, ocean, land, urban processes and human activities. A fundamental feature of the IGSM2.3 is the ability to modify its climate parameters: climate sensitivity, net aerosol forcing and ocean heat uptake rate. As such, the IGSM2.3 provides an efficient tool for generating probabilistic distribution functions of climate parameters using optimal fingerprint diagnostics. A limitation of the IGSM2.3 is its zonal-mean atmosphere model, which does not permit regional climate studies. For this reason, the MIT IGSM2.3 was linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) version 3, and new modules were developed and implemented in CAM in order to modify its climate sensitivity and net aerosol forcing to match those of the IGSM. The IGSM-CAM provides an efficient and innovative framework to study regional climate change in which climate parameters can be modified to span the range of uncertainty and various emissions scenarios can be tested. This paper presents results from the cloud radiative adjustment method used to modify CAM's climate sensitivity. We also show results from 21st century simulations based on two emissions scenarios (a median "business as usual" scenario where no policy is implemented after 2012, and a policy scenario where greenhouse gases are stabilized at 660 ppm CO2-equivalent concentration by 2100) and three sets of climate parameters. The three values of climate sensitivity chosen are the median and the bounds of the 90% probability interval of the probability distribution obtained by comparing the observed 20th century climate change with simulations by the IGSM over a wide range of climate parameter values.
The associated aerosol forcing values were chosen to ensure a good agreement of the simulations with the observed climate change over the 20th century. Because the concentrations of sulfate aerosols significantly decrease over the 21st century in both emissions scenarios, climate changes obtained in these six simulations provide a good approximation for the median, and the 5th and 95th percentiles of the probability distribution of 21st century climate change.
NASA Astrophysics Data System (ADS)
Steinschneider, S.; Wi, S.; Brown, C. M.
2013-12-01
Flood risk management performance is investigated within the context of integrated climate and hydrologic modeling uncertainty to explore system robustness. The research question investigated is whether structural and hydrologic parameterization uncertainties are significant relative to other uncertainties such as climate change when considering water resources system performance. Two hydrologic models are considered, a conceptual, lumped parameter model that preserves the water balance and a physically-based model that preserves both water and energy balances. In the conceptual model, parameter and structural uncertainties are quantified and propagated through the analysis using a Bayesian modeling framework with an innovative error model. Mean climate changes and internal climate variability are explored using an ensemble of simulations from a stochastic weather generator. The approach presented can be used to quantify the sensitivity of flood protection adequacy to different sources of uncertainty in the climate and hydrologic system, enabling the identification of robust projects that maintain adequate performance despite the uncertainties. The method is demonstrated in a case study for the Coralville Reservoir on the Iowa River, where increased flooding over the past several decades has raised questions about potential impacts of climate change on flood protection adequacy.
Total solar eclipse effects on VLF signals: Observations and modeling
NASA Astrophysics Data System (ADS)
Clilverd, Mark A.; Rodger, Craig J.; Thomson, Neil R.; Lichtenberger, János; Steinbach, Péter; Cannon, Paul; Angling, Matthew J.
During the total solar eclipse observed in Europe on August 11, 1999, measurements were made of the amplitude and phase of four VLF transmitters in the frequency range 16-24 kHz. Five receiver sites were set up, and significant variations in phase and amplitude are reported for 17 paths, more than any previously during an eclipse. Distances from transmitter to receiver ranged from 90 to 14,510 km, although the majority were <2000 km. Typically, positive amplitude changes were observed throughout the whole eclipse period on path lengths <2000 km, while negative amplitude changes were observed on paths >10,000 km. Negative phase changes were observed on most paths, independent of path length. Although there was significant variation from path to path, the typical changes observed were ~3 dB and ~50°. The changes observed were modeled using the Long Wave Propagation Capability waveguide code. Maximum eclipse effects occurred when the Wait inverse scale height parameter β was 0.5 km-1 and the effective ionospheric height parameter H' was 79 km, compared with β=0.43km-1 and H'=71km for normal daytime conditions. The resulting changes in modeled amplitude and phase show good agreement with the majority of the observations. The modeling undertaken provides an interpretation of why previous estimates of height change during eclipses have shown such a range of values. A D region gas-chemistry model was compared with electron concentration estimates inferred from the observations made during the solar eclipse. Quiet-day H' and β parameters were used to define the initial ionospheric profile. The gas-chemistry model was then driven only by eclipse-related solar radiation levels. The calculated electron concentration values at 77 km altitude throughout the period of the solar eclipse show good agreement with the values determined from observations at all times, which suggests that a linear variation in electron production rate with solar ionizing radiation is reasonable. 
At times of minimum electron concentration, the chemical model predicts that the D-region profile would be parameterized by the same β and H' as the LWPC model values, and rocket profiles, during totality; this can be considered a validation of the chemical processes defined within the model.
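The β/H' pairs quoted above parameterize the standard Wait two-parameter exponential D-region profile, which can be evaluated directly. A sketch assuming the usual Wait-Spies form (not the LWPC code itself):

```python
import numpy as np

def wait_electron_density(h_km, beta, h_prime):
    """Wait's two-parameter D-region electron density profile (m^-3).

    N_e(h) = 1.43e13 * exp(-0.15*H') * exp((beta - 0.15)*(h - H')),
    with h and H' in km and beta in km^-1 (the standard Wait-Spies
    parameterization behind the quoted beta/H' values).
    """
    return 1.43e13 * np.exp(-0.15 * h_prime) * np.exp((beta - 0.15) * (h_km - h_prime))
```

With these values, the density at 77 km during totality (β = 0.5 km-1, H' = 79 km) comes out well below the normal daytime value (β = 0.43 km-1, H' = 71 km), consistent with the reduced ionization observed during the eclipse.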
Vehicle Tire and Wheel Creation in BRL-CAD
2009-04-01
Contents include: Tire Tread Modeling; Setting Tire Thickness; Changing the Rim Width; Changing the Radial Location of the... A treaded or nontreaded model is placed in the tire-model.c combination based on the analysis. Tread is not modeled by default but can be added using options. Fine-grained control of parameters such as tire thickness is available.
Establishment and correction of an Echelle cross-prism spectrogram reduction model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng
2017-11-01
The accuracy of an echelle cross-prism spectrometer depends on the degree of matching between the spectrum reduction model and the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer and result in a reduction model that no longer matches, producing an inaccurate wavelength calibration. Therefore, calibration of the spectrogram reduction model is important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The image-position laws of a spectrometer varying with the system parameters were simulated to determine the influence of changes in prism refractive index, focal length and other parameters on the calculation results. The model was divided into different wavebands. The iterative method, the least-squares principle and element lamps with known characteristic wavelengths were used to calibrate the spectral model in different wavebands to obtain the actual values of the system parameters. After correction, the deviation between the actual x- and y-coordinates and the coordinates calculated by the model is less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and can assist in accurate wavelength extraction. Repeated correction of the model can also guide instrument installation and adjustment, reducing the difficulty of equipment setup.
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and nonlinear sensitivity analyses were performed to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation.
The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters must be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Nonlinear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
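A heavily simplified version of the force-balance idea, reduced to the longitudinal and normal equations only; the actual method uses the full six-degree-of-freedom force and moment equilibrium per landing gear, so the names and sign conventions here are illustrative assumptions:

```python
def friction_coefficient(mass, decel, drag, thrust, lift, g=9.81):
    """Instantaneous friction coefficient from a simplified ground-roll balance.

    Along-track: mass*decel = F_friction + drag - thrust, so the braking
    friction force is mass*decel - drag + thrust.
    Normal:      N = mass*g - lift.
    All quantities SI; decel is the magnitude of the deceleration.
    Hypothetical simplification, not the report's 6-DOF formulation.
    """
    friction_force = mass * decel - drag + thrust
    normal_force = mass * g - lift
    return friction_force / normal_force
```

For a 60-tonne aircraft decelerating at 3 m/s^2 with 20 kN of aerodynamic drag and negligible thrust and lift, this balance attributes the remaining retarding force to tire friction.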
A validation of dynamic causal modelling for 7T fMRI.
Tak, S; Noh, J; Cheong, C; Zeidman, P; Razi, A; Penny, W D; Friston, K J
2018-07-15
There is growing interest in ultra-high field magnetic resonance imaging (MRI) in cognitive and clinical neuroscience studies. However, the benefits offered by higher field strength have not been evaluated in terms of effective connectivity and dynamic causal modelling (DCM). In this study, we address the validity of DCM for 7T functional MRI data at two levels. First, we evaluate the predictive validity of DCM estimates based upon 3T and 7T in terms of reproducibility. Second, we assess improvements in the efficiency of DCM estimates at 7T, in terms of the entropy of the posterior distribution over model parameters (i.e., information gain). Using empirical data recorded during fist-closing movements with 3T and 7T fMRI, we found a high reproducibility of average connectivity and condition-specific changes in connectivity - as quantified by the intra-class correlation coefficient (ICC = 0.862 and 0.936, respectively). Furthermore, we found that the posterior entropy of 7T parameter estimates was substantially less than that of 3T parameter estimates; suggesting the 7T data are more informative - and furnish more efficient estimates. In the framework of DCM, we treated field-dependent parameters for the BOLD signal model as free parameters, to accommodate fMRI data at 3T and 7T. In addition, we made the resting blood volume fraction a free parameter, because different brain regions can differ in their vascularization. In this paper, we showed DCM enables one to infer changes in effective connectivity from 7T data reliably and efficiently. Copyright © 2018 Elsevier B.V. All rights reserved.
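The information-gain comparison above rests on the differential entropy of a Gaussian posterior over model parameters, which has a closed form; a sketch (illustrative, not the DCM implementation):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian posterior N(mu, cov).

    H = 0.5 * ln((2*pi*e)^k * det(cov)). A tighter posterior (as found for
    the 7T estimates) has lower entropy than a looser one (3T).
    """
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)
```

Comparing the posterior entropy of two fits of the same model thus quantifies which dataset furnishes the more efficient (more informative) parameter estimates.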
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, with no need for calibration; unfortunately, the uncertainty associated with this way of deriving parameters is very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; second, to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine-function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
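A minimal PSO sketch with the linearly decreasing inertia weight and the 20-particle/30-iteration settings reported above; the arccosine acceleration-coefficient schedule is replaced by fixed coefficients for brevity, so this is an illustration, not the authors' algorithm:

```python
import numpy as np

def pso(f, bounds, n_particles=20, n_iter=30, w_max=0.9, w_min=0.4, seed=0):
    """Minimize f over a box using PSO with linearly decreasing inertia weight.

    bounds = (lower, upper) arrays. c1 = c2 = 2.0 are fixed here in place of
    the arccosine schedule of the improved algorithm.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(n_iter):
        # inertia weight decreases linearly from w_max to w_min
        w = w_max - (w_max - w_min) * t / max(n_iter - 1, 1)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For a hydrological model the objective f would wrap a full flood simulation scored against observed hydrographs, which is why keeping the particle and iteration counts small (20 and 30) matters in practice.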
Gottfredson, Nisha C.; Bauer, Daniel J.; Baldwin, Scott A.; Okiishi, John C.
2014-01-01
Objective This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missing data that are due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., they assume that data are missing at random) and will produce biased results if this assumption is incorrect. Method We use longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and an SPMM. Results In this dataset, estimates of the rate of improvement during therapy differ by 6.50-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. Conclusion We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data. PMID:24274626
Macromolecular refinement by model morphing using non-atomic parameterizations.
Cowtan, Kevin; Agirre, Jon
2018-02-01
Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.
Understanding Climate Uncertainty with an Ocean Focus
NASA Astrophysics Data System (ADS)
Tokmakian, R. T.
2009-12-01
Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth’s climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today’s models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need both the most reliable model or simulation available and methods to quantify the reliability of a simulation. If quantitative uncertainty questions about the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007].
The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith L., 2002, What might we learn from climate forecasts? Proc. Nat’l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492 doi:10.1073/pnas.012580599.
Predicting changes in volcanic activity through modelling magma ascent rate.
NASA Astrophysics Data System (ADS)
Thomas, Mark; Neuberg, Jurgen
2013-04-01
It is a simple fact that changes in volcanic activity happen, and in retrospect they are easy to spot; the dissimilar eruption dynamics of an effusive and an explosive event are hard to miss. However, to be able to predict such changes is a much more complicated matter. To cause differing styles of activity, we know that some part, or combination of parts, of the system must vary with time: if there is no physical change within the system, why would the change in eruptive activity occur? What is unknown is which parts, or how big a change, are needed. We present the results of a suite of conduit flow models that aim to answer these questions by assessing the influence of individual model parameters such as the dissolved water content or magma temperature. By altering these variables in a systematic manner, we measure the effect of the changes by observing the modelled ascent rate. We use the ascent rate because we believe it is a very important indicator that can control the style of eruptive activity. In particular, we found the sensitivity of the ascent rate to small changes in model parameters surprising. Linking these changes to observable monitoring data in a way that allows these data to be used as a predictive tool is the ultimate goal of this work. We will show that changes in ascent rate can be estimated from a particular type of seismicity. Low-frequency seismicity, thought to be caused by the brittle failure of melt, is often linked with the movement of magma within a conduit. We show that an acceleration in the rate of low-frequency seismicity can correspond to an increase in the rate of magma movement and be used as an indicator of potential changes in eruptive activity.
NASA Astrophysics Data System (ADS)
Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.
2017-12-01
Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements of glacier mass balance modeling studies must be addressed first. We have used a linear theory (LT) of orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model is a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation, so that the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse-resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We established a reference run with parameter values based on the literature and performed a sensitivity analysis of the effect of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors.
Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with a distributed mass balance model for future mass balance modeling studies of the Juneau Icefield. The LT model has potential to be used in other regions in Alaska and elsewhere with strong orographic effects for improved glacier mass balance modeling and/or hydrological modeling.
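A minimal one-dimensional sketch of a linear-theory orographic precipitation model in the spirit of Smith and Barstad (2004), the family the LT model belongs to, is shown below. The wind speed, uplift sensitivity, terrain, and the two time delays (the fall-time delay τ_f is the cloud depth divided by the fall speed, the quantity tuned via snow and rain fall speeds in the abstract) are assumed illustrative values, not the study's calibration:

```python
import numpy as np

# Illustrative parameters (assumed, not the Juneau Icefield calibration)
U = 15.0          # cross-ridge wind speed (m/s)
Cw = 0.004        # uplift sensitivity (kg m^-3)
tau_c = 1000.0    # condensation time delay (s)
tau_f = 2000.0    # hydrometeor fall time ~ cloud depth / fall speed (s)

n, dx = 512, 500.0
x = np.arange(n) * dx
h = 1500.0 * np.exp(-((x - x.mean()) / 10e3) ** 2)   # Gaussian ridge (m)

# Transfer function applied in Fourier space: condensation source Cw*i*sigma*h_hat,
# smoothed and advected downwind by the two first-order time delays.
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # wavenumber (rad/m)
sigma = U * k                             # intrinsic frequency
P_hat = Cw * 1j * sigma * np.fft.fft(h) / ((1 + 1j * sigma * tau_c) *
                                           (1 + 1j * sigma * tau_f))
P = np.maximum(np.fft.ifft(P_hat).real, 0.0)  # truncate negative (lee-side) values
```

Increasing τ_f (slower-falling snow rather than rain) smears the precipitation pattern farther downwind, which is why the fall speeds act as effective tuning parameters for the spatial pattern.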
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yu; Cao, Ruifen; Pei, Xi
2015-06-15
Purpose: The response characteristics of the flat-panel detector were investigated to optimize the scanning parameters, balancing image quality against radiation dose. A signal conversion model was also established to predict changes in tumor shape and radiological thickness. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, tube current, exposure time, and number of frames. The averaged responses of a square area in the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were described by different fitting types. The transmission measured for 10 cm of water was compared to a Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values at the image center all decreased as each image acquisition parameter was increased. The fitting types adopted were linear, quadratic polynomial, Gaussian, and logarithmic, with R-square values of 0.992, 0.995, 0.997, and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the prediction error for radiological thickness changes was in the range (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases as the scanning parameters increase. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, and 15 frames. The results suggest that the signal conversion model can be used effectively to predict tumor shape changes and radiological thickness.
Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138)
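The curve-fitting step in the detector-response study above can be sketched on synthetic data. The quadratic response function, noise level, and voltage range here are assumptions for illustration, not the XVI measurements:

```python
import numpy as np

# Synthetic detector response vs. tube voltage: gray value decreasing with kV,
# roughly quadratic as in the abstract (all numbers illustrative).
kv = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0])
rng = np.random.default_rng(0)
gray = 9000.0 - 30.0 * kv - 0.15 * kv**2 + rng.normal(0.0, 20.0, kv.size)

coeffs = np.polyfit(kv, gray, 2)          # quadratic polynomial fit
fit = np.polyval(coeffs, kv)
ss_res = np.sum((gray - fit) ** 2)
ss_tot = np.sum((gray - gray.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot         # analogous to the reported R-square
```

The same recipe with a different model function (linear, Gaussian, logarithmic) reproduces the abstract's comparison of fitting types via their R-square values.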
NASA Astrophysics Data System (ADS)
Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.
2017-07-01
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 105 km2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region.
We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.
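The core idea of constraining a growth-model parameter against plot observations can be sketched with a random-walk Metropolis sampler. The logistic growth curve, noise level, prior range, and all numbers below are illustrative stand-ins, not the 3-PG model or the DAPPER hierarchy:

```python
import numpy as np

rng = np.random.default_rng(42)

def growth(r, t, k=200.0, b0=5.0):
    """Toy logistic stem-biomass curve (Mg/ha), a stand-in for a growth model."""
    return k / (1 + (k / b0 - 1) * np.exp(-r * t))

# Synthetic "plot" observations from a known growth rate plus noise
t_obs = np.arange(1, 26, 2.0)
obs = growth(0.25, t_obs) + rng.normal(0.0, 5.0, t_obs.size)

def log_post(r, sigma=5.0):
    if not 0.01 < r < 1.0:                       # flat prior on a plausible range
        return -np.inf
    return -0.5 * np.sum((obs - growth(r, t_obs)) ** 2) / sigma**2

# Random-walk Metropolis: accept proposals by the posterior-density ratio
samples, r = [], 0.1
lp = log_post(r)
for _ in range(4000):
    prop = r + rng.normal(0.0, 0.02)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    samples.append(r)
post = np.array(samples[1000:])                  # drop burn-in
```

The posterior sample `post` then plays the role of the abstract's "regionally representative posterior distributions": its spread is the parameter uncertainty that propagates into prediction uncertainty.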
Extreme sensitivity in Thermoacoustics
NASA Astrophysics Data System (ADS)
Juniper, Matthew
2017-11-01
In rocket engines and gas turbines, fluctuations in the heat release rate can lock in to acoustic oscillations and grow catastrophically. Nine decades of engine development have shown that these oscillations are difficult to predict but can usually be eliminated with small ad hoc design changes. The difficulty in prediction arises because the oscillations' growth rate is exceedingly sensitive to parameters that cannot always be measured or simulated reliably, which introduces severe systematic error into thermoacoustic models of engines. Passive control strategies then have to be devised through full-scale engine tests, which can be ruinously expensive; for the Apollo F1 engine, for example, 2000 full-scale tests were required. Even today, thermoacoustic oscillations often reappear unexpectedly at the full engine test stage. Although the physics is well known, a novel approach to design is required. In this presentation, the parameters of a thermoacoustic model are inferred from many thousands of automated experiments using inverse uncertainty quantification. The adjoint of this model is used to obtain cheaply the gradients of every unstable mode with respect to the model parameters. This gradient information is then used in an optimization algorithm to stabilize every thermoacoustic mode by subtly changing the geometry of the model.
Improving the twilight model for polar cap absorption nowcasts
NASA Astrophysics Data System (ADS)
Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.
2016-11-01
During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χl < χ < χu) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χl parameter was found to be most variable, with smaller values (as low as 60°) postsunrise compared with presunset and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
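The Gauss error-function twilight transition described above can be sketched directly. The placement of the transition midpoint, the erf width scaling, and the day/night m values below are assumptions for illustration, not the paper's fitted parameters:

```python
import math

def m_twilight(chi, m_day, m_night, chi_l, chi_u):
    """Sensitivity parameter m = A/sqrt(J) across twilight, modeled as an
    error-function transition between day and night values over the
    solar-zenith-angle window (chi_l, chi_u). Illustrative parameterization:
    midpoint at the window center, width set to a quarter of the window."""
    mid = 0.5 * (chi_l + chi_u)
    width = (chi_u - chi_l) / 4.0
    frac = 0.5 * (1.0 + math.erf((chi - mid) / width))
    return m_day + (m_night - m_day) * frac

# Example: daytime m larger than nighttime m (low daytime alpha_eff means
# stronger absorption per unit flux); window 85-100 degrees, values assumed.
m_noon = m_twilight(40.0, 0.2, 0.04, 85.0, 100.0)
m_midnight = m_twilight(120.0, 0.2, 0.04, 85.0, 100.0)
```

Replacing the erf with a linear ramp over the same window gives the paper's alternative transition shape; both avoid the step change implied by a day/night profile switch.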
Mi, Ran; Hu, Yan-Jun; Fan, Xiao-Yang; Ouyang, Yu; Bai, Ai-Min
2014-01-03
This paper explores the site-selective binding of jatrorrhizine to human serum albumin (HSA) under physiological conditions (pH 7.4). The investigation was carried out using fluorescence spectroscopy, UV-vis spectroscopy, and molecular modeling. The results of fluorescence quenching and UV-vis absorption spectra experiments indicated the formation of an HSA-jatrorrhizine complex. Binding parameters were calculated from the Stern-Volmer and Scatchard methods at 298, 304 and 310 K, along with the corresponding thermodynamic parameters ΔG, ΔH and ΔS. These binding parameters showed that jatrorrhizine binds to HSA with binding affinities on the order of 10(4) L mol(-1). The thermodynamic parameter studies revealed that the binding was characterized by negative enthalpy and positive entropy changes and that electrostatic interactions play a major role in the jatrorrhizine-HSA association. Site marker competitive displacement experiments and molecular modeling calculations demonstrated that jatrorrhizine is mainly located within the hydrophobic pocket of subdomain IIIA of HSA. Furthermore, the synchronous fluorescence spectra suggested that the association between jatrorrhizine and HSA changed the molecular conformation of HSA. Copyright © 2013. Published by Elsevier B.V.
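The Stern-Volmer step of such an analysis can be sketched on synthetic quenching data. The quenching constant and concentration range below are illustrative, chosen only to match the ~10^4 L/mol order reported in the abstract:

```python
import numpy as np

# Stern-Volmer equation: F0/F = 1 + K_SV [Q]; K_SV is recovered as the slope
# of F0/F against quencher concentration. All numbers are illustrative.
K_sv = 2.0e4                      # assumed quenching constant (L/mol)
q = np.linspace(0.0, 5e-5, 8)     # quencher (jatrorrhizine) concentration (mol/L)
f0_over_f = 1.0 + K_sv * q        # noise-free synthetic quenching ratios

slope, intercept = np.polyfit(q, f0_over_f, 1)   # slope -> K_SV, intercept -> 1
```

Repeating the fit at several temperatures (here 298, 304, 310 K) and applying the van't Hoff relation to the resulting constants is how the ΔH and ΔS values in the abstract are obtained.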
Bifurcation and Spike Adding Transition in Chay-Keizer Model
NASA Astrophysics Data System (ADS)
Lu, Bo; Liu, Shenquan; Liu, Xuanliang; Jiang, Xiaofang; Wang, Xiaohui
Electrical bursting is an activity which is universal in excitable cells such as neurons and various endocrine cells, and it encodes rich physiological information. As burst delay indicates that signal integration has reached the threshold at which an action potential can be generated, the number of spikes in a burst may have essential physiological implications, and the transition of bursting in excitable cells is closely associated with bifurcation phenomena. In this paper, we focus on the transition of the spike count per burst of pancreatic β-cells and on bifurcation phenomena in the Chay-Keizer model, which is used to simulate pancreatic β-cells. Through fast-slow dynamical bifurcation analysis and bi-parameter bifurcation analysis, the local dynamics of the Chay-Keizer system around the Bogdanov-Takens bifurcation is illustrated. The variation in the number of spikes per burst is then discussed under changes in a single parameter and in two parameters. Moreover, results on the number of spikes within a burst are summarized in interspike interval (ISI) sequence diagrams, maxima and minima, and spike counts under bi-parameter changes.
Modal parameters of space structures in 1 G and 0 G
NASA Technical Reports Server (NTRS)
Bicos, Andrew S.; Crawley, Edward F.; Barlow, Mark S.; Van Schoor, Marthinus C.; Masters, Brett
1993-01-01
Analytic and experimental results are presented from a study of the changes in the modal parameters of space structural test articles from one- to zero-gravity. Deployable, erectable, and rotary modules were assembled to form three one- and two-dimensional structures, in which variations in bracing wire and rotary joint preload could be introduced. The structures were modeled as if hanging from a suspension system in one gravity, and unconstrained, as if free floating in zero-gravity. The analysis is compared with ground experimental measurements, which were made on a spring-wire suspension system with a nominal plunge frequency of one Hertz, and with measurements made on the Shuttle middeck. The degree of change in linear modal parameters as well as the change in nonlinear nature of the response is examined. Trends in modal parameters are presented as a function of force amplitude, joint preload, reassembly, shipset, suspension, and ambient gravity level.
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
Analytical study on the generalized Davydov model in the alpha helical proteins
NASA Astrophysics Data System (ADS)
Wang, Pan; Xiao, Shu-Hong; Chen, Li; Yang, Gang
2017-06-01
In this paper, we investigate the dynamics of a generalized Davydov model derived from an infinite chain of alpha helical protein molecules which contain three hydrogen bonding spines running almost parallel to the helical axis. Through the introduction of an auxiliary function, the bilinear form and the one-, two- and three-soliton solutions for the generalized Davydov model are first obtained. Propagation and interactions of solitons have been investigated analytically and graphically. The amplitude of the soliton is related only to the complex parameter μ and the real parameter 𝜃 with a range of [0, 2π]. The velocity of the soliton is related only to the complex parameter μ, real parameter 𝜃, lattice parameter 𝜀, and physical parameters β1, β3 and β4. Overtaking and head-on interactions of two and three solitons are presented. A common feature of the three-soliton interactions is that the directions of the solitons change after the interactions. The soliton derived in this paper is expected to have potential applications in alpha helical proteins.
Fuzzy Neural Networks for Decision Support in Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakas, D. P.; Vlachos, D. S.; Simos, T. E.
There are a large number of parameters which one can take into account when building a negotiation model. These parameters are in general uncertain, leading to models which represent them with fuzzy sets. On the other hand, the nature of these parameters makes them very difficult to model with precise values. During negotiation, these parameters play an important role by altering the outcomes or changing the state of the negotiators. One reasonable way to model this procedure is to adopt fuzzy relations (from theory or experience). The action of these relations on fuzzy sets produces new fuzzy sets which describe the new state of the system or the modified parameters. But in the majority of these situations, the relations are multidimensional, leading to complicated models and exponentially increasing computational time. In this paper a solution to this problem is presented. It is shown that fuzzy neural networks can substitute for fuzzy relations with comparable results. Finally, a simple simulation is carried out in order to test the new method.
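The operation that such a network approximates, applying a fuzzy relation to a fuzzy set, is the max-min composition B = A ∘ R. A minimal sketch with assumed membership values (not the paper's negotiation parameters):

```python
import numpy as np

def max_min_compose(a, r):
    """Max-min composition of fuzzy set a with fuzzy relation r:
    b_j = max_i min(a_i, r_ij)."""
    return np.max(np.minimum(a[:, None], r), axis=0)

A = np.array([0.2, 0.9, 0.5])            # fuzzy set: current negotiator state
R = np.array([[0.1, 0.7, 0.4],           # fuzzy relation: state -> new state
              [0.8, 0.3, 0.6],
              [0.5, 0.9, 0.2]])
B = max_min_compose(A, R)                # new fuzzy set after the relation acts
```

With many parameters the relation R becomes a high-dimensional array, which is the exponential blow-up the paper sidesteps by training a network to reproduce the composition's input-output behavior.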
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
Watershed scale response to climate change--Trout Lake Basin, Wisconsin
Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Trout River Basin at Trout Lake in northern Wisconsin.
Watershed scale response to climate change--Clear Creek Basin, Iowa
Christiansen, Daniel E.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Clear Creek Basin, near Coralville, Iowa.
Watershed scale response to climate change--Feather River Basin, California
Koczot, Kathryn M.; Markstrom, Steven L.; Hay, Lauren E.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Feather River Basin, California.
Watershed scale response to climate change--South Fork Flathead River Basin, Montana
Chase, Katherine J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the South Fork Flathead River Basin, Montana.
Watershed scale response to climate change--Cathance Stream Basin, Maine
Dudley, Robert W.; Hay, Lauren E.; Markstrom, Steven L.; Hodgkins, Glenn A.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Cathance Stream Basin, Maine.
Watershed scale response to climate change--Pomperaug River Watershed, Connecticut
Bjerklie, David M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Pomperaug River Basin at Southbury, Connecticut.
Watershed scale response to climate change--Starkweather Coulee Basin, North Dakota
Vining, Kevin C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Starkweather Coulee Basin near Webster, North Dakota.
Watershed scale response to climate change--Sagehen Creek Basin, California
Markstrom, Steven L.; Hay, Lauren E.; Regan, R. Steven
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sagehen Creek Basin near Truckee, California.
Watershed scale response to climate change--Sprague River Basin, Oregon
Risley, John; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sprague River Basin near Chiloquin, Oregon.
Watershed scale response to climate change--Black Earth Creek Basin, Wisconsin
Hunt, Randall J.; Walker, John F.; Westenbroek, Steven M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Black Earth Creek Basin, Wisconsin.
Watershed scale response to climate change--East River Basin, Colorado
Battaglin, William A.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the East River Basin, Colorado.
Watershed scale response to climate change--Naches River Basin, Washington
Mastin, Mark C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Naches River Basin below Tieton River in Washington.
Watershed scale response to climate change--Flint River Basin, Georgia
Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Flint River Basin at Montezuma, Georgia.
Prediction of body lipid change in pregnancy and lactation.
Friggens, N C; Ingvartsen, K L; Emmans, G C
2004-04-01
A simple method to predict the genetically driven pattern of body lipid change through pregnancy and lactation in dairy cattle is proposed. The rationale and evidence for genetically driven body lipid change have their basis in evolutionary considerations and in the homeorhetic changes in lipid metabolism through the reproductive cycle. The inputs required to predict body lipid change are body lipid mass at calving (kg) and the date of conception (days in milk). Body lipid mass can be derived from body condition score and live weight. A key assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between calving and a genetically determined time in lactation (T') at which a particular level of body lipid (L') is sought. A second assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between T' and the next calving. The resulting model was evaluated using 2 sets of data. The first was from Holstein cows with 3 different levels of body fatness at calving. The second was from Jersey cows in first, second, and third parity. The model was found to reproduce the observed patterns of change in body lipid reserves through lactation in both data sets. The average error of prediction was low, less than the variation normally associated with the recording of condition score, and was similar for the 2 data sets. When the model was applied using the initially suggested parameter values derived from the literature, the average error of prediction was 0.185 units of condition score (+/- 0.086 SD). After minor adjustments to the parameter values, the average error of prediction was 0.118 units of condition score (+/- 0.070 SD). The assumptions on which the model is based were sufficient to predict the changes in body lipid of both Holstein and Jersey cows under different nutritional conditions and parities.
Thus, the model presented here shows that it is possible to predict genetically driven curves of body lipid change through lactation in a simple way that requires few parameters and inputs that can be derived in practice. It is expected that prediction of the cow's energy requirements can be substantially improved, particularly in early lactation, by incorporating genetically driven body energy mobilization.
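The two linearity assumptions translate directly into a closed-form trajectory. A minimal Python sketch of such a piecewise model follows; the function name and all numerical values (initial lipid mass, mobilisation rate, targets, the return-to-calving boundary condition) are illustrative, not the paper's calibrated parameters:

```python
import numpy as np

def lipid_trajectory(L0, dL0, L_target, T_prime, T_calving, dt=1.0):
    """Piecewise model of body lipid L (kg) versus days in milk t.

    Following the two assumptions above, d(dL/dt)/dt is constant within
    each phase: dL/dt runs linearly from dL0 to dL1 over [0, T'], with dL1
    chosen so that L(T') = L_target, and then linearly from dL1 to dL2 over
    [T', T_calving], with dL2 chosen so that L returns to L0 at the next
    calving (an illustrative boundary condition, not the paper's).
    """
    dL1 = 2.0 * (L_target - L0) / T_prime - dL0
    dL2 = 2.0 * (L0 - L_target) / (T_calving - T_prime) - dL1
    t = np.arange(0.0, T_calving + dt, dt)
    rate = np.where(
        t <= T_prime,
        dL0 + (dL1 - dL0) * t / T_prime,
        dL1 + (dL2 - dL1) * (t - T_prime) / (T_calving - T_prime),
    )
    # The rate is linear in t, so trapezoid integration is exact.
    L = L0 + np.concatenate([[0.0], np.cumsum((rate[1:] + rate[:-1]) / 2.0 * dt)])
    return t, L

# Hypothetical cow: 60 kg lipid at calving, mobilising 0.4 kg/d initially,
# a 45 kg lipid level sought at day 120, next calving at day 365.
t, L = lipid_trajectory(L0=60.0, dL0=-0.4, L_target=45.0, T_prime=120.0, T_calving=365.0)
```

Because the rate is linear within each phase, the end-of-phase rates follow in closed form from the lipid targets via the trapezoid rule, which is what keeps the input requirements so small.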
Nitkunan, Arani; Barrick, Tom R; Charlton, Rebecca A; Clark, Chris A; Markus, Hugh S
2008-07-01
Cerebral small vessel disease is the most common cause of vascular dementia. Interest in using MRI parameters as surrogate markers of disease to assess therapies is increasing. In patients with symptomatic sporadic small vessel disease, we determined which MRI parameters best correlated with cognitive function on cross-sectional analysis and which changed over a period of 1 year. Thirty-five patients with lacunar stroke and leukoaraiosis were recruited. They underwent multimodal MRI (brain volume, fluid-attenuated inversion recovery lesion load, lacunar infarct number, fractional anisotropy, and mean diffusivity from diffusion tensor imaging) and neuropsychological testing. Twenty-seven agreed to reattend for repeat MRI and neuropsychology at 1 year. An executive function score correlated most strongly with diffusion tensor imaging (fractional anisotropy histogram, r=-0.640, P=0.004) and brain volume (r=0.501, P=0.034). Associations with diffusion tensor imaging were stronger than with all other MRI parameters. On multiple regression of all imaging parameters, a model that contained brain volume and fractional anisotropy, together with age, gender, and premorbid IQ, explained 74% of the variance of the executive function score (P=0.0001). Changes in mean diffusivity and fractional anisotropy were detectable over the 1-year follow-up; in contrast, no change in other MRI parameters was detectable over this time period. A multimodal MRI model explains a large proportion of the variation in executive function in cerebral small vessel disease. In particular, diffusion tensor imaging correlates best with executive function and is the most sensitive to change. This supports the use of MRI, in particular diffusion tensor imaging, as a surrogate marker in treatment trials.
The Multigroup Multilevel Categorical Latent Growth Curve Models
ERIC Educational Resources Information Center
Hung, Lai-Fa
2010-01-01
Longitudinal data describe developmental patterns and enable predictions of individual changes beyond sampled time points. Major methodological issues in longitudinal data include modeling random effects, subject effects, growth curve parameters, and autoregressive residuals. This study embedded the longitudinal model within a multigroup…
NASA Astrophysics Data System (ADS)
Askarimarnani, Sara; Willgoose, Garry; Fityus, Stephen
2017-04-01
Coal seam gas (CSG) is a form of natural gas that occurs in some coal seams. Coal seams have natural fractures with dual-porosity systems and low permeability. In the CSG industry, hydraulic fracturing is applied to increase the permeability and extract the gas more efficiently from the coal seam. The industry claims that it can design fracking patterns. Whether this is true or not, the public (and regulators) requires assurance that, once a well has been fracked, the fracking has occurred according to plan and that the fracked well is safe. Defensible post-fracking testing methodologies for gas-generating wells are therefore required. In 2009, a fracked well (HB02, owned by AGL) near Broke, NSW, Australia was subjected to "traditional" water pump-testing as part of this assurance process. Interpretation with well type curves and simple single-phase models (i.e. only water, no gas) highlighted deficiencies in traditional water-well approaches, with a systematic deviation from the qualitative characteristics of well drawdown curves (e.g. concavity versus convexity of drawdown with time). Accordingly, a multiphase (i.e. water and methane) model of the well was developed and compared with the observed data. This paper will discuss the results of this multiphase testing using the TOUGH2 model and its EOS7C constitutive model. A key objective was to test a methodology, based on the GLUE Monte Carlo calibration technique, to calibrate the characteristics of the frack using the well-test drawdown curve. GLUE involves a sensitivity analysis of how changes in the fracture properties change the well hydraulics, through an analysis of the drawdown curve and changes in the cone of depression. This was undertaken by changing the native coal, fracture, and gas parameters to see how changing those parameters changed the match between simulations and the observed well drawdown.
Results from the GLUE analysis show how much information is contained in the well drawdown curve for estimating field-scale coal and gas generation properties, the fracture geometry, and the proppant characteristics. The results with the multiphase model show a better match to the drawdown than using a single-phase model, but the differences between the best-fit drawdowns were small, and smaller than the difference between the best fit and the field data. However, the parameters derived to generate these best fits for each model were very different. We conclude that, while satisfactory fits with single-phase groundwater models (e.g. MODFLOW, FEFLOW) can be achieved, the parameters derived will not be realistic, with potential implications for drawdowns and water yields in gas field modelling. Multiphase models are thus required, and we will discuss some of the limitations of TOUGH2 for the CSG problem.
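The GLUE workflow can be sketched independently of TOUGH2. In this hypothetical example, a Cooper-Jacob drawdown formula stands in for the well simulator, and "behavioural" parameter sets are those whose Nash-Sutcliffe efficiency against synthetic observations exceeds a subjective threshold; all priors, thresholds, and well properties are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def drawdown(t, T, S, Q=1e-3, r=10.0):
    """Cooper-Jacob approximation of confined-aquifer drawdown (m); a toy
    single-phase stand-in for the TOUGH2 well model."""
    return Q / (4.0 * np.pi * T) * np.log(2.25 * T * t / (r**2 * S))

# Synthetic "observed" drawdown from known parameters plus measurement noise.
t_obs = np.linspace(1e4, 1e6, 50)                     # seconds
s_obs = drawdown(t_obs, T=1e-4, S=1e-4) + rng.normal(0.0, 1e-3, t_obs.size)

# GLUE: Monte Carlo sample the priors, score each parameter set with a
# likelihood measure, and keep the "behavioural" sets above a threshold.
n = 5000
T_samp = 10.0 ** rng.uniform(-5, -3, n)               # log-uniform priors
S_samp = 10.0 ** rng.uniform(-5, -3, n)
nse = np.empty(n)
for i in range(n):
    s_sim = drawdown(t_obs, T_samp[i], S_samp[i])
    nse[i] = 1.0 - np.sum((s_obs - s_sim) ** 2) / np.sum((s_obs - s_obs.mean()) ** 2)

behavioural = nse > 0.7                               # subjective GLUE cut-off
```

The spread of the behavioural parameter sets, rather than a single best fit, is what GLUE uses to express how much (or how little) the drawdown curve constrains each parameter.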
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of the local sediments and the microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we 1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, 2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, 3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and 4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty in a hierarchical manner. The objectives of the research are to 1) identify the key control factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and 2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
Developing a tuberculosis transmission model that accounts for changes in population health.
Oxlade, Olivia; Schwartzman, Kevin; Benedetti, Andrea; Pai, Madhukar; Heymann, Jody; Menzies, Dick
2011-01-01
Simulation models are useful in policy planning for tuberculosis (TB) control. To accurately assess interventions, important modifiers of the epidemic should be accounted for in evaluative models. Improvements in population health were associated with the declining TB epidemic in the pre-antibiotic era and may be relevant today. The objective of this study was to develop and validate a TB transmission model that accounted for changes in population health. We developed a deterministic TB transmission model, using reported data from the pre-antibiotic era in England. Change in adjusted life expectancy, used as a proxy for general health, determined the rate of change of key epidemiological parameters. Predicted outcomes included risk of TB infection and TB mortality. The model was validated in the setting of the Netherlands and then applied to modern Peru. The model, developed in the setting of England, predicted TB trends in the Netherlands very accurately. The R(2) value for correlation between observed and predicted data was 0.97 and 0.95 for TB infection and mortality, respectively. In Peru, the predicted decline in incidence prior to the expansion of "Directly Observed Treatment Short Course" (the DOTS strategy) was 3.7% per year (observed = 3.9% per year). After DOTS expansion, the predicted decline was very similar to the observed decline of 5.8% per year. We successfully developed and validated a TB model, which uses a proxy for population health to estimate changes in key epidemiological parameters. Population health contributed significantly to the improvement in TB outcomes observed in Peru. Changing population health should be incorporated into evaluative models for global TB control.
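The core idea of letting a population-health proxy drive an epidemiological parameter can be sketched with a toy compartmental model. This is not the paper's calibrated model: the compartments, rates, and the choice to let transmission decline geometrically with a health proxy are all illustrative assumptions.

```python
import numpy as np

def simulate_tb(years=50, beta0=0.25, health_gain=0.02):
    """Toy deterministic TB model (susceptible -> latent -> active) in which
    the transmission rate declines year by year with a proxy for improving
    population health. All rates are per year and purely illustrative."""
    S, L, I = 0.95, 0.04, 0.01          # population fractions
    progression, recovery = 0.05, 0.3   # latent->active, active->recovered rates
    incidence = []
    for y in range(years):
        beta = beta0 * (1.0 - health_gain) ** y   # health proxy lowers transmission
        new_infections = beta * S * I
        new_active = progression * L
        S += -new_infections + recovery * I       # recovered return to S (toy choice)
        L += new_infections - new_active
        I += new_active - recovery * I
        incidence.append(new_active)
    return np.array(incidence)

inc_improving = simulate_tb(health_gain=0.02)
inc_static = simulate_tb(health_gain=0.0)
```

Comparing the two runs isolates the contribution of improving population health to the decline in incidence, which is the kind of counterfactual the validated model is used for.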
Energy Expenditure of Trotting Gait Under Different Gait Parameters
NASA Astrophysics Data System (ADS)
Chen, Xian-Bao; Gao, Feng
2017-07-01
Robots driven by batteries are clean, quiet, and can work indoors or in space. However, battery endurance is a major limitation. A new energy-saving gait-parameter design strategy is proposed to extend the working hours of a quadruped robot. A dynamic model of the robot is established to estimate and analyze the energy expenditure during trotting. Given a trotting speed, an optimal stride frequency and stride length can minimize the energy expenditure. However, the relationship between the speed and the optimal gait parameters is nonlinear, which makes it difficult to apply in practice. Therefore, a simplified gait-parameter design method for energy saving is proposed. A critical trotting speed of the quadruped robot is identified and can be used to decide the gait parameters. When the robot is travelling below this speed, it is better to keep a constant stride length and change the cycle period. When the robot is travelling above this speed, it is better to keep a constant cycle period and change the stride length. Simulations and experiments on the quadruped robot show that, by using the proposed gait-parameter design approach, the energy expenditure can be reduced by about 54% compared with a fixed 100 mm stride length at a speed of 500 mm/s. In general, an energy expenditure model based on the gait parameters of the quadruped robot is built and a trotting gait-parameter design approach for energy saving is proposed.
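The simplified schedule around the critical speed can be written down in a few lines. In this sketch, the critical speed and the two held-constant values are hypothetical placeholders (chosen so the two branches meet continuously at the critical speed), not the robot's reported parameters:

```python
def trot_gait(v, v_crit=0.4, stride_low=0.1, period_high=0.25):
    """Simplified gait-parameter schedule for a trotting quadruped.

    Below the critical speed, hold stride length constant and vary the
    cycle period; above it, hold the period constant and vary the stride.
    The numbers (m, s, m/s) are illustrative, chosen so the two branches
    meet at v_crit (stride_low / v_crit == period_high).
    """
    if v <= v_crit:
        stride = stride_low           # constant stride below v_crit
        period = stride / v           # period shrinks as speed rises
    else:
        period = period_high          # constant period above v_crit
        stride = v * period           # stride grows with speed
    return stride, period
```

In both branches stride/period equals the commanded speed, so the schedule only decides which of the two parameters absorbs a speed change.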
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the importance of the model parameters? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also be different for hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters are achieved by considering different scenarios for the included parameters and the state of the models.
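The trick of treating a rainfall multiplier as just another parameter in a Sobol' analysis can be illustrated with a toy additive model in place of SWAT or NAM. The model, its weights, and the use of a Saltelli-style first-order estimator are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_model(x):
    """Additive toy stand-in for a SWAT/NAM output: column 0 plays the role
    of the rainfall multiplier, columns 1-2 are ordinary model parameters.
    The weights are illustrative only."""
    return 3.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

# First-order Sobol' indices via the Saltelli-style estimator
# S_i = mean(f(B) * (f(AB_i) - f(A))) / Var(Y).
n, d = 40000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = toy_model(A), toy_model(B)
var_y = np.var(np.concatenate([fA, fB]))
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # replace column i with the B sample
    S[i] = np.mean(fB * (toy_model(ABi) - fA)) / var_y
```

Once the multiplier occupies a column of the sample matrices, its index is directly comparable with those of the regular parameters, which is exactly the comparison the study sets out to make.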
High-spin europium and gadolinium centers in yttrium-aluminum garnet
NASA Astrophysics Data System (ADS)
Vazhenin, V. A.; Potapov, A. P.; Asatryan, G. R.; Uspenskaya, Yu. A.; Petrosyan, A. G.; Fokin, A. V.
2016-08-01
Electron-spin resonance spectra of Eu2+ and Gd3+ centers substituting for Y3+ ions in single-crystal yttrium-aluminum garnet have been studied, and the parameters of their rhombic spin Hamiltonian have been determined. The fine-structure parameters of these ions have been calculated in the superposition model, disregarding changes in the angular coordinates of the ligand environment of the impurity defect, thus demonstrating the necessity of taking these changes into account.
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Hutton, Christopher; Pechlivanidis, Ilias; Capell, René; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; McGuire, Kevin; Savenije, Hubert; Hrachowitz, Markus
2017-04-01
The moisture storage available to vegetation is a key parameter in the hydrological functioning of ecosystems. This parameter, the root zone storage capacity, determines the partitioning between runoff and transpiration, but is impossible to observe at the catchment scale. In this research, data from the experimental forests of HJ Andrews (Oregon, USA) and Hubbard Brook (New Hampshire, USA) were used to test the hypotheses that: (1) the root zone storage capacity significantly changes after deforestation, (2) changes in the root zone storage capacity can to a large extent explain post-treatment changes to the hydrological regimes and (3) a time-dynamic formulation of the root zone storage can improve the performance of a hydrological model. First, root zone storage capacities were estimated based on a simple, water-balance based method. Briefly, the maximum difference between cumulative rainfall and estimated transpiration was determined, which can be considered a proxy for root zone storage capacity. These values were compared with root zone storage capacities obtained from four conceptual models (HYPE, HYMOD, FLEX, TUW), calibrated for consecutive 2-year windows. Both methods showed a sharp decline in root zone storage capacity after deforestation, which was followed by a gradual recovery signal. A trend analysis indicated that these recovery periods took between 5 and 13 years for the different catchments. Eventually, one of the models was adjusted to allow for a time-dynamic formulation of root zone storage capacity. This adjusted model showed improvements in model performance as evaluated by 28 hydrological signatures, such as rising limb density or peak flows. Thus, this research clearly shows the time-dynamic character of a crucial parameter, which is often considered to remain constant in time.
Root zone storage capacities are strongly affected by deforestation, leading to changes in hydrological regimes, and time-dynamic formulations of root zone storage are therefore necessary in systems under change.
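The water-balance estimate described above amounts to tracking the largest running deficit between transpiration and rainfall. A minimal sketch of that mass-curve idea follows; the seasonal toy series and the exact bookkeeping are assumptions of this illustration, not the paper's implementation:

```python
import numpy as np

def root_zone_storage_capacity(precip, transp):
    """Water-balance proxy for root zone storage capacity: the largest
    running deficit that transpiration builds up against rainfall.
    Inputs are daily series in mm."""
    deficit, capacity = 0.0, 0.0
    for p, t in zip(precip, transp):
        deficit = max(0.0, deficit + t - p)   # storage drawn down, never negative
        capacity = max(capacity, deficit)
    return capacity

# Toy year: a wet season in which rain exceeds transpiration, then a dry
# season that builds a 2.0 mm/day deficit over 185 days.
precip = np.array([5.0] * 180 + [0.5] * 185)
transp = np.array([1.0] * 180 + [2.5] * 185)
capacity = root_zone_storage_capacity(precip, transp)   # 370.0 mm
```

Re-running such an estimate on successive post-treatment windows is what exposes the decline and gradual recovery after deforestation.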
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling.
Pera, H; Kleijn, J M; Leermakers, F A M
2014-02-14
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli kc and k̄ and the preferred monolayer curvature J(0)(m), and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of kc and the area compression modulus kA are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J(0)(m) can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J(0)(m) change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J(0)(m) > 0, especially at low ionic strengths.
We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks.
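The packing-parameter rationalisation invoked above has a simple quantitative form, P = v/(a0·lc). The sketch below uses the classical shape map with purely illustrative molecular dimensions (not measured lipid values) to show how a small head group and long tails push P past 1, toward inverted phases:

```python
def packing_parameter(v_tail, a_head, l_tail):
    """Israelachvili's surfactant packing parameter P = v / (a0 * lc)."""
    return v_tail / (a_head * l_tail)

def preferred_aggregate(P):
    """Classical (approximate) shape map for the packing parameter."""
    if P < 1.0 / 3.0:
        return "spherical micelles"
    if P < 0.5:
        return "cylindrical micelles"
    if P <= 1.0:
        return "bilayers/vesicles"
    return "inverted (e.g. hexagonal) phases"

# Illustrative numbers (nm^3, nm^2, nm), not measured lipid values: a
# PC-like lipid with a large head group packs as a bilayer, whereas a
# PE-like lipid with a small head group and long tails pushes P above 1,
# consistent with the inverse-hexagonal tendency described above.
pc_like = preferred_aggregate(packing_parameter(1.0, 0.7, 1.8))
pe_like = preferred_aggregate(packing_parameter(1.2, 0.55, 1.8))
```

Mixing in charged PG lipids effectively enlarges the head group area a0, lowering P and stabilising the bilayer, in line with the trend reported above.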
Computational model of in vivo human energy metabolism during semi-starvation and re-feeding
Hall, Kevin D.
2008-01-01
Changes of body weight and composition are the result of complex interactions among metabolic fluxes contributing to macronutrient balances. To better understand these interactions, a mathematical model was constructed that used the measured dietary macronutrient intake during semi-starvation and re-feeding as model inputs and computed whole-body energy expenditure, de novo lipogenesis, gluconeogenesis, as well as turnover and oxidation of carbohydrate, fat and protein. Published in vivo human data provided the basis for the model components which were integrated by fitting a few unknown parameters to the classic Minnesota human starvation experiment. The model simulated the measured body weight and fat mass changes during semi-starvation and re-feeding and predicted the unmeasured metabolic fluxes underlying the body composition changes. The resting metabolic rate matched the experimental measurements and required a model of adaptive thermogenesis. Re-feeding caused an elevation of de novo lipogenesis which, along with increased fat intake, resulted in a rapid repletion and overshoot of body fat. By continuing the computer simulation with the pre-starvation diet and physical activity, the original body weight and composition were eventually restored, but body fat mass was predicted to take more than one additional year to return to within 5% of its original value. The model was validated by simulating a recently published short-term caloric restriction experiment without changing the model parameters. The predicted changes of body weight, fat mass, resting metabolic rate, and nitrogen balance matched the experimental measurements thereby providing support for the validity of the model. PMID:16449298
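The qualitative behaviour of such energy-balance models can be conveyed with a far simpler sketch. This toy is not Hall's model: it tracks only total weight, ignores macronutrient fluxes and adaptive thermogenesis, and every coefficient (resting rate, activity multiplier, energy density of weight change) is an illustrative assumption:

```python
import numpy as np

def simulate_weight(intake_kcal, days, weight0=70.0):
    """Toy energy-balance model: expenditure scales with body weight, and
    the daily energy imbalance changes body mass at roughly the energy
    density of fat. All coefficients are illustrative."""
    rmr_per_kg = 22.0            # kcal per kg per day, rough resting rate
    activity = 1.5               # multiplier on resting expenditure
    kcal_per_kg = 9400.0         # assumed energy density of the mass change
    weight = weight0
    track = []
    for _ in range(days):
        expenditure = rmr_per_kg * weight * activity
        weight += (intake_kcal - expenditure) / kcal_per_kg
        track.append(weight)
    return np.array(track)

# 24 weeks of semi-starvation at a hypothetical 1600 kcal/day intake.
semi_starved = simulate_weight(1600.0, 168)
```

Because expenditure falls with weight, the trajectory decelerates toward the intake-supported equilibrium; capturing the overshoot of body fat on re-feeding is what requires the full macronutrient-resolved model.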
Steele, Madeline O.; Chang, Heejun; Reusser, Deborah A.; Brown, Cheryl A.; Jung, Il-Won
2012-01-01
As part of a larger investigation into potential effects of climate change on estuarine habitats in the Pacific Northwest, we estimated changes in freshwater inputs into four estuaries: Coquille River estuary, South Slough of Coos Bay, and Yaquina Bay in Oregon, and Willapa Bay in Washington. We used the U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) to model watershed hydrological processes under current and future climatic conditions. This model allowed us to explore possible shifts in coastal hydrologic regimes at a range of spatial scales. All modeled watersheds are located in rainfall-dominated coastal areas with relatively insignificant base flow inputs, and their areas vary from 74.3 to 2,747.6 square kilometers. The watersheds also vary in mean elevation, ranging from 147 meters in the Willapa to 1,179 meters in the Coquille. The latitudes of watershed centroids range from 43.037 degrees north latitude in the Coquille River estuary to 46.629 degrees north latitude in Willapa Bay. We calibrated model parameters using historical climate grid data downscaled to one-sixteenth of a degree by the Climate Impacts Group, and historical runoff from sub-watersheds or neighboring watersheds. Nash Sutcliffe efficiency values for daily flows in calibration sub-watersheds ranged from 0.71 to 0.89. After calibration, we forced the PRMS models with four North American Regional Climate Change Assessment Program climate models: Canadian Regional Climate Model-(National Center for Atmospheric Research) Community Climate System Model version 3, Canadian Regional Climate Model-Canadian Global Climate Model version 3, Hadley Regional Model version 3-Hadley Centre Climate Model version 3, and Regional Climate Model-Canadian Global Climate Model version 3. 
These are global climate models (GCMs) downscaled with regional climate models that are embedded within the GCMs, and all use the A2 carbon emission scenario developed by the Intergovernmental Panel on Climate Change. With these climate-forcing outputs, we derived the mean change in flow from the period encompassing the 1980s (1971-1995) to the period encompassing the 2050s (2041-2065). Specifically, we calculated percent change in mean monthly flow rate, coefficient of variation, top 5 percent of flow, and 7-day low flow. The trends with the most agreement among climate models and among watersheds were increases in autumn mean monthly flows, especially in October and November, decreases in summer monthly mean flow, and increases in the top 5 percent of flow. We also estimated variance in PRMS outputs owing to parameter uncertainty and the selection of climate model using Latin hypercube sampling. This analysis showed that PRMS low-flow simulations are more uncertain than medium or high flow simulations, and that variation among climate models was a larger source of uncertainty than the hydrological model parameters. These results improve our understanding of how climate change may affect the saltwater-freshwater balance in Pacific Northwest estuaries, with implications for their sensitive ecosystems.
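The Latin hypercube sampling used for the uncertainty analysis guarantees one sample per equal-probability stratum in every parameter dimension. A minimal, self-contained sketch follows; the three parameter ranges are hypothetical placeholders, not actual PRMS parameters:

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Latin hypercube sample: exactly one point per stratum in every
    dimension. `bounds` is a list of (low, high) tuples per parameter."""
    d = len(bounds)
    # Stratified uniforms in [0, 1): shuffled cell index plus in-cell jitter.
    cells = rng.permuted(np.tile(np.arange(n), (d, 1)).T, axis=0)
    u = (cells + rng.uniform(size=(n, d))) / n
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(3)
# Three hypothetical parameter ranges (purely illustrative).
samples = latin_hypercube(8, [(0.0, 1.0), (10.0, 200.0), (-5.0, 5.0)], rng)
```

The stratification is what lets a modest number of model runs cover the parameter space evenly enough to partition output variance between parameter uncertainty and climate-model choice.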
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
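The two goodness-of-fit measures reward different aspects of a simulation, which is why sensitivities can differ between them. A sketch of both criteria follows; the Nash-Sutcliffe formula is standard, while the volumetric index shown is one common definition assumed here, not necessarily the paper's exact one:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    simulation is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_fit(obs, sim):
    """An index of volumetric fit (one common definition, assumed here):
    1 minus the relative error in total simulated volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - abs(sim.sum() - obs.sum()) / obs.sum()

# Toy hydrographs (arbitrary units): the criteria weight timing/shape
# errors and total-volume errors differently.
q_obs = np.array([1.0, 3.0, 7.0, 4.0, 2.0])
q_sim = np.array([1.2, 2.7, 6.5, 4.4, 2.0])
```

Because Nash-Sutcliffe squares the residuals, it is dominated by peak-flow errors, whereas the volumetric index responds only to the water balance; parameters can therefore rank as influential under one measure and not the other.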
Documentation of the Benson Diesel Engine Simulation Program
NASA Technical Reports Server (NTRS)
Vangerpen, Jon
1988-01-01
This report documents the Benson Diesel Engine Simulation Program and explains how it can be used to predict the performance of diesel engines. The program was obtained from the Garrett Turbine Engine Company but has been extensively modified since. The program is a thermodynamic simulation of the diesel engine cycle which uses a single zone combustion model. It can be used to predict the effect of changes in engine design and operating parameters such as valve timing, speed and boost pressure. The most significant change made to this program is the addition of a more detailed heat transfer model to predict metal part temperatures. This report contains a description of the sub-models used in the Benson program, a description of the input parameters, and sample program runs.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
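The dummy-parameter idea can be demonstrated on a toy model instead of SWAT. In this sketch the model output simply never uses its last input column, and a small noise term is an assumption standing in for the numerical error of a real simulator, so that the dummy's estimated index is small but non-zero and can serve as the screening threshold:

```python
import numpy as np

rng = np.random.default_rng(11)

def model(x):
    """Toy model: column 2 is a deliberate dummy the output never uses.
    Small noise mimics the numerical error of a real simulator, so the
    dummy's estimated index is small but non-zero."""
    return (np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1]
            + rng.normal(0.0, 0.01, size=len(x)))

# Saltelli-style first-order Sobol' estimator, dummy included as a column.
n, d = 20000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(fB * (model(ABi) - fA)) / var_y

threshold = S[2]   # the dummy's index estimates the numerical-error floor
influential = [i for i in range(d) if S[i] > threshold]
```

Parameters whose indices clear the dummy's index are kept for calibration; the rest fall within the estimator's noise floor and can be fixed, which is the screening rule the abstract describes.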
WE-E-17A-01: Characterization of An Imaging-Based Model of Tumor Angiogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adhikarla, V; Jeraj, R
2014-06-15
Purpose: Understanding the transient dynamics of tumor oxygenation is important when evaluating tumor-vasculature response to anti-angiogenic therapies. An imaging-based tumor-vasculature model was used to elucidate factors that affect these dynamics. Methods: Tumor growth depends on its doubling time (Td). Hypoxia increases pro-angiogenic factor (VEGF) concentration, which is modeled to reduce vessel perfusion, reflecting its effect of increasing vascular permeability. Perfused vessel recruitment depends on the existing perfused vasculature, the VEGF concentration, and the maximum VEGF concentration (VEGFmax) for vessel dysfunction. A convolution-based algorithm couples the tumor to the normal tissue vessel density (VD-nt). The parameters are benchmarked to published pre-clinical data, and a sensitivity study evaluating the changes in the peak and time to peak tumor oxygenation characterizes them. The model is used to simulate changes in hypoxia and proliferation PET imaging data obtained using [Cu-61]Cu-ATSM and [F-18]FLT, respectively. Results: Td and VD-nt were found to be the most influential on peak tumor pO2, while VEGFmax was marginally influential. A +20% change in Td, VD-nt and VEGFmax resulted in +50%, +25% and +5% increases in peak pO2. In contrast, Td was the most influential on the time to peak oxygenation, with VD-nt and VEGFmax playing marginal roles. A +20% change in Td, VD-nt and VEGFmax increased the time to peak pO2 by +50%, +5% and +0%. A −20% change in the above parameters resulted in comparable decreases in the peak and time to peak pO2. Model application to the PET data was able to demonstrate the voxel-specific changes in hypoxia of the imaged tumor. Conclusion: Tumor-specific doubling time and vessel density are important parameters to be considered when evaluating hypoxia transients.
While the current model simulates the oxygen dynamics of an untreated tumor, incorporation of therapeutic effects can make the model a potent tool for analyzing anti-angiogenic therapies.
Agüera, Antonio; Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno
2017-01-01
Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge on the impact of climate change in key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows enduring long periods of starvation.
Additionally, we applied DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a varying environment in food and temperature on the organism condition and energy use. The DEB model developed here for L. elliptica allowed us to improve benchmark knowledge on the ecophysiology of this key species, providing new insights in the role of food availability and temperature on its life cycle and reproduction strategy. PMID:28850607
Agüera, Antonio; Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno
2017-01-01
Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change, with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge of the impact of climate change on key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows it to endure long periods of starvation.
Additionally, we applied DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a varying food and temperature environment on organism condition and energy use. The DEB model developed here for L. elliptica allowed us to improve benchmark knowledge on the ecophysiology of this key species, providing new insights into the role of food availability and temperature in its life cycle and reproduction strategy.
NASA Astrophysics Data System (ADS)
Sivapalan, Murugesu; Ruprecht, John K.; Viney, Neil R.
1996-03-01
A long-term water balance model has been developed to predict the hydrological effects of land-use change (especially forest clearing) in small experimental catchments in the south-west of Western Australia. This small catchment model has been used as the building block for the development of a large catchment-scale model, and has also formed the basis for a coupled water and salt balance model, developed to predict the changes in stream salinity resulting from land-use and climate change. The application of the coupled salt and water balance model to predict stream salinities in two small experimental catchments, and the application of the large catchment-scale model to predict changes in water yield in a medium-sized catchment that is being mined for bauxite, are presented in Parts 2 and 3, respectively, of this series of papers. The small catchment model has been designed as a simple, robust, conceptually based model of the basic daily water balance fluxes in forested catchments. The responses of the catchment to rainfall and pan evaporation are conceptualized in terms of three interdependent subsurface stores A, B and F. Store A depicts a near-stream perched aquifer system; B represents a deeper, permanent groundwater system; and F is an intermediate, unsaturated infiltration store. The responses of these stores are characterized by a set of constitutive relations which involves a number of conceptual parameters. These parameters are estimated by calibration by comparing observed and predicted runoff. The model has performed very well in simulations carried out on Salmon and Wights, two small experimental catchments in the Collie River basin in south-west Western Australia. The results from the application of the model to these small catchments are presented in this paper.
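The three-store structure described in this abstract (near-stream store A, deep groundwater B, unsaturated store F, each governed by simple constitutive relations) can be sketched as a daily bucket model. The flux equations and parameter names below are hypothetical placeholders for illustration, not the published model's constitutive relations:

```python
def step_water_balance(stores, rain, pet, params):
    """One daily time step of a conceptual three-store water balance.

    stores: dict with 'A' (near-stream perched aquifer), 'B' (deep,
    permanent groundwater) and 'F' (unsaturated infiltration store), mm.
    All rate constants (0..1) and the linear-reservoir forms are
    illustrative, not the paper's calibrated relations.
    """
    A, B, F = stores['A'], stores['B'], stores['F']
    # Partition rainfall: a fraction infiltrates to F, the rest reaches A.
    infil = params['c_infil'] * rain
    A += rain - infil
    F += infil
    # Drainage from the unsaturated store recharges deep groundwater B.
    drain = params['k_fb'] * F
    F -= drain
    B += drain
    # Evapotranspiration is drawn from F, limited by potential ET (pet).
    et = min(pet, params['c_et'] * F)
    F -= et
    # Quickflow from the near-stream store A (linear reservoir).
    runoff = params['k_a'] * A
    A -= runoff
    # Slow baseflow from deep groundwater also reaches the stream.
    baseflow = params['k_b'] * B
    B -= baseflow
    return {'A': A, 'B': B, 'F': F}, runoff + baseflow, et
```

Because every flux moves water between a store and either the stream or the atmosphere, the daily mass balance closes exactly, which is the basic property a calibration of such conceptual parameters relies on.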
Malandrino, Andrea; Pozo, José M.; Castro-Mateos, Isaac; Frangi, Alejandro F.; van Rijsbergen, Marc M.; Ito, Keita; Wilke, Hans-Joachim; Dao, Tien Tuan; Ho Ba Tho, Marie-Christine; Noailly, Jérôme
2015-01-01
Capturing patient- or condition-specific intervertebral disk (IVD) properties in finite element models is of utmost importance in order to explore how biomechanical and biophysical processes may interact in spine diseases. However, disk degenerative changes are often modeled through equations similar to those employed for healthy organs, which might not be valid. As for the simulated effects of degenerative changes, they likely depend on specific disk geometries. Accordingly, we explored the ability of continuum tissue models to simulate disk degenerative changes. We further used the results in order to assess the interplay between these simulated changes and particular IVD morphologies, in relation to disk cell nutrition, a potentially important factor in disk tissue regulation. A protocol to derive patient-specific computational models from clinical images was applied to different spine specimens. In vitro, IVD creep tests were used to optimize poro-hyperelastic input material parameters in these models, as a function of the IVD degeneration grade. The use of condition-specific tissue model parameters in the specimen-specific geometrical models was validated against independent kinematic measurements in vitro. Then, models were coupled to a transport-cell viability model in order to assess the respective effects of tissue degeneration and disk geometry on cell viability. While classic disk poro-mechanical models failed in representing known degenerative changes, additional simulation of tissue damage allowed model validation and gave degeneration-dependent material properties related to osmotic pressure and water loss, and to increased fibrosis. Surprisingly, nutrition-induced cell death was independent of the grade-dependent material properties, but was favored by increased diffusion distances in large IVDs. Our results suggest that in situ geometrical screening of IVD morphology might help to anticipate particular mechanisms of disk degeneration. PMID:25717471
Functional Diversity of Microbial Communities in Sludge-Amended Soils
NASA Astrophysics Data System (ADS)
Sun, Y. H.; Yang, Z. H.; Zhao, J. J.; Li, Q.
The BIOLOG method was applied to explore the functional diversity of soil microbial communities in sludge-amended soils sampled from the Yangtze River Delta. Results indicated that the metabolic profile, functional diversity indexes and kinetic parameters of the soil microbial communities changed following soil amendment with sewage sludge, suggesting that changes occurred both in the population of microbes capable of exploiting carbon substrates and in this capability itself. The kinetic study of the functional diversity revealed that the metabolic profile of the soil microbial communities exhibited a non-linear correlation with incubation time, showing a sigmoid curve that fits the dynamic model of growth of the soil microbial communities. In all the treatments, except for the treatments of coastal fluvo-aquic soil amended with fresh sludge and dried sludge from Hangzhou, the kinetic parameters K and r of the functional diversity of the soil microbial communities decreased significantly and parameter S increased. Changes in characteristics of the functional diversity well reflected differences in the C-utilizing capacity and mode of the soil microbial communities in the sludge-amended soils, and changes in the functional diversity of soil microbial communities in a particular eco-environment, such as soil amended with sewage sludge.
Koslen, Hannah R.; Chiel, Hillel J.; Mizutani, Claudia Mieko
2014-01-01
Morphogenetic gradients are essential to allocate cell fates in embryos of varying sizes within and across closely related species. We previously showed that the maternal NF-κB/Dorsal (Dl) gradient has acquired different shapes in Drosophila species, which result in unequally scaled germ layers along the dorso-ventral axis and the repositioning of the neuroectodermal borders. Here we combined experimentation and mathematical modeling to investigate which factors might have contributed to the fast evolutionary changes of this gradient. To this end, we modified a previously developed model that employs differential equations of the main biochemical interactions of the Toll (Tl) signaling pathway, which regulates Dl nuclear transport. The original model simulations fit well the D. melanogaster wild type, but not mutant conditions. To broaden the applicability of this model and probe evolutionary changes in gradient distributions, we adjusted a set of 19 independent parameters to reproduce three quantified experimental conditions (i.e. Dl levels lowered, nuclear size and density increased or decreased). We next searched for the most relevant parameters that reproduce the species-specific Dl gradients. We show that adjusting parameters relative to morphological traits (i.e. embryo diameter, nuclear size and density) alone is not sufficient to reproduce the species Dl gradients. Since components of the Tl pathway simulated by the model are fast-evolving, we next asked which parameters related to Tl would most effectively reproduce these gradients and identified a particular subset. A sensitivity analysis reveals the existence of nonlinear interactions between the two fast-evolving traits tested above, namely the embryonic morphological changes and Tl pathway components. 
Our modeling further suggests that distinct Dl gradient shapes observed in closely related melanogaster sub-group lineages may be caused by similar sequence modifications in Tl pathway components, which are in agreement with their phylogenetic relationships. PMID:25165818
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters.
PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.
2014-01-01
Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
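The diel-oxygen idea behind these metabolism estimates can be sketched with a toy forward model and a brute-force inversion. The half-sine light curve, units, gas-exchange term and grid-search fit below are illustrative assumptions, not the estimation method used in the study, which relies on proper statistical fitting:

```python
import math

# Hourly light weights: half-sine "day" from hour 6 to 18, normalised
# so the weights sum to 1 (shape is an illustrative assumption).
_RAW = [max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]
LIGHT = [w / sum(_RAW) for w in _RAW]

def simulate_do(gpp, er, k=0.5, do0=9.0, do_sat=9.2):
    """Toy diel dissolved-oxygen model (mg/L), hourly steps:
    dDO = GPP*light - ER/24 + k*(DO_sat - DO)/24, with daily-total
    GPP and ER in mg O2 per litre per day."""
    do, series = do0, []
    for h in range(24):
        do += gpp * LIGHT[h] - er / 24.0 + k * (do_sat - do) / 24.0
        series.append(do)
    return series

def fit_metabolism(obs, k=0.5):
    """Brute-force least-squares inversion of the forward model on a
    0.1-resolution grid; real metabolism models use maximum likelihood
    or Bayesian estimation instead."""
    best, best_err = None, float('inf')
    for g in range(101):
        for e in range(101):
            gpp, er = g / 10.0, e / 10.0
            err = sum((p - o) ** 2
                      for p, o in zip(simulate_do(gpp, er, k), obs))
            if err < best_err:
                best, best_err = (gpp, er), err
    return best
```

With noise-free synthetic data the true (GPP, ER) pair is recovered exactly; the paper's point is that under low light or strong mixing the real inverse problem becomes much less well determined.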
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duane, Greg; Tsonis, Anastasios; Kocarev, Ljupco
This collaborative research has several components, but the main idea is that when imperfect copies of a given nonlinear dynamical system are coupled, they may synchronize for some set of coupling parameters. This idea is to be tested for several IPCC-like models, each one with its own formulation and representing an “imperfect” copy of the true climate system. By computing the coupling parameters which will lead the models to a synchronized state, a consensus on climate change simulations may be achieved.
Determination of Indicators of Ecological Change
2004-09-01
simultaneously characterized parameters for more than one forest (e.g., Huber and Iroume, 2001; Tobón Marin et al., 2000). As parameters (e.g. ... necessary to apply the revised model for use in five forest biomes, 2) use the model to predict precipitation interception and compare the measured and ... larger interception losses than many other forest biomes. The within-plot sampling coefficient of variation, ranging from a study average of 0.11 in
High Spatial Resolution Multi-Organ Finite Element Modeling of Ventricular-Arterial Coupling
Shavik, Sheikh Mohammad; Jiang, Zhenxiang; Baek, Seungik; Lee, Lik Chuan
2018-01-01
While it has long been recognized that bi-directional interaction between the heart and the vasculature plays a critical role in the proper functioning of the cardiovascular system, a comprehensive study of this interaction has largely been hampered by a lack of modeling framework capable of simultaneously accommodating high-resolution models of the heart and vasculature. Here, we address this issue and present a computational modeling framework that couples finite element (FE) models of the left ventricle (LV) and aorta to elucidate ventricular-arterial coupling in the systemic circulation. We show in a baseline simulation that the framework predictions of (1) the LV pressure-volume loop, (2) the aorta pressure-diameter relationship, and (3) pressure waveforms of the aorta, LV, and left atrium (LA) over the cardiac cycle are consistent with the physiological measurements found in healthy humans. To develop insights into ventricular-arterial interactions, the framework was then used to simulate how alterations in the geometrical or material parameter(s) of the aorta affect the LV and vice versa. We show that changing the geometry and microstructure of the aorta model in the framework led to changes in the functional behaviors of both LV and aorta that are consistent with experimental observations. On the other hand, changing contractility and passive stiffness of the LV model in the framework also produced changes in both the LV and aorta functional behaviors that are consistent with physiological principles. PMID:29551977
FISHER'S GEOMETRIC MODEL WITH A MOVING OPTIMUM
Matuszewski, Sebastian; Hermisson, Joachim; Kopp, Michael
2014-01-01
Fisher's geometric model has been widely used to study the effects of pleiotropy and organismic complexity on phenotypic adaptation. Here, we study a version of Fisher's model in which a population adapts to a gradually moving optimum. Key parameters are the rate of environmental change, the dimensionality of phenotype space, and the patterns of mutational and selectional correlations. We focus on the distribution of adaptive substitutions, that is, the multivariate distribution of the phenotypic effects of fixed beneficial mutations. Our main results are based on an “adaptive-walk approximation,” which is checked against individual-based simulations. We find that (1) the distribution of adaptive substitutions is strongly affected by the ecological dynamics and largely depends on a single composite parameter γ, which scales the rate of environmental change by the “adaptive potential” of the population; (2) the distribution of adaptive substitution reflects the shape of the fitness landscape if the environment changes slowly, whereas it mirrors the distribution of new mutations if the environment changes fast; (3) in contrast to classical models of adaptation assuming a constant optimum, with a moving optimum, more complex organisms evolve via larger adaptive steps. PMID:24898080
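The adaptive-walk picture used in this abstract can be caricatured in a few lines: a phenotype in n dimensions chases an optimum that advances at rate v, and a random mutation fixes only if it brings the phenotype closer to the optimum. This fixation rule is a crude stand-in for the paper's selection model, and all parameter values below are illustrative assumptions:

```python
import math
import random

def adaptive_walk(n_dim=2, v=0.01, sigma=0.1, steps=2000, seed=1):
    """Toy adaptive walk in Fisher's geometric model with a moving
    optimum: the optimum advances along axis 0 by v per step; a
    Gaussian mutation (sd sigma per trait) fixes only if it reduces
    the distance to the optimum."""
    rng = random.Random(seed)
    z = [0.0] * n_dim      # current phenotype
    opt = [0.0] * n_dim    # moving optimum
    step_sizes = []        # magnitudes of fixed (adaptive) substitutions
    for _ in range(steps):
        opt[0] += v
        d2 = sum((zi - oi) ** 2 for zi, oi in zip(z, opt))
        mut = [rng.gauss(0.0, sigma) for _ in range(n_dim)]
        znew = [zi + mi for zi, mi in zip(z, mut)]
        d2new = sum((zi - oi) ** 2 for zi, oi in zip(znew, opt))
        if d2new < d2:  # beneficial: phenotype moves closer to optimum
            z = znew
            step_sizes.append(math.sqrt(sum(m * m for m in mut)))
    return z, opt, step_sizes
```

Running this shows the qualitative behaviour the paper formalises: the population tracks the moving optimum at a steady lag, and `step_sizes` gives a direct sample of the distribution of adaptive substitutions.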
Application of Cox model in coagulation function in patients with primary liver cancer.
Guo, Xuan; Chen, Mingwei; Ding, Li; Zhao, Shan; Wang, Yuefei; Kang, Qinjiong; Liu, Yi
2011-01-01
To analyze the distribution of coagulation parameters in patients with primary liver cancer; explore the relationship between clinical staging, survival, and coagulation parameters by using the Cox proportional hazard model; and provide a parameter for clinical management and prognosis. Coagulation parameters were evaluated in 228 patients with primary liver cancer, 52 patients with common liver disease, and 52 normal healthy controls. The relationship between primary liver cancer staging and coagulation parameters was analyzed. Follow-up examinations were performed. The Cox proportional hazard model was used to analyze the relationship between coagulation parameters and survival. The changes in the coagulation parameters in patients with primary liver cancer were significantly different from those in normal controls. The effect of the disease on coagulation function became more obvious as the severity of liver cancer increased (p<0.05). The levels of D-dimer, fibrinogen degradation products (FDP), fibrinogen (FIB), and platelets (PLT) were negatively correlated with the long-term survival of patients with advanced liver cancer. The stages of primary liver cancer are associated with coagulation parameters. Coagulation parameters are related to survival and risk factors. Monitoring of coagulation parameters may help ensure better surveillance and treatment for liver cancer patients.
Role of dimensionality in Axelrod's model for the dissemination of culture
NASA Astrophysics Data System (ADS)
Klemm, Konstantin; Eguíluz, Víctor M.; Toral, Raúl; Miguel, Maxi San
2003-09-01
We analyze a model of social interaction in one- and two-dimensional lattices for a moderate number of features. We introduce an order parameter as a function of the overlap between neighboring sites. In a one-dimensional chain, we observe that the dynamics is consistent with a second-order transition, where the order parameter changes continuously and the average domain diverges at the transition point. However, in a two-dimensional lattice the order parameter is discontinuous at the transition point characteristic of a first-order transition between an ordered and a disordered state.
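The dynamics behind this order parameter are easy to reproduce in the one-dimensional case. In Axelrod's model each site carries F cultural features with q possible traits; neighbours interact with probability equal to their overlap and copy one differing feature. The lattice size, sweep count and seed below are illustrative choices, and the run is stopped after a fixed number of updates rather than at the absorbing state:

```python
import random

def axelrod_1d(n=50, F=3, q=2, sweeps=2000, seed=0):
    """One-dimensional Axelrod model; returns the size of the largest
    homogeneous domain divided by n as a simple order parameter."""
    rng = random.Random(seed)
    sites = [[rng.randrange(q) for _ in range(F)] for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n - 1)            # pick a bond (i, i+1)
        j = i + 1
        shared = sum(a == b for a, b in zip(sites[i], sites[j]))
        # interact with probability overlap/F, unless identical or disjoint
        if 0 < shared < F and rng.random() < shared / F:
            diff = [f for f in range(F) if sites[i][f] != sites[j][f]]
            sites[i][rng.choice(diff)] = sites[j][rng.choice(diff)] if False else sites[j][diff[rng.randrange(len(diff))]]
    # largest run of identical neighbouring sites
    best = cur = 1
    for a, b in zip(sites, sites[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best / n
```

Sweeping q and plotting this order parameter is how the continuous (1D) versus discontinuous (2D) transition reported in the abstract is diagnosed.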
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of the time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide a highly time-dependent spectral resolution capability.
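The basis-expansion idea at the heart of this scheme, rewriting the time-varying coefficient a(t) as a fixed linear combination of known basis functions so that ordinary least squares can estimate it, can be sketched for a TVAR(1) model. The constant-plus-linear basis below stands in for the paper's multi-scale wavelets, and the sparse FOLS term selection is omitted; everything here is an illustrative assumption:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_tvar1(x, basis):
    """Least-squares fit of x[t] = a(t) * x[t-1] + e[t], with the
    time-varying coefficient expanded as a(t) = sum_k c_k * basis[k][t].
    Returns the expansion coefficients c."""
    T, K = len(x), len(basis)
    # regressors: basis function value times the lagged signal
    R = [[basis[k][t] * x[t - 1] for t in range(1, T)] for k in range(K)]
    y = x[1:]
    A = [[sum(ri * rj for ri, rj in zip(R[i], R[j])) for j in range(K)]
         for i in range(K)]
    b = [sum(ri * yi for ri, yi in zip(R[i], y)) for i in range(K)]
    return solve_linear(A, b)
```

On noise-free data generated with a linearly drifting coefficient, the expansion coefficients are recovered exactly, which is what makes richer (wavelet) bases attractive for tracking both smooth and abrupt parameter changes.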
Ultra wideband (0.5-16 kHz) MR elastography for robust shear viscoelasticity model identification.
Liu, Yifei; Yasar, Temel K; Royston, Thomas J
2014-12-21
Changes in the viscoelastic parameters of soft biological tissues often correlate with progression of disease, trauma or injury, and response to treatment. Identifying the most appropriate viscoelastic model, then estimating and monitoring the corresponding parameters of that model, can improve insight into the underlying tissue structural changes. MR Elastography (MRE) provides a quantitative method of measuring tissue viscoelasticity. In a previous study by the authors (Yasar et al 2013 Magn. Reson. Med. 70 479-89), a silicone-based phantom material was examined over the frequency range of 200 Hz-7.75 kHz using MRE, an unprecedented bandwidth at that time. Six viscoelastic models, including four integer order models and two fractional order models, were fit to the wideband viscoelastic data (measured storage and loss moduli as a function of frequency). The 'fractional Voigt' model (spring and springpot in parallel) exhibited the best fit and was even able to fit the entire frequency band well when it was identified based only on a small portion of the band. This paper is an extension of that study with a wider frequency range from 500 Hz to 16 kHz. Furthermore, more fractional order viscoelastic models are added to the comparison pool. It is found that the added complexity of these viscoelastic models provides only marginal improvement over the 'fractional Voigt' model. Again, the fractional order models show significant improvement over integer order viscoelastic models that have as many or more fitting parameters.
Utilizing the social media data to validate 'climate change' indices
NASA Astrophysics Data System (ADS)
Molodtsova, T.; Kirilenko, A.; Stepchenkova, S.
2013-12-01
Reporting the observed and modeled changes in climate to the public requires measures understandable by a general audience. For example, the NASA GISS Common Sense Climate Index (Hansen et al., 1998) reports the change in climate based on six practically observable parameters, such as the air temperature exceeding the norm by one standard deviation. The utility of the constructed indices for reporting climate change depends, however, on the assumption that the selected parameters are actually felt by non-experts and connected with the changing climate, which needs to be validated. Dynamic discussion of climate change issues in social media may provide data for this validation. We connected the intensity of public discussion of climate change in social networks with regional weather variations for the territory of the USA. We collected the entire 2012 population of Twitter microblogging activity on the climate change topic, accumulating over 1.8 million separate records (tweets) globally. We identified the geographic location of the tweets and associated the daily and weekly intensity of tweeting with the following weather parameters for these locations: temperature anomalies, 'hot' temperature anomalies, 'cold' temperature anomalies, and heavy rain/snow events. To account for non-weather-related events we included articles on climate change from the 'prestige press', a collection of major newspapers. We found that regional changes in weather parameters significantly affect the number of tweets published on climate change. This effect, however, is short-lived and varies throughout the country. We found that in different locations different weather parameters had the most significant effect on climate change microblogging activity. Overall, 'hot' temperature anomalies had a significant influence on the intensity of climate change tweeting.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analyses are not sufficiently accurate and have limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is sufficient, as shown by comparing the characteristic curves of experimental and simulated step responses under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and their change rules are analyzed.
The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research characterizes the sensitivity of each parameter of the hydraulic drive unit, identifying the main and secondary performance-affecting parameters under different working conditions, and provides a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
Two-Player 2 × 2 Quantum Game in Spin System
NASA Astrophysics Data System (ADS)
Huang, Zhiming; Situ, Haozhen
2017-05-01
In this work, we study the payoffs of quantum Samaritan's dilemma played with the thermal entangled state of the XXZ spin model in the presence of Dzyaloshinskii-Moriya (DM) interaction. We discuss the effects of the anisotropy parameter, the strength of the DM interaction, and temperature on quantum Samaritan's dilemma. It is shown that although increasing the DM interaction and the anisotropy parameter generates entanglement, players' payoffs are not decided by entanglement alone and depend on other game components, such as strategy and payoff measurement. In general, entanglement and Alice's payoff evolve to a relatively stable value with the anisotropy parameter and to a fixed value with the DM interaction strength, while Bob's payoff changes in the reverse direction. It is noted that the increase in Alice's payoff compensates for the loss of Bob's payoff. For different strategies, payoffs change differently with temperature. Our results and discussions can be analogously generalized to other 2 × 2 quantum static games in various spin models.
A study of model parameters associated with the urban climate using HCMM data
NASA Technical Reports Server (NTRS)
1981-01-01
Infrared and visible data from the Heat Capacity Mapping Mission (HCMM) satellite were used to study the intensity of the urban heat island, commonly defined as the temperature difference between the center of the city and the surrounding suburban and rural regions, as a function of changes in season and meteorological conditions, in order to derive various parameters which may be used in numerical models of urban climate. The analysis focused on the city of St. Louis, and in situ data from St. Louis were combined with HCMM data to derive the various parameters. The HCMM data were mapped onto a Mercator projection map of the city, and ground temperatures were established using data corrected for the effects of atmospheric absorption. The corrected and uncorrected HCMM data were compared to determine the magnitude of the error induced by atmospheric effects.
Bayesian model comparison and parameter inference in systems biology using nested sampling.
Pullen, Nick; Morris, Richard J
2014-01-01
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
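A minimal sketch of nested sampling on a toy one-dimensional problem may help illustrate the evidence computation described above. The Gaussian likelihood, the uniform prior, and rejection sampling from the constrained prior (real implementations use MCMC or slice sampling instead) are all illustrative assumptions, not the setup of the study.

```python
import numpy as np

def log_likelihood(theta, sigma=0.1):
    # Toy Gaussian likelihood centred at 0 (stands in for a model's fit to data)
    return -0.5 * (theta / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def nested_sampling_log_evidence(n_live=100, n_iter=600, seed=1):
    rng = np.random.default_rng(seed)
    prior_draw = lambda: rng.uniform(-1.0, 1.0)        # uniform prior on [-1, 1]
    live = np.array([prior_draw() for _ in range(n_live)])
    logL = log_likelihood(live)
    logZ = -np.inf
    log_shell = np.log(1.0 - np.exp(-1.0 / n_live))    # log(X_0 - X_1)
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        # accumulate one shell; prior mass shrinks as X_i ~ exp(-i / n_live)
        logZ = np.logaddexp(logZ, logL[worst] + log_shell - i / n_live)
        while True:   # rejection sampling above the likelihood floor (simple, not fast)
            theta = prior_draw()
            if log_likelihood(theta) > logL[worst]:
                break
        live[worst] = theta
        logL[worst] = log_likelihood(theta)
    # add the prior mass still covered by the surviving live points
    return np.logaddexp(logZ, np.log(np.mean(np.exp(logL))) - n_iter / n_live)

# Analytic evidence here is ~0.5: the Gaussian integrates to ~1 against
# a prior density of 1/2 on [-1, 1]
Z = float(np.exp(nested_sampling_log_evidence()))
```

Running the same routine with two competing likelihoods and taking the ratio of the resulting evidences gives the Bayes factor used for model comparison.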
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2015-07-01
We have explored the hidden symmetries of a generic four-parameter nearest-neighbor spin model, allowed in honeycomb-lattice compounds under trigonal compression. Our method utilizes a systematic algorithm to identify all dual transformations of the model that map the Hamiltonian on itself, changing the parameters and providing exact links between different points in its parameter space. We have found the complete set of points of hidden SU(2) symmetry at which a seemingly highly anisotropic model can be mapped back on the Heisenberg model and inherits therefore its properties such as the presence of gapless Goldstone modes. The procedure used to search for the hidden symmetries is quite general and may be extended to other bond-anisotropic spin models and other lattices, such as the triangular, kagome, hyperhoneycomb, or harmonic-honeycomb lattices. We apply our findings to the honeycomb-lattice iridates Na2IrO3 and Li2IrO3 , and illustrate how they help to identify plausible values of the model parameters that are compatible with the available experimental data.
Measuring Geophysical Parameters of the Greenland Ice Sheet using Airborne Radar Altimetry
NASA Technical Reports Server (NTRS)
Ferraro, Ellen J.; Swift, Calvin T.
1995-01-01
This paper presents radar-altimeter scattering models for each of the diagenetic zones of the Greenland ice sheet. AAFE radar-altimeter waveforms obtained during the 1991 and 1993 NASA multi-sensor airborne altimetry experiments over Greenland reveal that the Ku-band return pulse changes significantly with the different diagenetic zones. These changes are due to varying amounts of surface and volume scattering in the return waveform. In the ablation and soaked zones, where surface scattering dominates the AAFE return, geophysical parameters such as rms surface height and rms surface slope are obtained by fitting the waveforms to a surface-scattering model. Waveforms from the percolation zone show that the sub-surface ice features have a much more significant effect on the return pulse than the surrounding snowpack. Model percolation waveforms, created using a combined surface- and volume-scattering model and an ice-feature distribution obtained during the 1993 field season, agree well with actual AAFE waveforms taken in the same time period. Using a combined surface- and volume-scattering model for the dry-snow-zone return waveforms, the rms surface height and slope and the attenuation coefficient of the snowpack are obtained. These scattering models not only allow geophysical parameters of the ice sheet to be measured but also help in the understanding of satellite radar-altimeter data.
Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea
NASA Astrophysics Data System (ADS)
Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li
2017-09-01
Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state-namely an increase in sea surface temperature (SST) and loss of sea ice-and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
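The GA calibration step can be illustrated with a toy real-coded genetic algorithm. The selection, crossover, and mutation settings below are assumptions for illustration, not the configuration used in the study, and the exponential test function merely stands in for the DMS model.

```python
import math
import random

def ga_calibrate(loss, bounds, pop_size=40, generations=60, seed=0):
    """Toy real-coded GA: elitist survival, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    def random_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=loss)
        elite = population[: pop_size // 2]        # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = []
            for x, y, (lo, hi) in zip(a, b, bounds):
                c = 0.5 * (x + y) + rng.gauss(0.0, 0.05 * (hi - lo))  # blend + mutate
                child.append(min(max(c, lo), hi))
            children.append(child)
        population = elite + children
    return min(population, key=loss)

# Toy target: recover (a, b) of y = a * exp(b * t) from clean synthetic data
ts = [i / 10.0 for i in range(11)]
data = [2.0 * math.exp(0.5 * t) for t in ts]
def sse(params):
    a, b = params
    return sum((a * math.exp(b * t) - y) ** 2 for t, y in zip(ts, data))
best = ga_calibrate(sse, [(0.0, 5.0), (0.0, 1.0)])
```

In the study the loss would instead compare simulated DMS concentrations against observations, with bounds set by the plausible ranges of the model parameters.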
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors, using measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of a transistor are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltage, and to extend the improved DC model to account for the soft breakdown and kink effects found in some variants of HEMT devices. A hybrid Newton's-genetic algorithm is used to determine the unknown parameters of the developed model. In addition to accurate modelling of the transistor's DC characteristics, the complete large-signal model is built using multi-bias s-parameter measurements, employing a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) together with a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
Modeling parameters that characterize pacing of elite female 800-m freestyle swimmers.
Lipińska, Patrycja; Allen, Sian V; Hopkins, Will G
2016-01-01
Pacing offers a potential avenue for enhancement of endurance performance. We report here a novel method for characterizing pacing in 800-m freestyle swimming. Websites provided 50-m lap and race times for 192 swims of 20 elite female swimmers between 2000 and 2013. Pacing for each swim was characterized with five parameters derived from a linear model: linear and quadratic coefficients for effect of lap number, reductions from predicted time for first and last laps, and lap-time variability (standard error of the estimate). Race-to-race consistency of the parameters was expressed as intraclass correlation coefficients (ICCs). The average swim was a shallow negative quadratic with slowest time in the eleventh lap. First and last laps were faster by 6.4% and 3.6%, and lap-time variability was ±0.64%. Consistency between swimmers ranged from low-moderate for the linear and quadratic parameters (ICC = 0.29 and 0.36) to high for the last-lap parameter (ICC = 0.62), while consistency for race time was very high (ICC = 0.80). Only ~15% of swimmers had enough swims (~15 or more) to provide reasonable evidence of optimum parameter values in plots of race time vs. each parameter. The modest consistency of most of the pacing parameters and lack of relationships between parameters and performance suggest that swimmers usually compensated for changes in one parameter with changes in another. In conclusion, pacing in 800-m elite female swimmers can be characterized with five parameters, but identifying an optimal pacing profile is generally impractical.
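The five-parameter linear model can be sketched as an ordinary least-squares fit. The exact design matrix below (centred lap number, first- and last-lap dummy variables) is an assumption consistent with, but not stated in, the abstract.

```python
import numpy as np

def pacing_parameters(lap_times):
    """Fit the five pacing parameters: linear and quadratic lap-number
    effects, first- and last-lap deviations, and lap-time variability (SEE)."""
    y = np.asarray(lap_times, dtype=float)
    n = len(y)
    lap = np.arange(1, n + 1, dtype=float)
    lapc = lap - lap.mean()                    # centred lap number
    X = np.column_stack([
        np.ones(n),                            # overall mean lap time
        lapc,                                  # linear lap-number effect
        lapc ** 2,                             # quadratic lap-number effect
        (lap == 1).astype(float),              # first-lap deviation
        (lap == n).astype(float),              # last-lap deviation
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    see = float(np.sqrt(resid @ resid / (n - X.shape[1])))  # lap-time variability
    return beta, see

# Synthetic 16-lap swim: shallow quadratic pacing plus faster first and last laps
lap = np.arange(1, 17, dtype=float)
lapc = lap - lap.mean()
laps = 32.0 + 0.05 * lapc + 0.01 * lapc ** 2
laps[0] -= 2.0         # fast first lap
laps[-1] -= 1.0        # fast last lap
beta, see = pacing_parameters(laps)
```

With clean synthetic data the fit recovers the generating coefficients exactly; on real 50-m splits the residual standard error becomes the fifth (variability) parameter.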
NASA Technical Reports Server (NTRS)
Tedesco, Marco; Kim, Edward J.
2005-01-01
In this paper, GA-based techniques are used to invert the equations of an electromagnetic model based on Dense Medium Radiative Transfer Theory (DMRT) under the Quasi-Crystalline Approximation with Coherent Potential to retrieve snow depth, mean grain size, and fractional volume from microwave brightness temperatures. The technique is initially tested on both noisy and noise-free simulated data. During this phase, different configurations of the genetic algorithm parameters are considered in order to quantify how changing them affects the algorithm's performance. A configuration of GA parameters is then selected, and the algorithm is applied to experimental data acquired during the NASA Cold Land Processes Experiment. Snow parameters retrieved with the GA-DMRT technique are then compared with snow parameters measured in the field.
NASA Astrophysics Data System (ADS)
Kyker-Snowman, E.; Wieder, W. R.; Grandy, S.
2017-12-01
Microbial-explicit models of soil carbon (C) and nitrogen (N) cycling have improved upon simulations of C and N stocks and flows at site-to-global scales relative to traditional first-order linear models. However, the response of microbial-explicit soil models to global change factors depends upon which parameters and processes in a model are altered by those factors. We used the MIcrobial-MIneral Carbon Stabilization Model with coupled N cycling (MIMICS-CN) to compare modeled responses to changes in temperature and plant inputs at two previously-modeled sites (Harvard Forest and Kellogg Biological Station). We spun the model up to equilibrium, applied each perturbation, and evaluated 15 years of post-perturbation C and N pools and fluxes. To model the effect of increasing temperatures, we independently examined the impact of decreasing microbial C use efficiency (CUE), increasing the rate of microbial turnover, and increasing Michaelis-Menten kinetic rates of litter decomposition, plus several combinations of the three. For plant inputs, we ran simulations with stepwise increases in metabolic litter, structural litter, whole litter (structural and metabolic), or labile soil C. The cumulative change in soil C or N varied in both sign and magnitude across simulations. For example, increasing kinetic rates of litter decomposition resulted in net releases of both C and N from soil pools, while decreasing CUE produced short-term increases in respiration but long-term accumulation of C in litter pools and shifts in soil C:N as microbial demand for C increased and biomass declined. Given that soil N cycling constrains the response of plant productivity to global change and that soils generate a large amount of uncertainty in current earth system models, microbial-explicit models are a critical opportunity to advance the modeled representation of soils. 
However, microbial-explicit models must be improved by experiments to isolate the physiological and stoichiometric parameters of soil microbes that shift under global change.
Modeling and analysis of the solar concentrator in photovoltaic systems
NASA Astrophysics Data System (ADS)
Mroczka, Janusz; Plachta, Kamil
2015-06-01
The paper presents a Λ-ridge and V-trough concentrator system with a low concentration ratio. Calculations and simulations were made in a program created by the authors. The simulation results allow choosing the best parameters of the photovoltaic system: the opening angle between the surface of the photovoltaic module and the mirrors, the resolution of the tracking system, and the material for construction of the concentrator mirrors. The research shows the effect of each of these parameters on the efficiency of the photovoltaic system, and a method of surface modeling using the BRDF function. The parameters of the concentrator surface (e.g. surface roughness) were calculated using a new algorithm based on the BRDF function, which uses a combination of the Torrance-Sparrow and HTSG models. The simulation shows the change in voltage, current, and output power depending on the system parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ning, E-mail: nl4g12@soton.ac.uk; He, Miao; Alghamdi, Hisham
2015-08-14
Trapping parameters can be considered one of the important attributes for describing polymeric materials. In the present paper, a more accurate charge dynamics model has been developed, which takes charge dynamics in both the volts-on and volts-off stages into account in the simulation. By fitting the measured charge data with the highest R-square value, the trapping parameters and injection barriers of both normal and aged low-density polyethylene samples were estimated using the improved model. The results show that, after a long-term ageing process, the injection barriers of both electrons and holes are lowered, the overall trap depth is shallower, and the trap density becomes much greater. Additionally, the parameter changes for electrons are more sensitive than those for holes after ageing.
Integrated Model of the Eye/Optic Nerve Head Biomechanical Environment
NASA Technical Reports Server (NTRS)
Ethier, C. R.; Feola, A.; Myers, J. G.; Nelson, E.; Raykin, J.; Samuels, B.
2017-01-01
Visual Impairment and Intracranial Pressure (VIIP) syndrome is a concern for long-duration space flight. Previously, it has been suggested that ocular changes observed in VIIP syndrome are related to the cephalad fluid shift that results in altered fluid pressures [1]. We are investigating the impact of changes in intracranial pressure (ICP) using a combination of numerical models, which simulate the effects of various environment conditions, including finite element (FE) models of the posterior eye. The specific interest is to understand how altered pressures due to gravitational changes affect the biomechanical environment of tissues of the posterior eye and optic nerve sheath. METHODS: Additional description of the numerical modeling is provided in the IWS abstract by Nelson et al. In brief, to simulate the effects of a cephalad fluid shift on the cardiovascular and ocular systems, we utilized a lumped-parameter compartment model of these systems. The outputs of this lumped-parameter model then inform boundary conditions (pressures) for a finite element model of the optic nerve head (Figure 1). As an example, we show here a simulation of postural change from supine to 15 degree head-down tilt (HDT), with primary outcomes being the predicted change in strains at the optic nerve head (ONH) region, specifically in the lamina cribrosa (LC), retrolaminar optic nerve, and prelaminar neural tissue (PLNT). The strain field can be decomposed into three orthogonal components, denoted as the first, second and third principal strains. We compare the peak tensile (first principal) and compressive (third principal) strains, since elevated strain alters cell phenotype and induces tissue remodeling. RESULTS AND CONCLUSIONS: Our lumped-parameter model predicted an IOP increase of c. 7 mmHg after 21 minutes of 15 degree HDT, which agreed with previous reports of IOP in HDT [1]. 
The corresponding FEM simulations predicted relative increases in the magnitudes of the peak tensile and compressive strains in the lamina cribrosa of 42% and 43%, respectively (Fig. 2). The corresponding changes in the optic nerve strains were 17% and 39%, while in the PLNT they were 47% and 43%. Relative elevations in peak strains of these magnitudes may induce a phenotypic response in resident mechano-responsive cells [2]. This approach may be expanded to investigate other environmental changes (e.g. parabolic flight). Through our VIIP SCHOLAR project, we will validate and improve these integrated models by measuring patient-specific changes in optic nerve sheath geometry in patients with idiopathic intracranial hypertension before and after lumbar puncture and CSF removal.
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
High scale flavor alignment in two-Higgs doublet models and its phenomenology
Gori, Stefania; Haber, Howard E.; Santos, Edward
2017-06-21
The most general two-Higgs doublet model (2HDM) includes potentially large sources of flavor changing neutral currents (FCNCs) that must be suppressed in order to achieve a phenomenologically viable model. The flavor alignment ansatz postulates that all Yukawa coupling matrices are diagonal when expressed in the basis of mass-eigenstate fermion fields, in which case tree-level Higgs-mediated FCNCs are eliminated. In this work, we explore models with the flavor alignment condition imposed at a very high energy scale, which results in the generation of Higgs-mediated FCNCs via renormalization group running from the high energy scale to the electroweak scale. Using the current experimental bounds on flavor changing observables, constraints are derived on the aligned 2HDM parameter space. In the favored parameter region, we analyze the implications for Higgs boson phenomenology.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
Adapting water treatment design and operations to the impacts of global climate change
NASA Astrophysics Data System (ADS)
Clark, Robert M.; Li, Zhiwei; Buchberger, Steven G.
2011-12-01
It is anticipated that global climate change will adversely impact source water quality in many areas of the United States and will therefore potentially impact the design and operation of current and future water treatment systems. The USEPA has initiated an effort called the Water Resources Adaptation Program (WRAP), which is intended to develop tools and techniques that can assess the impact of global climate change on urban drinking water and wastewater infrastructure. A three-step approach for assessing climate change impacts on water treatment operation and design is being pursued in this effort. The first step is the stochastic characterization of source water quality, the second step is the application of the USEPA Water Treatment Plant (WTP) model, and the third step is the application of cost algorithms to provide a metric that can be used to assess the cost impact of climate change. A model has been validated using data collected from Cincinnati's Richard Miller Water Treatment Plant for the USEPA Information Collection Rule (ICR) database. An analysis of the water treatment processes in response to assumed perturbations in raw water quality identified TOC, pH, and bromide as the three most important parameters affecting performance of the Miller WTP. The Miller Plant was simulated using the EPA WTP model to examine the impact of these parameters on selected regulated water quality parameters. Uncertainty in influent water quality was analyzed to estimate the risk of violating drinking water maximum contaminant levels (MCLs). Water quality changes in the Ohio River were projected for 2050 using Monte Carlo simulation, and the WTP model was used to evaluate the effects of water quality changes on design and operation. Results indicate that the existing Miller WTP might not meet Safe Drinking Water Act MCL requirements for certain extreme future conditions.
However, it was found that the risk of MCL violations under future conditions could be controlled by enhancing existing WTP design and operation or by process retrofitting and modification.
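The Monte Carlo risk estimate described above can be sketched generically. The influent distribution, the linear treatment stand-in, and the MCL value below are illustrative assumptions, not the EPA WTP model.

```python
import random

def mcl_violation_risk(treat, sample_influent, mcl, n_trials=10000, seed=42):
    """Draw influent quality, run it through a treatment model, count exceedances."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_trials) if treat(sample_influent(rng)) > mcl)
    return exceed / n_trials

# Illustrative stand-ins: influent TOC (mg/L) ~ N(4, 1), and a finished-water
# DBP proxy that rises linearly with TOC; a violation occurs above 0.080 mg/L
influent_toc = lambda rng: rng.gauss(4.0, 1.0)
dbp_proxy = lambda toc: 0.016 * max(toc, 0.0)
risk = mcl_violation_risk(dbp_proxy, influent_toc, mcl=0.080)
# With these numbers the true exceedance probability is P(TOC > 5) = 1 - Phi(1) ~ 0.16
```

In the study, the sampled influent quality would come from the projected 2050 Ohio River distributions and the treatment function from the full WTP model.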
Repetition priming in selective attention: A TVA analysis.
Ásgeirsson, Árni Gunnar; Kristjánsson, Árni; Bundesen, Claus
2015-09-01
Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief-exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task in which the target color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4-15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, where VSTM capacity and spatial biases of attention are ignored, or more fine-grained by a 10-parameter model that takes these aspects into account.
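The core idea, that repetition priming acts only through the selectivity parameter α, can be illustrated with a minimal exponential-race sketch. This is a simplification of TVA that ignores exposure duration and VSTM capacity, so it is a stand-in rather than the fitted models of the study.

```python
def target_first_prob(n_distractors, alpha, w_target=1.0, w_distractor=1.0):
    # Exponential race: each object is encoded at a rate proportional to its
    # attentional weight; the target is selected first with probability equal
    # to its share of the total rate. alpha < 1 means distractors are
    # down-weighted, i.e. better selectivity.
    rates = [w_target] + [alpha * w_distractor] * n_distractors
    return rates[0] / sum(rates)

# Repetition priming modeled purely as improved selectivity (smaller alpha)
p_swap = target_first_prob(3, alpha=1.0)     # no down-weighting of distractors
p_repeat = target_first_prob(3, alpha=0.5)   # repeated target colour
p_alone = target_first_prob(0, alpha=0.5)    # no distractors present
```

Note that with no distractors the prediction is unaffected by α, matching the finding that repetition helped only when the target had to be selected among distractors.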
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-11-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
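A Tikhonov inversion with an L-curve scan can be sketched for a generic linear forward model. The zeroth-order penalty and the toy system below are illustrative assumptions, not the shallow-shelf forward model or regularization operator of the study.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Minimize ||Ax - b||^2 + lam^2 * ||x||^2 via the normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def l_curve_points(A, b, lambdas):
    # (misfit, solution norm) pairs; the corner of this curve guides the
    # choice of regularization parameter
    pts = []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        pts.append((float(np.linalg.norm(A @ x - b)), float(np.linalg.norm(x))))
    return pts

# Toy linear system standing in for the surface-velocity forward model
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
pts = l_curve_points(A, b, [1e-3, 1e-1, 1.0, 10.0])
```

As the regularization parameter grows, the data misfit increases monotonically while the solution norm shrinks; the trade-off corner is the usual L-curve choice.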
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models, leading to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological science and has not been entirely solved, owing to lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecast. Hence, to handle these errors, several methods have been proposed to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of errors common in hydrological modelling: timing, shape, and volume errors. The new lumped ERM model is selected to evaluate whether its parameters can be updated to cope with these errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Research on the Dynamic Hysteresis Loop Model of the Residence Times Difference (RTD)-Fluxgate
Wang, Yanzhang; Wu, Shujun; Zhou, Zhijian; Cheng, Defu; Pang, Na; Wan, Yunxia
2013-01-01
Based on the core hysteresis features, the RTD-fluxgate core is repeatedly driven into saturation by the excitation field during operation. In fluxgate simulation, an accurate characteristic model of the core yields precise simulation results. Because the shape of the ideal hysteresis loop model is fixed, it cannot accurately reflect the actual dynamic behaviour of the hysteresis loop. To improve fluxgate simulation accuracy, a dynamic hysteresis loop model whose parameters have actual physical meanings is proposed, based on how the permeability parameter changes while the fluxgate is working. Compared with the ideal hysteresis loop model, this model accounts for the dynamic features of the hysteresis loop, which makes the simulation results closer to the actual output. In addition, the model can describe the hysteresis loops of other magnetic materials; an amorphous magnetic material is used as an example in this manuscript. The model has been validated by comparing experimental output responses with fitted results from the model. PMID:24002230
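For contrast with a fixed-shape loop, a static tanh model with branch switching on the sign of dH/dt can be sketched. This is an illustrative stand-in for a hysteresis loop, not the paper's dynamic permeability-based model; the saturation, coercivity, and steepness values are assumed.

```python
import numpy as np

def tanh_hysteresis_loop(h, bs=1.0, hc=0.2, k=4.0):
    """Two-branch tanh loop: bs = saturation flux density, hc = coercivity,
    k = steepness. The active branch depends on whether H is rising or falling."""
    b = np.empty_like(h)
    rising = np.gradient(h) >= 0
    b[rising] = bs * np.tanh(k * (h[rising] - hc))    # ascending branch
    b[~rising] = bs * np.tanh(k * (h[~rising] + hc))  # descending branch
    return b

# One period of sinusoidal excitation traces a closed loop with an opening
# of width ~2*hc around H = 0 (the remanence gap)
h = np.sin(np.linspace(0.0, 2.0 * np.pi, 400))
b = tanh_hysteresis_loop(h)
```

A dynamic model such as the paper's would additionally let the effective permeability (here frozen into bs, hc, and k) vary with the excitation state.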
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are modeled as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed, with a Particle Swarm Optimization algorithm used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia-level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method's implementation in the MATLAB technical computing language is provided online.
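The PSO step can be illustrated with a minimal global-best particle swarm on a stand-in objective. The inertia and acceleration constants below are common textbook values, not the settings of the study, and the sphere function merely stands in for the SEP-versus-chirp-model misfit.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO with bound clipping."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # inertia 0.72 and acceleration 1.49 are standard constriction-style values
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better] = x[better]
        pbest_f[better] = fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

# Toy objective with a known minimum at (1, 1, 1)
sphere = lambda p: float(np.sum((p - 1.0) ** 2))
best, best_val = pso_minimize(sphere, [(-5.0, 5.0)] * 3)
```

For SEP fitting, the objective would instead be the squared error between a recorded waveform and the chirp model evaluated at the candidate phase-polynomial coefficients.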
The overconstraint of response time models: rethinking the scaling problem.
Donkin, Chris; Brown, Scott D; Heathcote, Andrew
2009-12-01
Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all have the same "scaling" property--that a subset of their parameters can be multiplied by the same amount without changing their predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit--and therefore unexamined--psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than can their overconstrained counterparts, even when increased model complexity is taken into account.
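The scaling property can be verified directly in the simplest diffusion model, whose closed-form accuracy depends on the parameters only through the ratio va/s², so multiplying drift v, boundary separation a, and noise s by the same constant leaves the prediction unchanged. The full models discussed in the article are richer, but share this property.

```python
import math

def wiener_accuracy(drift, threshold, noise):
    # Closed-form accuracy for an unbiased Wiener diffusion that starts midway
    # between 0 and `threshold`; it depends only on drift * threshold / noise**2
    return 1.0 / (1.0 + math.exp(-drift * threshold / noise ** 2))

# Multiplying all three parameters by the same constant changes nothing,
# which is why one parameter must be fixed before the rest can be estimated
p_original = wiener_accuracy(0.2, 1.0, 1.0)
p_rescaled = wiener_accuracy(0.6, 3.0, 3.0)   # every parameter tripled
```

The article's point is about which parameter gets fixed: fixing the same one across all experimental conditions can overconstrain the model relative to a minimal scaling constraint.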
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here, a simple canonical model (a PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on reducing the loss functional are thus identified. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
NASA Astrophysics Data System (ADS)
Pechlivanidis, Ilias; McIntyre, Neil; Wheater, Howard
2017-04-01
Rainfall, one of the main inputs in hydrological modeling, is a highly heterogeneous process over a wide range of spatial scales, and hence ignoring spatial rainfall information can affect the simulated streamflow. Calibration of hydrological model parameters is rarely a straightforward task due to parameter equifinality and the tendency of parameters to compensate for other sources of uncertainty, i.e. model structure and forcing input. Here, we analyse the significance of the spatial variability of rainfall for streamflow as a function of catchment scale, catchment type, and antecedent conditions using the continuous-time, semi-distributed PDM hydrological model at the Upper Lee catchment, UK. The impact of catchment scale and type is assessed using 11 nested catchments ranging in scale from 25 to 1040 km2, and further assessed by artificially changing the catchment characteristics and translating these to model parameters, with uncertainty, using model regionalisation. Synthetic rainfall events are introduced to directly relate the change in simulated streamflow to the spatial variability of rainfall. Overall, we conclude that antecedent catchment wetness and catchment type play an important role in controlling the significance of the spatial distribution of rainfall for streamflow. Results show a relationship between hydrograph characteristics (streamflow peak and volume) and the degree of spatial variability of rainfall for the impermeable catchments under dry antecedent conditions, although this decreases at larger scales; this sensitivity is significantly weakened under wet antecedent conditions. Although there is an indication that the impact of spatial rainfall on streamflow varies as a function of catchment scale, the variability of antecedent conditions between the synthetic catchments seems to mask this significance.
Finally, hydrograph responses to different spatial patterns of rainfall depend on the assumptions used for model parameter estimation and on the spatial variation in parameters, indicating the need for an uncertainty framework in such investigations.
NASA Astrophysics Data System (ADS)
Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas
2015-04-01
Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only a few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, while keeping the relative differences obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff totals and sub-flows (at downstream and internal gauging stations).
For the distributed models, additional validation on spatial results was done for the groundwater head values at observation wells. It was also checked whether the structure of the lumped model had to be adjusted so that it could produce results as accurate as, or close to, those of the spatially distributed models, regardless of the number of parameters and implemented physical processes. The concept has been implemented in a PCRaster - Python platform and tested for two Belgian case studies (catchments of the rivers Dijle and Grote Nete). So far, use is made of existing model structures (NAM, PDM, VHM and HBV). Acknowledgement: These results were obtained within the scope of research activities for the Flemish Environment Agency (VMM) - division Operational Water Management on "Next Generation hydrological modeling", in cooperation with IMDC consultants, and for Flanders Hydraulics Research (Waterbouwkundig Laboratorium) on "Effect of climate change on the hydrological regime of navigable watercourses in Belgium".
Multidimensional extended spatial evolutionary games.
Krześlak, Michał; Świerniak, Andrzej
2016-02-01
The goal of this paper is to study the classical hawk-dove model using mixed spatial evolutionary games (MSEG). In these games, played on a lattice, an additional spatial layer is introduced for dependence on more complex parameters and simulation of changes in the environment. Furthermore, diverse polymorphic equilibrium points dependent on cell reproduction, model parameters, and their simulation are discussed. Our analysis demonstrates the sensitivity properties of MSEGs and possibilities for further development. We discuss applications of MSEGs, particularly algorithms for modelling cell interactions during the development of tumours. Copyright © 2015 Elsevier Ltd. All rights reserved.
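For reference, the non-spatial hawk-dove game that MSEGs extend has the well-known mixed equilibrium p* = V/C (hawk share for resource value V and fight cost C > V). A minimal replicator-dynamics sketch with illustrative payoffs (no lattice or spatial layer) recovers it:

```python
V, C = 2.0, 4.0   # resource value and cost of an escalated fight (V < C)

def payoffs(p):
    # expected payoffs for hawk and dove against a population with hawk share p
    w_hawk = p * (V - C) / 2 + (1 - p) * V
    w_dove = (1 - p) * V / 2
    return w_hawk, w_dove

p = 0.1
for _ in range(2000):                 # discrete replicator update
    wh, wd = payoffs(p)
    wbar = p * wh + (1 - p) * wd      # mean population payoff
    p *= 1 + 0.1 * (wh - wbar)        # damped step keeps p inside [0, 1]
# p converges to the mixed equilibrium p* = V/C
```

The spatial and multidimensional variants in the paper perturb exactly this kind of equilibrium through local interactions on the lattice.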
Overview of Icing Physics Relevant to Scaling
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be made by viewing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.
Rhodes, Samhita S; Camara, Amadou KS; Ropella, Kristina M; Audi, Said H; Riess, Matthias L; Pagel, Paul S; Stowe, David F
2006-01-01
Background The phase-space relationship between simultaneously measured myoplasmic [Ca2+] and isovolumetric left ventricular pressure (LVP) in guinea pig intact hearts is altered by ischemic and inotropic interventions. Our objective was to mathematically model this phase-space relationship between [Ca2+] and LVP with a focus on the changes in cross-bridge kinetics and myofilament Ca2+ sensitivity responsible for alterations in Ca2+-contraction coupling due to inotropic drugs in the presence and absence of ischemia reperfusion (IR) injury. Methods We used a four state computational model to predict LVP using experimentally measured, averaged myoplasmic [Ca2+] transients from unpaced, isolated guinea pig hearts as the model input. Values of model parameters were estimated by minimizing the error between experimentally measured LVP and model-predicted LVP. Results We found that IR injury resulted in reduced myofilament Ca2+ sensitivity, and decreased cross-bridge association and dissociation rates. Dopamine (8 μM) reduced myofilament Ca2+ sensitivity before, but enhanced it after ischemia while improving cross-bridge kinetics before and after IR injury. Dobutamine (4 μM) reduced myofilament Ca2+ sensitivity while improving cross-bridge kinetics before and after ischemia. Digoxin (1 μM) increased myofilament Ca2+ sensitivity and cross-bridge kinetics after but not before ischemia. Levosimendan (1 μM) enhanced myofilament Ca2+ affinity and cross-bridge kinetics only after ischemia. Conclusion Estimated model parameters reveal mechanistic changes in Ca2+-contraction coupling due to IR injury, specifically the inefficient utilization of Ca2+ for contractile function with diastolic contracture (increase in resting diastolic LVP). The model parameters also reveal drug-induced improvements in Ca2+-contraction coupling before and after IR injury. PMID:16512898
Optimal pricing and marketing planning for deteriorating items.
Moosavi Tabatabaei, Seyed Reza; Sadjadi, Seyed Jafar; Makui, Ahmad
2017-01-01
Optimal pricing and marketing planning plays an essential role in production decisions on deteriorating items. This paper presents a mathematical model for a three-level supply chain, which includes one producer, one distributor and one retailer. The proposed study considers the production of a deteriorating item where demand is influenced by price, marketing expenditure, quality of product and after-sales service expenditures. The proposed model is formulated as a geometric programming problem with 5 degrees of difficulty and the problem is solved using recent advances in optimization techniques. The study is supported by several numerical examples and sensitivity analysis is performed to analyze the effects of changes in different parameters on the optimal solution. The preliminary results indicate that changes in the parameters influencing demand, inventory holding, deterioration and set-up costs significantly affect total revenue.
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1991-01-01
A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.
Uncertainty in simulating wheat yields under climate change
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.
2013-09-01
Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
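Stripped to a toy problem, parameter sensitivity can be approximated without any second-order Lagrangian information by re-solving the optimization at perturbed parameter values and differencing. The one-variable example below is hypothetical (not the RQP method itself): for f(x; p) = (x - p)^2 + p*x, the optimum is x*(p) = p/2, so the exact sensitivity is dx*/dp = 1/2.

```python
def argmin_f(p, x=0.0):
    # minimize f(x; p) = (x - p)**2 + p*x by simple gradient descent
    for _ in range(500):
        grad = 2 * (x - p) + p
        x -= 0.1 * grad
    return x

def param_sensitivity(p, h=1e-4):
    # central finite difference of the optimizer x*(p) with respect to the
    # fixed parameter p; no Hessian of the Lagrangian is required
    return (argmin_f(p + h) - argmin_f(p - h)) / (2 * h)

s = param_sensitivity(2.0)   # analytic value is 0.5
```

The cost is extra optimizations rather than second-order information, which is the trade-off the article's proposed algorithm aims to avoid on both counts.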
Using sobol sequences for planning computer experiments
NASA Astrophysics Data System (ADS)
Statnikov, I. N.; Firsov, G. I.
2017-12-01
We discuss the use of the Planning LP-search (PLP-search) method for problems of the multicriteria synthesis of dynamic systems. On the basis of experiments with a simulation model, the method not only allows the parameter space to be explored within specified ranges of parameter variation, but, owing to the specially randomized planning of these experiments, it also supports a quantitative statistical evaluation of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
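The flavor of such randomized experiment planning can be sketched with a Latin hypercube sample of the parameter space and a crude main-effect (variance-based) influence estimate. The toy response and parameter names are hypothetical, not the PLP-search algorithm itself.

```python
import random

def lhs(n, rng):
    """Latin hypercube sample of [0, 1): one point per stratum, shuffled."""
    pts = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(pts)
    return pts

rng = random.Random(42)
n = 2000
a, b = lhs(n, rng), lhs(n, rng)               # two varied parameters
y = [10 * ai + bi for ai, bi in zip(a, b)]    # toy response of the simulation model

def main_effect_share(x, y, bins=20):
    """Crude first-order variance share Var(E[y|x]) / Var(y) by binning x."""
    mean_y = sum(y) / len(y)
    var_y = sum((v - mean_y) ** 2 for v in y) / len(y)
    groups = {}
    for xi, yi in zip(x, y):
        groups.setdefault(min(int(xi * bins), bins - 1), []).append(yi)
    var_cond = sum(len(g) * (sum(g) / len(g) - mean_y) ** 2
                   for g in groups.values()) / len(y)
    return var_cond / var_y
```

With these coefficients, parameter `a` should dominate the response variance and `b` should contribute only a few percent, which is the kind of quantitative influence ranking the method produces.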
Feng, Xiaohui; Uriarte, María; González, Grizelle; Reed, Sasha; Thompson, Jill; Zimmerman, Jess K; Murphy, Lora
2018-01-01
Tropical forests play a critical role in carbon and water cycles at a global scale. Rapid climate change is anticipated in tropical regions over the coming decades and, under a warmer and drier climate, tropical forests are likely to be net sources of carbon rather than sinks. However, our understanding of tropical forest response and feedback to climate change is very limited. Efforts to model climate change impacts on carbon fluxes in tropical forests have not reached a consensus. Here, we use the Ecosystem Demography model (ED2) to predict carbon fluxes of a Puerto Rican tropical forest under realistic climate change scenarios. We parameterized ED2 with species-specific tree physiological data using the Predictive Ecosystem Analyzer workflow and projected the fate of this ecosystem under five future climate scenarios. The model successfully captured interannual variability in the dynamics of this tropical forest. Model predictions closely followed observed values across a wide range of metrics including aboveground biomass, tree diameter growth, tree size class distributions, and leaf area index. Under a future warming and drying climate scenario, the model predicted reductions in carbon storage and tree growth, together with large shifts in forest community composition and structure. Such rapid changes in climate led the forest to transition from a sink to a source of carbon. Growth respiration and root allocation parameters were responsible for the highest fraction of predictive uncertainty in modeled biomass, highlighting the need to target these processes in future data collection. Our study is the first effort to rely on Bayesian model calibration and synthesis to elucidate the key physiological parameters that drive uncertainty in tropical forest responses to climatic change. We propose a new path forward for model-data synthesis that can substantially reduce uncertainty in our ability to model tropical forest responses to future climate.
© 2017 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region.
We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverages a rich dataset of integrated ecosystem observations across a region.
Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.
2015-01-01
Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
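At its core, a state-and-transition simulation reduces to a Markov process over LULC classes with carbon bookkeeping attached to each state. The sketch below is hypothetical (the classes, annual transition probabilities, and carbon densities are illustrative, not LUCAS parameters, and it omits spatial multipliers and forest age).

```python
import random

TRANSITIONS = {                 # from-state -> {to-state: annual probability}
    "forest":      {"forest": 0.97, "agriculture": 0.02, "urban": 0.01},
    "agriculture": {"forest": 0.01, "agriculture": 0.97, "urban": 0.02},
    "urban":       {"urban": 1.0},          # urbanization treated as absorbing
}
CARBON = {"forest": 120.0, "agriculture": 40.0, "urban": 15.0}  # Mg C per ha, assumed

def step(state, rng):
    r, cum = rng.random(), 0.0
    for to, p in TRANSITIONS[state].items():
        cum += p
        if r < cum:
            return to
    return state

def simulate(cells, years, seed=1):
    rng = random.Random(seed)
    states = ["forest"] * cells
    for _ in range(years):
        states = [step(s, rng) for s in states]
    return sum(CARBON[s] for s in states) / cells   # mean carbon stock density

c0 = sum(CARBON[s] for s in ["forest"] * 1000) / 1000   # initial all-forest density
c50 = simulate(1000, 50)                                # density after 50 years
```

In the full model, spatial multipliers would modulate these transition probabilities per cell rather than applying them uniformly.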
NASA Astrophysics Data System (ADS)
Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon
2018-05-01
A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may cause distortion of the Poisson statistic of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid model obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter; i.e. whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for paralyzable and hybrid models.
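The two classical dead-time models referenced above have simple closed forms for the observed count rate m given the true rate n and dead time tau: m = n/(1 + n*tau) for the nonparalyzable case and m = n*exp(-n*tau) for the paralyzable case. A short sketch (tau is an assumed illustrative value) shows their contrasting behavior:

```python
import math

def nonparalyzable(n, tau):
    # observed rate saturates toward 1/tau as the true rate n grows
    return n / (1 + n * tau)

def paralyzable(n, tau):
    # observed rate peaks at n = 1/tau with m_max = 1/(e*tau), then falls
    return n * math.exp(-n * tau)

tau = 1e-4                                       # 100 us, an illustrative GM dead time
rates = [10 ** (k / 4) for k in range(4, 24)]    # true rates from 10 to ~5.6e5 cps
m_np = [nonparalyzable(n, tau) for n in rates]
m_p = [paralyzable(n, tau) for n in rates]
```

The maximum observed count rate of the paralyzable model is exactly where the abstract reports the statistical parameters reaching their minima, which is what makes the two models distinguishable in practice.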
Multi-frequency parameter mapping of electrical impedance scanning using two kinds of circuit model.
Liu, Ruigang; Dong, Xiuzhen; Fu, Feng; You, Fusheng; Shi, Xuetao; Ji, Zhenyu; Wang, Kan
2007-07-01
Electrical impedance scanning (EIS) is a promising bio-impedance measurement technology, particularly for aiding the diagnosis of breast cancer in women. By changing the frequency of the driving signal in turn while keeping the other conditions stable, multi-frequency measurement results for the object can be obtained. Using the least-squares method and circuit theory, the parameters of two models are deduced from data measured at multiple driving frequencies. The arcs traced by the evaluated parameters in the real and imaginary parts of the trans-admittance plane fit well the realistic data measured by our EIS device on female subjects. The Cole-Cole model in the form of admittance is closer to the measured data than the three-element model. Based on the evaluation of the multi-frequency parameters, we present parameter mapping of EIS using two kinds of circuit model: one is the three-element model in the form of admittance and the other is the Cole-Cole model in the form of admittance. Compared with classical admittance mapping at a single frequency, multi-frequency parameter mapping provides a novel way to study EIS. The multi-frequency approach provides mappings of four parameters, which helps to distinguish different diseases that share a similar characteristic in classical EIS mapping. From plots of the real and imaginary parts of the admittance, it is easy to determine whether abnormal tissue is present.
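A three-element circuit of the kind mentioned above can be sketched as a series resistance R1 feeding a parallel R2-C branch; its admittance traces an arc in the complex plane, and two of the parameters are recoverable from the frequency limits Y(0) = 1/(R1+R2) and Y(inf) = 1/R1. All component values below are assumed for illustration, not fitted EIS data.

```python
import math

R1, R2, C = 500.0, 1500.0, 1e-8            # illustrative component values

def admittance(f):
    """Admittance of R1 in series with (R2 parallel C) at frequency f (Hz)."""
    w = 2 * math.pi * f
    z_parallel = 1 / (1 / R2 + 1j * w * C)
    return 1 / (R1 + z_parallel)

freqs = [10 ** k for k in range(1, 8)]     # 10 Hz .. 10 MHz sweep (the arc)
Y = [admittance(f) for f in freqs]

# recover parameters from the limiting behavior of the arc
R1_est = 1 / admittance(1e9).real          # high-frequency limit: Y -> 1/R1
Rsum_est = 1 / admittance(1e-2).real       # low-frequency limit: Y -> 1/(R1+R2)
```

A full least-squares fit across all frequencies, as in the paper, would also recover C and tolerate measurement noise.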
The Matching Relation and Situation-Specific Bias Modulation in Professional Football Play Selection
Stilling, Stephanie T; Critchfield, Thomas S
2010-01-01
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models. PMID:21119855
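The generalized matching equation used in this analysis, log(B1/B2) = a·log(R1/R2) + log(b), is linear in log coordinates, so its sensitivity a and bias b can be fitted by ordinary least squares. The play-selection ratios below are hypothetical, not the football data from the study.

```python
import math

# hypothetical (reinforcer ratio R1/R2, behavior ratio B1/B2) pairs
data = [(0.25, 0.4), (0.5, 0.6), (1.0, 1.1), (2.0, 1.9), (4.0, 3.4)]
xs = [math.log10(r) for r, _ in data]
ys = [math.log10(b) for _, b in data]

n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
# slope = sensitivity a; intercept = log10 of the bias b
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_b = my - a * mx
```

A sensitivity below 1 (undermatching) with bias near 1 is the typical pattern; the study's situation-specific effects appear as shifts in log_b rather than in a.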
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
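For the Gaussian-mean case, the maximum-likelihood change point is the split minimizing total within-segment squared error. A minimal single-change-point sketch; the full method also handles Wiener and Ornstein-Uhlenbeck noise and the information-criterion model selection, which this omits:

```python
def best_changepoint(x):
    # Single change point in the mean of Gaussian data: choose the split
    # that minimizes total within-segment squared error (the ML estimate
    # when the variance is fixed). One step of the recursive schemes used
    # for molecular-motor or tethered-particle traces.
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(x)), key=lambda k: sse(x[:k]) + sse(x[k:]))

signal = [0.1, -0.2, 0.0, 0.1, 5.1, 4.9, 5.0, 5.2]
print(best_changepoint(signal))  # -> 4 (step between samples 3 and 4)
```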
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult because these parameters can vary depending on the initial velocity model. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source, to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained even when the number and thicknesses of the modeled layers differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. It also provides basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
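A toy version of the GA search, assuming a homogeneous half-space with straight rays rather than the paper's layered-model two-point ray tracing; station geometry, velocity, and GA settings are all illustrative:

```python
import random, math

random.seed(1)

stations = [(0.0, 0.0), (10.0, 0.0), (4.0, 0.0), (7.0, 0.0)]  # surface (x, z=0), km
v = 6.0               # constant velocity in km/s (a toy medium; the paper
                      # traces rays through a layered velocity model)
true_src = (3.0, 5.0)  # true epicentral distance and depth, km

def travel_times(src):
    return [math.hypot(src[0] - sx, src[1]) / v for sx, _ in stations]

obs = travel_times(true_src)

def misfit(src):
    # Sum of squared travel-time residuals.
    return sum((t - o) ** 2 for t, o in zip(travel_times(src), obs))

# Minimal genetic algorithm: elitist selection plus Gaussian mutation.
pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=misfit)
    elite = pop[:10]
    pop = elite + [(p[0] + random.gauss(0, 0.3), p[1] + random.gauss(0, 0.3))
                   for p in random.choices(elite, k=30)]
best = min(pop, key=misfit)
print(best)
```

The point of the paper's hybrid is that the GA searches globally, so the recovered hypocenter does not depend on a starting guess the way gradient-based location algorithms do.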
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, building on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimension grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and the resamples are not required to use the same technique. Both techniques are demonstrated in both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
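The LHS idea used above (one stratum per sample in every dimension, so the model count is independent of how many parameters are added) can be sketched in a few lines; the bounds and sample count here are arbitrary:

```python
import random

random.seed(0)

def latin_hypercube(n_samples, bounds):
    # One stratified draw per interval in each dimension, with the
    # per-dimension orderings shuffled independently. The number of
    # sample points stays n_samples no matter how many parameter
    # dimensions are added.
    dims = []
    for lo, hi in bounds:
        cells = list(range(n_samples))
        random.shuffle(cells)
        width = (hi - lo) / n_samples
        dims.append([lo + (c + random.random()) * width for c in cells])
    return list(zip(*dims))

samples = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0), (-1.0, 1.0)])
print(len(samples))  # -> 5 points, even though there are 3 parameters
```

Contrast this with an equally spaced grid (the SGBS-style approach), where the model count grows multiplicatively with each added parameter dimension.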
NASA Astrophysics Data System (ADS)
Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying
2018-04-01
Scale is the basic attribute for expressing and describing spatial entity and phenomena. It offers theoretical significance in the study of gully structure information, variable characteristics of watershed morphology, and development evolution at different scales. This research selected five different areas in China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change along with a change in the data scale. This condition was proven by selecting indices of gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of gully structure with different analysis scales was analyzed by setting the scale sequence of analysis at the extraction gully. The gully structure of the watershed changed with variations in the analysis scale, and the change rule was obvious when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure at different areas was analyzed. The gully fractal dimension showed a significant numerical difference in different areas, whereas the variation of the gully branch ratio was small. The change rule indicated that the development degree of the gully obviously varied in different regions, but the morphological structure was basically similar.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-08-01
Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
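The AUC metric used to validate the classifier committee has a simple rank interpretation that can be computed directly; the labels and scores below are toy values, not the CCSM4 ensemble:

```python
def roc_auc(labels, scores):
    # Area under the ROC curve via the rank (Mann-Whitney) formulation:
    # the probability that a randomly chosen positive (here, a crashed
    # simulation) scores higher than a randomly chosen negative.
    # Ties count half.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy validation ensemble: label 1 = simulation crashed,
# scores = hypothetical classifier outputs.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # -> 11/12, about 0.917
```

An AUC above 0.96, as reported for the SVM committee, means crashed and successful runs are almost perfectly separated by the classifier score.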
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao-Pham, Thanh-Trang; Tran, Ly-Binh-An; Colliez, Florence
Purpose: In an effort to develop noninvasive in vivo methods for mapping tumor oxygenation, magnetic resonance (MR)-derived parameters are being considered, including global R1, water R1, lipids R1, and R2*. R1 is sensitive to dissolved molecular oxygen, whereas R2* is sensitive to blood oxygenation, detecting changes in dHb. This work compares global R1, water R1, lipids R1, and R2* with pO2 assessed by electron paramagnetic resonance (EPR) oximetry, as potential markers of the outcome of radiation therapy (RT). Methods and Materials: R1, R2*, and EPR measurements were performed on rhabdomyosarcoma and 9L-glioma tumor models, under air and carbogen breathing conditions (95% O2, 5% CO2). Because the models demonstrated different radiosensitivity properties toward carbogen, a growth delay (GD) assay was performed on the rhabdomyosarcoma model and a tumor control dose 50% (TCD50) assay on the 9L-glioma model. Results: Magnetic resonance imaging oxygen-sensitive parameters detected the positive changes in oxygenation induced by carbogen within tumors. No consistent correlation was seen throughout the study between MR parameters and pO2. Global and lipids R1 were found to be correlated with pO2 in the rhabdomyosarcoma model, whereas R2* was inversely correlated with pO2 in the 9L-glioma model (P=.05 and .03). Carbogen increased the TCD50 of 9L-glioma but did not increase the GD of rhabdomyosarcoma. Only R2* was predictive (P<.05) for the curability of 9L-glioma at 40 Gy, a dose that showed a difference in response to RT between carbogen and air-breathing groups. 18F-FAZA positron emission tomography imaging has been shown to be a predictive marker under the same conditions. Conclusion: This work illustrates the sensitivity of the oxygen-sensitive R1 and R2* parameters to changes in tumor oxygenation. However, R1 parameters showed limitations in predicting the outcome of RT in the tumor models studied, whereas R2* was correlated with the outcome in the responsive model.
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
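The one-parameter, two-observation comparison can be worked through numerically. The sensitivities, error variance, and correlation below are arbitrary choices, used only to show the two variance formulas side by side:

```python
# One-parameter, two-observation linear model y = s*p + e with error
# covariance C = sigma^2 * [[1, rho], [rho, 1]]. Compare the parameter
# variance from generalized least squares (correlation included in the
# weights) with the variance of the diagonal-weighted estimate when the
# errors are in fact correlated.
sigma2, rho = 1.0, 0.8
s1, s2 = 1.0, 2.0      # scaled sensitivities (their ratio is rdss)

# GLS: var = (s^T C^-1 s)^-1, using the closed-form 2x2 inverse of C.
det = sigma2 ** 2 * (1 - rho ** 2)
ci11 = ci22 = sigma2 / det
ci12 = -rho * sigma2 / det
var_gls = 1.0 / (s1 * s1 * ci11 + 2 * s1 * s2 * ci12 + s2 * s2 * ci22)

# Diagonal (equal) weighting with correlations omitted, but true errors
# correlated: p_hat = (s^T s)^-1 s^T y  ->  var = s^T C s / (s^T s)^2.
sts = s1 * s1 + s2 * s2
var_diag = (s1 * s1 * sigma2 + 2 * s1 * s2 * rho * sigma2
            + s2 * s2 * sigma2) / sts ** 2
print(round(var_gls, 4), round(var_diag, 4))  # -> 0.2 0.328
```

For these particular values, omitting the correlation inflates the parameter variance; as the abstract notes, other combinations of rho and the sensitivity ratio can move the difference in either direction.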
Wedenberg, Minna; Lind, Bengt K; Hårdemark, Björn
2013-04-01
The biological effects of particles are often expressed in relation to that of photons through the concept of relative biological effectiveness, RBE. In proton radiotherapy, a constant RBE of 1.1 is usually assumed. However, there is experimental evidence that RBE depends on various factors. The aim of this study is to develop a model to predict the RBE based on linear energy transfer (LET), dose, and the tissue specific parameter α/β of the linear-quadratic model for the reference radiation. Moreover, the model should capture the basic features of the RBE using a minimum of assumptions, each supported by experimental data. The α and β parameters for protons were studied with respect to their dependence on LET. An RBE model was proposed where the dependence of LET is affected by the (α/β)phot ratio of photons. Published cell survival data with a range of well-defined LETs and cell types were selected for model evaluation rendering a total of 10 cell lines and 24 RBE values. A statistically significant relation was found between α for protons and LET. Moreover, the strength of that relation varied significantly with (α/β)phot. In contrast, no significant relation between β and LET was found. On the whole, the resulting RBE model provided a significantly improved fit (p-value < 0.01) to the experimental data compared to the standard constant RBE. By accounting for the α/β ratio of photons, clearer trends between RBE and LET of protons were found, and our results suggest that late responding tissues are more sensitive to LET changes than early responding tissues and most tumors. An advantage with the proposed RBE model in optimization and evaluation of treatment plans is that it only requires dose, LET, and (α/β)phot as input parameters. Hence, no proton specific biological parameters are needed.
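The model structure described above can be written down compactly: alpha for protons scales with LET (modulated by the photon alpha/beta ratio) while beta is LET-independent. This sketch follows that structure; the LET coefficient q is the published fit value as best I know it (approximately 0.434 Gy·µm/keV) and should be treated as an assumption:

```python
import math

def rbe(dose, let, ab_x, q=0.434):
    # Proton RBE from dose (Gy), LET (keV/um), and the photon alpha/beta
    # ratio ab_x (Gy). rbe_max = alpha_p/alpha_x grows linearly with
    # LET/(alpha/beta)_x; beta is taken as LET-independent (rbe_min = 1).
    rbe_max = 1.0 + q * let / ab_x
    rbe_min = 1.0
    return (math.sqrt(ab_x ** 2 + 4 * ab_x * rbe_max * dose
                      + 4 * (rbe_min * dose) ** 2) - ab_x) / (2.0 * dose)

# Low alpha/beta (late-responding tissue) is more LET-sensitive than
# high alpha/beta (early-responding tissue, most tumors):
print(round(rbe(2.0, 3.0, 2.0), 3), round(rbe(2.0, 3.0, 10.0), 3))
```

At LET = 0 the expression reduces to RBE = 1 by construction, and larger doses pull the RBE back toward 1, consistent with the dose dependence the model is meant to capture.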
Effects of reaction-kinetic parameters on modeling reaction pathways in GaN MOVPE growth
NASA Astrophysics Data System (ADS)
Zhang, Hong; Zuo, Ran; Zhang, Guoyi
2017-11-01
In the modeling of the reaction-transport process in GaN MOVPE growth, the selection of kinetic parameters (activation energy Ea and pre-exponential factor A) for gas reactions is quite uncertain, which causes uncertainties in both the gas reaction path and the growth rate. In this study, numerical modeling of the reaction-transport process for GaN MOVPE growth in a vertical rotating disk reactor is conducted with varying kinetic parameters for the main reaction paths. By comparing the molar concentrations of the major Ga-containing species and the growth rates, the effects of kinetic parameters on gas reaction paths are determined. The results show that, depending on the values of the kinetic parameters, the gas reaction path may be dominated by the adduct/amide formation path, by the TMG pyrolysis path, or by both. Although the reaction path varies with different kinetic parameters, the predicted growth rates change only slightly, because the total transport rate of Ga-containing species to the substrate changes only slightly with reaction path. This explains why previous authors using different chemical models predicted growth rates close to the experimental values. By varying the pre-exponential factor for amide trimerization, we find that the more trimers are formed, the further the predicted growth rate falls below the experimental value, which indicates that trimers are poor growth precursors because of the thermal diffusion effect caused by the high temperature gradient. The effective order of the contributions of the major species to the growth rate is: pyrolysis species > amides > trimers. The study also shows that radical reactions have little effect on the gas reaction path because of the generation and depletion of H radicals in the chain reactions when NH2 is considered as the end species.
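The kinetic parameters in question enter through the Arrhenius law k = A·exp(-Ea/RT), so a modest shift in Ea at growth temperature changes a rate constant by orders of magnitude, which is why the dominant gas-phase path can flip between mechanisms. The numbers below are illustrative, not GaN rate data:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(a, ea, t):
    # Arrhenius form k = A * exp(-Ea / (R*T)); (A, Ea) are exactly the
    # uncertain kinetic parameters discussed in the abstract.
    return a * math.exp(-ea / (R * t))

# Same pre-exponential factor, Ea differing by 50 kJ/mol, at a
# representative growth temperature of 1300 K (illustrative values):
k1 = rate_constant(1e13, 150e3, 1300.0)
k2 = rate_constant(1e13, 200e3, 1300.0)
print(k1 / k2)  # roughly a factor of 100
```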
Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen
Robertson, Dale M.; Imberger, Jorg
1994-01-01
Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).
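Schematically, LN compares the stabilizing moment of the stratification with the destabilizing wind moment about the center of volume. The function below follows the commonly quoted form (Schmidt stability St, wind friction velocity u*, thermocline height, and center-of-volume height), but both the expression and the inputs should be read as a sketch rather than the paper's exact equation:

```python
def lake_number(st, u_star, z_t, z_v, z_m, area, rho0=1000.0, g=9.81):
    # st: Schmidt stability; u_star: wind friction velocity in water;
    # z_t: thermocline depth; z_v: depth of the center of volume;
    # z_m: maximum depth; area: lake surface area. All SI units.
    # Large LN -> stratification dominates wind forcing -> little deep
    # mixing; small LN -> deep mixing and associated deep-water DO change.
    return (g * st * (1 - z_t / z_m)) / \
           (rho0 * u_star ** 2 * area ** 1.5 * (1 - z_v / z_m))

# Hypothetical lake under calm versus windy conditions (made-up values):
calm = lake_number(500.0, 0.005, 10.0, 8.0, 20.0, 1e6)
windy = lake_number(500.0, 0.02, 10.0, 8.0, 20.0, 1e6)
print(calm > windy)  # stronger wind forcing lowers LN -> deeper mixing
```

The DO model in the abstract then ties changes in near-bottom DO to these LN values once calibrated for a specific lake.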
Climate Change for Agriculture, Forest Cover and 3d Urban Models
NASA Astrophysics Data System (ADS)
Kapoor, M.; Bassir, D.
2014-11-01
This research demonstrates the important role of remote sensing in identifying the parameters behind agricultural crop change, forest cover change, and urban 3D models. Standalone software was developed to view and analyse the different factors affecting changes in crop production. Open-source libraries from the Open Source Geospatial Foundation were used to develop the shapefile viewer. The software can be used to retrieve attribute information and to scale, zoom in/out, and pan the shapefiles. Environmental changes due to pollution and population growth are increasing urbanisation and decreasing forest cover on the Earth. Satellite imagery from Landsat 5 (1984) to Landsat 8 TIRS (2014), the Landsat Data Continuity Mission (LDCM), and NDVI are used to analyse the parameters affecting agricultural crop production change and forest change. For the development of good-quality NDVI and forest cover maps, it is advisable to use data processed with the same methods for the complete region. Management practices have been developed from the analysed data to improve crops and preserve forest cover.
Nanosecond electric modification of order parameters
NASA Astrophysics Data System (ADS)
Borshch, Volodymyr
In this Dissertation, we study the nanosecond electro-optic response of a nematic liquid crystal in a geometry where an applied electric field E modifies the tensor order parameter but does not change the orientation of the optic axis (director N̂). We use nematics with negative dielectric anisotropy, with the electric field applied perpendicularly to N̂. The field changes the dielectric tensor at optical frequencies (the optic tensor) through the following mechanisms: (a) nanosecond creation of biaxial orientational order; (b) uniaxial modification of the orientational order, occurring over tens of nanoseconds; and (c) quenching of director fluctuations with a wide range of characteristic times up to milliseconds. We develop a model to describe the dynamics of all three mechanisms. We design the experimental conditions to selectively suppress the contributions from the quenching of director fluctuations (c) and from the biaxial order effect (a), and thus separate the contributions of the three mechanisms in the electro-optic response. As a result, the experimental data can be well fitted with the model parameters. The analysis provides a rather detailed physical picture of how the liquid crystal responds to a strong electric field, E ~ 10^8 V/m, on a timescale of nanoseconds. This work provides a useful guide in the current search for the biaxial nematic phase. Namely, the temperature dependence of the biaxial susceptibility allows one to estimate the temperature of the potential uniaxial-to-biaxial phase transition. An analysis of the quenching of director fluctuations indicates that on a timescale of nanoseconds, the classic model with constant viscoelastic material parameters might reach its limit of validity. The effect of nanosecond electric modification of the order parameter (NEMOP) can be used in applications that require ultrafast (nanosecond) changes of optical characteristics, such as birefringence.
NASA Astrophysics Data System (ADS)
Pettijohn, J. C.; Law, B. E.; Williams, M. D.; Stoeckli, R.; Thornton, P. E.; Hudiburg, T. M.; Thomas, C. K.; Martin, J.; Hill, T. C.
2009-12-01
The assimilation of terrestrial carbon, water and nutrient cycle measurements into land surface models of these processes is fundamental to improving our ability to predict how these ecosystems may respond to climate change. A combination of measurements and models, each with their own systematic biases, must be considered when constraining the nonlinear behavior of these coupled dynamics. As such, we use the sequential Ensemble Kalman Filter (EnKF) to assimilate eddy covariance (EC) and other site-level AmeriFlux measurements into the NCAR Community Land Model with Carbon-Nitrogen coupling (CLM-CN v3.5), run in single-column mode at a 30-minute time step, to improve estimates of relatively unconstrained model state variables and parameters. Specifically, we focus on a semi-arid ponderosa pine site (US-ME2) in the Pacific Northwest to identify the mechanisms by which this ecosystem responds to severe late summer drought. Our EnKF analysis includes water, carbon, energy and nitrogen state variables (e.g., 10 volumetric soil moisture levels (0-3.43 m), ponderosa pine and shrub evapotranspiration and net ecosystem exchange of carbon dioxide stocks and flux components, snow depth, etc.) and associated parameters (e.g., PFT-level rooting distribution parameters, maximum subsurface runoff coefficient, soil hydraulic conductivity decay factor, snow aging parameters, maximum canopy conductance, C:N ratios, etc.). The effectiveness of the EnKF in constraining state variables and associated parameters is sensitive to their relative frequencies, in that C-N state variables and parameters with long time constants require similarly long time series in the analysis. 
We apply the EnKF kernel perturbation routine to disrupt preliminary convergence of covariances, which has been found in recent studies to be a problem more characteristic of low frequency vegetation state variables and parameters than high frequency ones more heavily coupled with highly varying climate (e.g., shallow soil moisture, snow depth). Preliminary results demonstrate that the assimilation of EC and other available AmeriFlux site physical, chemical and biological data significantly helps quantify and reduce CLM-CN model uncertainties and helps to constrain ‘hidden’ states and parameters that are essential in the coupled water, carbon, energy and nutrient dynamics of these sites. Such site-level calibration of CLM-CN is an initial step in identifying model deficiencies and in forecasts of future ecosystem responses to climate change.
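One analysis step of the EnKF can be sketched for a scalar state; the "soil moisture" numbers and observation error below are invented for illustration, and the real assimilation updates many coupled CLM-CN states and parameters at once:

```python
import random

random.seed(7)

def enkf_update(ensemble, obs, obs_var, h=lambda x: x):
    # One EnKF analysis step for a scalar state: Kalman gain from the
    # ensemble sample covariance, with perturbed observations so the
    # posterior ensemble keeps the right spread. A sketch, not the
    # CLM-CN coupled implementation.
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    mx, mh = sum(ensemble) / n, sum(hx) / n
    cov_xh = sum((x - mx) * (y - mh) for x, y in zip(ensemble, hx)) / (n - 1)
    cov_hh = sum((y - mh) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (cov_hh + obs_var)
    return [x + gain * (obs + random.gauss(0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

# Prior ensemble of, say, shallow soil moisture, far from a (hypothetical)
# precise observation at 0.30:
prior = [random.gauss(0.10, 0.05) for _ in range(50)]
post = enkf_update(prior, 0.30, 0.0001)
print(abs(sum(post) / 50 - 0.30) < abs(sum(prior) / 50 - 0.30))  # pulled toward obs
```

Augmenting the state vector with parameters (rooting distribution, C:N ratios, and so on) lets this same update constrain parameters from flux observations, which is the mechanism the abstract relies on.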
Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data
ERIC Educational Resources Information Center
Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2015-01-01
A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
NASA Astrophysics Data System (ADS)
Jiang, Sanyuan; Zhang, Qi
2017-04-01
Phosphorus losses from excessive fertilizer application and improper land exploitation were found to be the limiting factor for freshwater quality deterioration and eutrophication. Phosphorus transport from uplands to river is related to hydrological, soil erosion and sediment transport processes, which is impacted by several physiographic and meteorological factors. The objective of this study was to investigate the spatiotemporal variation of phosphorus losses and response to climate change at a typical upstream tributary (Le'An river) of Poyang Lake. To this end, a process-oriented hydrological and nutrient transport model HYPE (Hydrological Predictions for the Environment) was set up for discharge and phosphorus transport simulation at Le'An catchment. Parameter ESTimator (PEST) was combined with HYPE model for parameter sensitivity analysis and optimisation. In runoff modelling, potential evapotranspiration rate of the dominant land use (forest) is most sensitive; parameters of surface runoff rate and percolation capacity for the red soil are also very sensitive. In phosphorus transport modelling, the exponent of equation for soil erosion processes induced by surface runoff is most sensitive, coefficient of adsorption/desorption processes for red soil is also very sensitive. Flow dynamics and water balance were simulated well at all sites for the whole period (1978-1986) with NSE≥0.80 and PBIAS≤14.53%. The optimized hydrological parameter set were transferable for the independent period (2009-2010) with NSE≥0.90 and highest PBIAS of -7.44% in stream flow simulation. Seasonal dynamics and balance of stream water TP (Total Phosphorus ) concentrations were captured satisfactorily indicated by NSE≥0.53 and highest PBIAS of 16.67%. In annual scale, most phosphorus is transported via surface runoff during heavy storm flow events, which may account for about 70% of annual TP loads. 
Analysis of future climate change under three emission scenarios (RCP 2.6, RCP 4.5, and RCP 8.5) projected no considerable change in average annual rainfall for 2020-2035, but an increasing frequency and intensity of extreme rainfall events. The validated HYPE model was run under the three emission scenarios. An overall increase in TP loads was found for the future, with the largest increase in annual TP loads under the high emission scenario (RCP 8.5). The outcomes of this study (i) verified the transferability of the HYPE model to a humid subtropical and heterogeneous catchment; (ii) revealed the sensitive hydrological and phosphorus transport processes and the relevant parameters; and (iii) implied greater TP losses in the future in response to increasing extreme rainfall events.
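The NSE and PBIAS goodness-of-fit statistics quoted above can be computed in a few lines. The sketch below uses the standard textbook definitions (NSE relative to the observed mean; PBIAS with positive values indicating model underestimation); the sign convention for PBIAS varies between tools, so it may differ from the HYPE/PEST output.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    observed mean, negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values
    indicate that the model underestimates the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

With these definitions, a simulation reproducing the observations exactly gives NSE = 1 and PBIAS = 0, matching the "NSE ≥ 0.80, PBIAS ≤ 14.53%" style of reporting in the abstract.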
Saggar, Manish; Zanesco, Anthony P; King, Brandon G; Bridwell, David A; MacLean, Katherine A; Aichele, Stephen R; Jacobs, Tonya L; Wallace, B Alan; Saron, Clifford D; Miikkulainen, Risto
2015-07-01
Meditation training has been shown to enhance attention and improve emotion regulation. However, the brain processes associated with such training are poorly understood and a computational modeling framework is lacking. Modeling approaches that can realistically simulate neurophysiological data while conforming to basic anatomical and physiological constraints can provide a unique opportunity to generate concrete and testable hypotheses about the mechanisms supporting complex cognitive tasks such as meditation. Here we applied the mean-field computational modeling approach using the scalp-recorded electroencephalogram (EEG) collected at three assessment points from meditating participants during two separate 3-month-long shamatha meditation retreats. We modeled cortical, corticothalamic, and intrathalamic interactions to generate a simulation of EEG signals recorded across the scalp. We also present two novel extensions to the mean-field approach that allow for: (a) non-parametric analysis of changes in model parameter values across all channels and assessments; and (b) examination of variation in modeled thalamic reticular nucleus (TRN) connectivity over the retreat period. After successfully fitting whole-brain EEG data across three assessment points within each retreat, two model parameters were found to replicably change across both meditation retreats. First, after training, we observed an increased temporal delay between modeled cortical and thalamic cells. This increase provides a putative neural mechanism for a previously observed reduction in individual alpha frequency in these same participants. Second, we found decreased inhibitory connection strength between the TRN and secondary relay nuclei (SRN) of the modeled thalamus after training. This reduction in inhibitory strength was found to be associated with increased dynamical stability of the model. 
Altogether, this paper presents the first computational approach, taking core aspects of physiology and anatomy into account, to formally model brain processes associated with intensive meditation training. The observed changes in model parameters inform theoretical accounts of attention training through meditation, and may motivate future study on the use of meditation in a variety of clinical populations. Copyright © 2015 Elsevier Inc. All rights reserved.
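The non-parametric analysis of channel-wise parameter changes described above can be illustrated with a simple exact sign test on paired differences. This is a generic stand-in, not the authors' actual procedure, and the per-channel delay changes below are hypothetical values, not data from the retreat study.

```python
import math

def sign_test_p(diffs):
    """Two-sided exact sign test on paired differences (zeros dropped):
    a minimal non-parametric test of a consistent directional change."""
    d = [x for x in diffs if x != 0]
    n = len(d)
    k = sum(1 for x in d if x > 0)
    m = min(k, n - k)
    tail = sum(math.comb(n, i) for i in range(m + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical per-channel change in corticothalamic delay (post - pre, ms);
# a consistent positive shift across channels yields a small p-value.
delay_change = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3, 1.0, 1.6]
p = sign_test_p(delay_change)
```

In practice one would use a signed-rank or permutation test with correction for multiple channels, but the logic is the same: changes replicated in the same direction across channels and assessments are unlikely under the null.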
NASA Astrophysics Data System (ADS)
Kim, Kunhwi; Rutqvist, Jonny; Nakagawa, Seiji; Birkholzer, Jens
2017-11-01
This paper presents coupled hydro-mechanical modeling of hydraulic fracturing processes in complex fractured media using a discrete fracture network (DFN) approach. The individual physical processes in the fracture propagation are represented by separate program modules: the TOUGH2 code for multiphase flow and mass transport based on the finite volume approach; and the rigid-body-spring network (RBSN) model for mechanical and fracture-damage behavior, which are coupled with each other. Fractures are modeled as discrete features, of which the hydrological properties are evaluated from the fracture deformation and aperture change. The verification of the TOUGH-RBSN code is performed against a 2D analytical model for single hydraulic fracture propagation. Subsequently, modeling capabilities for hydraulic fracturing are demonstrated through simulations of laboratory experiments conducted on rock-analogue (soda-lime glass) samples containing a designed network of pre-existing fractures. Sensitivity analyses are also conducted by changing the modeling parameters, such as viscosity of injected fluid, strength of pre-existing fractures, and confining stress conditions. The hydraulic fracturing characteristics attributed to the modeling parameters are investigated through comparisons of the simulation results.
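A standard way to map fracture aperture changes onto hydrological properties, of the kind the coupled TOUGH-RBSN scheme relies on, is the parallel-plate "cubic law". The sketch below assumes laminar single-phase flow and is an illustration of that relation, not the code's actual implementation.

```python
def cubic_law_flow(aperture, width, dp_dx, mu=1.0e-3):
    """Volumetric flow rate (m^3/s) through a parallel-plate fracture
    under the cubic law: Q = b^3 * w / (12 * mu) * dP/dx.
    aperture b and width w in metres, pressure gradient dp_dx in Pa/m,
    dynamic viscosity mu in Pa*s (default roughly water at 20 degC)."""
    return aperture ** 3 * width / (12.0 * mu) * dp_dx

# Doubling the aperture increases flow eightfold, which is why small
# aperture changes from mechanical deformation matter so much hydraulically.
q1 = cubic_law_flow(1.0e-4, 1.0, 1.0e4)
q2 = cubic_law_flow(2.0e-4, 1.0, 1.0e4)
```

The cubic dependence on aperture also explains the sensitivity analyses above: injected-fluid viscosity and confining stress act on the flow through `mu` and the deformed aperture, respectively.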
MODELING DYNAMIC VEGETATION RESPONSE TO RAPID CLIMATE CHANGE USING BIOCLIMATIC CLASSIFICATION
Modeling of the potential global redistribution of terrestrial vegetation is frequently based on bioclimatic classifications that relate static regional vegetation zones (biomes) to a set of static climate parameters. The equilibrium character of these relationships limits our confidence...
Quantitative Evaluation of Ionosphere Models for Reproducing Regional TEC During Geomagnetic Storms
NASA Astrophysics Data System (ADS)
Shim, J. S.; Kuznetsova, M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B.; Foster, B.; Fuller-Rowell, T. J.; Goncharenko, L. P.; Huba, J.; Mitchell, C. N.; Ridley, A. J.; Fedrizzi, M.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2015-12-01
TEC (Total Electron Content) is one of the key parameters describing ionospheric variability, which influences the accuracy of navigation and communication systems. To assess the current TEC modeling capability of ionospheric models during geomagnetic storms, and to establish a baseline against which future improvement can be compared, we quantified the models' performance by comparing modeled vertical TEC values with ground-based GPS TEC measurements and Multi-Instrument Data Analysis System (MIDAS) TEC. The comparison focused on the North American and European sectors during two selected storm events: the 2006 AGU storm (14-15 Dec. 2006) and the March 2013 storm (17-19 Mar. 2013). The ionospheric models used for this study range from empirical to physics-based and physics-based data assimilation models. We investigated spatial and temporal variations of TEC during the storms. In addition, we considered several parameters to quantify storm impacts on TEC: TEC changes relative to quiet time, the rate of TEC change, and the maximum increase/decrease during the storms. In this presentation, we focus on preliminary results of the comparison of the models' performance in reproducing storm-time TEC variations using these parameters and skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
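The storm-impact parameters and skill scores described above can be sketched as follows. The skill-score form (RMSE relative to a reference model) and the 15-minute sampling interval are assumptions for illustration, not necessarily the definitions used in the CCMC study.

```python
import numpy as np

def rmse(obs, mod):
    """Root-mean-square error between observed and modeled TEC (TECU)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((obs - mod) ** 2)))

def skill_score(obs, mod, ref):
    """Skill of `mod` relative to a reference model `ref`:
    1 is perfect, 0 matches the reference, negative is worse."""
    return 1.0 - rmse(obs, mod) / rmse(obs, ref)

def tec_rate(tec, dt_min=15.0):
    """Rate of TEC change (TECU/min) from a uniformly sampled series."""
    return np.diff(np.asarray(tec, float)) / dt_min
```

Storm-time TEC change relative to quiet time can then be formed by differencing a storm-day series against a quiet-day baseline before applying these metrics.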