Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice
ERIC Educational Resources Information Center
Rapp, Kelly E.
2012-01-01
The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values.
Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
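As a minimal sketch of the two approaches compared above (with synthetic data, not the nitrogen-transport example), a log-transformed OLS fit can be set against constant-coefficient-of-variation weighting, in which each residual is scaled by 1/y; all parameter values here are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 2, size=(n, 2))])
beta = np.array([0.5, 2.0, 1.5])
# Dependent variable spans several orders of magnitude (log-normal errors).
y = np.exp(X @ beta + rng.normal(0, 0.3, n))

# Approach 1: log transformation, then ordinary least squares.
b_log, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

# Approach 2: error-based weighting with a constant coefficient of
# variation, i.e. sd_i proportional to y_i, so residuals are scaled by 1/y_i.
def weighted_resid(b):
    return (y - np.exp(X @ b)) / y

b_wls = least_squares(weighted_resid, x0=b_log).x
```

Both fits recover the generating coefficients here because the errors really are log-normal; the paper's point is about how the two treatments diverge when that assumption is stressed.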
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable is zero, and the second part models the nonzero elements (positive integers) with a truncated negative binomial model. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Maximum likelihood estimation (MLE) is used for parameter estimation. The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data which contain zero values in some observations and varied values in others. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. This further affects the hydrological conditions and consequently will provide misleading results if these structures are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables including maximum temperature, minimum temperature and precipitation in the Monsoon dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine and Student t-copula by examining observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structures between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. It was observed that important statistics of the observed data were closely approximated, except for the maximum values of temperature and the minimum values of minimum temperature. Probability density functions of simulated data closely follow the probability density functions of observational data for all variables. C- and D-Vines are better tools when it comes to modelling the dependence between variables; however, Student t-copulas compete closely for precipitation.
Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
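The t-copula construction used in studies like this one can be sketched by sampling a multivariate Student t, pushing each margin through the t CDF, and imposing gamma marginals (all parameter values below are illustrative, not fitted to the Pakistani station data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, nu, rho = 5000, 4, 0.7
corr = np.array([[1.0, rho], [rho, 1.0]])

# Bivariate Student t sample: correlated normals scaled by a chi-square draw.
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
t_samples = z / np.sqrt(rng.chisquare(nu, size=n) / nu)[:, None]

# Push each margin through the t CDF: uniforms with t-copula dependence.
u = stats.t.cdf(t_samples, df=nu)

# Impose gamma marginals, as used for precipitation in the study.
precip_a = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)
precip_b = stats.gamma.ppf(u[:, 1], a=2.0, scale=3.0)
```

Because rank dependence survives the monotone marginal transforms, the simulated station pair keeps the t-copula's dependence structure while each site's rainfall stays gamma distributed.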
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption for hydrological computation; both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients at each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
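The core quantity, the correlation between a series and its fitted dependence component, can be illustrated for an AR(1) process (synthetic data; the thresholds used to grade variability are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.6, 5000

# Simulate an AR(1) series: y_t = phi * y_{t-1} + e_t.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Estimate phi by lag-1 least squares, extract the dependence component,
# and compute the correlation coefficient used to grade the dependence.
y0, y1 = y[:-1], y[1:]
phi_hat = (y0 @ y1) / (y0 @ y0)
dependence = phi_hat * y0            # fitted dependence component of y_t
r = np.corrcoef(y1, dependence)[0, 1]
```

For an AR(1) process this correlation equals the lag-1 auto-correlation (here about 0.6), matching the paper's finding that the coefficient is governed by the auto-correlations up to order p.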
Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia
2018-04-01
Highway safety laws aim to influence driver behaviors so as to reduce the frequency and severity of crashes, and their outcomes. A given highway safety law can have different effects on crashes of different severities. Understanding such effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for the interdependency issues in crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts that are aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation issues in crash data. The state highway safety law related factors are used as the explanatory variables and socio-demographic and traffic factors are used as the control variables. Three models, a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model are developed and compared. The results show that among the investigated models, the MVDT models with lagged observed dependent variables have the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy. The MVDT model with lagged observed variables can better handle the stochasticity and dependency in the temporal evolution of the crash counts and the estimated values from the model are closer to the observed values. The results show that more lives could be saved if law enforcement agencies can make a sustained effort to educate the public about the importance of motorcyclists wearing helmets.
Motor vehicle crash-related deaths, injuries, and property damages could be reduced if states enact laws for stricter text messaging rules, higher speeding fines, older licensing age, and stronger graduated licensing provisions. Injury and PDO crashes would be significantly reduced with stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
NASA Astrophysics Data System (ADS)
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. In a logistic regression model, the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
DOT National Transportation Integrated Search
2015-12-01
We develop an econometric framework for incorporating spatial dependence in integrated model systems of latent variables and multidimensional mixed data outcomes. The framework combines Bhat's Generalized Heterogeneous Data Model (GHDM) with a spat...
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
Predator Persistence through Variability of Resource Productivity in Tritrophic Systems.
Soudijn, Floor H; de Roos, André M
2017-12-01
The trophic structure of species communities depends on the energy transfer between trophic levels. Primary productivity varies strongly through time, challenging the persistence of species at higher trophic levels. Yet resource variability has mostly been studied in systems with only one or two trophic levels. We test the effect of variability in resource productivity in a tritrophic model system including a resource, a size-structured consumer, and a size-specific predator. The model complies with fundamental principles of mass conservation and the body-size dependence of individual-level energetics and predator-prey interactions. Surprisingly, we find that resource variability may promote predator persistence. The positive effect of variability on the predator arises through periods with starvation mortality of juvenile prey, which reduces the intraspecific competition in the prey population. With increasing variability in productivity and starvation mortality in the juvenile prey, the prey availability increases in the size range preferred by the predator. The positive effect of prey mortality on the trophic transfer efficiency depends on the biologically realistic consideration of body size-dependent and food-dependent functions for growth and reproduction in our model. Our findings show that variability may promote the trophic transfer efficiency, indicating that environmental variability may sustain species at higher trophic levels in natural ecosystems.
Modelling space of spread Dengue Hemorrhagic Fever (DHF) in Central Java use spatial durbin model
NASA Astrophysics Data System (ADS)
Ispriyanti, Dwi; Prahutama, Alan; Taryono, Arkadina PN
2018-05-01
Dengue Hemorrhagic Fever is one of the major public health problems in Indonesia. From year to year, DHF causes Extraordinary Events in most parts of Indonesia, especially Central Java. Central Java consists of 35 districts or cities, with each region close to the others. Spatial regression is an analysis that assesses the influence of independent variables on the dependent variable while accounting for regional effects. Spatial regression models include the spatial autoregressive model (SAR), the spatial error model (SEM), and the spatial autoregressive moving average (SARMA) model. The spatial Durbin model (SDM) is a development of the SAR in which both the dependent and independent variables have spatial influence. In this research the dependent variable is the number of DHF sufferers, and the independent variables observed are population density, number of hospitals, population, number of health centers, and mean years of schooling. From the multiple regression model test, the variables that significantly affect the spread of DHF are population and mean years of schooling. Using queen contiguity and rook contiguity, the best model produced is the SDM with queen contiguity because it has the smallest AIC value, 494.12. The factors that generally affect the spread of DHF in Central Java Province are population and mean years of schooling.
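The ingredients of the SDM, a row-standardized queen-contiguity matrix W and the model y = ρWy + Xβ + WXθ + ε, can be sketched on a toy grid of regions (simulation only; estimating ρ requires maximum likelihood, e.g. via a spatial econometrics package, and is not shown):

```python
import numpy as np

# Queen contiguity on a 6 x 6 grid of regions (a stand-in for district maps).
side = 6
n = side * side
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, side)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < side and 0 <= cc < side:
                W[i, rr * side + cc] = 1.0
W = W / W.sum(axis=1, keepdims=True)        # row-standardize

rng = np.random.default_rng(6)
X = rng.normal(size=(n, 2))
rho, beta, theta = 0.4, np.array([1.0, -0.5]), np.array([0.3, 0.2])
eps = rng.normal(scale=0.1, size=n)

# SDM: y = rho*W*y + X*beta + W*X*theta + eps, solved for y directly.
A = np.eye(n) - rho * W
y = np.linalg.solve(A, X @ beta + W @ X @ theta + eps)
```

Rook contiguity would drop the diagonal neighbors (|dr| + |dc| = 2) from the loop; the study's AIC comparison chooses between the two weight definitions.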
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
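A bare-bones leaky integrate-and-fire simulation with Poisson inputs illustrates the starting point of this line of work: with mean-driven input, the interspike-interval coefficient of variation stays well below the near-1 values observed in cortex (all parameters below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
dt, tau, v_th = 0.1, 10.0, 1.0      # step (ms), membrane constant (ms), threshold
re, ri, a = 3.0, 0.6, 0.05          # input rates (events/ms), PSP amplitude
steps = 200_000                     # 20 s of simulated time

ne = rng.poisson(re * dt, size=steps)   # excitatory Poisson input events
ni = rng.poisson(ri * dt, size=steps)   # inhibitory Poisson input events

v, spike_times = 0.0, []
for k in range(steps):
    v += -v / tau * dt + a * (ne[k] - ni[k])
    if v >= v_th:                   # fire and reset
        spike_times.append(k * dt)
        v = 0.0

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()         # coefficient of variation of ISIs
```

Because successive ISIs in this model are independent and identically distributed, the output is a renewal process; whatever the CV, the spike train cannot carry long-range dependence, which is the paper's analytical point.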
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. 
The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach
ERIC Educational Resources Information Center
Selig, James P.; Preacher, Kristopher J.; Little, Todd D.
2012-01-01
We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…
A Reduced Form Model for Ozone Based on Two Decades of ...
A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the continental United States provided the basis for the RFM developed in this study. Predictors included the principal component scores (PCS) of emissions and meteorological variables, while the predictand was the monthly mean of daily maximum 8-hour CMAQ ozone for the ozone season at each model grid. The PCS form an orthogonal basis for RFM inputs. A few PCS incorporate most of the variability of emissions and meteorology, thereby reducing the dimensionality of the source-receptor problem. Stochastic kriging was used to estimate the model. The RFM was used to separate the effects of emissions and meteorology on ozone concentrations by running the RFM with emissions constant (ozone dependent on meteorology) or with meteorology constant (ozone dependent on emissions). Years with ozone-conducive meteorology were identified, and meteorological variables best explaining meteorology-dependent ozone were identified. Meteorology accounted for 19% to 55% of ozone variability in the eastern US, and 39% to 92% in the western US. Temporal trends estimated for original CMAQ ozone data and emission-dependent ozone were mostly negative, but the confidence intervals for emission-dependent ozone are much
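The RFM pipeline, principal component scores of the inputs followed by a kriging-style surrogate, can be sketched with scikit-learn, using a Gaussian-process regressor as a close relative of stochastic kriging (synthetic data, not CMAQ output; all magnitudes are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
n, p = 300, 12

# Inputs with a few dominant modes of variability, as emissions and
# meteorology fields typically have.
latent = rng.normal(size=(n, 3))
inputs = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
ozone = 40 + 5 * latent[:, 0] + rng.normal(scale=1.0, size=n)   # ppb

# Principal component scores reduce the source-receptor dimensionality;
# a Gaussian process stands in for the stochastic-kriging estimator.
pcs = PCA(n_components=3).fit_transform(inputs)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                              normalize_y=True).fit(pcs, ozone)
```

Emission/meteorology attribution then amounts to evaluating the fitted surrogate with one group of PCS held at reference values while the other varies.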
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita
2013-09-01
Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus has indicated that the strong association between climatic variables and the disease could be used to develop models to explain its incidence. To develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Rose, W. C.
1973-01-01
The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
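The mass-averaged dependent variables referred to here are Favre (density-weighted) averages; in standard notation the decomposition is:

```latex
% Favre (mass-weighted) average and decomposition of a dependent variable f:
\tilde{f} \;=\; \frac{\overline{\rho f}}{\overline{\rho}},
\qquad
f \;=\; \tilde{f} + f'' ,
\qquad
\overline{\rho f''} \;=\; 0 .
```

Because the density-weighted fluctuation has zero mass-weighted mean, many density-fluctuation correlation terms of the conventional Reynolds decomposition drop out, which is the simplification the abstract describes for compressible flow.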
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi
2018-07-01
Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
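SPSA, the optimizer named above, estimates a gradient from just two loss evaluations per step, regardless of the number of design variables. A minimal sketch; the quadratic surrogate loss and all gain constants are illustrative choices, not values from the paper:

```python
import random

def spsa_minimize(loss, theta, iters=2000, a=0.1, c=0.1, seed=0):
    """Minimize loss(theta) with simultaneous perturbation stochastic approximation."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                               # standard SPSA gain decay
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher perturbation directions
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        ghat = (loss(plus) - loss(minus)) / (2 * ck)      # common factor of the gradient estimate
        theta = [t - ak * ghat / d for t, d in zip(theta, delta)]
    return theta

# Surrogate "control performance" loss with its optimum at (1, -2).
loss = lambda th: (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2
theta_hat = spsa_minimize(loss, [0.0, 0.0])
print([round(t, 2) for t in theta_hat])
```

The appeal for online controller tuning is exactly this two-evaluations-per-step cost: the plant need only be probed twice per update, however many poles are being tuned.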
On the explaining-away phenomenon in multivariate latent variable models.
van Rijn, Peter; Rijmen, Frank
2015-02-01
Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
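The explaining-away effect can be reproduced with the smallest possible network: two independent binary causes A and B share an effect E through a noisy-OR link. Marginally A and B are independent, but conditioning on E = 1 makes them dependent: learning that B = 1 "explains" the effect and lowers the probability of A. All probabilities below are illustrative:

```python
from itertools import product

pA = pB = 0.3
p_trig = 0.8                      # chance that each active cause triggers E

def p_joint(a, b, e):
    pe = 1 - (1 - p_trig * a) * (1 - p_trig * b)          # noisy-OR link
    return ((pA if a else 1 - pA) * (pB if b else 1 - pB)
            * (pe if e else 1 - pe))

def p_a1_given(e, b=None):
    """P(A = 1 | E = e) or, if b is given, P(A = 1 | E = e, B = b), by enumeration."""
    num = den = 0.0
    for a, bb in product((0, 1), repeat=2):
        if b is not None and bb != b:
            continue
        p = p_joint(a, bb, e)
        den += p
        if a == 1:
            num += p
    return num / den

print(round(p_a1_given(e=1), 3), round(p_a1_given(e=1, b=1), 3))  # → 0.602 0.34
```

Observing the effect raises P(A = 1) from its prior of 0.3 to about 0.60; additionally observing the other cause pulls it back down to about 0.34, the seemingly counterintuitive conditional dependence described above.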
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
NASA Astrophysics Data System (ADS)
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
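Univariate quantile mapping, the baseline that MBCn generalizes to many variables at once, sends each model value through the model's empirical CDF and back through the inverse empirical CDF of the observations. A minimal nearest-rank sketch with made-up numbers:

```python
def quantile_map(model_vals, obs_vals, x):
    """Map model value x onto the observed distribution by matching empirical quantiles."""
    m = sorted(model_vals)
    o = sorted(obs_vals)
    rank = sum(v <= x for v in m)              # empirical CDF position of x in the model sample
    q = max(rank - 0.5, 0.5) / len(m)
    idx = min(len(o) - 1, int(q * len(o)))     # inverse empirical CDF (nearest rank)
    return o[idx]

obs = list(range(1, 11))            # "observations" 1..10
model = [v + 2 for v in obs]        # model output with a constant +2 bias
print(quantile_map(model, obs, 7))  # → 5, i.e. the +2 bias is removed
```

Applied independently per variable, as here, the marginal distributions are corrected but any inter-variable dependence of the model output is left untouched, which is the gap MBCn addresses.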
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
ERIC Educational Resources Information Center
Svanum, Soren; Bringle, Robert G.
1980-01-01
The confluence model of cognitive development was tested on 7,060 children. Family size, sibling order within family sizes, and hypothesized age-dependent effects were tested. Findings indicated an inverse relationship between family size and the cognitive measures; age-dependent effects and other confluence variables were found to be…
A Multivariate Statistical Model Describing the Compound Nature of Soil Moisture Drought
NASA Astrophysics Data System (ADS)
Manning, Colin; Widmann, Martin; Bevacqua, Emanuele; Maraun, Douglas; Van Loon, Anne; Vrac, Mathieu
2017-04-01
Soil moisture in Europe acts to partition incoming energy into sensible and latent heat fluxes, thereby exerting a large influence on temperature variability. Soil moisture is predominantly controlled by precipitation and evapotranspiration. When these meteorological variables are accumulated over different timescales, their joint multivariate distribution and dependence structure can be used to provide information on soil moisture. We therefore consider soil moisture drought as a compound event of meteorological drought (deficits of precipitation) and heat waves, or more specifically, periods of high Potential Evapotranspiration (PET). We present here a statistical model of soil moisture based on Pair Copula Constructions (PCC) that can describe the dependence amongst soil moisture and its contributing meteorological variables. The model is designed in such a way that it can account for concurrences of meteorological drought and heat waves and describe the dependence between these conditions at a local level. The model is composed of four variables: daily soil moisture (h); a short-term and a long-term accumulated precipitation variable (Y_1 and Y_2) that account for the propagation of meteorological drought to soil moisture drought; and accumulated PET (Y_3), calculated using the Penman-Monteith equation, which can represent the effect of a heat wave on soil conditions. Copulas are multivariate distribution functions that allow one to model the dependence structure of given variables separately from their marginal behaviour. PCCs then allow, in theory, for the formulation of a multivariate distribution of any dimension, where the multivariate distribution is decomposed into a product of marginal probability density functions and two-dimensional copulas, some of which are conditional.
We apply PCC here in such a way that allows us to provide estimates of h and their uncertainty through conditioning on the Y variables, in the form h = h | y_1, y_2, y_3. (1) Applying the model to various Fluxnet sites across Europe, we find the model has good skill and captures periods of low soil moisture particularly well. We illustrate the relevance of the dependence structure of these Y variables to soil moisture and show how it may be generalised to offer information on soil moisture on a widespread scale where few observations of soil moisture exist. We then present results from a validation study of a selection of EURO-CORDEX climate models, where we demonstrate the skill of these models in representing these dependencies and so offer insight into the skill seen in the representation of soil moisture in these models.
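The conditioning step h | y_1, y_2, y_3 can be illustrated in its simplest possible form. Below is not the paper's pair-copula construction but the one-driver bivariate Gaussian-copula case, showing how the conditional median of soil moisture shifts with the quantile of a single driver; the correlation value is an assumed stand-in for the fitted copula parameters:

```python
from statistics import NormalDist

N = NormalDist()

def conditional_median_quantile(u_y, rho):
    """Median quantile of h given that its driver sits at quantile u_y,
    under a bivariate Gaussian copula with correlation rho."""
    z_y = N.inv_cdf(u_y)         # driver mapped to the Gaussian scale
    return N.cdf(rho * z_y)      # median of N(rho*z_y, 1 - rho^2), mapped back to [0, 1]

# A driver in deep deficit (5th percentile) with rho = 0.8 pulls the conditional
# median of soil moisture far below its unconditional median of 0.5.
print(round(conditional_median_quantile(0.05, 0.8), 3))  # → 0.094
```

A full PCC chains several such two-dimensional (possibly conditional) copulas, which is what lets the model above combine precipitation deficits and high PET in one joint description.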
NASA Astrophysics Data System (ADS)
Shang, De-Yi; Zhong, Liang-Cai
2017-01-01
Our novel models for fluids' variable physical properties are improved and reported systematically in this work to enhance the theoretical and practical value of studies of convective heat and mass transfer. The framework consists of three models, namely (1) the temperature parameter model, (2) the polynomial model, and (3) the weighted-sum model, respectively for the treatment of temperature-dependent physical properties of gases, temperature-dependent physical properties of liquids, and concentration- and temperature-dependent physical properties of vapour-gas mixtures. Two related components are proposed and involved in each model: basic physical property equations and theoretical similarity equations for the physical property factors. The former, as the foundation of the latter, is based on typical experimental data and physical analysis. The latter is built up by similarity analysis and mathematical derivation based on the former basic physical property equations. These models enable smooth simulation and treatment of fluids' variable physical properties and so safeguard the theoretical and practical value of studies on convective heat and mass transfer. In particular, studies on heat and mass transfer in film condensation of vapour-gas mixtures have so far been lacking, and erroneous heat transfer results have appeared widely in related research owing to improper consideration of the concentration- and temperature-dependent physical properties of the vapour-gas mixture. For resolving such difficult issues, the present novel physical property models have special advantages.
Dynamic rupture modeling with laboratory-derived constitutive relations
Okubo, P.G.
1989-01-01
A laboratory-derived state variable friction constitutive relation is used in the numerical simulation of the dynamic growth of an in-plane or mode II shear crack. According to this formulation, originally presented by J.H. Dieterich, frictional resistance varies with the logarithm of the slip rate and with the logarithm of the frictional state variable as identified by A.L. Ruina. Under conditions of steady sliding, the state variable is proportional to (slip rate)^{-1}. Following suddenly introduced increases in slip rate, the rate and state dependencies combine to produce behavior which resembles slip weakening. When rupture nucleation is artificially forced at fixed rupture velocity, rupture models calculated with the state variable friction in a uniformly distributed initial stress field closely resemble earlier rupture models calculated with a slip weakening fault constitutive relation. Model calculations suggest that dynamic rupture following a state variable friction relation is similar to that following a simpler fault slip weakening law. However, when modeling the full cycle of fault motions, rate-dependent frictional responses included in the state variable formulation are important at low slip rates associated with rupture nucleation.
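The Dieterich-Ruina rate-and-state law described above can be integrated directly. A minimal forward-Euler sketch of a velocity-step test with the aging law for the state variable; all parameter values are illustrative, not from the paper:

```python
import math

# mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc), with aging law d(theta)/dt = 1 - V*theta/Dc.
mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # Dc in m, velocities in m/s

def simulate_step(V_new=1e-5, t_end=60.0, dt=1e-3):
    """Friction history after a sudden velocity step from steady sliding at V0 to V_new."""
    theta = Dc / V0                  # steady-state value of the state variable at V0
    V = V_new
    mus = []
    t = 0.0
    while t < t_end:
        mu = mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)
        mus.append(mu)
        theta += dt * (1 - V * theta / Dc)   # aging-law state evolution (forward Euler)
        t += dt
    return mus

mus = simulate_step()
print(round(mus[0], 4), round(mus[-1], 4))  # → 0.623 0.5885
```

The tenfold velocity step produces a direct jump of a*ln(10) above mu0, followed by state-driven decay toward the new steady state mu0 + (a - b)*ln(10): exactly the slip-weakening-like transient the abstract describes, since b > a makes the material velocity weakening.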
The development and evaluation of accident predictive models
NASA Astrophysics Data System (ADS)
Maleck, T. L.
1980-12-01
A mathematical model is developed that predicts the incremental change in the dependent variables (accident types) resulting from changes in the independent variables. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents; among the independent variables are: district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents did not fit or validate as well as models for intersectional accidents.
SIZE DEPENDENT MODEL OF HAZARDOUS SUBSTANCES IN AN AQUATIC FOOD CHAIN
A model of toxic substance accumulation is constructed that introduces organism size as an additional independent variable. The model represents an ecological continuum through size dependency; classical compartment analyses are therefore a special case of the continuous model. S...
Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.
ERIC Educational Resources Information Center
Hayton, William L.
1990-01-01
Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…
Sources of Sex Discrimination in Educational Systems: A Conceptual Model
ERIC Educational Resources Information Center
Kutner, Nancy G.; Brogan, Donna
1976-01-01
A conceptual model is presented relating numerous variables contributing to sexism in American education. Discrimination is viewed as intervening between two sets of interrelated independent variables and the dependent variable of sex inequalities in educational attainment. Sex-role orientation changes are the key to significant change in the…
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects
ERIC Educational Resources Information Center
Biesanz, Jeremy C.; Falk, Carl F.; Savalei, Victoria
2010-01-01
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years…
Bayesian Model Comparison for the Order Restricted RC Association Model
ERIC Educational Resources Information Center
Iliopoulos, G.; Kateri, M.; Ntzoufras, I.
2009-01-01
Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores on the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC)…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd
Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significant test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratio. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles and on the diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also, the interaction between breakfast intake in a week with sleep duration, and the interaction between gender and protein intake.
NASA Astrophysics Data System (ADS)
Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam
2015-10-01
Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significant test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratio. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles and on the diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also, the interaction between breakfast intake in a week with sleep duration, and the interaction between gender and protein intake.
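The q-1 logit equations described above determine all q category probabilities through a softmax in which the reference category's linear predictor is fixed at zero. A minimal prediction sketch; the coefficient values and the three-category labeling (say, normal / overweight / obese with two predictors) are invented for illustration:

```python
import math

def predict_probs(x, betas):
    """Category probabilities for predictor vector x, given one coefficient
    vector (intercept first) per non-reference category."""
    etas = [0.0]                                 # reference category's linear predictor
    etas += [b[0] + sum(bj * xj for bj, xj in zip(b[1:], x)) for b in betas]
    z = [math.exp(e) for e in etas]
    s = sum(z)
    return [v / s for v in z]

betas = [[-1.0, 0.8, 0.5],    # logit(category 2 vs reference)
         [-2.0, 1.2, 0.9]]    # logit(category 3 vs reference)
probs = predict_probs([1.0, 2.0], betas)
print([round(p, 3) for p in probs], round(sum(probs), 3))  # → [0.168, 0.374, 0.457] 1.0
```

Each fitted beta vector is interpretable exactly as in binary logistic regression, but always as a contrast against the chosen reference category, which is why the q-1 equations must be fitted simultaneously.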
Design of Malaria Diagnostic Criteria for the Sysmex XE-2100 Hematology Analyzer
Campuzano-Zuluaga, Germán; Álvarez-Sánchez, Gonzalo; Escobar-Gallo, Gloria Elcy; Valencia-Zuluaga, Luz Marina; Ríos-Orrego, Alexandra Marcela; Pabón-Vidal, Adriana; Miranda-Arboleda, Andrés Felipe; Blair-Trujillo, Silvia; Campuzano-Maya, Germán
2010-01-01
Thick film, the standard diagnostic procedure for malaria, is not always ordered promptly. A failsafe diagnostic strategy using an XE-2100 analyzer is proposed, and for this strategy, malaria diagnostic models for the XE-2100 were developed and tested for accuracy. Two hundred eighty-one samples were distributed into Plasmodium vivax, P. falciparum, and acute febrile syndrome groups for model construction. Model validation was performed using 60% of malaria cases and a composite control group of samples from AFS and healthy participants from endemic and non-endemic regions. For P. vivax, two observer-dependent models (accuracy = 95.3–96.9%), one non–observer-dependent model using built-in variables (accuracy = 94.7%), and one non–observer-dependent model using new and built-in variables (accuracy = 96.8%) were developed. For P. falciparum, two non–observer-dependent models (accuracies = 85% and 89%) were developed. These models could be used by health personnel or be integrated as a malaria alarm for the XE-2100 to prompt early malaria microscopic diagnosis. PMID:20207864
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
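The transform at the heart of this study is easy to state: replace each value with its rank, rescale ranks to (0, 1), and map them through the standard normal inverse CDF. A minimal sketch using the common 0.5 offset (the offset choice varies across software; the study's warning is about ordering, i.e. apply this to the dependent variable before regressing out covariates):

```python
from statistics import NormalDist

def rank_int(values, offset=0.5):
    """Rank-based inverse normal transformation of a sample (no tie handling)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                              # rank 1..n of each observation
    nd = NormalDist()
    return [nd.inv_cdf((r - offset) / n) for r in ranks]

skewed = [0.1, 0.2, 0.3, 0.5, 1.0, 2.5, 9.0]      # strongly right-skewed sample
z = rank_int(skewed)
print([round(v, 2) for v in z])  # → [-1.47, -0.79, -0.37, 0.0, 0.37, 0.79, 1.47]
```

However skewed the input, the output is a fixed symmetric set of normal quantiles in the input's rank order, which is why the transform guarantees marginal normality but, applied to residuals, can silently re-couple them to the covariates.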
Analysis of the labor productivity of enterprises via quantile regression
NASA Astrophysics Data System (ADS)
Türkan, Semra
2017-07-01
In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. The variable for labor productivity of enterprises is considered as the dependent variable, and the variable for assets is considered as the independent variable. The distribution of labor productivity of enterprises is right-skewed. If the dependent distribution is skewed, linear regression cannot capture important aspects of the relationship between the dependent variable and its predictors, because it models only the conditional mean. Hence, quantile regression, which allows modeling any quantile of the dependent distribution, including the median, appears to be useful. It examines whether relationships between dependent and independent variables are different for low, medium, and high percentiles. As a result of analyzing the data, the effect of total assets is relatively constant over the entire distribution, except the upper tail, where it has a moderately stronger effect.
NASA Astrophysics Data System (ADS)
Nacif el Alaoui, Reda
Mechanical structure-property relations have been quantified for AISI 4140 steel under different strain rates and temperatures. The structure-property relations were used to calibrate a microstructure-based internal state variable plasticity-damage model for monotonic tension, compression, and torsion plasticity, as well as damage evolution. Strong stress state and temperature dependences were observed for the AISI 4140 steel. Tension tests on three different notched Bridgman specimens were undertaken to study the damage-triaxiality dependence for model validation purposes. Fracture surface analysis was performed using Scanning Electron Microscopy (SEM) to quantify the void nucleation and void sizes in the different specimens. The stress-strain behavior exhibited a fairly large applied stress state dependence (tension, compression, and torsion), a moderate temperature dependence, and a relatively small strain rate dependence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
NASA Astrophysics Data System (ADS)
Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.
2012-04-01
Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, passes through collaboration and coordination of different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations appears crucial for decision makers in the water sector, in particular in developing countries, where WSS still represent important leverage for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators of environmental, socio-economic, governance and financial aid flows data focusing on developing countries (Celine et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or being influenced by, water supply and sanitation access levels. Bayesian network models are suitable to map the conditional dependencies between variables and also allow ordering variables by level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated if they comply with statistical criteria and also with scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on significant indicators of each cluster modelled previously.
The structure of the relationships between variables is set a priori according to literature and/or experience in the field (expert knowledge). The statistical validity is verified according to the error rate of classification and the significance of the variables. Sensitivity analysis has also been performed to characterise the relative influence of every single variable in the model. Once validated, the models allow the estimation of the impact of each variable on the behaviour of water supply or sanitation, providing an interesting means to test scenarios and predict variable behaviour. The choices made, the methods, and descriptions of the various models, for each cluster as well as the global model for water supply and sanitation, will be presented. Key results and interpretation of the relationships depicted by the models will be detailed during the conference.
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country’s economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a dependent variable Y in the form of discrete data and independent variables X is the Poisson regression model. Regression models used for discrete dependent variables include, among others, Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
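The model comparison above rests on the AIC, defined as 2k - 2*logL (lower is better), which penalizes extra parameters such as the dispersion parameter of the generalized Poisson model. As a minimal concrete piece, the Poisson log-likelihood and AIC of an intercept-only model whose rate is the sample mean; the count data are made up:

```python
import math

def poisson_loglik(counts, lam):
    """Poisson log-likelihood: sum of -lam + y*ln(lam) - ln(y!)."""
    return sum(-lam + y * math.log(lam) - math.lgamma(y + 1) for y in counts)

counts = [2, 3, 1, 0, 4, 2, 3, 5, 1, 2]
lam = sum(counts) / len(counts)          # MLE of the rate for an intercept-only model
k = 1                                    # one fitted parameter
aic = 2 * k - 2 * poisson_loglik(counts, lam)
print(round(aic, 2))  # → 36.94
```

Comparing this value against the AIC of a richer model (more covariates, or a generalized Poisson likelihood with a dispersion parameter) is exactly the selection step the study performs; the richer model wins only if its likelihood gain outweighs the 2-per-parameter penalty.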
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study, we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this we used a 2-dimensional model of gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types have been tested: (i) averaged values for the wheel track and the undisturbed soil; (ii) random distribution of soil cells with normally distributed variability within the strata; (iii) random distribution of soil cells with uniformly distributed variability within the strata. All three types of small-scale variability have been tested both for isotropic gas diffusivity and for horizontal gas diffusivity reduced by a constant factor, yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability in the case of identical central-tendency measures of gas diffusivity. However, it did not yet consider spatial dependency of the variability, which could aggravate the effects even further.
To consider anisotropy and spatial variability in gas transport models we recommend (a) measuring soil-gas transport parameters in a spatially explicit way, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
Attentional modulation of neuronal variability in circuit models of cortex
Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent
2017-01-01
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well-suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity and decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus specific bottom-up inputs. Accounting for trial variability in models of state dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region
Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun
2015-01-01
Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in the supercooled liquid region with different deformation variables, including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and the overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and the free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690
Forecasting defoliation by the gypsy moth in oak stands
Robert W. Campbell; Joseph P. Standaert
1974-01-01
A multiple-regression model is presented that reflects statistically significant correlations between defoliation by the gypsy moth, the dependent variable, and a series of biotic and physical independent variables. Both possible uses and shortcomings of this model are discussed.
Natural variability of marine ecosystems inferred from a coupled climate to ecosystem simulation
NASA Astrophysics Data System (ADS)
Le Mézo, Priscilla; Lefort, Stelly; Séférian, Roland; Aumont, Olivier; Maury, Olivier; Murtugudde, Raghu; Bopp, Laurent
2016-01-01
This modeling study analyzes the simulated natural variability of pelagic ecosystems in the North Atlantic and North Pacific. Our model system includes a global Earth System Model (IPSL-CM5A-LR), the biogeochemical model PISCES and the ecosystem model APECOSM that simulates upper trophic level organisms using a size-based approach and three interactive pelagic communities (epipelagic, migratory and mesopelagic). Analyzing an idealized (i.e., no anthropogenic forcing) 300-yr-long pre-industrial simulation, we find that low and high frequency variability is dominant for the large and small organisms, respectively. Our model shows that the size-range exhibiting the largest variability at a given frequency, defined as the resonant range, also depends on the community. At a given frequency, the resonant range of the epipelagic community includes larger organisms than that of the migratory community and similarly, the latter includes larger organisms than the resonant range of the mesopelagic community. This study shows that the simulated temporal variability of marine pelagic organisms' abundance is not only influenced by natural climate fluctuations but also by the structure of the pelagic community. As a consequence, the size- and community-dependent response of marine ecosystems to climate variability could impact the sustainability of fisheries in a warming world.
Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM
NASA Astrophysics Data System (ADS)
Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak
2015-04-01
Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on these sub-grid scales, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and that it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
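The two-dimensional autocorrelation analysis mentioned above can be sketched with NumPy: the autocorrelation of a 2-D field falls off with spatial lag, and the decay scale gives a characteristic plume size. The smoothed random field below is an illustrative stand-in for a BC concentration map, not the study's data.

```python
import numpy as np

def autocorrelation_2d(field):
    """Normalized 2-D autocorrelation via FFT (Wiener-Khinchin theorem)."""
    f = field - field.mean()
    spec = np.fft.fft2(f)
    acf = np.fft.ifft2(spec * np.conj(spec)).real
    acf /= acf.flat[0]              # normalize so the zero-lag correlation is 1
    return np.fft.fftshift(acf)     # move zero lag to the array center

# Illustrative smooth random field standing in for a BC concentration map
rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, noise)
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, smooth)

acf = autocorrelation_2d(smooth)
# acf[32, 32] is the zero-lag value; correlation decays with increasing lag
```

In the study's setting, the lag at which this correlation drops below some threshold would define the plume scale compared across model resolutions.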
Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen
2014-01-01
It is estimated that mainland Chinese tourists travelling to Taiwan bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is a matter of considerable concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and overall satisfaction performance was used as the dependent variable in a quantile regression analysis examining the relationship between the dependent variable and the independent variables at different quantiles. Finally, this study compared the predictive accuracy of the least-squares (mean) regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that, in addition to occupation and age, other variables could also affect the overall satisfaction performance of mainland tourists. The overall predictive accuracy of the quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916
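The contrast between quantile regression fits at different quantiles can be illustrated with a minimal NumPy sketch. The simulated data and the subgradient-descent fitting routine below are illustrative assumptions, not the paper's survey data or procedure.

```python
import numpy as np

def fit_quantile_regression(X, y, q=0.25, lr=0.05, epochs=5000):
    """Linear quantile regression by subgradient descent on the pinball loss."""
    Xb = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        resid = y - Xb @ beta
        # Subgradient of the pinball loss: -q where resid > 0, (1 - q) otherwise
        grad = Xb.T @ np.where(resid > 0, -q, 1 - q) / len(y)
        beta -= lr * grad
    return beta

# Illustrative data: one predictor, with noise whose spread grows with x
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 2.0 + 0.5 * x + rng.normal(0, 1 + 0.3 * x)

b25 = fit_quantile_regression(x, y, q=0.25)
b75 = fit_quantile_regression(x, y, q=0.75)
# The q=0.75 line sits above the q=0.25 line; under heteroscedastic noise
# the two quantile fits also diverge as x grows, unlike a single mean fit
```

This is the kind of quantile-by-quantile comparison the paper performs against a mean regression baseline.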
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. 
The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
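The multiple linear regression case described in this overview, a single numerical dependent variable predicted from several numerical independent variables, can be sketched with ordinary least squares in NumPy. The data and coefficients below are simulated for illustration only.

```python
import numpy as np

# Simulated data: three numerical independent variables, one numerical outcome
rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([1.5, -2.0, 0.7])
y = 4.0 + X @ beta_true + rng.normal(0, 0.5, n)   # intercept 4.0, noise sd 0.5

# Ordinary least squares: add an intercept column and solve the normal equations
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
# beta_hat approximately recovers [4.0, 1.5, -2.0, 0.7]
```

The same design-matrix setup extends directly to ANCOVA (add a covariate column) and, with a link function, to the logistic and log-linear models discussed above.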
Estella Gilbert; James A. Powell; Jesse A. Logan; Barbara J. Bentz
2004-01-01
In all organisms, phenotypic variability is an evolutionary stipulation. Because the development of poikilothermic organisms depends directly on the temperature of their habitat, environmental variability is also an integral factor in models of their phenology. In this paper we present two existing phenology models, the distributed delay model and the Sharpe and...
Population activity statistics dissect subthreshold and spiking variability in V1.
Bányai, Mihály; Koman, Zsombor; Orbán, Gergő
2017-07-01
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. 
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
A hazard rate analysis of fertility using duration data from Malaysia.
Chang, C
1988-01-01
Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity, HETi; time-invariant variables, Yi (race, cohort, education, age at marriage); and time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic chi-square test for the equality of constant hazard rates across birth orders, allowing for time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed-effects heterogeneity and a Weibull distribution for the waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, the socioeconomic variables such as cohort, child mortality, income, and race had significant effects after controlling for the length of the preceding birth interval. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
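The Weibull waiting-time assumption implies a monotone hazard, which is why a piecewise specification is needed to capture the nonmonotonic (first increasing, then decreasing) duration dependence reported above. A minimal sketch of the Weibull hazard, with purely illustrative shape and scale parameters:

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = f(t)/S(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.5, 24, 48)                         # e.g., months of exposure
h_rising = weibull_hazard(t, shape=1.5, scale=12.0)  # shape > 1: hazard rises with duration
h_flat = weibull_hazard(t, shape=1.0, scale=12.0)    # shape = 1: constant (exponential) hazard
# No choice of shape yields a rise-then-fall hazard, motivating piecewise models
```
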
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy with very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90 nm single patterning and 32 nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1 and 3%) due to self-compensating RC effects associated with dense layouts, and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model with focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1 ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation.
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. 
By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
Spatial analysis of agri-environmental policy uptake and expenditure in Scotland.
Yang, Anastasia L; Rounsevell, Mark D A; Wilson, Ronald M; Haggett, Claire
2014-01-15
Agri-environment is one of the most widely supported rural development policy measures in Scotland in terms of number of participants and expenditure. It comprises 69 management options and sub-options that are delivered primarily through the competitive 'Rural Priorities scheme'. Understanding the spatial determinants of uptake and expenditure would assist policy-makers in guiding future policy targeting efforts for the rural environment. This study is unique in examining the spatial dependency and determinants of Scotland's agri-environmental measures and categorised options uptake and payments at the parish level. Spatial econometrics is applied to test the influence of 40 explanatory variables on farming characteristics, land capability, designated sites, accessibility and population. Results identified spatial dependency for each of the dependent variables, which supported the use of spatially-explicit models. The goodness of fit of the spatial models was better than for the aspatial regression models. There was also notable improvement in the models for participation compared with the models for expenditure. Furthermore a range of expected explanatory variables were found to be significant and varied according to the dependent variable used. The majority of models for both payment and uptake showed a significant positive relationship with SSSI (Sites of Special Scientific Interest), which are designated sites prioritised in Scottish policy. These results indicate that environmental targeting efforts by the government for AEP uptake in designated sites can be effective. However habitats outside of SSSI, termed here the 'wider countryside' may not be sufficiently competitive to receive funding in the current policy system. Copyright © 2013 Elsevier Ltd. All rights reserved.
An IRT Model with a Parameter-Driven Process for Change
ERIC Educational Resources Information Center
Rijmen, Frank; De Boeck, Paul; van der Maas, Han L. J.
2005-01-01
An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative inter-individual differences and auto-dependencies are accounted for by assuming within-subject variability with respect to the…
Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy
ERIC Educational Resources Information Center
von Eye, Alexander; Wiedermann, Wolfgang
2015-01-01
Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…
Falkenberg, A; Nyfjäll, M; Hellgren, C; Vingård, E
2012-01-01
The aim of this longitudinal study is to investigate how different aspects of social support at work and in leisure time are associated with self-rated health (SRH) and sickness absence. The 541 participants in the study were representative of a working population in the public sector in Sweden, the majority being women. Most of the variables were created from data from a questionnaire administered in March-April 2005. There were four independent variables and two dependent variables; the dependent variables were based on data from November 2006. A logistic regression model was used to analyze the associations. A separate model was fitted for each explanatory variable and each outcome, giving five models per independent variable. The study has given a greater awareness of the importance of employees receiving social support, regardless of the type of support or from whom it comes. Social support has a strong association with SRH in a longitudinal perspective, whereas no association was found between social support and sickness absence.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2017-12-01
Most existing soil carbon cycle models have modeled the moisture and temperature dependence of soil respiration using deterministic response functions. However, empirical data suggest abundant variability in both of these dependencies. We here use the recently developed SUPECA (Synthesizing Unit and Equilibrium Chemistry Approximation) theory and a published dynamic energy budget based microbial model to investigate how soil carbon decomposition responds to changes in soil moisture and temperature under the influence of organo-mineral interactions. We found that both the temperature and moisture responses are hysteretic and cannot be represented by deterministic functions. We then evaluate how the multi-scale variability in temperature and moisture forcing affects soil carbon decomposition. Our results indicate that when the model is run in scenarios mimicking laboratory incubation experiments, the often-observed temperature and moisture response functions can be well reproduced. However, when such response functions are used for model extrapolation involving more transient variability in temperature and moisture forcing (as found in real ecosystems), the dynamic model that explicitly accounts for hysteresis in the temperature and moisture dependency produces significantly different estimates of soil carbon decomposition, suggesting there are large biases in models that do not resolve such hysteresis. We call for more studies on organo-mineral interactions to improve modeling of such hysteresis.
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models carry their own assumptions and pre-defined underlying relationships between the dependent and independent variables; if these assumptions are violated, the models can yield erroneous estimates of accident likelihood. Classification and Regression Trees (CART), one of the most widely applied data mining techniques, have been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between the target (dependent) variable and the predictors (independent variables) and has been shown to be a powerful tool, particularly for prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that average daily traffic volume and precipitation were the key determinants of freeway accident frequency. By comparing the prediction performance of the CART and negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
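The core CART step, choosing the split that minimizes squared error without any pre-defined functional form, can be sketched as a one-level regression tree. The simulated traffic data and the 60-unit threshold below are illustrative assumptions, not the study's data.

```python
import numpy as np

def best_split(x, y):
    """One CART step: the threshold on x minimizing total within-node squared error."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_thresh, best_sse = None, np.inf
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_thresh, best_sse = (xs[i - 1] + xs[i]) / 2, sse
    return best_thresh

# Illustrative data: accident frequency jumps once daily traffic passes a threshold
rng = np.random.default_rng(7)
adt = rng.uniform(0, 100, 400)                           # average daily traffic (arbitrary units)
accidents = np.where(adt > 60, 8.0, 2.0) + rng.normal(0, 0.5, 400)

threshold = best_split(adt, accidents)   # recovers a split near 60
```

A full CART recursively repeats this split on each child node; no Poisson or negative binomial distributional assumption is involved.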
The Importance of Distance to Resources in the Spatial Modelling of Bat Foraging Habitat
Rainho, Ana; Palmeirim, Jorge M.
2011-01-01
Many bats are threatened by habitat loss, but opportunities to manage their habitats are now increasing. Success of management depends greatly on the capacity to determine where and how interventions should take place, so models predicting how animals use landscapes are important for planning such interventions. Bats are quite distinctive in the way they use space for foraging because (i) most are colonial central-place foragers and (ii) they exploit scattered and distant resources, although this increases flying costs. To evaluate how important distances to resources are in modelling foraging bat habitat suitability, we radio-tracked two cave-dwelling species of conservation concern (Rhinolophus mehelyi and Miniopterus schreibersii) in a Mediterranean landscape. Habitat and distance variables were evaluated using logistic regression modelling. Distance variables greatly increased the performance of models, and distance to roost and to drinking water could alone explain 86 and 73% of the use of space by M. schreibersii and R. mehelyi, respectively. Land-cover and soil productivity also provided a significant contribution to the final models. Habitat suitability maps generated by models with and without distance variables differed substantially, confirming the shortcomings of maps generated without distance variables. Indeed, areas shown as highly suitable in maps generated without distance variables proved poorly suitable when distance variables were also considered. We concluded that distances to resources are determinant in the way bats forage across the landscape, and that using distance variables substantially improves the accuracy of suitability maps generated with spatially explicit models. Consequently, modelling with these variables is important to guide habitat management in bats and similarly mobile animals, particularly if they are central-place foragers or depend on spatially scarce resources. PMID:21547076
Indicators of Dysphagia in Aged Care Facilities.
Pu, Dai; Murry, Thomas; Wong, May C M; Yiu, Edwin M L; Chan, Karen M K
2017-09-18
The current cross-sectional study aimed to investigate risk factors for dysphagia in elderly individuals in aged care facilities. A total of 878 individuals from 42 aged care facilities were recruited for this study. The dependent outcome was speech therapist-determined swallowing function. Independent factors were Eating Assessment Tool score, oral motor assessment score, Mini-Mental State Examination, medical history, and various functional status ratings. Binomial logistic regression was used to identify independent variables associated with dysphagia in this cohort. Two statistical models were constructed. Model 1 used variables from case files without the need for hands-on assessment, and Model 2 used variables that could be obtained from hands-on assessment. Variables positively associated with dysphagia identified in Model 1 were male gender, total dependence for activities of daily living, need for feeding assistance, reduced mobility (requiring assistance to walk or using a wheelchair), and history of pneumonia. Variables positively associated with dysphagia identified in Model 2 were Mini-Mental State Examination score, edentulousness, and oral motor assessment score. Cognitive function, dentition, and oral motor function are significant indicators associated with the presence of swallowing difficulties in the elderly. When assessing the frail elderly, case file information can help clinicians identify frail elderly individuals who may be suffering from dysphagia.
Variables affecting the academic and social integration of nursing students.
Zeitlin-Ophir, Iris; Melitz, Osnat; Miller, Rina; Podoshin, Pia; Mesh, Gustavo
2004-07-01
This study attempted to analyze the variables that influence the academic integration of nursing students. The theoretical model presented by Leigler was adapted to the existing conditions in a school of nursing in northern Israel. The independent variables included the student's background; amount of support received in the course of studies; extent of outside family and social commitments; satisfaction with the school's facilities and services; and level of social integration. The dependent variable was the student's level of academic integration. The findings substantiated four central hypotheses, with the study model explaining approximately 45% of the variance in the dependent variable. Academic integration is influenced by a number of variables, the most prominent of which is the social integration of the student with colleagues and educational staff. Among the background variables, country of origin was found to be significant to both social and academic integration for two main groups in the sample: Israeli-born students (both Jewish and Arab) and immigrant students.
A provisional effective evaluation when errors are present in independent variables
NASA Technical Reports Server (NTRS)
Gurin, L. S.
1983-01-01
Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast, and the estimates they yield are stable with respect to correlated errors in measurements of both the dependent variable and the independent variables.
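The abstract does not specify its algorithms, but the core difficulty it addresses is easy to demonstrate: ordinary least squares attenuates the slope toward zero when the regressor is measured with error. A sketch on synthetic data (all numbers invented), with a standard method-of-moments correction that assumes the error variance is known:

```python
import random

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Population covariance of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

random.seed(7)
n, true_slope, sigma_u = 5000, 2.0, 1.0
x_true = [random.gauss(0.0, 2.0) for _ in range(n)]
x_obs = [x + random.gauss(0.0, sigma_u) for x in x_true]   # regressor observed with error
y = [true_slope * x + random.gauss(0.0, 0.5) for x in x_true]

naive = cov(x_obs, y) / cov(x_obs, x_obs)                  # attenuated toward 0
corrected = cov(x_obs, y) / (cov(x_obs, x_obs) - sigma_u ** 2)  # errors-in-variables fix
```

With var(x_true) = 4 and error variance 1, the naive slope shrinks toward 2 × 4/5 = 1.6, while subtracting the known error variance from the denominator recovers a slope near 2.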
ERIC Educational Resources Information Center
Woolley, Kristin K.
Many researchers are unfamiliar with suppressor variables and how they operate in multiple regression analyses. This paper describes the role suppressor variables play in a multiple regression model and provides practical examples that explain how they can change research results. A variable that, when added as another predictor, increases the total…
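The suppressor effect the abstract describes can be reproduced in a few lines: a predictor that is nearly uncorrelated with the outcome, but correlated with the irrelevant part of another predictor, raises R² when added. A synthetic sketch (all data and helper names invented):

```python
import random

def ols1(x, y):
    """Simple-regression intercept and slope."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

def ols2(x1, x2, y):
    """Two-predictor least squares via centred normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a = [u - m1 for u in x1]
    b = [u - m2 for u in x2]
    c = [u - my for u in y]
    s11, s22 = sum(u * u for u in a), sum(u * u for u in b)
    s12 = sum(u * v for u, v in zip(a, b))
    s1y = sum(u * v for u, v in zip(a, c))
    s2y = sum(u * v for u, v in zip(b, c))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return my - b1 * m1 - b2 * m2, b1, b2

def r2(y, yhat):
    ybar = sum(y) / len(y)
    return 1.0 - sum((a - b) ** 2 for a, b in zip(y, yhat)) / sum((a - ybar) ** 2 for a in y)

random.seed(3)
n = 2000
t = [random.gauss(0, 1) for _ in range(n)]   # true signal
u = [random.gauss(0, 1) for _ in range(n)]   # contamination in the predictor
x1 = [a + b for a, b in zip(t, u)]           # observed predictor = signal + noise
x2 = u[:]                                    # suppressor: ~uncorrelated with y
y = [a + random.gauss(0, 0.3) for a in t]

a0, a1 = ols1(x1, y)
r2_one = r2(y, [a0 + a1 * v for v in x1])
c0, c1, c2 = ols2(x1, x2, y)
r2_two = r2(y, [c0 + c1 * p + c2 * q for p, q in zip(x1, x2)])
```

The suppressor enters with a negative weight: it "subtracts out" the noise in x1, so R² jumps even though x2 alone predicts nothing about y.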
NASA Astrophysics Data System (ADS)
Guo, A.; Wang, Y.
2017-12-01
Investigating variability in dependence structures of hydrological processes is of critical importance for developing an understanding of mechanisms of hydrological cycles in changing environments. In focusing on this topic, the present work involves the following: (1) identifying and eliminating serial correlation and conditional heteroscedasticity in monthly streamflow (Q), precipitation (P) and potential evapotranspiration (PE) series using the ARMA-GARCH model (ARMA: autoregressive moving average; GARCH: generalized autoregressive conditional heteroscedasticity); (2) describing dependence structures of hydrological processes using partial copula coupled with the ARMA-GARCH model and identifying their variability via a copula-based likelihood-ratio test method; and (3) determining the conditional probability of annual Q under different climate scenarios based on the above results. This framework enables us to depict hydrological variables in the presence of conditional heteroscedasticity and to examine dependence structures of hydrological processes while excluding the influence of covariates by using the partial copula-based ARMA-GARCH model. Eight major catchments across the Loess Plateau (LP) are used as study regions. Results indicate that (1) the occurrence of change points in dependence structures of Q and P (PE) varies across the LP; (2) change points of P-PE dependence structures in all regions almost fully correspond to the initiation of global warming, i.e., the early 1980s; and (3) conditional probabilities of annual Q under various P and PE scenarios are estimated from the 3-dimensional joint distribution of (Q, P and PE) based on the above change points. These findings shed light on mechanisms of the hydrological cycle and can guide water supply planning and management, particularly in changing environments.
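The partial-copula and likelihood-ratio machinery above is too involved for a short sketch, but the underlying notion of a change in dependence structure between two hydrological series can be illustrated by comparing a rank-dependence measure across two time windows (toy data; the `kendall_tau` helper and the simulated "flip" in dependence are invented, not the authors' test):

```python
import random

def kendall_tau(x, y):
    """O(n^2) Kendall rank correlation (no tie correction; adequate for continuous data)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return 2.0 * s / (n * (n - 1))

random.seed(5)
# Two invented "windows" of a P-Q record whose dependence structure changes.
p1 = [random.gauss(0, 1) for _ in range(80)]
q1 = [v + random.gauss(0, 0.3) for v in p1]      # positive dependence before the change
p2 = [random.gauss(0, 1) for _ in range(80)]
q2 = [-v + random.gauss(0, 0.3) for v in p2]     # reversed dependence after the change

tau_before = kendall_tau(p1, q1)
tau_after = kendall_tau(p2, q2)
```

A large shift in the rank correlation between windows flags a candidate change point in the dependence structure; the paper's copula-based likelihood-ratio test formalizes this idea while also removing covariate effects.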
Dripps, W.R.; Bradbury, K.R.
2010-01-01
Recharge varies spatially and temporally as it depends on a wide variety of factors (e.g. vegetation, precipitation, climate, topography, geology, and soil type), making it one of the most difficult, complex, and uncertain hydrologic parameters to quantify. Despite its inherent variability, groundwater modellers, planners, and policy makers often ignore recharge variability and assume a single average recharge value for an entire watershed. Relatively few attempts have been made to quantify or incorporate spatial and temporal recharge variability into water resource planning or groundwater modelling efforts. In this study, a simple, daily soil-water balance model was developed and used to estimate the spatial and temporal distribution of groundwater recharge of the Trout Lake basin of northern Wisconsin for 1996-2000 as a means to quantify recharge variability. For the 5 years of study, annual recharge varied spatially by as much as 18 cm across the basin; vegetation was the predominant control on this variability. Recharge also varied temporally with a threefold annual difference over the 5-year period. Intra-annually, recharge was limited to a few isolated events each year and exhibited a distinct seasonal pattern. The results suggest that ignoring recharge variability may not only be inappropriate, but also, depending on the application, may invalidate model results and predictions for regional and local water budget calculations, water resource management, nutrient cycling, and contaminant transport studies. Recharge is spatially and temporally variable, and should be modelled as such. Copyright © 2009 John Wiley & Sons, Ltd.
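The daily soil-water balance the study describes can be sketched as a simple "bucket" model: water above field capacity after rain and evapotranspiration drains to groundwater as recharge. This is a generic illustration with invented numbers, not the authors' calibrated model:

```python
def daily_recharge(precip, pet, capacity, soil0=0.0):
    """Daily soil-water bucket: water above field capacity after rain and
    evapotranspiration drains to groundwater as recharge (all depths in cm)."""
    soil, recharge = soil0, []
    for p, e in zip(precip, pet):
        soil = max(0.0, soil + p - e)   # ET cannot draw the store below zero
        r = max(0.0, soil - capacity)   # excess percolates past the root zone
        soil -= r
        recharge.append(r)
    return recharge

# Invented 5-day series (cm): only the wet days push the 5 cm store past capacity.
rech = daily_recharge(precip=[0.0, 6.0, 1.0, 0.0, 4.0],
                      pet=[0.2, 0.3, 0.2, 0.4, 0.3],
                      capacity=5.0)
```

Recharge here is confined to a few isolated wet-day events, consistent with the intra-annual pattern the abstract reports; spatial variability would enter through per-cell `capacity` and `pet` values (e.g. by vegetation type).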
Gravity dependence of subjective visual vertical variability.
Tarnutzer, A A; Bockisch, C; Straumann, D; Olasagasti, I
2009-09-01
The brain integrates sensory input from the otolith organs, the semicircular canals, and the somatosensory and visual systems to determine self-orientation relative to gravity. Only the otoliths directly sense the gravito-inertial force vector and therefore provide the major input for perceiving static head-roll relative to gravity, as measured by the subjective visual vertical (SVV). Intraindividual SVV variability increases with head roll, which suggests that the effectiveness of the otolith signal is roll-angle dependent. We asked whether SVV variability reflects the spatial distribution of the otolithic sensors and the otolith-derived acceleration estimate. Subjects were placed in different roll orientations (0-360 degrees, 15 degrees steps) and asked to align an arrow with perceived vertical. Variability was minimal in upright, increased with head-roll peaking around 120-135 degrees, and decreased to intermediate values at 180 degrees. Otolith-dependent variability was modeled by taking into consideration the nonuniform distribution of the otolith afferents and their nonlinear firing rate. The otolith-derived estimate was combined with an internal bias shifting the estimated gravity-vector toward the body-longitudinal. Assuming an efficient otolith estimator at all roll angles, peak variability of the model matched our data; however, modeled variability in upside-down and upright positions was very similar, which is at odds with our findings. By decreasing the effectiveness of the otolith estimator with increasing roll, simulated variability matched our experimental findings better. We suggest that modulations of SVV precision in the roll plane are related to the properties of the otolith sensors and to central computational mechanisms that are not optimally tuned for roll-angles distant from upright.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
NASA Astrophysics Data System (ADS)
Bhattacharyya, Sidhakam; Bandyopadhyay, Gautam
2010-10-01
The council of most of the Urban Local Bodies (ULBs) has a limited scope for decision making in the absence of an appropriate financial control mechanism. Information about the expected amount of own fund during a particular period is of great importance for decision making. Therefore, in this paper, efforts are made to present a set of findings and to establish a model for estimating receipts of own sources and payments thereof using multiple regression analysis. Data for sixty months from a reputed ULB in West Bengal have been considered for ascertaining the regression models. This can be used as a part of the financial management and control procedure by the council to estimate the effect on own fund. In our study we have considered two models using multiple regression analysis. "Model I" comprises total adjusted receipts as the dependent variable and selected individual receipts as the independent variables. Similarly, "Model II" consists of total adjusted payments as the dependent variable and selected individual payments as independent variables. The combination of Model I and Model II yields the surplus or deficit affecting own fund. This may be applied for decision-making purposes by the council.
Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua
2017-01-15
The advent of high-throughput DNA methylation profiling techniques has enabled the possibility of accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole genome methylation study, yet posing great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take into account the local dependency for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm CONTACT: Xuan@vt.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
Missel, P J
2000-01-01
Four methods are proposed for modeling diffusion in heterogeneous media where diffusion and partition coefficients take on differing values in each subregion. The exercise was conducted to validate finite element modeling (FEM) procedures in anticipation of modeling drug diffusion with regional partitioning into ocular tissue, though the approach can be useful for other organs, or for modeling diffusion in laminate devices. Partitioning creates a discontinuous value in the dependent variable (concentration) at an intertissue boundary that is not easily handled by available general-purpose FEM codes, which allow for only one value at each node. The discontinuity is handled using a transformation on the dependent variable based upon the region-specific partition coefficient. Methods were evaluated by their ability to reproduce a known exact result, for the problem of the infinite composite medium (Crank, J. The Mathematics of Diffusion, 2nd ed. New York: Oxford University Press, 1975, pp. 38-39.). The most physically intuitive method is based upon the concept of chemical potential, which is continuous across an interphase boundary (method III). This method makes the equation of the dependent variable highly nonlinear. This can be linearized easily by a change of variables (method IV). Results are also given for a one-dimensional problem simulating bolus injection into the vitreous, predicting time disposition of drug in vitreous and retina.
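The chemical-potential idea behind the paper's method III has a closed form in the simplest geometry: steady 1-D diffusion through two layers with a partition coefficient K at the interface, so the concentration jumps (C_B = K·C_A) while the flux is continuous. A sketch of that analytic case (variable names invented; this is the benchmark-style setup, not the paper's FEM procedures):

```python
def composite_flux(C0, CL, Da, La, Db, Lb, K):
    """Steady 1-D diffusion through layers A|B with partitioning at the
    interface (C_B = K * C_A there) and continuity of flux.
    Solving Da/La*(C0-Ca) = Db/Lb*(K*Ca - CL) for the interface value Ca."""
    ga, gb = Da / La, Db / Lb              # layer conductances
    Ca = (ga * C0 + gb * CL) / (ga + gb * K)
    Cb = K * Ca                            # discontinuous dependent variable
    Ja = ga * (C0 - Ca)                    # flux through layer A
    Jb = gb * (Cb - CL)                    # flux through layer B
    return Ja, Jb, Ca, Cb

Ja, Jb, Ca, Cb = composite_flux(C0=1.0, CL=0.0, Da=1.0, La=1.0,
                                Db=2.0, Lb=1.0, K=3.0)
```

The two layer fluxes agree exactly and the interface concentrations satisfy the partition condition, which is the kind of exact composite-medium result the paper uses to validate its FEM transformations.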
Scheidegger, Stephan; Fuchs, Hans U; Zaugg, Kathrin; Bodis, Stephan; Füchslin, Rudolf M
2013-01-01
In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model intuitively describes the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose-rate dependencies are described correctly. The model covers the time-gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
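The hit-chain idea can be sketched as an expected-value update over populations indexed by hit count: each step, a fraction of cells collects a hit (moving down the chain) and a fraction repairs one (moving up), with a lethal absorbing state at the end. The transition probabilities and chain length below are invented placeholders, not the paper's calibrated state-variable dynamics:

```python
def evolve_hit_chain(n_steps, hit_p, repair_p, max_hits=5):
    """Expected-value update of a hit-chain model: pop[k] is the fraction of
    cells carrying k radiation-induced damages; k = max_hits is lethal/absorbing.
    Requires hit_p + repair_p <= 1."""
    pop = [1.0] + [0.0] * max_hits
    for _ in range(n_steps):
        new = [0.0] * (max_hits + 1)
        for k in range(max_hits + 1):
            if k == max_hits:
                new[k] += pop[k]                 # no repair out of the lethal state
                continue
            stay = 1.0 - hit_p - (repair_p if k > 0 else 0.0)
            new[k] += pop[k] * stay
            new[k + 1] += pop[k] * hit_p         # collect one more hit
            if k > 0:
                new[k - 1] += pop[k] * repair_p  # repair one hit
        pop = new
    return pop

pop = evolve_hit_chain(n_steps=50, hit_p=0.2, repair_p=0.3)
survival = sum(pop[:-1])   # fraction of cells not in the lethal state
```

Heat-radiation synergy would enter by making `repair_p` a function of the paper's repair-protein state variables; here it is held constant for clarity.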
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. The paper also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling buildings' energy performance. The European Directive EPBD obliges the implementation of an energy certification procedure which gives an insight into buildings' energy performance via existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper are developed by employing data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper, where the majority of buildings have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificates database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings' energy performance and variables from the database by using statistical dependence tests. Building energy performance in the database is presented with a building energy efficiency rate (from A+ to G), which is based on specific annual energy needs for heating for referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. Research results presented in this paper give an insight into the possibilities of methods used for modelling buildings' energy performance. Further, the paper gives an analysis of dependencies between building energy performance as a dependent variable and the independent variables from the database. The presented results could be used for the development of a new building energy performance predictive model.
A viscoelastic higher-order beam finite element
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Tressler, Alexander
1996-01-01
A viscoelastic internal variable constitutive theory is applied to a higher-order elastic beam theory and finite element formulation. The behavior of the viscous material in the beam is approximately modeled as a Maxwell solid. The finite element formulation requires additional sets of nodal variables for each relaxation time constant needed by the Maxwell solid. Recent developments in modeling viscoelastic material behavior with strain variables that are conjugate to the elastic strain measures are combined with advances in modeling through-the-thickness stresses and strains in thick beams. The result is a viscous thick-beam finite element that possesses superior characteristics for transient analysis since its nodal viscous forces are not linearly dependent on the nodal velocities, which is the case when damping matrices are used. Instead, the nodal viscous forces are directly dependent on the material's relaxation spectrum and the history of the nodal variables through a differential form of the constitutive law for a Maxwell solid. The thick beam quasistatic analysis is explored herein as a first step towards developing more complex viscoelastic models for thick plates and shells, and for dynamic analyses. The internal variable constitutive theory is derived directly from the Boltzmann superposition theorem. The mechanical strains and the conjugate internal strains are shown to be related through a system of first-order, ordinary differential equations. The total time-dependent stress is the superposition of its elastic and viscous components. Equations of motion for the solid are derived from the virtual work principle using the total time-dependent stress. Numerical examples for the problems of relaxation, creep, and cyclic creep are carried out for a beam made from an orthotropic Maxwell solid.
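The internal-variable form of the Maxwell constitutive law reduces, for a single relaxation time and constant strain, to one first-order ODE. A scalar sketch (not the beam finite element itself; E, tau and the strain level are invented): the internal strain q evolves as dq/dt = (e - q)/tau, and the stress sigma = E·(e - q) relaxes exponentially.

```python
import math

def relax(E=1.0, tau=2.0, strain=0.01, t_end=10.0, dt=0.001):
    """Stress relaxation of a Maxwell solid via its internal-variable ODE:
    dq/dt = (e - q)/tau,  sigma = E*(e - q), with strain e held constant."""
    q, t, hist = 0.0, 0.0, []
    while t < t_end:
        q += dt * (strain - q) / tau     # explicit Euler step for the internal strain
        t += dt
        hist.append((t, E * (strain - q)))
    return hist

hist = relax()
sigma_end = hist[-1][1]
# analytic relaxation for comparison: sigma(t) = E * e * exp(-t / tau)
```

The numerical trace decays monotonically and matches the analytic exponential closely, illustrating how history-dependent viscous forces arise from the ODE rather than from a damping matrix.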
How Robust Is Linear Regression with Dummy Variables?
ERIC Educational Resources Information Center
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
Review of Factors, Methods, and Outcome Definition in Designing Opioid Abuse Predictive Models.
Alzeer, Abdullah H; Jones, Josette; Bair, Matthew J
2018-05-01
Several opioid risk assessment tools are available to prescribers to evaluate opioid analgesic abuse among patients with chronic pain. The objectives of this study are to 1) identify variables available in the literature to predict opioid abuse; 2) explore and compare methods (population, database, and analysis) used to develop statistical models that predict opioid abuse; and 3) understand how outcomes were defined in each statistical model predicting opioid abuse. The OVID database was searched for this study. The search was limited to articles written in English and published from January 1990 to April 2016. This search generated 1,409 articles. Only seven studies and nine models met our inclusion-exclusion criteria. We found nine models and identified 75 distinct variables. Three studies used administrative claims data, and four studies used electronic health record data. The majority, four out of seven articles (six out of nine models), were primarily dependent on the presence or absence of opioid abuse or dependence (ICD-9 diagnosis code) to define opioid abuse. However, two articles used a predefined list of opioid-related aberrant behaviors. We identified variables used to predict opioid abuse from electronic health records and administrative data. Medication variables are the recurrent variables in the articles reviewed (33 variables). Age and gender are the most consistent demographic variables in predicting opioid abuse. Overall, there is similarity in the sampling method and inclusion/exclusion criteria (age, number of prescriptions, follow-up period, and data analysis methods). Future research utilizing unstructured data may increase the accuracy of opioid abuse models.
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of the incidence of influenza is not readily available (for instance, owing to delays in communication with health informants) and the prediction must be constructed by correcting the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators render better estimations of the parameters associated with the regression model than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) was also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure [Formula: see text] was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
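The paper's GLS machinery is functional, but the core correction for dependent errors can be sketched in the scalar case with a Cochrane-Orcutt style iteration: fit by OLS, estimate the AR(1) coefficient of the residuals, quasi-difference, and refit. Synthetic data and all coefficients below are invented for illustration:

```python
import random

def ols_slope(x, y):
    """Least-squares slope with intercept absorbed by centring."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

random.seed(11)
n, beta, rho_true = 500, 1.5, 0.7
x = [random.gauss(0, 1) for _ in range(n)]
e = [0.0] * n
for t in range(1, n):                       # AR(1) errors
    e[t] = rho_true * e[t - 1] + random.gauss(0, 0.5)
y = [beta * xi + ei for xi, ei in zip(x, e)]

# Step 1: OLS fit, then estimate rho from lag-1 autocorrelation of residuals.
b = ols_slope(x, y)
res = [yi - b * xi for xi, yi in zip(x, y)]
rho = sum(res[t] * res[t - 1] for t in range(1, n)) / sum(r * r for r in res[:-1])

# Step 2: quasi-difference (Cochrane-Orcutt transform) and refit.
xs = [x[t] - rho * x[t - 1] for t in range(1, n)]
ys = [y[t] - rho * y[t - 1] for t in range(1, n)]
b_gls = ols_slope(xs, ys)
```

Iterating steps 1-2 until `rho` stabilizes gives the flavor of the paper's iGLS estimator; the functional version replaces the scalar regression with a functional one but keeps the same residual-dependence correction.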
Mathematical model for production of an industry focusing on worker status
NASA Astrophysics Data System (ADS)
Visalakshi, V.; kiran kumari, Sheshma
2018-04-01
Productivity improvement poses a great challenge for industry every day because of the difficulty of keeping track of, and prioritizing, the variables that have a significant impact on productivity. The variation in production depends on linguistic variables such as worker commitment, worker motivation and worker skills. Since the variables are linguistic, we propose a model which gives an appropriate estimate of an industry's production. Fuzzy models help capture the relationship between these factors and production status. The model will support industry in focusing on worker mentality to increase production.
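A minimal illustration of mapping linguistic worker variables to a production estimate is a zero-order Sugeno fuzzy system: membership functions fuzzify each 0-10 input, rules fire with the minimum of their antecedent memberships, and the output is a weighted average of rule consequents. The membership shapes, rules, and units/shift consequents below are all invented, not the paper's model:

```python
def low(x):
    """Membership in 'low' on a 0-10 linguistic scale."""
    return max(0.0, min(1.0, (10.0 - x) / 10.0))

def high(x):
    """Membership in 'high' on a 0-10 linguistic scale."""
    return max(0.0, min(1.0, x / 10.0))

def production_estimate(commitment, skill):
    """Zero-order Sugeno fuzzy model; rule consequents (units/shift) are invented."""
    rules = [
        (min(low(commitment), low(skill)), 40.0),     # both low  -> low output
        (min(low(commitment), high(skill)), 70.0),
        (min(high(commitment), low(skill)), 70.0),
        (min(high(commitment), high(skill)), 100.0),  # both high -> high output
    ]
    wsum = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / wsum if wsum else 70.0
```

Crisp inputs blend the rules smoothly, so the estimated production rises monotonically with commitment and skill, which is the qualitative behavior a linguistic-variable model is meant to deliver.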
A continuum state variable theory to model the size-dependent surface energy of nanostructures.
Jamshidian, Mostafa; Thamburaja, Prakash; Rabczuk, Timon
2015-10-14
We propose a continuum-based state variable theory to quantify the excess surface free energy density throughout a nanostructure. The size-dependent effect exhibited by nanoplates and spherical nanoparticles, i.e. the reduction of surface energy with decreasing nanostructure size, is well captured by our continuum state variable theory. Our constitutive theory is also able to predict the decreasing energetic difference between the surface and interior (bulk) portions of a nanostructure with decreasing nanostructure size.
Density dependence in demography and dispersal generates fluctuating invasion speeds
Li, Bingtuan; Miller, Tom E. X.
2017-01-01
Density dependence plays an important role in population regulation and is known to generate temporal fluctuations in population density. However, the ways in which density dependence affects spatial population processes, such as species invasions, are less understood. Although classical ecological theory suggests that invasions should advance at a constant speed, empirical work is illuminating the highly variable nature of biological invasions, which often exhibit nonconstant spreading speeds, even in simple, controlled settings. Here, we explore endogenous density dependence as a mechanism for inducing variability in biological invasions with a set of population models that incorporate density dependence in demographic and dispersal parameters. We show that density dependence in demography at low population densities—i.e., an Allee effect—combined with spatiotemporal variability in population density behind the invasion front can produce fluctuations in spreading speed. The density fluctuations behind the front can arise from either overcompensatory population growth or density-dependent dispersal, both of which are common in nature. Our results show that simple rules can generate complex spread dynamics and highlight a source of variability in biological invasions that may aid in ecological forecasting. PMID:28442569
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
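The mutual-information measure at the heart of this clustering approach can be sketched with a plug-in estimator over binned data; unlike Pearson correlation, it detects nonlinear interdependence (here, y = x² on a symmetric grid, whose linear correlation with x is near zero). Bin count and data are invented for illustration:

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=4):
    """Plug-in mutual-information estimate (nats) after equal-width binning."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(bins - 1, int((u - lo) / w)) for u in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

xs = [i / 100.0 - 1.0 for i in range(200)]   # grid on [-1, 1)
ys = [v * v for v in xs]                     # nonlinear, near-zero linear correlation
mi_dep = mutual_information(xs, ys)

random.seed(9)
mi_indep = mutual_information([random.random() for _ in range(200)],
                              [random.random() for _ in range(200)])
```

The dependent pair scores a clearly positive mutual information while the independent pair stays near zero (up to small-sample bias), which is what lets an information-theoretic clustering group nonlinearly related variables that correlation-based methods would miss.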
Harris, Katherine M.; Koenig, Harold G.; Han, Xiaotong; Sullivan, Greer; Mattox, Rhonda; Tang, Lingqi
2009-01-01
Objective The negative association between religiosity (religious beliefs and church attendance) and the likelihood of substance use disorders is well established, but the mechanism(s) remain poorly understood. We investigated whether this association was mediated by social support or mental health status. Method We utilized cross-sectional data from the 2002 National Survey on Drug Use and Health (n = 36,370). We first used logistic regression to regress any alcohol use in the past year on sociodemographic and religiosity variables. Then, among individuals who drank in the past year, we regressed past year alcohol abuse/dependence on sociodemographic and religiosity variables. To investigate whether social support mediated the association between religiosity and alcohol use and alcohol abuse/dependence we repeated the above models, adding the social support variables. To the extent that these added predictors modified the magnitude of the effect of the religiosity variables, we interpreted social support as a possible mediator. We also formally tested for mediation using path analysis. We investigated the possible mediating role of mental health status analogously. Parallel sets of analyses were conducted for any drug use, and drug abuse/dependence among those using any drugs as the dependent variables. Results The addition of social support and mental health status variables to logistic regression models had little effect on the magnitude of the religiosity coefficients in any of the models. While some of the tests of mediation were significant in the path analyses, the results were not always in the expected direction, and the magnitude of the effects was small. Conclusions The association between religiosity and decreased likelihood of a substance use disorder does not appear to be substantively mediated by either social support or mental health status. PMID:19714282
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
ERIC Educational Resources Information Center
Kessler, Lawrence M.
2013-01-01
In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifetime unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are considered.
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
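The linear-logistic relation described above is short enough to sketch directly. The coefficients below are invented for illustration (warmer water raising capture probability); the Horvitz-Thompson-style abundance estimate at the end is a standard companion computation for covariate-dependent capture probabilities, not necessarily the paper's maximum likelihood estimator:

```python
import math

def capture_prob(x, beta0, beta1):
    """Linear-logistic binary regression: P(capture) = 1/(1 + exp(-(b0 + b1*x)))."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# Hypothetical coefficients: capture probability rises with water temperature
B0, B1 = -3.0, 0.15
for temp in (5.0, 15.0, 25.0):
    print(temp, round(capture_prob(temp, B0, B1), 3))

# Weight each captured animal by the inverse of its covariate-dependent
# capture probability to estimate abundance (illustrative only)
caught_temps = [12.0, 18.0, 22.0]
N_hat = sum(1.0 / capture_prob(t, B0, B1) for t in caught_temps)
print(round(N_hat, 2))
```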
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue
2010-03-01
To optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time, and solvent fold; the dependent variable was the extraction rate of total saponins in Aralia elata. Linear or non-linear mathematical models were used to estimate the relationship between independent and dependent variables. Response surface methodology was used to optimize the extraction process. The prediction was carried out by comparing the observed and predicted values. The regression coefficient of the fitted second-order model was as high as 0.9617; the optimum conditions of the extraction process were 70% ethanol, 2.5 hours of reflux, 20-fold solvent, and 3 extractions. The bias between observed and predicted values was -2.41%. This shows the optimum model is highly predictive.
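The optimization step of response surface methodology amounts to maximizing a fitted second-order polynomial over the experimental region. A sketch with invented coefficients (the abstract reports only the fit quality and the optimum, not the fitted equation), where a grid search stands in for the analytic optimization:

```python
# Hypothetical second-order response surface for total-saponin yield;
# all coefficients are invented for illustration
def predicted_yield(ethanol_pct, reflux_h, solvent_fold):
    return (1.4 * ethanol_pct - 0.01 * ethanol_pct ** 2
            + 20.0 * reflux_h - 4.0 * reflux_h ** 2
            + 1.5 * solvent_fold - 0.04 * solvent_fold ** 2)

# Grid search over the experimental region in place of the RSM optimization
grid = [(e, r, f) for e in range(50, 96, 5)
        for r in (1.0, 1.5, 2.0, 2.5, 3.0)
        for f in (10, 15, 20, 25)]
best = max(grid, key=lambda p: predicted_yield(*p))
print(best)  # with these invented coefficients: (70, 2.5, 20)
```

The located optimum (70% ethanol, 2.5 h reflux, 20-fold solvent) mirrors the abstract's reported conditions only because the toy coefficients were chosen to peak there.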
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decision-making performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum-fuel situation. Seven-point Likert-type scales were used to rate variables on the basis of a model of crew coordination and decision-making. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component varimax factor analysis supported the model structure suggested by the regression analyses.
Modeling of laser transmission contour welding process using FEA and DoE
NASA Astrophysics Data System (ADS)
Acherjee, Bappa; Kuar, Arunanshu S.; Mitra, Souren; Misra, Dipten
2012-07-01
In this research, a systematic investigation on laser transmission contour welding process is carried out using finite element analysis (FEA) and design of experiments (DoE) techniques. First of all, a three-dimensional thermal model is developed to simulate the laser transmission contour welding process with a moving heat source. The commercial finite element code ANSYS® multi-physics is used to obtain the numerical results by implementing a volumetric Gaussian heat source, and combined convection-radiation boundary conditions. Design of experiments together with regression analysis is then employed to plan the experiments and to develop mathematical models based on simulation results. Four key process parameters, namely power, welding speed, beam diameter, and carbon black content in absorbing polymer, are considered as independent variables, while maximum temperature at weld interface, weld width, and weld depths in transparent and absorbing polymers are considered as dependent variables. Sensitivity analysis is performed to determine how different values of an independent variable affect a particular dependent variable.
A Better Lemon Squeezer? Maximum-Likelihood Regression with Beta-Distributed Dependent Variables
ERIC Educational Resources Information Center
Smithson, Michael; Verkuilen, Jay
2006-01-01
Uncorrectable skew and heteroscedasticity are among the "lemons" of psychological data, yet many important variables naturally exhibit these properties. For scales with a lower and upper bound, a suitable candidate for models is the beta distribution, which is very flexible and models skew quite well. The authors present…
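The likelihood machinery behind such models can be sketched: a beta density parameterized by mean and precision, with a logit link for the mean, in the spirit of Smithson and Verkuilen's approach. The toy data and parameter values below are invented; a real fit would minimize this negative log-likelihood with a numerical optimizer:

```python
import math

def beta_logpdf(y, mu, phi):
    """Log-density of a Beta with mean mu and precision phi (a = mu*phi, b = (1-mu)*phi)."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

def logit_inv(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def negloglik(params, xs, ys):
    """Negative log-likelihood of a one-predictor beta regression, logit link."""
    b0, b1, log_phi = params
    phi = math.exp(log_phi)  # log-parameterization keeps precision positive
    return -sum(beta_logpdf(y, logit_inv(b0 + b1 * x), phi) for x, y in zip(xs, ys))

# Invented bounded-scale observations on (0, 1)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.30, 0.42, 0.55, 0.61, 0.70]
good = negloglik((-0.8, 0.8, math.log(20.0)), xs, ys)  # parameters near the trend
bad = negloglik((0.0, 0.0, math.log(2.0)), xs, ys)     # flat mean, low precision
print(good, bad)
```

A maximum-likelihood fit would drive the first value lower still; the comparison just shows the likelihood preferring parameters that track the bounded response.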
NASA Astrophysics Data System (ADS)
Gemitzi, Alexandra; Stefanopoulos, Kyriakos
2011-06-01
Groundwaters and their dependent ecosystems are affected both by meteorological conditions and by human interventions, mainly in the form of groundwater abstractions for irrigation needs. This work aims at investigating the quantitative effects of meteorological conditions and human intervention on groundwater resources and their dependent ecosystems. Various seasonal Auto-Regressive Integrated Moving Average (ARIMA) models with external predictor variables were used in order to model the influence of meteorological conditions and human intervention on the groundwater level time series. Initially, a seasonal ARIMA model that simulates the abstraction time series using temperature (T) as an external predictor variable was prepared. Thereafter, seasonal ARIMA models were developed in order to simulate groundwater level time series in 8 monitoring locations, using the appropriate predictor variables determined for each individual case. The spatial component was introduced through the use of Geographical Information Systems (GIS). Application of the proposed methodology took place in the Neon Sidirochorion alluvial aquifer (Northern Greece), for which a 7-year time series (2003-2010) of piezometric and groundwater abstraction data exists. According to the developed ARIMA models, three distinct groups of groundwater level time series exist: the first proves to be dependent only on the meteorological parameters, the second demonstrates a mixed dependence both on meteorological conditions and on human intervention, whereas the third shows a clear influence from human intervention. Moreover, there is evidence that groundwater abstraction has affected an important protected ecosystem.
ERIC Educational Resources Information Center
Sugiharto
2015-01-01
The aims of this research were to determine the effect of cooperative learning model and learning styles on learning result. This quasi-experimental study employed a 2x2 treatment by level, involved independent variables, i.e. cooperative learning model and learning styles, and learning result as the dependent variable. Findings signify that: (1)…
Partner dependence and sexual risk behavior among STI clinic patients.
Senn, Theresa E; Carey, Michael P; Vanable, Peter A; Coury-Doniger, Patricia
2010-01-01
To investigate the relation between partner dependence and sexual risk behavior in the context of the information-motivation-behavioral skills (IMB) model. STI clinic patients (n = 1432) completed a computerized interview assessing partner dependence, condom use, and IMB variables. Men had higher partner-dependence scores than women did. Patients reporting greater dependence reported less condom use. Gender did not moderate the partner dependence-condom-use relationship. Partner dependence did not moderate the relation between IMB constructs and condom use. Further research is needed to determine how partner dependence can be incorporated into conceptual models of safer sex behaviors.
NASA Astrophysics Data System (ADS)
Werner, Micha; Blyth, Eleanor; Schellekens, Jaap
2016-04-01
Global hydrological and land-surface models are becoming increasingly available, and as their resolution and their representation of hydrological processes improve, so does their potential. These models offer consistent datasets at the global scale, which can be used to establish water balances and derive policy-relevant indicators in medium to large basins, including those that are poorly gauged. However, differences in model structure, model parameterisation, and model forcing may result in quite different indicator values being derived, depending on the model used. In this paper we explore indicators developed using four land surface models (LSM) and five global hydrological models (GHM). Results from these models have been made available through the Earth2Observe project, a recent research initiative funded by the European Union 7th Research Framework. All models have a resolution of 0.5 arc degrees and are forced using the same WATCH-ERA-Interim (WFDEI) meteorological re-analysis data at a daily time step for the 32-year period from 1979 to 2012. We explore three water resources indicators: an aridity index, a simplified water exploitation index, and an indicator that calculates the frequency of occurrence of root-zone stress. We compare indicators derived over selected areas/basins in Europe, Colombia, Southern Africa, the Indian Subcontinent, and Australia/New Zealand. The hydrological fluxes calculated show quite significant differences between the nine models, despite the common forcing dataset, with these differences reflected in the indicators subsequently derived. The results show that the variability between models is related to the different climate types, with that variability quite logically depending largely on the availability of water.
Patterns are also found in the type of models that dominate different parts of the distribution of the indicator values, with LSM models providing lower values and GHM models providing higher values in some climates, and vice versa in others. How important this variability is in supporting a policy decision depends largely on how decision thresholds are set. For example, in the case of the aridity index, with areas denoted as arid at an index of 0.6 or above, we show that the variability is primarily of interest in transitional climates, such as the Mediterranean. The analysis shows that while both LSMs and GHMs provide useful data, indices derived to support water resources management planning may differ substantially, depending on the model used. The analysis also identifies the climates in which improvements to the models are particularly relevant to support the confidence with which decisions can be taken based on derived indicators.
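The threshold-sensitivity point can be made concrete. A sketch assuming one possible aridity-index convention in which larger values mean drier conditions (the abstract does not give the exact formula, so AI = 1 - P/PET here is an assumption), with invented annual fluxes for two models over the same Mediterranean grid cell:

```python
def aridity_index(precip_mm, pet_mm):
    """One possible convention: AI = 1 - P/PET, larger = drier.
    This is an assumption for illustration, not the Earth2Observe definition."""
    return 1.0 - precip_mm / pet_mm

def classify(ai, threshold=0.6):
    return "arid" if ai >= threshold else "non-arid"

# Two models disagreeing slightly on precipitation straddle the threshold
for name, p, pet in [("model_A", 420.0, 1000.0), ("model_B", 380.0, 1000.0)]:
    ai = aridity_index(p, pet)
    print(name, round(ai, 2), classify(ai))
```

A 40 mm difference in simulated precipitation, small relative to the inter-model spread the abstract reports, flips the classification, which is exactly why transitional climates are where the variability matters.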
Factors influencing crime rates: an econometric analysis approach
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thomopoulos, Stelios C. A.
2016-05-01
The scope of the present study is to research the dynamics that determine the commission of crimes in US society. Our study is part of a model we are developing to understand urban crime dynamics and to enhance citizens' "perception of security" in large urban environments. The main targets of our research are to highlight the dependence of crime rates on certain social and economic factors and on basic elements of state anticrime policies. In conducting our research, we use as guides previous relevant studies on crime dependence that have been performed with similar quantitative analyses in mind, regarding the dependence of crime on certain social and economic factors using statistics and econometric modelling. Our first approach consists of conceptual state space dynamic cross-sectional econometric models that incorporate a feedback loop that describes crime as a feedback process. In order to define the model variables dynamically, we use statistical analysis on crime records and on records about social and economic conditions and policing characteristics (like police force and policing results - crime arrests), to determine their influence as independent variables on crime, the dependent variable of our model. The econometric models we apply in this first approach are an exponential log-linear model and a logit model. In a second approach, we try to study the evolution of violent crime through time in the US, independently as an autonomous social phenomenon, using autoregressive and moving average time-series econometric models. Our findings show that there are certain social and economic characteristics that affect the formation of crime rates in the US, either positively or negatively. Furthermore, the results of our time-series econometric modelling show that violent crime, viewed solely and independently as a social phenomenon, correlates with previous years' crime rates and depends on the social and economic environment's conditions during previous years.
Free oscillations in a climate model with ice-sheet dynamics
NASA Technical Reports Server (NTRS)
Kallen, E.; Crafoord, C.; Ghil, M.
1979-01-01
A study of stable periodic solutions to a simple nonlinear model of the ocean-atmosphere-ice system is presented. The model has two dependent variables: ocean-atmosphere temperature and latitudinal extent of the ice cover. No explicit dependence on latitude is considered in the model. Hence all variables depend only on time, and the model consists of a coupled set of nonlinear ordinary differential equations. The globally averaged ocean-atmosphere temperature in the model is governed by the radiation balance. The reflectivity to incoming solar radiation, i.e., the planetary albedo, includes separate contributions from sea ice and from continental ice sheets. The major physical mechanisms active in the model are (1) albedo-temperature feedback, (2) continental ice-sheet dynamics, and (3) precipitation-rate variations. The model has three equilibrium solutions, two of which are linearly unstable, while one is linearly stable. For some choices of parameters, the stability picture changes and sustained, finite-amplitude oscillations arise around the previously stable equilibrium solution. The physical interpretation of these oscillations points to the possibility of internal mechanisms playing a role in glaciation cycles.
Wiedermann, Wolfgang; Li, Xintong
2018-04-16
In nonexperimental data, at least three possible explanations exist for the association of two variables x and y: (1) x is the cause of y, (2) y is the cause of x, or (3) an unmeasured confounder is present. Statistical tests that identify which of the three explanatory models fits best would be a useful adjunct to the use of theory alone. The present article introduces one such statistical method, direction dependence analysis (DDA), which assesses the relative plausibility of the three explanatory models on the basis of higher-moment information about the variables (i.e., skewness and kurtosis). DDA involves the evaluation of three properties of the data: (1) the observed distributions of the variables, (2) the residual distributions of the competing models, and (3) the independence properties of the predictors and residuals of the competing models. When the observed variables are nonnormally distributed, we show that DDA components can be used to uniquely identify each explanatory model. Statistical inference methods for model selection are presented, and macros to implement DDA in SPSS are provided. An empirical example is given to illustrate the approach. Conceptual and empirical considerations are discussed for best-practice applications in psychological data, and sample size recommendations based on previous simulation studies are provided.
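One ingredient of DDA, the comparison of higher moments of the observed variables, is easy to illustrate: when a nonnormal cause is mixed with normal noise, the effect's skewness is attenuated relative to the cause's. A simulated sketch with invented effect sizes, not the article's full inference procedure:

```python
import random
import statistics

def skewness(v):
    """Third standardized moment."""
    m = statistics.fmean(v)
    s = statistics.pstdev(v)
    n = len(v)
    return sum((x - m) ** 3 for x in v) / (n * s ** 3)

random.seed(1)
n = 20000
x = [random.expovariate(1.0) for _ in range(n)]   # skewed cause
y = [0.7 * xi + random.gauss(0, 1) for xi in x]   # effect = cause + normal error

# The effect inherits skew from the cause but diluted by the normal error,
# so under the model x -> y we expect skew(x) > skew(y)
print(skewness(x), skewness(y))
```

Reversing the roles (treating y as the cause of x) would predict the opposite ordering, which is the kind of asymmetry DDA exploits for model selection.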
A Bayesian approach for convex combination of two Gumbel-Barnett copulas
NASA Astrophysics Data System (ADS)
Fernández, M.; González-López, V. A.
2013-10-01
In this paper it was applied a new Bayesian approach to model the dependence between two variables of interest in public policy: "Gonorrhea Rates per 100,000 Population" and "400% Federal Poverty Level and over" with a small number of paired observations (one pair for each U.S. state). We use a mixture of Gumbel-Barnett copulas suitable to represent situations with weak and negative dependence, which is the case treated here. The methodology allows even making a prediction of the dependence between the variables from one year to another, showing whether there was any alteration in the dependence.
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. 
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
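The overdispersion issue motivating distributions beyond Poisson can be checked with a one-line diagnostic: the dispersion index (variance over mean), which equals 1 for Poisson data. The counts below are invented for illustration, not the phonemic inventory data:

```python
import statistics

# Hypothetical inventory-like counts whose variance far exceeds the mean
counts = [11, 18, 22, 25, 29, 31, 34, 38, 45, 58, 74, 95, 122, 141]
m = statistics.fmean(counts)
v = statistics.pvariance(counts)
print(m, v, v / m)  # dispersion index >> 1 signals strong overdispersion
```

A dispersion index this far above 1 is the symptom that sends GAMLSS users toward Sichel, Delaporte, or Box-Cox-family distributions instead of Poisson.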
Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation
NASA Technical Reports Server (NTRS)
Morris, A. Terry
2007-01-01
Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
The Bilinear Product Model of Hysteresis Phenomena
NASA Astrophysics Data System (ADS)
Kádár, György
1989-01-01
In ferromagnetic materials non-reversible magnetization processes are represented by rather complex hysteresis curves. The phenomenological description of such curves needs the use of multi-valued, yet unambiguous, deterministic functions. The history dependent calculation of consecutive Everett-integrals of the two-variable Preisach-function can account for the main features of hysteresis curves in uniaxial magnetic materials. The traditional Preisach model has recently been modified on the basis of population dynamics considerations, removing the non-real congruency property of the model. The Preisach-function was proposed to be a product of two factors of distinct physical significance: a magnetization dependent function taking into account the overall magnetization state of the body and a bilinear form of a single variable, magnetic field dependent, switching probability function. The most important statement of the bilinear product model is, that the switching process of individual particles is to be separated from the book-keeping procedure of their states. This empirical model of hysteresis can easily be extended to other irreversible physical processes, such as first order phase transitions.
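The book-keeping idea, tracking the states of many bistable units separately from the field that switches them, can be sketched with a handful of discrete hysterons. The thresholds are invented, and this is the generic Preisach-style construction rather than the bilinear product model itself:

```python
# Each hysteron is a bistable unit with (down, up) switching thresholds;
# magnetization is the sum of unit states and depends on field history.
hysterons = [(-1.0, 2.0), (-2.0, 1.0), (-0.5, 0.5), (-1.5, 1.5)]
state = [-1] * len(hysterons)  # start fully demagnetized downward

def apply_field(h):
    """Switch each unit whose threshold is crossed; leave the rest untouched."""
    for i, (down, up) in enumerate(hysterons):
        if h >= up:
            state[i] = +1
        elif h <= down:
            state[i] = -1
    return sum(state)

m_up = [apply_field(h) for h in (0.0, 1.0, 2.0)]   # ascending branch
m_down = [apply_field(h) for h in (1.0, 0.0)]      # descending after saturation
print(m_up, m_down)
```

On the ascending branch the field h = 1.0 leaves half the units down, while after saturation the same field leaves all units up: the magnetization at h = 1.0 is multi-valued yet fully determined by the history, which is the deterministic multi-valuedness the abstract describes.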
Pressure model of a four-way spool valve for simulating electrohydraulic control systems
NASA Technical Reports Server (NTRS)
Gebben, V. D.
1976-01-01
An equation that relates the pressure flow characteristics of hydraulic spool valves was developed. The dependent variable is valve output pressure, and the independent variables are spool position and flow. This causal form of equation is preferred in applications that simulate the effects of hydraulic line dynamics. Results from this equation are compared with those from the conventional valve equation, whose dependent variable is flow. A computer program of the valve equations includes spool stops, leakage spool clearances, and dead-zone characteristics of overlap spools.
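A sketch of such a causal, pressure-output relation: the standard square-law orifice equation rearranged so that pressure is the dependent variable, with a simple dead zone for an overlap spool. The constants and the exact dead-zone treatment are illustrative assumptions, not the report's calibrated valve model:

```python
def valve_output_pressure(x, q, p_supply=1000.0, k=0.5, overlap=0.02):
    """Causal pressure form of a square-law orifice:
    q = k * x_eff * sqrt(p_supply - p_out)  =>  p_out = p_supply - (q / (k * x_eff))**2.
    All constants are invented for illustration."""
    x_eff = max(x - overlap, 0.0)  # overlap spool: no flow area inside the dead zone
    if x_eff == 0.0:
        # A closed orifice can pass no flow; pressure is undefined for q != 0
        return 0.0 if q == 0.0 else float("nan")
    return p_supply - (q / (k * x_eff)) ** 2

print(valve_output_pressure(0.5, 1.0))  # wide open: small drop, near supply pressure
print(valve_output_pressure(0.1, 1.0))  # nearly closed: large pressure drop
```

Writing pressure as the output, as the report advocates, is what lets the valve block feed directly into hydraulic-line dynamics whose inputs are pressures.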
Qiao, Yuanhua; West, Harry H; Mannan, M Sam; Johnson, David W; Cornwell, John B
2006-03-17
Liquefied natural gas (LNG) release, spread, evaporation, and dispersion processes are illustrated using the Federal Energy Regulatory Commission models in this paper. The spillage consequences are dependent upon the tank conditions, release scenarios, and the environmental conditions. The effects of the contributing variables, including the tank configuration, breach hole size, ullage pressure, wind speed and stability class, and surface roughness, on the consequence of LNG spillage onto water are evaluated using the models. The sensitivities of the consequences to those variables are discussed.
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation was modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation and the 95% and 67% confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
NASA Astrophysics Data System (ADS)
Alpert, P. A.; Knopf, D. A.
2015-05-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature (T) and relative humidity (RH) at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling rate dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nuclei (IN) all have the same IN surface area (ISA); however, the validity of this assumption and the impact it may have on the analysis and interpretation of the experimental data are rarely questioned. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses physically observable parameters including the total number of droplets (Ntot) and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address (i) whether a time and ISA dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) whether the assumption that all droplets contain identical ISA is a valid conjecture, with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods, such as droplets on a cold stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time dependent isothermal frozen fractions exhibiting non-exponential behavior with time can be readily explained by this model considering varying ISA. 
An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes, as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for the parameters Ntot, T, RH, and the ISA variability. In an idealized cloud parcel model applying variability in ISAs for each droplet, the model predicts enhanced immersion freezing temperatures and greater ice crystal production compared to a case when ISAs are uniform in each droplet. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
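The non-exponential frozen fractions described above fall out directly from droplet-to-droplet ISA variability. A minimal Monte Carlo sketch (Python; all parameter values hypothetical, not taken from the paper) illustrates the effect: with identical ISA the frozen fraction follows an exponential law, while a lognormal ISA distribution with the same mean produces a slower, non-exponential approach to complete freezing, because droplets with small surface areas survive far longer.

```python
import numpy as np

rng = np.random.default_rng(0)

J_het = 1.0e2    # hypothetical nucleation rate coefficient, cm^-2 s^-1
A_mean = 1.0e-5  # mean INP surface area per droplet, cm^2 (assumed)
N_tot = 10000    # number of droplets
t = np.linspace(0.0, 600.0, 61)  # seconds

# Case 1: identical ISA in every droplet -> exponential frozen fraction
f_uniform = 1.0 - np.exp(-J_het * A_mean * t)

# Case 2: lognormally distributed ISA with the same arithmetic mean
sigma = 1.5                              # assumed spread of ln(ISA)
mu = np.log(A_mean) - 0.5 * sigma**2     # keeps the arithmetic mean at A_mean
A_i = rng.lognormal(mean=mu, sigma=sigma, size=N_tot)

# Frozen fraction is the droplet-average of individual freezing probabilities
f_variable = 1.0 - np.exp(-J_het * A_i[:, None] * t).mean(axis=0)
```

By Jensen's inequality the averaged survival probability always exceeds the survival at the mean ISA, so `f_variable` lies below `f_uniform` at every positive time, reproducing the apparent slowdown seen in isothermal experiments.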
Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items
ERIC Educational Resources Information Center
Lu, Irene R. R.; Thomas, D. Roland
2008-01-01
This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…
Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models
NASA Astrophysics Data System (ADS)
Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura
2014-09-01
Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
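The core idea can be viewed as filtering the model-observation misfit before applying the correction. The sketch below (Python, synthetic data; a simplification, not the Thompson et al. formulation, which embeds band-pass nudging terms in the model equations) removes only the slowly varying, mean-band part of the misfit, so the mean bias is corrected while the model's high-frequency variability survives; a strong full-band relaxation, by contrast, also damps variability absent from the observations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n, dtype=float)

signal = np.sin(2 * np.pi * t / 20.0)                  # high-frequency variability
obs = 10.0 + signal                                    # "observations": unbiased, noise-free
model = 12.0 + signal + 0.3 * rng.standard_normal(n)   # simulated field with a +2 mean bias

def lowpass(x, alpha=0.01):
    # exponential moving average as a crude low-pass (mean-band) filter
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

# Frequency-dependent nudging: correct only the mean-band misfit
nudged = model - lowpass(model - obs)

# Conventional full-band relaxation for comparison: damps the model's
# high-frequency content (here its noise) along with the bias
conventional = model - 0.8 * (model - obs)
```

After a spin-up, `nudged` tracks the observed mean while retaining essentially all of the model's step-to-step variability, whereas `conventional` visibly smooths it.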
Scale dependency of American marten (Martes americana) habitat relations [Chapter 12]
Andrew J. Shirk; Tzeidle N. Wasserman; Samuel A. Cushman; Martin G. Raphael
2012-01-01
Animals select habitat resources at multiple spatial scales; therefore, explicit attention to scale-dependency when modeling habitat relations is critical to understanding how organisms select habitat in complex landscapes. Models that evaluate habitat variables calculated at a single spatial scale (e.g., patch, home range) fail to account for the effects of...
Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.
Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander
2017-01-01
Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
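The core phenomenon is easy to reproduce: when a nonnormal true predictor is mistakenly treated as the response, the reversed regression's residual spread varies systematically with its predictor. The Python sketch below uses simulated data and a crude Glejser-type check (regressing absolute residuals on the predictor) rather than any of the nine homoscedasticity tests discussed in the article:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x = rng.chisquare(df=1, size=n)          # nonnormal "true" predictor
y = 0.5 * x + rng.standard_normal(n)     # homoscedastic errors in the true direction

def abs_resid_slope(pred, resp):
    # Glejser-type check: regress |OLS residuals| on the predictor;
    # a clearly nonzero slope signals heteroscedasticity
    b, a = np.polyfit(pred, resp, 1)
    resid = resp - (a + b * pred)
    g, _ = np.polyfit(pred, np.abs(resid), 1)
    return g

s_true = abs_resid_slope(x, y)   # y regressed on x (correct direction)
s_rev = abs_resid_slope(y, x)    # x regressed on y (reversed direction)
```

In the correct direction the slope is near zero, while the reversed model shows residual spread growing with the (erroneous) predictor, which is exactly the asymmetry DDA exploits to choose between x → y and y → x.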
Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G
2012-09-01
This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
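The two-regime threshold idea can be sketched without GAs or neural networks: fit separate linear models on either side of a candidate threshold and pick the threshold that minimizes the total squared error. The Python example below uses entirely synthetic data; temperature as the threshold variable is the only detail borrowed from the abstract, and the regime equations and threshold value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
temp = rng.uniform(10.0, 35.0, n)   # hypothetical threshold variable (temperature)
x = rng.uniform(0.0, 1.0, n)        # one explanatory variable
# Two regimes with different responses above/below an (unknown) 25-degree threshold
y = np.where(temp < 25.0, 5.0 + 2.0 * x, 20.0 + 8.0 * x) + rng.normal(0.0, 0.5, n)

def two_regime_sse(mask):
    # total squared error of separate OLS fits in each regime
    sse = 0.0
    for m in (mask, ~mask):
        X = np.column_stack([np.ones(m.sum()), x[m]])
        beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        sse += float(((y[m] - X @ beta) ** 2).sum())
    return sse

# grid-search the threshold over interior quantiles of the threshold variable
cands = np.quantile(temp, np.linspace(0.1, 0.9, 33))
best_thr = min(cands, key=lambda c: two_regime_sse(temp < c))
```

The GA in the paper searches this threshold (and the network architecture) stochastically; the exhaustive grid here serves the same purpose for a single threshold variable.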
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs is of limited sensitivity, and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing a good discrimination between the gait phenotype of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was also included as an explanatory variable. This resulted in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study regarding the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified a significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and more comprehensible data representation.
Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
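The discriminant step itself is standard two-class (Fisher) LDA. Below is a self-contained Python sketch on simulated two-feature "gait" data; all numbers are hypothetical and stand in for the study's seven accelerometric parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
cov = [[0.2, 0.05], [0.05, 0.1]]   # shared within-class covariance (assumed)
# hypothetical per-recording feature pairs for the two genotypes
healthy = rng.multivariate_normal([1.0, 0.5], cov, n)
grmd = rng.multivariate_normal([0.4, 1.2], cov, n)

def fisher_lda(a, b):
    # two-class Fisher discriminant: w = Sw^{-1} (mu_b - mu_a)
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    Sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)  # pooled scatter
    w = np.linalg.solve(Sw, mu_b - mu_a)
    c = w @ (mu_a + mu_b) / 2.0     # threshold: midpoint of projected class means
    return w, c

w, c = fisher_lda(healthy, grmd)
scores = np.vstack([healthy, grmd]) @ w
labels = np.r_[np.zeros(n, dtype=bool), np.ones(n, dtype=bool)]  # True = GRMD
accuracy = float(np.mean((scores > c) == labels))
```

The paper's Model 3 additionally carries age as both a class label component and an explanatory variable, and validates with leave-one-out cross-validation; the projection-and-threshold mechanics are the same.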
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
ERIC Educational Resources Information Center
Hocking, Matthew C.; Lochman, John E.
2005-01-01
This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…
Saha, Dibakar; Alluri, Priyanka; Gan, Albert; Wu, Wanyang
2018-02-21
The objective of this study was to investigate the relationship between bicycle crash frequency and its contributing factors at the census block group level in Florida, USA. Crashes aggregated over the census block groups tend to be clustered (i.e., spatially dependent) rather than randomly distributed. To account for the effect of spatial dependence across the census block groups, the class of conditional autoregressive (CAR) models was employed within the hierarchical Bayesian framework. Based on four years (2011-2014) of crash data, total and fatal-and-severe injury bicycle crash frequencies were modeled as a function of a large number of variables representing demographic and socio-economic characteristics, roadway infrastructure and traffic characteristics, and bicycle activity characteristics. This study explored and compared the performance of two CAR models, namely the Besag's model and the Leroux's model, in crash prediction. The Besag's models, which differ from the Leroux's models in the structure of how spatial autocorrelation is specified, were found to fit the data better. A 95% Bayesian credible interval was selected to identify the variables that had credible impact on bicycle crashes. A total of 21 variables were found to be credible in the total crash model, while 18 variables were found to be credible in the fatal-and-severe injury crash model. Population, daily vehicle miles traveled, age cohorts, household automobile ownership, density of urban roads by functional class, bicycle trip miles, and bicycle trip intensity had positive effects in both the total and fatal-and-severe crash models. Educational attainment variables, truck percentage, and density of rural roads by functional class were found to be negatively associated with both total and fatal-and-severe bicycle crash frequencies. Published by Elsevier Ltd.
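The difference between the two CAR specifications lies in the prior precision matrix of the spatial random effects. A small Python sketch on a toy four-unit adjacency (illustrative only; the study fits these priors inside a hierarchical Bayesian crash model): Besag's intrinsic CAR uses Q = tau (D - W), which is singular and hence improper, while Leroux's Q = tau (rho (D - W) + (1 - rho) I) is a proper prior for rho < 1.

```python
import numpy as np

# rook adjacency for a toy 2x2 grid of block groups (illustrative only)
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))       # number of neighbors on the diagonal
tau, rho = 2.0, 0.7              # hypothetical precision and mixing parameters

# Besag (intrinsic) CAR: graph-Laplacian precision, singular (improper prior)
Q_besag = tau * (D - W)

# Leroux CAR: convex combination of spatial and independent structure,
# positive definite (proper prior) for 0 <= rho < 1
Q_leroux = tau * (rho * (D - W) + (1.0 - rho) * np.eye(4))
```

The rank deficiency of `Q_besag` (one zero eigenvalue per connected component) is why intrinsic CAR models need sum-to-zero constraints in practice, whereas the Leroux form needs none.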
Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong
2015-09-01
This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.
NASA Astrophysics Data System (ADS)
Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.
2018-02-01
A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamic is described by a pair of time and space dependent Partial Differential Equations (PDEs). In this paper, a generalization of the Gray-Scott model by using variable-order fractional differential equations is proposed. The variable orders were set as smooth functions bounded in (0, 1] and, specifically, the Liouville-Caputo and the Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. In order to find a numerical solution of the proposed model, the finite difference method together with the Adams method were applied. The simulation results showed the chaotic behavior of the proposed model when different variable orders are applied.
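For orientation, the classical integer-order Gray-Scott system can be advanced with the same finite-difference machinery. The Python sketch below uses a standard explicit scheme and textbook parameter values; it does not attempt the paper's variable-order Liouville-Caputo or Atangana-Baleanu-Caputo time operators, which replace the plain time derivative here.

```python
import numpy as np

n, steps, dt = 64, 200, 1.0
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.06   # classical Gray-Scott parameters

u = np.ones((n, n))
v = np.zeros((n, n))
u[28:36, 28:36] = 0.50                    # local perturbation seeds patterns
v[28:36, 28:36] = 0.25

def lap(z):
    # 5-point Laplacian with periodic boundaries (unit grid spacing)
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

# Explicit Euler time stepping of:
#   du/dt = Du lap(u) - u v^2 + F (1 - u)
#   dv/dt = Dv lap(v) + u v^2 - (F + k) v
for _ in range(steps):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1.0 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)
```

The step size respects the explicit diffusion stability bound dt <= h^2 / (4 Du); the fractional generalization swaps the Euler update for an Adams-type memory integral over the solution history.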
Partner Dependence and Sexual Risk Behavior Among STI Clinic Patients
Senn, Theresa E.; Carey, Michael P.; Vanable, Peter A.; Coury-Doniger, Patricia
2010-01-01
Objectives To investigate the relation between partner dependence and sexual risk behavior in the context of the information-motivation-behavioral skills (IMB) model. Methods STI clinic patients (n = 1432) completed a computerized interview assessing partner dependence, condom use, and IMB variables. Results Men had higher partner-dependence scores than women did. Patients reporting greater dependence reported less condom use. Gender did not moderate the partner dependence-condom-use relationship. Partner dependence did not moderate the relation between IMB constructs and condom use. Conclusions Further research is needed to determine how partner dependence can be incorporated into conceptual models of safer sex behaviors. PMID:20001183
Tredennick, Andrew T; Adler, Peter B; Adler, Frederick R
2017-08-01
Theory relating species richness to ecosystem variability typically ignores the potential for environmental variability to promote species coexistence. Failure to account for fluctuation-dependent coexistence may explain deviations from the expected negative diversity-ecosystem variability relationship, and limits our ability to predict the consequences of increases in environmental variability. We use a consumer-resource model to explore how coexistence via the temporal storage effect and relative nonlinearity affects ecosystem variability. We show that a positive, rather than negative, diversity-ecosystem variability relationship is possible when ecosystem function is sampled across a natural gradient in environmental variability and diversity. We also show how fluctuation-dependent coexistence can buffer ecosystem functioning against increasing environmental variability by promoting species richness and portfolio effects. Our work provides a general explanation for variation in observed diversity-ecosystem variability relationships and highlights the importance of conserving regional species pools to help buffer ecosystems against predicted increases in environmental variability. © 2017 John Wiley & Sons Ltd/CNRS.
NASA Astrophysics Data System (ADS)
Alpert, Peter A.; Knopf, Daniel A.
2016-02-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. 
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as droplets on a cold stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes, as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for the parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
ERIC Educational Resources Information Center
Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George
2012-01-01
In this study, the effect of 3 cognitive variables such as logical thinking, field dependence/field independence, and convergent/divergent thinking on some specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…
Anisotropic constitutive modeling for nickel-base single crystal superalloys. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sheh, Michael Y.
1988-01-01
An anisotropic constitutive model was developed based on crystallographic slip theory for nickel base single crystal superalloys. The constitutive equations developed utilize drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments were conducted to evaluate the existence of back stress in single crystal superalloy Rene N4 at 982 C. The results suggest that: (1) the back stress is orientation dependent; and (2) the back stress state variable is required for the current model to predict material anelastic recovery behavior. The model was evaluated for its predictive capability on single crystal material behavior including orientation dependent stress-strain response, tension/compression asymmetry, strain rate sensitivity, anelastic recovery behavior, cyclic hardening and softening, stress relaxation, creep and associated crystal lattice rotation. Limitations and future development needs are discussed.
Does Nationality Matter in the B2C Environment? Results from a Two Nation Study
NASA Astrophysics Data System (ADS)
Peikari, Hamid Reza
Different studies have explored the relations between different dimensions of e-commerce transactions, and many models and findings have been proposed to the academic and business worlds. However, there is doubt about the application and generalization of such models and findings in different countries and nations. In other words, this study argues that the relations among the variables of a model may differ across countries, which raises questions about the findings of researchers who collect data in one country to test their hypotheses. This study intends to examine whether different nations have different perceptions toward the elements of Website interface, security and purchase intention on the Internet. Moreover, a simple model was developed to investigate whether the independent variables of the model are equally important in different nations and significantly influence the dependent variable in those nations. Since the majority of studies in the context of e-commerce have focused on developed countries with high e-readiness indices and overall ranks, two developing countries with different e-readiness indices and ranks were selected for the data collection. The results showed that the samples had significantly different perceptions of security and some of the Website interface factors. Moreover, it was found that the significance of relations between the independent variables and the dependent variable differs between the samples, which questions the findings of researchers testing their models and hypotheses only on data collected in one country.
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. 
Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
NASA Technical Reports Server (NTRS)
Hubbard, R.
1974-01-01
The radially-streaming particle model for broad quasar and Seyfert galaxy emission features is modified to include sources of time dependence. The results are suggestive of reported observations of multiple components, variability, and transient features in the wings of Seyfert and quasi-stellar emission lines.
An Exploratory Contingency Model for Schools.
ERIC Educational Resources Information Center
Whorton, David M.
In an application of contingency theory, data from 45 Arizona schools were analyzed to determine the relationships between three sets of independent variables (organizational structure, leadership style, and environmental characteristics) and the dependent variable (organizational effectiveness as perceived by principals and teachers). Contingency…
Applications of Black Scholes Complexity Concepts to Combat Modelling
2009-03-01
Lauren, G C McIntosh, N D Perry and J Moffat, Chaos 17, 2007.
4. Lanchester Models of Warfare, Volumes 1 and 2, J G Taylor, Operations Research Society...
Nomenclature (partial): ... transformation matrix; A: Lanchester Equation solution parameter; bi: dependent model variables; b(x,t): variable variance rate; B: Lanchester Equation solution...
... distribution. The similarity between this equation and the Lanchester Equations (equation 1) is clear. This suggests an obvious solution to the question of
Replicates in high dimensions, with applications to latent variable graphical models.
Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han
2016-12-01
In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
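For contrast with the proposed method, the classical product-limit (Lynden-Bell) estimator, which assumes the truncation variable is independent of the time-to-event, takes only a few lines of Python. The data below are simulated; the paper's Cox-plus-inverse-probability-weighting approach targets the dependent-truncation case that this estimator cannot handle.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated left-truncated data: a pair (L, X) is observed only when L < X.
# Here truncation L is independent of the event time X (the classical setting).
N = 4000
L_all = rng.uniform(0.0, 2.0, N)
X_all = rng.exponential(2.0, N)        # true survival S(t) = exp(-t/2)
keep = L_all < X_all
L, X = L_all[keep], X_all[keep]

def lynden_bell_survival(L, X, t):
    # Product-limit estimate of S(t) under independent left truncation:
    # at each event time x <= t, the risk set is {i : L_i <= x <= X_i}
    S = 1.0
    for x in np.sort(X[X <= t]):
        n_risk = np.sum((L <= x) & (X >= x))
        S *= 1.0 - 1.0 / n_risk        # continuous times: one event per time
    return S

S_hat = lynden_bell_survival(L, X, 1.0)   # true value is exp(-0.5)
```

Because subjects enter the risk set only after their truncation time, naively applying Kaplan-Meier to `X` alone would be badly biased toward long survivors; the Lynden-Bell risk-set construction removes that bias, but only when `L` and `X` are independent.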
Selection of climate change scenario data for impact modelling.
Sloth Madsen, M; Maule, C Fox; MacKellar, N; Olesen, J E; Christensen, J Hesselbjerg
2012-01-01
Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented in this paper, applied to relative humidity, but it could be adopted to other variables if needed.
New approach to probability estimate of femoral neck fracture by fall (Slovak regression model).
Wendlova, J
2009-01-01
3,216 Slovak women with primary or secondary osteoporosis or osteopenia, aged 20-89 years (mean 58.9 years, 95% CI: 58.42-59.38), were examined with the bone densitometer DXA (dual energy X-ray absorptiometry, GE, Prodigy - Primo). The values of the following variables were measured for each patient: FSI (femur strength index), T-score total hip left, alpha angle - left, theta angle - left, and HAL (hip axis length) left; BMI (body mass index) was calculated from the height and weight of the patients. The regression model determined the following order of independent variables according to the intensity of their influence upon the occurrence of values of the dependent FSI variable: 1. BMI, 2. theta angle, 3. T-score total hip, 4. alpha angle, 5. HAL. The regression model equation, calculated from the variables monitored in the study, enables a doctor in practice to determine the probability magnitude (absolute risk) for the occurrence of a pathological value of FSI (FSI < 1) in the femoral neck area, i.e., it allows a probability estimate of a femoral neck fracture by fall for Slovak women. 1. The Slovak regression model differs from previously published regression models in its chosen independent variables and its dependent variable, which belong to the biomechanical variables characterising bone quality. 2. The Slovak regression model excludes the inaccuracies of other models, which are not able to define precisely the current and past clinical condition of tested patients (e.g., to define the length and dose of exposure to risk factors). 3. The Slovak regression model opens the way to a new method of estimating the probability (absolute risk) or the odds of a femoral neck fracture by fall, based upon bone quality determination. 4. It is assumed that development will proceed by improving the methods enabling measurement of bone quality and determination of the probability of fracture by fall (Tab. 6, Fig. 3, Ref. 22).
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominant, strongly coupled dependent variables will cause the overall system sensitivity behavior to collapse into a simple set of scaling and self-similarity relations among the elements of the entire matrix of sensitivity coefficients. These tools are generic in nature, but their application to problems arising in selected areas of physics and chemistry is presented here.
The nature and use of prediction skills in a biological computer simulation
NASA Astrophysics Data System (ADS)
Lavoie, Derrick R.; Good, Ron
The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performances of seven formal and seven concrete operational high-school biology students were videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon the dependent variables through time. Subjects were classified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational; unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.
Why climate change will invariably alter selection pressures on phenology.
Gienapp, Phillip; Reed, Thomas E; Visser, Marcel E
2014-10-22
The seasonal timing of lifecycle events is closely linked to individual fitness and hence, maladaptation in phenological traits may impact population dynamics. However, few studies have analysed whether and why climate change will alter selection pressures and hence possibly induce maladaptation in phenology. To fill this gap, we here use a theoretical modelling approach. In our models, the phenologies of consumer and resource are (potentially) environmentally sensitive and depend on two different but correlated environmental variables. Fitness of the consumer depends on the phenological match with the resource. Because we explicitly model the dependence of the phenologies on environmental variables, we can test how differential (heterogeneous) versus equal (homogeneous) rates of change in the environmental variables affect selection on consumer phenology. As expected, under heterogeneous change, phenotypic plasticity is insufficient and thus selection on consumer phenology arises. However, even homogeneous change leads to directional selection on consumer phenology. This is because the consumer reaction norm has historically evolved to be flatter than the resource reaction norm, owing to time lags and imperfect cue reliability. Climate change will therefore lead to increased selection on consumer phenology across a broad range of situations. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
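The paper's central mechanism can be reproduced with simple arithmetic. Assuming illustrative slopes (not taken from the paper), a consumer reaction norm that has evolved flatter than the resource's turns any environmental shift into a growing phenological mismatch, even when both phenologies respond to the same warming:

```python
# Toy reaction-norm illustration; slopes and shifts are arbitrary numbers.
b_resource = 1.0   # resource phenology advances 1 day per unit of warming
b_consumer = 0.7   # consumer norm evolved flatter (time lags, imperfect cues)

def mismatch(delta_env):
    """Days of consumer-resource mismatch after an environmental shift."""
    return (b_resource - b_consumer) * delta_env

shifts = [0.0, 1.0, 2.0, 3.0]
gaps = [mismatch(d) for d in shifts]
# the mismatch, and hence directional selection, grows with the shift
```

The point of the sketch is that no difference in the *rates* of environmental change is needed: a historically flatter consumer slope alone produces directional selection under warming.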
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data for the Bohai Sea are hindcast and sampled for a case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models preserve the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated with the proposed models, and platform responses (base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained from the conditional and joint probability models are much smaller than those from univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
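One of the Archimedean families named above, the Clayton copula, can be sampled by conditional inversion. The dependence parameter and the exponential marginals below are illustrative assumptions, not the fitted Bohai Sea models:

```python
# Sample dependent (wave height, wind speed) pairs via a Clayton copula.
# theta and the marginals are illustrative, not fitted values.
import math
import random

def sample_clayton(theta, n, rng):
    """Draw n (u, v) pairs from a bivariate Clayton copula by conditional inversion."""
    pairs = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        # Invert the conditional copula C(v | u) at the uniform draw w
        v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        pairs.append((u, v))
    return pairs

rng = random.Random(42)
uv = sample_clayton(theta=2.0, n=2000, rng=rng)
# Push the uniforms through illustrative marginal quantile functions,
# here exponential marginals standing in for wave height and wind speed:
waves = [3.0 * -math.log(1.0 - u) for u, _ in uv]    # mean ~3 m
winds = [10.0 * -math.log(1.0 - v) for _, v in uv]   # mean ~10 m/s
```

Because the copula separates the dependence structure from the marginals, the Pearson Type III marginals selected in the paper could be substituted for the exponentials without changing the sampler.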
Angular position of the cleat according to torsional parameters of the cyclist's lower limb.
Ramos-Ortega, Javier; Domínguez, Gabriel; Castillo, José Manuel; Fernández-Seguín, Lourdes; Munuera, Pedro V
2014-05-01
The aim of this work was to study the relationship of torsional and rotational parameters of the lower limb with a specific angular position of the cleat, to establish whether these variables affect the adjustment of the cleat. Correlational study conducted in a motion analysis laboratory with thirty-seven high-performance male cyclists. The lower-limb variables studied were hip rotation (internal and external), tibial torsion angle, Q angle, and forefoot adductus angle. The cleat angle was measured from a photograph of the sole and from a radiograph (Rx) of it using AutoCAD 2008 software. The dependent variables were the photograph angle (photograph), the cleat-tarsus minor angle, and the cleat-second metatarsal angle (Rx). Analysis included the intraclass correlation coefficient for the reliability of the measurements, Student's t tests on the dependent variables to compare sides, and multiple linear regression models calculated with SPSS 15.0 for Windows. The t tests comparing sides showed no significant differences (P = 0.209 for the photograph angle, P = 0.735 for the cleat-tarsus minor angle, and P = 0.801 for the cleat-second metatarsal angle). Values of R and R² were 0.303 and 0.092 for the photograph angle model (P = 0.08), 0.683 and 0.466 for the cleat-tarsus minor angle model (P < 0.001), and 0.618 and 0.382 for the cleat-second metatarsal angle model (P < 0.001). The equation given by the model was cleat-tarsus minor angle = 75.094 - (0.521 × forefoot adductus angle) + (0.116 × outward rotation of the hips) + (0.220 × Q angle).
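The fitted equation reported above can be transcribed directly; the input angles in this example are made-up values, used only to show how a prediction is computed:

```python
# Direct transcription of the regression equation from the abstract:
#   cleat-tarsus minor angle = 75.094 - 0.521*FA + 0.116*HR + 0.220*Q
def cleat_tarsus_minor_angle(forefoot_adductus, hip_outward_rotation, q_angle):
    return (75.094
            - 0.521 * forefoot_adductus
            + 0.116 * hip_outward_rotation
            + 0.220 * q_angle)

# Hypothetical cyclist: 10 deg adductus, 35 deg outward hip rotation, 15 deg Q angle
angle = cleat_tarsus_minor_angle(forefoot_adductus=10.0,
                                 hip_outward_rotation=35.0,
                                 q_angle=15.0)
# 75.094 - 5.21 + 4.06 + 3.30 = 77.244 degrees
```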
Piper, Megan E.; Bolt, Daniel M.; Kim, Su-Young; Japuntich, Sandra J.; Smith, Stevens S.; Niederdeppe, Jeff; Cannon, Dale S.; Baker, Timothy B.
2008-01-01
The construct of tobacco dependence is important from both scientific and public health perspectives, but it is poorly understood. The current research integrates person-centered analyses (e.g., latent profile analysis) and variable-centered analyses (e.g., exploratory factor analysis) to understand better the latent structure of dependence and to guide distillation of the phenotype. Using data from four samples of smokers (including treatment and non-treatment samples), latent profiles were derived using the Wisconsin Inventory of Smoking Dependence Motives (WISDM) subscale scores. Across all four samples, results revealed a unique latent profile that had relative elevations on four dependence motive subscales (Automaticity, Craving, Loss of Control, and Tolerance). Variable-centered analyses supported the uniqueness of these four subscales both as measures of a common factor distinct from that underlying the other nine subscales, and as the strongest predictors of relapse, withdrawal and other dependence criteria. Conversely, the remaining nine motives carried little unique predictive validity regarding dependence. Applications of a factor mixture model further support the presence of a unique class of smokers in relation to a common factor underlying the four subscales. The results illustrate how person-centered analyses may be useful as a supplement to variable-centered analyses for uncovering variables that are necessary and/or sufficient predictors of disorder criteria, as they may uncover small segments of a population in which the variables are uniquely distributed. The results also suggest that severe dependence is associated with a pattern of smoking that is heavy, pervasive, automatic and relatively unresponsive to instrumental contingencies. PMID:19025223
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. It is therefore recommended that different models be fitted to each empirical data set to determine the best deflation model. The reliability of a simple ratio was found to be affected by the coefficients of variation and by the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores rather than assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
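The abstract's warning can be illustrated by simulation: when the numerator and denominator share true-score variance, each component can be highly reliable while the derived ratio is not. All distributions below are invented for the illustration:

```python
# Test-retest reliability of components vs. their ratio (synthetic data).
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

rng = random.Random(7)
n = 2000
true_num = [rng.gauss(100.0, 10.0) for _ in range(n)]
# Correlate the denominator's true score with the numerator's, so the true
# ratio varies little between subjects while trial-to-trial error does not shrink
true_den = [0.5 * t + rng.gauss(0.0, 2.0) for t in true_num]

def trial(true_vals, err_sd):
    """One measurement occasion: true score plus independent error."""
    return [t + rng.gauss(0.0, err_sd) for t in true_vals]

num1, num2 = trial(true_num, 5.0), trial(true_num, 5.0)
den1, den2 = trial(true_den, 2.5), trial(true_den, 2.5)

r_num = pearson(num1, num2)    # test-retest reliability of the numerator
r_den = pearson(den1, den2)    # ... and of the denominator
r_ratio = pearson([a / b for a, b in zip(num1, den1)],
                  [a / b for a, b in zip(num2, den2)])
```

Here both components come out reliable (around .8) while the ratio's reliability collapses, exactly the situation the authors warn against assuming away.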
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
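A miniature version of the iterative least-squares adjustment the paper automates is Gauss-Newton fitting of a one-exponential kinetic model. The model, data, and starting values are illustrative, and the data are noise-free for clarity:

```python
# Gauss-Newton iteration for y = A*exp(-k*t): a tiny analogue of the
# routine parameter adjustment described in the paper. Illustrative data.
import math

def gauss_newton(ts, ys, A, k, iters=20):
    """Iteratively adjust (A, k) toward the least-squares fit."""
    for _ in range(iters):
        r = [y - A * math.exp(-k * t) for t, y in zip(ts, ys)]   # residuals
        JA = [math.exp(-k * t) for t in ts]                      # dy/dA
        Jk = [-A * t * math.exp(-k * t) for t in ts]             # dy/dk
        # Solve the 2x2 normal equations (J'J) d = J'r for the update d
        a11 = sum(j * j for j in JA)
        a12 = sum(p * q for p, q in zip(JA, Jk))
        a22 = sum(j * j for j in Jk)
        b1 = sum(j * ri for j, ri in zip(JA, r))
        b2 = sum(j * ri for j, ri in zip(Jk, r))
        det = a11 * a22 - a12 * a12
        A += (b1 * a22 - b2 * a12) / det
        k += (a11 * b2 - a12 * b1) / det
    return A, k

ts = [0.5 * i for i in range(10)]
ys = [10.0 * math.exp(-0.8 * t) for t in ts]   # "observed" decay curve
A_fit, k_fit = gauss_newton(ts, ys, A=9.0, k=0.7)  # start near the solution
```

The paper's formalism generalizes this loop: the residuals may come from numerically integrated differential equations, and linear coefficients can be solved for exactly at each iteration.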
Argasinski, Krzysztof
2006-07-01
This paper presents basic extensions of classical evolutionary games: multipopulation and density-dependent models. It is shown that the classical bimatrix approach is inconsistent with other approaches because it does not depend on the proportion between populations. The main conclusion is that the interspecific proportion parameter is important and must be considered in multipopulation models. The paper provides a synthesis of both extensions (a metasimplex concept) which solves the problem intrinsic to the bimatrix model, allowing interactions among any number of subpopulations to be modeled, including density-dependence effects. We prove that all modern approaches to evolutionary games are closely related: every evolutionary model (except the classical bimatrix approach) can be reduced to a single-population general model by a simple change of variables. Differences between classical bimatrix evolutionary games and a new model dependent on interspecific proportion are shown by examples.
Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long
2008-01-01
Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...
Jones, C Jessie; Rutledge, Dana N; Aquino, Jordan
2010-07-01
The purposes of this study were to determine whether people with and without fibromyalgia (FM) age 50 yr and above showed differences in physical performance and perceived functional ability, and to determine whether age, gender, depression, and physical activity level altered the impact of FM status on these factors. Dependent variables included perceived function and six performance measures (multidimensional balance, aerobic endurance, overall functional mobility, lower body strength, and gait velocity at normal and fast speeds). Independent (predictor) variables were FM status, age, gender, depression, and physical activity level. Results indicated significant differences between adults with and without FM on all physical-performance measures and on perceived function. Linear-regression models showed that the contribution of significant predictors was in the expected directions. All regression models were significant, accounting for 16-65% of variance in the dependent variables.
Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C
2015-10-01
Species distribution models (SDMs) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data, and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that undergo regular alteration (arable fields), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models, and historical land-use data generally have low thematic resolution.
To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
Improved modeling of photon observables with the event-by-event fission model FREYA
Vogt, R.; Randrup, J.
2017-12-28
The event-by-event fission model FREYA has been improved, in particular to address deficiencies in the calculation of photon observables. In this paper, we discuss the improvements that have been made and introduce several new variables, some detector dependent, that affect the photon observables. We show the sensitivity of FREYA to these variables and compare the results to the available photon data from spontaneous and thermal neutron-induced fission.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. The independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated, and estimates of expected storm runoff for given observed input variables are biased. This bias in expected-runoff estimation leads to biased parameter estimates when those parameters are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
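The bias described above can be demonstrated with a synthetic rainfall-runoff regression: measurement error in the input attenuates the fitted slope toward zero, so expected runoff for a given observed rainfall is systematically misestimated. All numbers here are invented:

```python
# Attenuation bias from input error in a rainfall-runoff regression (synthetic).
import random

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(0)
true_rain = [rng.gauss(50.0, 10.0) for _ in range(5000)]
runoff = [0.6 * r + rng.gauss(0.0, 2.0) for r in true_rain]   # true relation
noisy_rain = [r + rng.gauss(0.0, 10.0) for r in true_rain]    # measured input

b_true = slope(true_rain, runoff)    # recovers ~0.6
b_noisy = slope(noisy_rain, runoff)  # attenuated toward ~0.3 here
```

With input-error variance equal to the true rainfall variance, the expected attenuation factor is var(x) / (var(x) + var(error)) = 0.5, which is why calibrating a model by least squares against such inputs yields biased parameters.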
Expanding Stress Generation Theory: Test of a Transdiagnostic Model
Conway, Christopher C.; Hammen, Constance; Brennan, Patricia A.
2016-01-01
Originally formulated to understand the recurrence of depressive disorders, the stress generation hypothesis has recently been applied in research on anxiety and externalizing disorders. Results from these investigations, in combination with findings of extensive comorbidity between depression and other mental disorders, suggest the need for an expansion of stress generation models to include the stress generating effects of transdiagnostic pathology as well as those of specific syndromes. Employing latent variable modeling techniques to parse the general and specific elements of commonly co-occurring Axis I syndromes, the current study examined the associations of transdiagnostic internalizing and externalizing dimensions with stressful life events over time. Analyses revealed that, after adjusting for the covariation between the dimensions, internalizing was a significant predictor of interpersonal dependent stress, whereas externalizing was a significant predictor of noninterpersonal dependent stress. Neither latent dimension was associated with the occurrence of independent, or fateful, stressful life events. At the syndrome level, once variance due to the internalizing factor was partialled out, unipolar depression contributed incrementally to the generation of interpersonal dependent stress. In contrast, the presence of panic disorder produced a “stress inhibition” effect, predicting reduced exposure to interpersonal dependent stress. Additionally, dysthymia was associated with an excess of noninterpersonal dependent stress. The latent variable modeling framework used here is discussed in terms of its potential as an integrative model for stress generation research. PMID:22428789
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly recognized as an essential aspect of the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements, but recent progress in technologies such as remote sensing, and their integration with geographic information systems (GIS), has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology is developed to estimate the probability that an ecosystem is groundwater dependent. Probabilities are obtained by modeling the relationship between the known locations of GDEs and the main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used to model the distribution of groundwater in three study areas (Nevada, California, and Washington) as well as in the entire United States. RF regression performance is compared with that of a single regression tree (RT) by contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in model performance when these variables are not used as an input.
Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem being groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA, to develop a systematic approach for the identification of GDEs, and is then applied across the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). The most accurate model is selected using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. The resulting GDE probability map can potentially be used for the definition of conservation areas, since it can be translated into a binary classification map with two classes, GDE and NON-GDE, by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship
NASA Astrophysics Data System (ADS)
Viswanathan, M. N.
1984-02-01
The recharge levels of unconfined aquifers recharged entirely by rainfall are determined by developing a model of the aquifer that estimates water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence recharge were assumed not only to be time dependent but also to vary at different rates. Such a model is solved using a recursive least-squares method, with the variable-rate parameter variation incorporated through a random walk model. Field tests conducted at the Tomago Sandbeds, Newcastle, Australia, showed that assuming variable rates of time dependency for the recharge parameters produced better estimates of water-table levels than constant recharge parameters. Considerable recharge due to rainfall occurred on the very day of the rainfall, while the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly upon the intensity and history of rainfall: isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on water-table levels.
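A scalar sketch of the approach, assuming a hypothetical drifting recharge coefficient: recursive least squares with random-walk covariance inflation tracks a time-varying parameter that a constant-parameter fit would smear out:

```python
# Recursive least squares with a random-walk parameter model (synthetic data).
import random

def rls_random_walk(xs, ys, q=0.1, r=1.0):
    """Track b_t in y_t = b_t*x_t + noise, letting b_t follow a random walk."""
    b, P = 0.0, 100.0              # initial estimate and its variance
    estimates = []
    for x, y in zip(xs, ys):
        P += q                     # random-walk step: the parameter may drift
        gain = P * x / (x * x * P + r)
        b += gain * (y - b * x)    # correct with the prediction error
        P *= (1.0 - gain * x)
        estimates.append(b)
    return estimates

rng = random.Random(3)
xs = [rng.uniform(1.0, 5.0) for _ in range(200)]   # "rainfall" inputs
c = [0.5 + 0.005 * t for t in range(200)]          # slowly drifting coefficient
ys = [ci * x + rng.gauss(0.0, 0.1) for ci, x in zip(c, xs)]
est = rls_random_walk(xs, ys)
```

The inflation term `q` plays the role of the paper's random-walk model: with `q = 0` the estimator freezes onto a constant parameter, while `q > 0` lets the estimate follow the drift.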
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline have assumed a constant, averaged content of useful elements. The research presented in this article verifies the value of production from copper and silver ore for the same economic background using variable cash flows that reflect the local variability of useful-element content. Furthermore, the economic model of the ore is examined for a significant difference between the model value estimated using a linear correlation between useful-element content and mine-face height, and the value estimated when the correlation of model parameters is based on the copula best matching an information capacity criterion. Copulas allow the simulation to account for multivariate dependencies simultaneously, giving a better reflection of the dependency structure than linear correlation does. Calculation results from the economic model used for deposit valuation indicate that modeling the correlation between copper and silver with a copula generates a higher variation of possible project values than modeling it with linear correlation, while the average deposit value remains unchanged.
Virtual Levels and Role Models: N-Level Structural Equations Model of Reciprocal Ratings Data.
Mehta, Paras D
2018-01-01
A general latent variable modeling framework called n-Level Structural Equations Modeling (NL-SEM) for dependent data structures is introduced. NL-SEM is applicable to a wide range of complex multilevel data structures (e.g., cross-classified, switching membership). Reciprocal dyadic ratings obtained in a round-robin design involve a complex set of dependencies that cannot be modeled within the Multilevel Modeling (MLM) or Structural Equations Modeling (SEM) frameworks. The Social Relations Model (SRM) for round-robin data is used as an example to illustrate key aspects of the NL-SEM framework. NL-SEM introduces novel constructs such as 'virtual levels' that allow a natural specification of latent variable SRMs. An empirical application of an explanatory SRM for personality using xxM, a software package implementing NL-SEM, is presented. Results show that person perceptions are an integral aspect of personality. Methodological implications of NL-SEM for the analysis of an emerging class of contextual and relational SEMs are discussed.
On the use of internal state variables in thermoviscoplastic constitutive equations
NASA Technical Reports Server (NTRS)
Allen, D. H.; Beek, J. M.
1985-01-01
The general theory of internal state variables is reviewed and applied to inelastic metals in use in high-temperature environments; in the process, certain constraints and clarifications are made regarding internal state variables. It is shown that the Helmholtz free energy can be utilized to construct constitutive equations appropriate for metallic superalloys. Internal state variables are shown to represent locally averaged measures of dislocation arrangement, dislocation density, and intergranular fracture. The internal state variable model is demonstrated to be a suitable framework for comparing several currently proposed models for metals and can therefore be used to exhibit history dependence, nonlinearity, and rate as well as temperature sensitivity.
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
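The mixed-integer setting can be illustrated with a generic (1+1) evolutionary scheme that mutates discrete and continuous variables with separate operators. This is a deliberately simplified sketch of mixed-variable black-box optimization, not the GAMBIT algorithm, and the objective is an arbitrary mixed sphere function:

```python
# Generic (1+1) evolutionary search over one integer and one continuous
# variable; illustrative, not GAMBIT's model-building mechanism.
import random

def objective(z, x):
    """Mixed sphere: integer z and continuous x, optimum at z = 3, x = 0.5."""
    return (z - 3) ** 2 + (x - 0.5) ** 2

rng = random.Random(11)
z, x = rng.randint(-10, 10), rng.uniform(-10.0, 10.0)
best = objective(z, x)
f0 = best                                  # remember the starting fitness
for _ in range(2000):
    z_new = z + rng.choice([-1, 0, 1])     # discrete mutation operator
    x_new = x + rng.gauss(0.0, 0.5)        # continuous mutation operator
    f = objective(z_new, x_new)
    if f <= best:                          # greedy (1+1) acceptance
        z, x, best = z_new, x_new, f
```

What GAMBIT adds beyond such a sketch is precisely the learning of structure: probabilistic models over the discrete and continuous parts, and explicit exploitation of mixed dependences between them, which this independent-mutation scheme ignores.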
You, Ming P.; Rensing, Kelly; Renton, Michael; Barbetti, Martin J.
2017-01-01
Subterranean clover (Trifolium subterraneum) is a critical pasture legume in Mediterranean regions of southern Australia and elsewhere, including Mediterranean-type climatic regions in Africa, Asia, Australia, Europe, North America, and South America. Pythium damping-off and root disease caused by Pythium irregulare pose a significant threat to subterranean clover in Australia, and a study was conducted to define how environmental factors (viz. temperature, soil type, moisture, and nutrition), as well as variety, influence the extent of damping-off and root disease, as well as subterranean clover productivity, under challenge by this pathogen. Relationships were statistically modeled using linear and generalized linear models and boosted regression trees. Modeling found complex relationships between explanatory variables and the extent of Pythium damping-off and root rot. Linear modeling identified high-level (4- or 5-way) significant interactions for each dependent variable (dry shoot and root weight, emergence, tap and lateral root disease index). Furthermore, all explanatory variables (temperature, soil, moisture, nutrition, variety) were found significant as part of some interaction within these models. A significant five-way interaction between all explanatory variables was found for both dry shoot and root dry weights, and a four-way interaction between temperature, soil, moisture, and nutrition was found for both tap and lateral root disease index. A second approach to modeling using boosted regression trees provided support for and helped clarify the complex nature of the relationships found in linear models. All explanatory variables showed at least 5% relative influence on each of the five dependent variables.
All models indicated differences due to soil type, with the sand-based soil having either higher weights, greater emergence, or lower disease indices; while lowest weights and less emergence, as well as higher disease indices, were found for loam soil and low temperature. There was more severe tap and lateral root rot disease in higher moisture situations. PMID:29184544
Prediction of hourly PM2.5 using a space-time support vector regression model
NASA Astrophysics Data System (ADS)
Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang
2018-05-01
Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
Pak, Mehmet; Gülci, Sercan; Okumuş, Arif
2018-01-06
This study focuses on the geo-statistical assessment of spatial estimation models of forest crimes. Used widely in the assessment of crime and crime-dependent variables, geographic information systems (GIS) help the detection of forest crimes in rural regions. In this study, forest crimes (forest encroachment, illegal use, illegal timber logging, etc.) are assessed holistically, and modeling was performed with ten different independent variables in a GIS environment. The research areas are three Forest Enterprise Chiefs (Baskonus, Cinarpinar, and Hartlap) affiliated to the Kahramanmaras Forest Regional Directorate in Kahramanmaras. An estimation model was designed using ordinary least squares (OLS) and geographically weighted regression (GWR) methods, which are often used in spatial association. Three different models were proposed in order to increase the accuracy of the estimation model: variables with a variance inflation factor (VIF) value lower than 7.5 in Model I, lower than 4 in Model II, and variables with significant robust probability values in Model III were associated with forest crimes. Afterwards, the model with the lowest corrected Akaike Information Criterion (AICc) and the highest R2 value was selected, these two statistics serving as the comparison criteria. Consequently, Model III proved more accurate than the other models. For Model III, while AICc was 328,491 and R2 was 0.634 for the OLS-3 model, AICc was 318,489 and R2 was 0.741 for the GWR-3 model. In this respect, the use of GIS for combating forest crimes provides different scenarios and tangible information that will help take political and strategic measures.
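The VIF screening rule used for Models I and II can be sketched as follows. The threshold values (7.5 and 4) come from the abstract; the data and implementation details are illustrative assumptions.

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2
    comes from regressing column j on the remaining columns (with intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)              # independent predictor
vifs = vif(np.column_stack([x1, x2, x3]))
keep = vifs < 7.5  # Model I screening rule from the abstract
```

Under the stricter Model II rule (`vifs < 4`), the same independent predictor survives while both collinear columns are dropped.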
A variable turbulent Prandtl and Schmidt number model study for scramjet applications
NASA Astrophysics Data System (ADS)
Keistler, Patrick
A turbulence model that allows for the calculation of the variable turbulent Prandtl (Prt) and Schmidt (Sct) numbers as part of the solution is presented. The model also accounts for the interactions between turbulence and chemistry by modeling the corresponding terms. Four equations are added to the baseline k-zeta turbulence model: two equations for enthalpy variance and its dissipation rate to calculate the turbulent diffusivity, and two equations for concentration variance and its dissipation rate to calculate the turbulent diffusion coefficient. The underlying turbulence model already accounts for compressibility effects. The variable Prt/Sct turbulence model is validated and tuned by simulating a wide variety of experiments. Included in the experiments are two-dimensional, axisymmetric, and three-dimensional mixing and combustion cases. The combustion cases involved either hydrogen and air, or hydrogen, ethylene, and air. Two chemical kinetic models are employed for each of these situations. For the hydrogen and air cases, a seven-species/seven-reaction model whose reaction rates are temperature dependent and a nine-species/nineteen-reaction model whose reaction rates depend on both pressure and temperature are used. For the cases involving ethylene, a 15-species/44-reaction reduced model that is both pressure and temperature dependent is used, along with a 22-species/18-global-reaction reduced model that makes use of the quasi-steady-state approximation. In general, fair to good agreement is indicated for all simulated experiments. The turbulence/chemistry interaction terms are found to have a significant impact on flame location for the two-dimensional combustion case, with excellent experimental agreement when the terms are included.
In most cases, the hydrogen chemical mechanisms behave nearly identically, but for one case, the pressure dependent model would not auto-ignite at the same conditions as the experiment and the other chemical model. The model was artificially ignited in that case. For the cases involving ethylene combustion, the chemical model has a profound impact on the flame size, shape, and ignition location. However, without quantitative experimental data, it is difficult to determine which one is more suitable for this particular application.
Effect of suction-dependent soil deformability on landslide susceptibility maps
NASA Astrophysics Data System (ADS)
Lizarraga, Jose J.; Buscarnera, Giuseppe; Frattini, Paolo; Crosta, Giovanni B.
2016-04-01
This contribution presents a physically based, spatially distributed model for shallow landslides promoted by rainfall infiltration. The model features a set of Factor of Safety values aimed at capturing different failure mechanisms, namely frictional slips with limited mobility and flowslide events associated with the liquefaction of the considered soils. Indices of failure associated with these two modes of instability have been derived from unsaturated soil stability principles. In particular, the propensity to wetting-induced collapse of unsaturated soils is quantified through the introduction of a rigid-plastic model with suction-dependent yielding and strength properties. The model is combined with an analytical approach (TRIGRS) to track the spatio-temporal evolution of soil suction in slopes subjected to transient infiltration. The model has been tested by replicating the triggering of shallow landslides in pyroclastic deposits in Sarno (1998, Campania Region, Southern Italy). It is shown that suction-dependent mechanical properties, such as soil deformability, have important effects on the predicted landslide susceptibility scenarios, resulting in computed unstable zones that may encompass a wide range of slope inclinations, saturation levels, and depths. These preliminary results suggest that the proposed methodology offers an alternative mechanistic interpretation of the variability in behavior of rainfall-induced landslides. Unlike standard methods, this interpretation is grounded in suction-dependent soil behavior characteristics.
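As a hedged illustration of how suction enters a TRIGRS-style stability index, the sketch below evaluates the standard infinite-slope factor of safety with a pressure-head term. All parameter values and the specific functional form are textbook assumptions for illustration, not the authors' suction-dependent rigid-plastic model.

```python
import numpy as np

def factor_of_safety(beta, z, psi, c=4000.0, phi=np.radians(33.0),
                     gamma_s=17000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety with a pressure-head term:
        FS = tan(phi)/tan(beta)
             + (c - psi*gamma_w*tan(phi)) / (gamma_s*z*sin(beta)*cos(beta))
    beta: slope angle [rad], z: failure depth [m], psi: pressure head [m]
    (negative under suction, which raises FS); c [Pa], unit weights [N/m^3].
    Illustrative parameter values only."""
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi))
            / (gamma_s * z * np.sin(beta) * np.cos(beta)))

beta = np.radians(35.0)
fs_suction = factor_of_safety(beta, z=2.0, psi=-0.5)   # suction present
fs_saturated = factor_of_safety(beta, z=2.0, psi=1.0)  # positive pore pressure
```

Losing suction during infiltration erodes the cohesive contribution, which is why the same slope can cross from stable (FS > 1) to unstable (FS < 1) as the wetting front advances.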
Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.
2006-01-01
Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes," with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools.
The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
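The surrogate-modeling idea can be sketched in a few lines: fit a low-order polynomial with interaction terms to a handful of runs of an expensive code, then evaluate the cheap polynomial instead. The "expensive code" below is a made-up stand-in, not the Shuttle debris tools, and the factor names are illustrative.

```python
import numpy as np

def expensive_code(length, depth):
    # Stand-in for a finite-element/CFD load calculation (illustrative only).
    return 2.0 + 0.5 * length + 0.3 * depth + 0.2 * length * depth

# Design points over two cavity-geometry factors
L = np.array([1.0, 1.0, 3.0, 3.0, 2.0, 2.0])
D = np.array([0.2, 0.8, 0.2, 0.8, 0.5, 0.3])
y = expensive_code(L, D)

# Polynomial response model with an interaction term:
#   y ~ b0 + b1*L + b2*D + b3*L*D
A = np.column_stack([np.ones_like(L), L, D, L * D])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Cheap surrogate prediction at a new geometry
pred = coef @ np.array([1.0, 2.5, 0.6, 2.5 * 0.6])
truth = expensive_code(2.5, 0.6)
```

The interaction coefficient `b3` is exactly the kind of term the abstract describes: it makes the sensitivity to cavity length depend on cavity depth, and its fitted value can be read directly off the polynomial.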
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
Electrical Advantages of Dendritic Spines
Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.
2012-01-01
Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875
A consistent framework for Horton regression statistics that leads to a modified Hack's law
Furey, P.R.; Troutman, B.M.
2008-01-01
A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
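A small sketch of the modified Hack's law regression on synthetic data: the classical fit regresses ln L on ln A alone, while the modified form adds Strahler order ω as a second regressor. The coefficients and noise level below are invented for illustration (0.57 is the classical Hack exponent range; the ω effect is an assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
logA = rng.uniform(0, 6, n)                   # ln(drainage area)
omega = rng.integers(1, 6, n).astype(float)   # Strahler order
# Synthetic "truth": Hack exponent 0.57 plus a small order effect
logL = 0.3 + 0.57 * logA + 0.10 * omega + 0.01 * rng.normal(size=n)

# Classical Hack's law: ln L = ln c + h * ln A
Xc = np.column_stack([np.ones(n), logA])
bc, *_ = np.linalg.lstsq(Xc, logL, rcond=None)

# Modified Hack's law: ln L = ln c + h * ln A + g * omega
Xm = np.column_stack([np.ones(n), logA, omega])
bm, *_ = np.linalg.lstsq(Xm, logL, rcond=None)
```

In the paper's framework, a likelihood-ratio or t-test on the ω coefficient (`bm[2]`) is what establishes that Strahler order plays a statistically significant role.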
NASA Astrophysics Data System (ADS)
Comlekoglu, T.; Weinberg, S. H.
2017-09-01
Cardiac memory is the dependence of electrical activity on the prior history of one or more system state variables, including transmembrane potential (Vm), ionic current gating, and ion concentrations. While prior work has represented memory either phenomenologically or with biophysical detail, in this study, we consider an intermediate approach of a minimal three-variable cardiomyocyte model, modified with fractional-order dynamics, i.e., a differential equation of order between 0 and 1, to account for history-dependence. Memory is represented via both capacitive memory, due to fractional-order Vm dynamics, that arises due to non-ideal behavior of membrane capacitance; and ionic current gating memory, due to fractional-order gating variable dynamics, that arises due to gating history-dependence. We perform simulations for varying Vm and gating variable fractional-orders and pacing cycle length and measure action potential duration (APD) and incidence of alternans, loss of capture, and spontaneous activity. In the absence of ionic current gating memory, we find that capacitive memory, i.e., decreased Vm fractional-order, typically shortens APD, suppresses alternans, and decreases the minimum cycle length (MCL) for loss of capture. However, in the presence of ionic current gating memory, capacitive memory can prolong APD, promote alternans, and increase MCL. Further, we find that reduced Vm fractional order (typically less than 0.75) can drive phase 4 depolarizations that promote spontaneous activity. Collectively, our results demonstrate that memory reproduced by a fractional-order model can play a role in alternans formation and pacemaking, and in general, can greatly increase the range of electrophysiological characteristics exhibited by a minimal model.
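The history dependence introduced by a fractional-order derivative can be illustrated on the scalar relaxation equation D^alpha V = -V, discretized with the Grünwald-Letnikov scheme. The step size and orders below are illustrative, and this toy equation stands in for, rather than reproduces, the paper's three-variable cardiomyocyte model.

```python
import numpy as np

def frac_relax(alpha, h=0.01, T=5.0, v0=1.0):
    """Explicit Grünwald-Letnikov scheme for the fractional relaxation
    equation D^alpha V = -V, V(0) = v0 (Caputo form, via GL acting on
    V - v0). The growing history sum is the 'memory' that fractional-order
    dynamics introduce."""
    n = int(T / h)
    # GL coefficients c_j = (-1)^j * binom(alpha, j), by stable recurrence
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    V = np.empty(n + 1)
    V[0] = v0
    for k in range(1, n + 1):
        # Dot product over the entire past trajectory = memory term
        history = np.dot(c[1:k + 1], V[k - 1::-1] - v0)
        V[k] = v0 - h**alpha * V[k - 1] - history
    return V

V_int = frac_relax(1.0)    # alpha = 1 recovers ordinary exponential decay
V_frac = frac_relax(0.75)  # alpha < 1 adds history dependence
```

With alpha = 1 the scheme collapses to forward Euler for dV/dt = -V; with alpha < 1 the solution develops the heavy (slower-than-exponential) tail characteristic of fractional relaxation, which is the qualitative mechanism behind the APD and alternans changes reported above.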
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak
2016-01-01
The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....
ERIC Educational Resources Information Center
Waller, Niels; Jones, Jeff
2011-01-01
We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n x 1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a…
Developing a Model for Forecasting Road Traffic Accident (RTA) Fatalities in Yemen
NASA Astrophysics Data System (ADS)
Karim, Fareed M. A.; Abdo Saleh, Ali; Taijoobux, Aref; Ševrović, Marko
2017-12-01
The aim of this paper is to develop a model for forecasting RTA fatalities in Yemen. Yearly fatalities were modeled as the dependent variable, while the candidate independent variables included population, number of vehicles, GNP, GDP, and real GDP per capita. All of these variables were found to be highly correlated with fatalities (correlation coefficient r ≈ 0.9); in order to avoid multicollinearity in the model, the single variable with the highest r value was selected (real GDP per capita). A simple regression model was developed; the fit was very good (R2 = 0.916); however, the residuals were serially correlated. The Prais-Winsten procedure was used to overcome this violation of the regression assumptions. Data for the 20-year period 1991-2010 were analyzed to build the model, and the model was validated using data for the years 2011-2013; the historical fit for the period 1991-2011 was very good. Also, the validation for 2011-2013 proved accurate.
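A compact sketch of the Prais-Winsten idea on synthetic data: iterate between estimating the AR(1) coefficient of the residuals and refitting on quasi-differenced data, retaining the first observation via the sqrt(1 - rho^2) scaling. The data values, seed, and iteration count are illustrative assumptions, not the paper's fatalities series.

```python
import numpy as np

def prais_winsten(x, y, iterations=10):
    """Iterated Prais-Winsten estimation for a simple regression with
    AR(1) errors: transform, refit, re-estimate rho, repeat."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(y)
    rho = 0.0
    for _ in range(iterations):
        s = np.sqrt(1.0 - rho**2)
        ys = np.empty(n); xs = np.empty(n); ones = np.empty(n)
        ys[0], xs[0], ones[0] = s * y[0], s * x[0], s   # keep first obs
        ys[1:] = y[1:] - rho * y[:-1]                   # quasi-difference
        xs[1:] = x[1:] - rho * x[:-1]
        ones[1:] = 1.0 - rho
        A = np.column_stack([ones, xs])
        (b0, b1), *_ = np.linalg.lstsq(A, ys, rcond=None)
        resid = y - (b0 + b1 * x)
        rho = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
    return b0, b1, rho

# Synthetic yearly series with an AR(1) error process
rng = np.random.default_rng(2)
t = np.arange(30, dtype=float)   # stand-in for real GDP per capita
e = np.zeros(30)
for i in range(1, 30):
    e[i] = 0.7 * e[i - 1] + rng.normal(scale=0.5)
y = 3.0 + 1.5 * t + e
b0, b1, rho = prais_winsten(t, y)
```

Unlike Cochrane-Orcutt, the Prais-Winsten transform does not drop the first observation, which matters for short annual series like a 20-year fatalities record.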
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Solar array model corrections from Mars Pathfinder lander data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewell, R.C.; Burger, D.R.
1997-12-31
The MESUR solar array power model initially assumed values for input variables. After landing, early surface variables such as array tilt and azimuth, or early environmental variables such as array temperature, can be corrected. Correction of later environmental variables such as tau versus time, spectral shift, dust deposition, and UV darkening is dependent upon time, on-board science instruments, and the ability to separate the effects of variables. Engineering estimates had to be made for additional shadow losses and Voc sensor temperature corrections. Some variations had not been expected, such as tau versus time of day and spectral shift versus time of day. Additions needed to the model are the thermal mass of the lander petal and a correction between the Voc sensor and the temperature sensor. Conclusions are: the model works well; good battery predictions are difficult; inclusion of Isc and Voc sensors was valuable; and the IMP and MAE science experiments greatly assisted the data analysis and model correction.
NASA Astrophysics Data System (ADS)
Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.
2017-03-01
Individuals and their environment form a hierarchical structure consisting of units grouped at different levels. Hierarchical data structures are analyzed across several levels, with the lowest level nested in the highest level. This is commonly called multilevel modeling. Multilevel modeling is widely used in education research, for example on the average score of the National Examination (UN). In Indonesia, the UN for high school students is divided into natural science and social science streams. The purpose of this research is to develop multilevel and panel data modeling using linear mixed models on educational data. The first step is data exploration and identification of relationships between independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). Furthermore, we use a simple model in which the highest level of the hierarchy (level-2) is the regency/city, while the school is the lowest level (level-1). The best model was determined by comparing goodness-of-fit and checking assumptions via residual plots and predictions for each model. We find that, for both natural science and social science, the regression with random effects for regency/city and fixed effects for time (i.e., the multilevel model) performs better than the linear mixed model in explaining the variability of the dependent variable, the average UN score.
Change in the magnitude and mechanisms of global temperature variability with warming.
Brown, Patrick T; Ming, Yi; Li, Wenhong; Hill, Spencer A
2017-01-01
Natural unforced variability in global mean surface air temperature (GMST) can mask or exaggerate human-caused global warming, and thus a complete understanding of this variability is highly desirable. Significant progress has been made in elucidating the magnitude and physical origins of present-day unforced GMST variability, but it has remained unclear how such variability may change as the climate warms. Here we present modeling evidence that indicates that the magnitude of low-frequency GMST variability is likely to decline in a warmer climate and that its generating mechanisms may be fundamentally altered. In particular, a warmer climate results in lower albedo at high latitudes, which yields a weaker albedo feedback on unforced GMST variability. These results imply that unforced GMST variability is dependent on the background climatological conditions, and thus climate model control simulations run under perpetual preindustrial conditions may have only limited relevance for understanding the unforced GMST variability of the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpert, Peter A.; Knopf, Daniel A.
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time- and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing.
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods, such as droplets on a cold stage exposed to air or surrounded by an oil matrix, wind- and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. Finally, the model simulations allow for a quantitative experimental uncertainty analysis for the parameters Ntot, T, RH, and the ISA variability. We discuss the implications of our results for experimental analysis and interpretation of the immersion freezing process.
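The core contrast between identical and variable ISA can be reproduced in a few lines: under classical nucleation theory each droplet freezes as a Poisson process with rate Jhet × ISA, so identical surface areas give an exponentially decaying unfrozen fraction while a distribution of surface areas gives non-exponential behavior. The Jhet value and the lognormal ISA distribution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
J_het = 1.0e2    # heterogeneous nucleation rate coefficient, cm^-2 s^-1 (illustrative)
N_tot = 20000    # total number of droplets
t = np.linspace(0.0, 60.0, 61)  # time, s

# Identical ISA in every droplet -> exponential decay of unfrozen fraction
A_const = np.full(N_tot, 1.0e-4)  # cm^2
# Variable ISA: lognormal with the same mean surface area of 1e-4 cm^2
A_var = rng.lognormal(mean=np.log(1e-4) - 0.5, sigma=1.0, size=N_tot)

def frozen_fraction(A, t):
    # Survival probability of each droplet: exp(-J_het * A * t)
    unfrozen = np.exp(-J_het * np.outer(t, A))
    return 1.0 - unfrozen.mean(axis=1)

f_const = frozen_fraction(A_const, t)
f_var = frozen_fraction(A_var, t)
```

By Jensen's inequality the variable-ISA population always freezes more slowly on average than an identical-ISA population with the same mean area, which is exactly the non-exponential isothermal behavior the abstract attributes to ISA variability.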
Rough parameter dependence in climate models and the role of Ruelle-Pollicott resonances.
Chekroun, Mickaël David; Neelin, J David; Kondrashov, Dmitri; McWilliams, James C; Ghil, Michael
2014-02-04
Despite the importance of uncertainties encountered in climate model simulations, the fundamental mechanisms at the origin of sensitive behavior of long-term model statistics remain unclear. Variability of turbulent flows in the atmosphere and oceans exhibits recurrent large-scale patterns. These patterns, while evolving irregularly in time, manifest characteristic frequencies across a large range of time scales, from intraseasonal through interdecadal. Based on modern spectral theory of chaotic and dissipative dynamical systems, the associated low-frequency variability may be formulated in terms of Ruelle-Pollicott (RP) resonances. RP resonances encode information on the nonlinear dynamics of the system, and an approach for estimating them--as filtered through an observable of the system--is proposed. This approach relies on an appropriate Markov representation of the dynamics associated with a given observable. It is shown that, within this representation, the spectral gap--defined as the distance between the subdominant RP resonance and the unit circle--plays a major role in the roughness of parameter dependences. The model statistics are the most sensitive for the smallest spectral gaps; such small gaps turn out to correspond to regimes where the low-frequency variability is more pronounced, whereas autocorrelations decay more slowly. The present approach is applied to analyze the rough parameter dependence encountered in key statistics of an El-Niño-Southern Oscillation model of intermediate complexity. Theoretical arguments, however, strongly suggest that such links between model sensitivity and the decay of correlation properties are not limited to this particular model and could hold much more generally.
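A toy version of the Markov-representation approach can be sketched as follows (using the logistic map as a stand-in chaotic observable, not the ENSO model of the paper): bin a scalar time series, estimate a row-stochastic transition matrix from bin-to-bin counts, and read the spectral gap off the subdominant eigenvalue:

```python
import numpy as np

def transition_matrix(series, n_bins=20):
    """Estimate a row-stochastic Markov matrix from a scalar time
    series by binning its range (a crude Ulam-type discretization)."""
    edges = np.linspace(series.min(), series.max(), n_bins + 1)
    idx = np.clip(np.digitize(series, edges) - 1, 0, n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for i, j in zip(idx[:-1], idx[1:]):
        P[i, j] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0
    return P / rows

# Illustrative chaotic observable: the fully chaotic logistic map
x = np.empty(20000)
x[0] = 0.3
for k in range(1, x.size):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])

P = transition_matrix(x)
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
# Distance of the subdominant eigenvalue from the unit circle
spectral_gap = 1.0 - eigvals[1]
```

In the paper's framing, small spectral gaps correspond to pronounced low-frequency variability, slowly decaying autocorrelations, and rough parameter dependence of the statistics.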
Nelson, Jon P
2014-01-01
Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graphs, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than values commonly used in alcohol tax policy simulations.
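The pooling and publication-bias machinery mentioned here can be illustrated with a small synthetic sketch (fabricated estimates, not the paper's sample of 191): an inverse-variance weighted mean, plus Egger's regression of t-statistics on precision, whose intercept measures funnel-plot asymmetry:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic meta-analytic sample: true elasticity -0.20, no selection
# bias built in (purely illustrative, not the paper's data)
k = 191
se = rng.uniform(0.02, 0.25, k)          # standard errors of estimates
effects = -0.20 + rng.normal(0.0, se)    # reported elasticity estimates

# Inverse-variance weighted (fixed-effect) mean
w = 1.0 / se ** 2
fe_mean = np.sum(w * effects) / np.sum(w)

# Egger's test: regress t-statistics on precision. The slope estimates
# the pooled effect; a nonzero intercept signals funnel asymmetry.
t_stat = effects / se
precision = 1.0 / se
slope, intercept = np.polyfit(precision, t_stat, 1)
```

With no selection built into the simulation, the Egger intercept should hover near zero; in real samples a significant intercept motivates corrections such as trim-and-fill.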
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bammann, D.; Prantil, V.; Kumar, A.
1996-06-24
An internal state variable formulation for phase transforming alloy steels is presented. We have illustrated how local transformation plasticity can be accommodated by an appropriate choice for the corresponding internal stress field acting between the phases. The state variable framework compares well with a numerical micromechanical calculation providing a discrete dependence of microscopic plasticity on volume fraction and the stress dependence attributable to a softer parent phase. The multiphase model is used to simulate the stress state of a quenched bar and show qualitative trends in the response when the transformation phenomenon is incorporated on the length scale of a global boundary value problem.
Lee, Jonathan K.; Froehlich, David C.
1987-01-01
Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.
Beyond a bigger brain: Multivariable structural brain imaging and intelligence
Ritchie, Stuart J.; Booth, Tom; Valdés Hernández, Maria del C.; Corley, Janie; Maniega, Susana Muñoz; Gow, Alan J.; Royle, Natalie A.; Pattie, Alison; Karama, Sherif; Starr, John M.; Bastin, Mark E.; Wardlaw, Joanna M.; Deary, Ian J.
2015-01-01
People with larger brains tend to score higher on tests of general intelligence (g). It is unclear, however, how much variance in intelligence other brain measurements would account for if included together with brain volume in a multivariable model. We examined a large sample of individuals in their seventies (n = 672) who were administered a comprehensive cognitive test battery. Using structural equation modelling, we related six common magnetic resonance imaging-derived brain variables that represent normal and abnormal features—brain volume, cortical thickness, white matter structure, white matter hyperintensity load, iron deposits, and microbleeds—to g and to fluid intelligence. As expected, brain volume accounted for the largest portion of variance (~ 12%, depending on modelling choices). Adding the additional variables, especially cortical thickness (+~ 5%) and white matter hyperintensity load (+~ 2%), increased the predictive value of the model. Depending on modelling choices, all neuroimaging variables together accounted for 18–21% of the variance in intelligence. These results reveal which structural brain imaging measures relate to g over and above the largest contributor, total brain volume. They raise questions regarding which other neuroimaging measures might account for even more of the variance in intelligence. PMID:26240470
NASA Astrophysics Data System (ADS)
Savina, M.; Lunghi, M.; Archambault, B.; Baulier, L.; Huret, M.; Le Pape, O.
2016-05-01
Simulating fish larval drift helps assess the sensitivity of recruitment variability to early life history. An individual-based model (IBM) coupled to a hydrodynamic model was used to simulate common sole larval supply from spawning areas to coastal and estuarine nursery grounds at the meta-population scale (4 assessed stocks), from the southern North Sea to the Bay of Biscay (Western Europe) on a 26-yr time series, from 1982 to 2007. The IBM allowed each particle released to be transported by currents, to grow depending on temperature, to migrate vertically depending on development stage, to die along pelagic stages or to settle on a nursery, representing the life history from spawning to metamorphosis. The model outputs were analysed to explore interannual patterns in the amounts of settled sole larvae at the population scale; they suggested: (i) a low connectivity between populations at the larval stage, (ii) a moderate influence of interannual variation in the spawning biomass, (iii) dramatic consequences of life history on the abundance of settling larvae and (iv) the effects of climate variability on the interannual variability of the larvae settlement success.
An Analysis on the Unemployment Rate in the Philippines: A Time Series Data Approach
NASA Astrophysics Data System (ADS)
Urrutia, J. D.; Tampis, R. L.; Atienza, J. B.
2017-03-01
This study aims to formulate a mathematical model for forecasting and estimating the unemployment rate in the Philippines. It also determines which factors predict unemployment among the considered variables, namely Labor Force Rate, Population, Inflation Rate, Gross Domestic Product, and Gross National Income. Granger-causal relationships and cointegration among the dependent and independent variables are also examined using the pairwise Granger-causality test and the Johansen cointegration test. The data used were acquired from the Philippine Statistics Authority, National Statistics Office, and Bangko Sentral ng Pilipinas. Following the Box-Jenkins method, the formulated model for forecasting the unemployment rate is SARIMA(6, 1, 5) × (0, 1, 1)4 with a coefficient of determination of 0.79. The actual values are 99 percent identical to the model's fitted values and about 72 percent identical to its forecasts. According to the results of the regression analysis, Labor Force Rate and Population are significant factors of the unemployment rate. Among the independent variables, Population, GDP, and GNI show a Granger-causal relationship with unemployment. It is also found that there are at least four cointegrating relations between the dependent and independent variables.
NASA Astrophysics Data System (ADS)
Moll, Andreas; Stegert, Christoph
2007-01-01
This paper outlines an approach to couple a structured zooplankton population model, with state variables for eggs, nauplii, two copepodite stages and adults adapted to Pseudocalanus elongatus, into the complex marine ecosystem model ECOHAM2, whose 13 state variables resolve the carbon and nitrogen cycles. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation for copepod stage-dependent development processes. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Then vertical profiles of selected state variables were compared to the physical forcing to study differences between treating zooplankton as one biomass state variable or partitioning it into five population state variables. Simulated generation times are more affected by temperature than by food conditions, except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.
A framework for the study of coping, illness behaviour and outcomes.
Shaw, C
1999-05-01
This paper presents a theoretical framework for the study of coping, illness attribution, health behaviour and outcomes. It is based upon models developed within health psychology and aims to provide a theoretical basis for nurse researchers to utilize psychosocial variables. It is an interactionist model which views outcomes as dependent upon both situation and person variables. The situation is viewed as the health threat or illness symptoms as well as the psychosocial context within which the person is operating. This context includes socio-economic factors, social support, social norms, and external factors such as the mass media. The experience of health threat is dependent upon individual appraisal, and the framework incorporates Folkman and Lazarus' transactional model of stress, as well as Leventhal's illness representation model. Behaviour and the perception of threat are also dependent upon outcome expectancies and the appraisal of one's own coping resources, and so the concepts of locus of control and self-efficacy are also incorporated. This framework allows one to identify determinants of behaviour and outcome, and will aid nurses in identifying areas for psycho-social intervention.
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichara, Karim; Protopapas, Pavlos
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent, and by 15% for quasar detection, while keeping the computational cost the same.
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
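A small simulation can illustrate the variance-to-mean power relationship, V = φ μ^α (with assumed values φ = 0.5 and α = 1.5, not taken from the paper): draw gamma-distributed interspike intervals whose moments follow the power law, then recover the exponent by a log-log fit:

```python
import numpy as np

rng = np.random.default_rng(3)

phi, alpha = 0.5, 1.5      # scale factor and exponent of the power law
means = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # mean ISIs (s)

est_means, est_vars = [], []
for mu in means:
    var = phi * mu ** alpha            # variance-to-mean power relationship
    shape = mu ** 2 / var              # gamma parameters matching (mu, var)
    scale = var / mu
    isi = rng.gamma(shape, scale, size=200000)
    est_means.append(isi.mean())
    est_vars.append(isi.var())

# Recover the power law: log V = log phi + alpha * log mu
slope, intercept = np.polyfit(np.log(est_means), np.log(est_vars), 1)
```

The fitted slope recovers α and exp(intercept) recovers φ; in the paper's setting these two parameters are instead inferred by maximum likelihood from rate-modulated spike trains.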
Binary logistic regression modelling: Measuring the probability of relapse cases among drug addict
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Alias, Siti Nor Shadila
2014-07-01
For many years Malaysia has faced drug addiction issues. The most serious case is the relapse phenomenon among treated drug addicts (drug addicts who have undergone the rehabilitation programme at the Narcotic Addiction Rehabilitation Centre, PUSPEN). Thus, the main objective of this study is to find the most significant factors that contribute to relapse. Binary logistic regression analysis was employed to model the relationship between the independent variables (predictors) and the dependent variable. The dependent variable is the status of the drug addict: either relapse (Yes, coded as 1) or not (No, coded as 0). The predictors involved are age, age at first taking drugs, family history, education level, family crisis, community support and self-motivation. The sample comprises 200 cases, with data provided by the AADK (National Anti-Drug Agency). The findings of the study reveal that age and self-motivation are statistically significant predictors of relapse cases.
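A minimal sketch of such a binary logistic regression is shown below on synthetic data standing in for the AADK sample (the predictors kept, the coefficient values, and the data are all invented for illustration); the fit uses plain Newton-Raphson rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in (n = 200): two of the study's predictors, age and a
# self-motivation score; the outcome is relapse (1) or not (0).
n = 200
age = rng.uniform(18, 50, n)
motivation = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), age, motivation])   # intercept + predictors

true_beta = np.array([-1.0, 0.08, -0.5])             # illustrative coefficients
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# Fit by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))           # fitted probabilities
    W = mu * (1.0 - mu)                              # IRLS weights
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta)   # multiplicative change in relapse odds per unit
```

Significance of each predictor would then be judged from Wald statistics, i.e. each coefficient divided by the square root of the corresponding diagonal entry of the inverse Hessian.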
Model accuracy impact through rescaled observations in hydrological data assimilation studies
USDA-ARS's Scientific Manuscript database
Signal and noise time-series variability of soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly. Optimality of the analysis obtained after observations are assimilated into the model depends on the degree that the differences between the signal variances of model and observa...
A Two-Step Approach to Analyze Satisfaction Data
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.
2011-01-01
In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…
Overcoming multicollinearity in multiple regression using correlation coefficient
NASA Astrophysics Data System (ADS)
Zainodin, H. J.; Yap, S. J.
2013-09-01
Multicollinearity happens when there are high correlations among independent variables. In this case, it is difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Moreover, multicollinearity violates an assumption of multiple regression: that there is no collinearity among the independent variables. Thus, an alternative approach to overcoming the multicollinearity problem, and ultimately achieving a well-represented model, is introduced. This approach removes the multicollinearity source variables on the basis of correlation coefficient values taken from the full correlation matrix. Using the full correlation matrix facilitates the use of Excel functions in removing the multicollinearity source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. Hence, this paper gives detailed insight into the procedure, which is then compared and implemented.
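The removal step can be sketched as follows (the threshold, variables, and use of Python rather than Excel are assumptions of this sketch): compute the full correlation matrix and iteratively drop one member of the most correlated pair until no pair exceeds the threshold:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic predictors: x3 is nearly collinear with x1, so one of the
# pair should be removed before fitting the regression.
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + rng.normal(scale=0.05, size=n)      # high correlation with x1
X = np.column_stack([x1, x2, x3])
names = ["x1", "x2", "x3"]

def drop_collinear(X, threshold=0.95):
    """Drop columns until no pairwise |correlation| exceeds threshold,
    removing one member of the worst-offending pair on each pass."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        R = np.corrcoef(X[:, keep], rowvar=False)
        np.fill_diagonal(R, 0.0)
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) <= threshold:
            break
        keep.pop(max(i, j))      # drop the later variable of the pair
    return keep

kept_names = [names[k] for k in drop_collinear(X)]
```

Which member of a correlated pair to drop is a judgment call; the sketch keeps the earlier column, whereas in practice one might keep the variable more correlated with the dependent variable.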
The intermediate endpoint effect in logistic and probit regression
MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM
2010-01-01
Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. 
A commonly used method, based on the difference in coefficients, can lead to distorted conclusions regarding the intermediate effect. PMID:17942466
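The recommended product-of-coefficients estimator can be sketched on synthetic trial data (all effect sizes and the data-generating model are invented for illustration): estimate the a-path by ordinary least squares of the intermediate M on the independent variable X, estimate the b-path by a logistic regression of the binary endpoint Y on X and M, then multiply:

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    return beta

# Synthetic trial: X (treatment) -> M (intermediate) -> Y (binary endpoint)
n = 5000
x = rng.binomial(1, 0.5, n).astype(float)
m = 0.6 * x + rng.normal(size=n)          # a-path: true alpha = 0.6
eta = -0.5 + 0.8 * m                      # b-path: true beta = 0.8, no direct effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta))).astype(float)

# a: effect of X on M (ordinary least squares slope)
a = np.polyfit(x, m, 1)[0]
# b: effect of M on Y adjusting for X (logistic regression coefficient)
b = fit_logistic(np.column_stack([np.ones(n), x, m]), y)[2]

mediated_effect = a * b    # product-of-coefficients estimator
```

The difference-in-coefficients alternative (coefficient of X on Y with and without M) is distorted here because the scale of a logistic model changes when M is added, which is exactly the limitation the abstract describes.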
Aspect-Oriented Model-Driven Software Product Line Engineering
NASA Astrophysics Data System (ADS)
Groher, Iris; Voelter, Markus
Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.
Preserving Flow Variability in Watershed Model Calibrations
Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...
1987-03-01
statistics for storm water quality variables and fractions of phosphorus, solids, and carbon are presented in Tables 7 and 8, respectively. The correlation matrix and factor analysis (same method as used for baseflow) of storm water quality variables suggested three groups: Group I - TMG, TCA, TNA, TSI...models to predict storm water quality. The 11 static and 3 dynamic storm variables were used as potential dependent variables. All independent and
Limits on determining the skill of North Atlantic Ocean decadal predictions.
Menary, Matthew B; Hermanson, Leon
2018-04-27
The northern North Atlantic is important globally both through its impact on the Atlantic Meridional Overturning Circulation (AMOC) and through widespread atmospheric teleconnections. The region has been shown to be potentially predictable a decade ahead with the skill of decadal predictions assessed against reanalyses of the ocean state. Here, we show that the prediction skill in this region is strongly dependent on the choice of reanalysis used for validation, and describe the causes. Multiannual skill in key metrics such as Labrador Sea density and the AMOC depends on more than simply the choice of the prediction model. Instead, this skill is related to the similarity between the nature of interannual density variability in the underlying climate model and the chosen reanalysis. The climate models used in these decadal predictions are also used in climate projections, which raises questions about the sensitivity of these projections to the models' innate North Atlantic density variability.
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.
Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E
2016-10-26
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
Functional Freedom: A Psychological Model of Freedom in Decision-Making.
Lau, Stephan; Hiemisch, Anette
2017-07-05
The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one's own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for psychological benefits of functional decision freedom, and explicate the model's implications for current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, permitting an additional psychological foothold in research on freedom, and one that is compatible with a deterministic worldview.
Crime Modeling using Spatial Regression Approach
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.
2018-01-01
Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports to the police, the higher the crime in the region. This research models criminality in South Sulawesi, Indonesia, with society's exposure to crime risk as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the Human Development Index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error form, in South Sulawesi.
Love and Involvement in Romantic Relationships.
ERIC Educational Resources Information Center
Maddex, Barbara E.
This study investigates the effects of predictability, perceived similarity, trust and love on each other and involvement in romantic relationships by developing and testing (by path analysis) two models. One model incorporated involvement in romantic relationships as a dependent variable; the second model incorporated involvement as an…
NASA Astrophysics Data System (ADS)
Aygunes, Gunes
2017-07-01
The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literature relates venture capitalists' quality to countries' venture capital investments. This paper examines the relationship between venture capital investment and macroeconomic variables via a statistical computation method: we investigate the countries and macroeconomic variables and derive correlations between venture capital investments and the macroeconomic variables. According to a logistic regression model (logit model), the macroeconomic variables are correlated with each other in three groups. Venture capitalists can regard these correlations as an indicator. Finally, we give the correlation matrix of our results.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
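The core idea of representing variables in reduced precision can be demonstrated in a few lines, using NumPy's half-precision type as a stand-in for inexact hardware. This is a generic illustration of 16-bit rounding, not the authors' Lorenz '96 code:

```python
import numpy as np

# At double precision (64 bits), adding a small increment to 1.0 is resolved;
# at half precision (16 bits) the increment is lost, because the spacing
# between representable numbers near 1.0 is roughly 0.001.
increment = 1e-4

double_sum = np.float64(1.0) + np.float64(increment)
half_sum = np.float16(1.0) + np.float16(increment)

print(double_sum)                # 1.0001
print(half_sum)                  # 1.0 -- the increment fell below half precision
print(np.finfo(np.float16).eps)  # ~0.000977, machine epsilon at 16 bits
```

The scale-selective argument is that small-scale variables, whose true values are uncertain at this level anyway, lose nothing meaningful to such rounding, freeing memory and compute for higher resolution.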
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
Prediction and experimental observation of damage dependent damping in laminated composite beams
NASA Technical Reports Server (NTRS)
Allen, D. H.; Harris, C. E.; Highsmith, A. L.
1987-01-01
The equations of motion are developed for laminated composite beams with load-induced matrix cracking. The damage is accounted for by utilizing internal state variables. The net result of these variables on the field equations is the introduction of both enhanced damping and degraded stiffness. Both quantities are history dependent and spatially variable, thus resulting in nonlinear equations of motion. It is explained briefly how these equations may be quasi-linearized for laminated polymeric composites under certain types of structural loading. The coupled heat conduction equation is developed, and it is shown that an enhanced Zener damping effect is produced by the introduction of microstructural damage. The resulting equations are utilized to demonstrate how damage dependent material properties may be obtained from dynamic experiments. Finally, experimental results are compared to model predictions for several composite layups.
Evaluation of confidence intervals for a steady-state leaky aquifer model
Christensen, S.; Cooley, R.L.
1999-01-01
The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. 
We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
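For readers unfamiliar with occupancy likelihoods, the sketch below computes detection-history probabilities under the standard (non-Markovian) single-season occupancy model, which is the baseline the paper extends. The parameter values are illustrative, not estimates from the tiger survey:

```python
from itertools import product

def history_prob(history, psi, p):
    """Probability of a detection history under the standard single-season
    occupancy model: the site is occupied with probability psi, and an
    occupied site yields a detection with probability p per replicate.
    (The paper's new models replace this independence with Markovian
    dependence between spatial replicates.)
    """
    given_occupied = 1.0
    for y in history:
        given_occupied *= p if y == 1 else (1 - p)
    if any(history):
        return psi * given_occupied
    # All-zero history: either occupied but never detected, or unoccupied.
    return psi * given_occupied + (1 - psi)

psi, p, K = 0.6, 0.3, 4   # illustrative values only
total = sum(history_prob(h, psi, p) for h in product([0, 1], repeat=K))
print(round(total, 10))    # 1.0 -- probabilities over all 2^K histories sum to 1
```

The all-zero history is what makes occupancy models distinctive: an absence of detections never proves absence, and this ambiguity is where ignoring spatial dependence biases the estimates.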
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
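A Gompertz-type SDE of the general kind described above can be simulated with the Euler-Maruyama scheme. This is a generic sketch with invented parameters, not the authors' fitted model (whose noise term additionally shrinks with the logarithm of plant weight and whose drift includes competition kernels):

```python
import numpy as np

def gompertz_sde(x0, K, r, sigma, dt, steps, rng):
    """Euler-Maruyama simulation of a Gompertz-type growth SDE:
        dX = r * X * ln(K / X) dt + sigma * X dB.
    """
    x = x0
    for _ in range(steps):
        drift = r * x * np.log(K / x)
        x += drift * dt + sigma * x * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 1e-8)  # keep the state positive
    return x

rng = np.random.default_rng(0)
# With sigma = 0 the dynamics are deterministic and settle at the asymptotic
# size K; with noise, replicate "plants" spread out into a size hierarchy.
deterministic = gompertz_sde(1.0, 100.0, 1.0, 0.0, 0.01, 2000, rng)
noisy = [gompertz_sde(1.0, 100.0, 1.0, 0.2, 0.01, 2000, rng) for _ in range(20)]
print(round(deterministic, 2))   # ~100.0
print(np.std(noisy) > 0)         # True: stochasticity creates size variation
```

The contrast between the deterministic run and the spread of the noisy replicates is a toy version of the paper's point that stochasticity is a determinant of size hierarchy.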
Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Nottorf, Eric W.
1989-01-01
A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus Ex and Poisson's ratio νxy of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement with experimental results obtained for graphite/epoxy laminates.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on a single missing-data mechanism, this study proposes a two-stage method that handles two missing-data mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1,190 questionnaires were returned by students (or their parents) selected in child-care settings, schools, and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model was established. Based on 1,000 resampling runs, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, a two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with non-response bias and selection bias in the dependent variable of survey data.
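A single predictive-mean-matching draw, the building block of the imputation scheme named above, can be sketched as follows. The data are synthetic and this omits the repeated parameter draws and pooling of proper multiple imputation:

```python
import numpy as np

def pmm_impute(y, X, rng, k=5):
    """One predictive-mean-matching draw: fit OLS on the complete cases,
    predict for every row, then impute each missing y with the observed y
    of a donor whose predicted value is among the k closest predictions.
    (Illustrative sketch only; full multiple imputation repeats this with
    parameter draws and pools the results.)
    """
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])       # add an intercept
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
    pred = Xd @ beta
    y_out = y.copy()
    for i in np.where(~obs)[0]:
        dist = np.abs(pred[obs] - pred[i])
        donors = np.argsort(dist)[:k]                # k nearest predictions
        y_out[i] = y[obs][rng.choice(donors)]        # borrow an observed value
    return y_out

rng = np.random.default_rng(1)
X = np.arange(20, dtype=float)
y = 2.0 * X + rng.normal(0, 1, 20)
y[[3, 7, 15]] = np.nan                               # synthetic MAR gaps
y_imp = pmm_impute(y, X, rng)
print(np.isnan(y_imp).any())                         # False: all gaps filled
```

Because PMM only ever copies observed values, imputations stay within the empirical range of the data, which is one reason it performed well in the resampling comparison above.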
Variable horizon in a peridynamic medium
Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo
2015-12-10
Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
NASA Astrophysics Data System (ADS)
Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.
2017-10-01
Context. Standard spectroscopic analyses of Cepheid variables are based on hydrostatic one-dimensional model atmospheres, with convection treated using various formulations of mixing-length theory. Aims: This paper aims to carry out an investigation of the validity of the quasi-static approximation in the context of pulsating stars. We check the adequacy of a two-dimensional time-dependent model of a Cepheid-like variable with focus on its spectroscopic properties. Methods: With the radiation-hydrodynamics code CO5BOLD, we construct a two-dimensional time-dependent envelope model of a Cepheid with Teff = 5600 K, log g = 2.0, solar metallicity, and a 2.8-day pulsation period. Subsequently, we perform extensive spectral syntheses of a set of artificial iron lines in local thermodynamic equilibrium. The set of lines allows us to systematically study effects of line strength, ionization stage, and excitation potential. Results: We evaluate the microturbulent velocity, line asymmetry, projection factor, and Doppler shifts. The microturbulent velocity, averaged over all lines, depends on the pulsational phase and varies between 1.5 and 2.7 km s⁻¹. The derived projection factor lies between 1.23 and 1.27, which agrees with observational results. The mean Doppler shift is non-zero and negative, −1 km s⁻¹, after averaging over several full periods and lines. This residual line-of-sight velocity (related to the "K-term") is primarily caused by horizontal inhomogeneities, and consequently we interpret it as the familiar convective blueshift ubiquitously present in non-pulsating late-type stars. Limited statistics prevent firm conclusions on the line asymmetries. Conclusions: Our two-dimensional model provides a reasonably accurate representation of the spectroscopic properties of a short-period Cepheid-like variable star. Some properties are primarily controlled by convective inhomogeneities rather than by the Cepheid-defining pulsations. 
Extended multi-dimensional modelling offers new insight into the nature of pulsating stars.
Computational implications of activity-dependent neuronal processes
NASA Astrophysics Data System (ADS)
Goldman, Mark Steven
Synapses, the connections between neurons, often fail to transmit a large percentage of the action potentials that they receive. I describe several models of synaptic transmission at a single stochastic synapse with an activity-dependent probability of transmission and demonstrate how synaptic transmission failures may increase the efficiency with which a synapse transmits information. Spike trains in the visual cortex of freely viewing monkeys have positive autocorrelations that are indicative of a redundant representation of the information they contain. I show how a synapse with activity-dependent transmission failures modeled after those occurring in visual cortical synapses can remove this redundancy by transmitting a decorrelated subset of the spike trains it receives. I suggest that redundancy reduction at individual synapses saves synaptic resources while increasing the sensitivity of the postsynaptic neuron to information arriving along many inputs. For a neuron receiving input from many decorrelating synapses, my analysis leads to a prediction of the number of visual inputs to a neuron and the cross-correlations between these inputs and suggests that the time scale of synaptic dynamics observed in sensory areas corresponds to a fundamental time scale for processing sensory information. Systems with activity-dependent changes in their parameters, or plasticity, often display a wide variability in their individual components that belies the stability of their function. Motivated by experiments demonstrating that identified neurons with stereotyped function can have a large variability in the densities of their ion channels, or ionic conductances, I build a conductance-based model of a single neuron. The neuron's firing activity is relatively insensitive to changes in certain combinations of conductances, but markedly sensitive to changes in other combinations. 
Using a combined modeling and experimental approach, I show that neuromodulators and regulatory processes target sensitive combinations of conductances. I suggest that the variability observed in conductance measurements occurs along insensitive combinations of conductances and could result from homeostatic processes that allow the neuron's conductances to drift without triggering activity-dependent feedback mechanisms. These results together suggest that plastic systems may have a high degree of flexibility and variability in their components without a loss of robustness in their response properties.
Mars dust storms - Interannual variability and chaos
NASA Technical Reports Server (NTRS)
Ingersoll, Andrew P.; Lyons, James R.
1993-01-01
The hypothesis is that the global climate system, consisting of atmospheric dust interacting with the circulation, produces its own interannual variability when forced at the annual frequency. The model has two time-dependent variables representing the amount of atmospheric dust in the northern and southern hemispheres, respectively. Absorption of sunlight by the dust drives a cross-equatorial Hadley cell that brings more dust into the heated hemisphere. The circulation decays when the dust storm covers the globe. Interannual variability manifests itself either as a periodic solution in which the period is a multiple of the Martian year, or as an aperiodic (chaotic) solution that never repeats. Both kinds of solution are found in the model, lending support to the idea that interannual variability is an intrinsic property of the global climate system. The next step is to develop a hierarchy of dust-circulation models capable of being integrated for many years.
Partitioning neuronal variability
Goris, Robbe L.T.; Movshon, J. Anthony; Simoncelli, Eero P.
2014-01-01
Responses of sensory neurons differ across repeated measurements. This variability is usually treated as stochasticity arising within neurons or neural circuits. However, some portion of the variability arises from fluctuations in excitability due to factors that are not purely sensory, such as arousal, attention, and adaptation. To isolate these fluctuations, we developed a model in which spikes are generated by a Poisson process whose rate is the product of a drive that is sensory in origin, and a gain summarizing stimulus-independent modulatory influences on excitability. This model provides an accurate account of response distributions of visual neurons in macaque LGN, V1, V2, and MT, revealing that variability originates in large part from excitability fluctuations which are correlated over time and between neurons, and which increase in strength along the visual pathway. The model provides a parsimonious explanation for observed systematic dependencies of response variability and covariability on firing rate. PMID:24777419
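The model's central prediction, that a fluctuating multiplicative gain inflates spike-count variability beyond the Poisson level, is easy to reproduce in simulation. The gamma-distributed gain below is an illustrative choice (it makes the counts negative binomial), not necessarily the paper's fitted gain distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, base_rate = 5000, 10.0

# Ordinary Poisson spiking: variance ~ mean (Fano factor ~ 1).
plain = rng.poisson(base_rate, n)

# Modulated Poisson: the rate is the product of a fixed sensory drive and a
# fluctuating excitability gain, here gamma-distributed with mean 1.
gain = rng.gamma(shape=2.0, scale=0.5, size=n)
modulated = rng.poisson(base_rate * gain)

fano_plain = plain.var() / plain.mean()
fano_mod = modulated.var() / modulated.mean()
print(round(fano_plain, 2))  # close to 1
print(round(fano_mod, 2))    # well above 1: "super-Poisson" variability
```

Because the gain multiplies the rate, the excess variance grows with firing rate, which is the systematic dependency of variability on rate that the model explains parsimoniously.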
Species distribution model transferability and model grain size - finer may not always be better.
Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin
2018-05-08
Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. Grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. The choice of grain size often depends on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K., and tested model performance and transferability by varying grain size (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and selection of variables. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size. However, successful model transferability may require optimization of model grain size.
Occupant perception of indoor air and comfort in four hospitality environments.
Moschandreas, D J; Chu, P
2002-01-01
This article reports on a survey of customer and staff perceptions of indoor air quality at two restaurants, a billiard hall, and a casino. The survey was conducted at each environment for 8 days: 2 weekend days on 2 consecutive weekends and 4 weekdays. Before and during the survey, each hospitality environment satisfied ventilation requirements set in ASHRAE Standard 62-1999, Ventilation for Acceptable Indoor Air. An objective of this study was to test the hypothesis: if a hospitality environment satisfies ASHRAE ventilation requirements, then the indoor air is acceptable, that is, fewer than 20% of the exposed occupants perceive the environment as unacceptable. A second objective was to develop a multiple regression model that predicts the dependent variable, the environment is acceptable, as a function of a number of independent perception variables. Occupant perception of environmental, comfort, and physical variables was measured using a questionnaire. This instrument was designed to be efficient and unobtrusive; subjects could complete it within 3 min. Significant differences in environment perception were identified between customers and staff. The dependent variable, the environment is acceptable, is affected by temperature, occupant density, occupant smoking status, odor perception, health conditions, sensitivity to chemicals, and enjoyment of activities. Depending on the hospitality environment, variation in the independent variables explains as much as 77% of the variation of the dependent variable.
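The "explains as much as 77% of the variation" figure is an R² from multiple regression. The sketch below shows the computation on synthetic data; the variable names echo the survey but the numbers and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical standardised perception scores (invented; the survey's actual
# data are not reproduced here).
temperature = rng.normal(0, 1, n)
density = rng.normal(0, 1, n)
odor = rng.normal(0, 1, n)
acceptable = (0.5 * temperature - 0.8 * density + 0.3 * odor
              + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), temperature, density, odor])
beta, *_ = np.linalg.lstsq(X, acceptable, rcond=None)
resid = acceptable - X @ beta
r2 = 1 - resid.var() / acceptable.var()
print(round(r2, 2))  # share of response variation explained by the predictors
```

An R² of 0.77 in the study means the perception variables jointly account for about three-quarters of the variation in acceptability ratings, leaving the remainder to unmeasured factors.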
NASA Technical Reports Server (NTRS)
Richard, Jacques C.
1995-01-01
This paper presents a dynamic model of an internal combustion engine coupled to a variable pitch propeller. The low-order, nonlinear time-dependent model is useful for simulating the propulsion system of general aviation single-engine light aircraft. This model is suitable for investigating engine diagnostics and monitoring and for control design and development. Furthermore, the model may be extended to provide a tool for the study of engine emissions, fuel economy, component effects, alternative fuels, alternative engine cycles, flight simulators, sensors, and actuators. Results show that the model provides a reasonable representation of the propulsion system dynamics from zero to 10 Hertz.
Modeling the microstructurally dependent mechanical properties of poly(ester-urethane-urea)s.
Warren, P Daniel; Sycks, Dalton G; McGrath, Dominic V; Vande Geest, Jonathan P
2013-12-01
Poly(ester-urethane-urea) (PEUU) is one of many synthetic biodegradable elastomers under scrutiny for biomedical and soft tissue applications. The goal of this study was to investigate the effect of the experimental parameters on mechanical properties of PEUUs following exposure to different degrading environments, similar to that of the human body, using linear regression, producing one predictive model. The model utilizes two independent variables of poly(caprolactone) (PCL) type and copolymer crystallinity to predict the dependent variable of maximum tangential modulus (MTM). Results indicate that comparisons between PCLs at different degradation states are statistically different (p < 0.0003), while the difference between experimental and predicted average MTM is statistically negligible (p < 0.02). The linear correlation between experimental and predicted MTM values is R² = 0.75. Copyright © 2013 Wiley Periodicals, Inc., a Wiley Company.
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. 
Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.; Díaz, M.
2008-03-01
This work has evaluated the probability of earthquake-triggered landslide occurrence in the whole of El Salvador, with a Geographic Information System (GIS) and a logistic regression model. Slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness are the predictor variables used to determine the dependent variable of occurrence or non-occurrence of landslides within an individual grid cell. The results illustrate the importance of terrain roughness and soil type as key factors within the model: using only these two variables, the analysis returned a significance level of 89.4%. The results obtained from the model within the GIS were then used to produce a map of relative landslide susceptibility.
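A grid-cell logistic regression of this kind can be sketched from first principles. The cell data below are synthetic, and the two predictor names merely echo the abstract's key factors; this is not the study's GIS workflow:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression (a from-scratch stand-in
    for the GIS-based model described above)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ w))
        w += lr * Xd.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return w

rng = np.random.default_rng(0)
n = 400
roughness = rng.normal(0, 1, n)             # hypothetical standardised predictors
soil = rng.normal(0, 1, n)
logit = 2.0 * roughness + 1.5 * soil        # synthetic "true" susceptibility
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([roughness, soil]), y)
p_hat = 1 / (1 + np.exp(-np.column_stack([np.ones(n), roughness, soil]) @ w))
accuracy = ((p_hat > 0.5) == (y > 0.5)).mean()
print(accuracy > 0.75)   # True: the two predictors separate cells fairly well
```

Mapping the fitted probabilities back onto the grid cells is what produces a relative susceptibility map like the one the study reports.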
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process of predicting the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis, such as life cycle costs, spares calculations, etc. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time has a significant impact on logistics resource demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man-hour, are relatively easy to predict and forecast; they normally follow a linear path and do not change randomly. However, the variable parameters studied in this report, such as MTBF, do not follow a linear path; they normally fall within the distribution curves discussed in this publication. The challenging task then becomes the use of statistical techniques to accurately forecast the future non-linear time-dependent variable arisings and events with a high confidence level. This, in turn, translates into tremendous cost savings and improved availability all around.
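One simple way to forecast a non-linear, monotonically drifting parameter such as MTBF is to fit a log-linear trend, so that exponential decay becomes a straight line that ordinary least squares can extrapolate. The quarterly figures below are invented for illustration, not ISS data, and real ILS forecasting would use the distribution-based methods the report describes:

```python
import numpy as np

# Hypothetical quarterly MTBF observations (hours) showing non-linear decay.
quarters = np.arange(8, dtype=float)
mtbf = np.array([500., 460., 430., 390., 365., 335., 310., 290.])

# Fit log(MTBF) = a + b * quarter by least squares, then extrapolate.
A = np.column_stack([np.ones_like(quarters), quarters])
coef, *_ = np.linalg.lstsq(A, np.log(mtbf), rcond=None)

forecast_q = 10.0                     # two quarters past the data
forecast = np.exp(coef[0] + coef[1] * forecast_q)
print(round(float(forecast), 1))      # extrapolated MTBF, in hours
```

The fitted slope is negative for a degrading item, and the exponential back-transform keeps the forecast positive, which a naive linear extrapolation of raw MTBF would not guarantee.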
Species interactions may help explain the erratic periodicity of whooping cough dynamics.
Bhattacharyya, Samit; Ferrari, Matthew J; Bjørnstad, Ottar N
2017-12-14
Incidence of whooping cough exhibits variable dynamics across time and space. The periodicity of this disease varies from annual to five years in different geographic regions in both developing and developed countries. Many hypotheses have been put forward to explain this variability, such as nonlinearity and seasonality, stochasticity, and variable recruitment of susceptible individuals via birth, immunization, and immune boosting. We propose an alternative hypothesis to describe the variability in periodicity: the intricate dynamical variability of whooping cough may arise from interactions between its dominant etiological agents, Bordetella pertussis and Bordetella parapertussis. We develop a two-species age-structured model, where the two pathogens are allowed to interact through age-dependent convalescence of individuals with severe illness from infections. With moderate strength of interactions, the model exhibits multi-annual coexisting attractors that depend on the R0 of the two pathogens. We also examine how perturbation from case importation and noise in transmission may push the system from one dynamical regime to another. The coexistence of multi-annual cycles and the behavior of switching between attractors suggest that variable dynamics of whooping cough could be an emergent property of its multi-agent etiology. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods
NASA Astrophysics Data System (ADS)
Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad
2018-04-01
Morphometrics is a quantitative analysis depending on the shape and size of several specimens. Morphometric quantitative analyses are commonly used to analyse fossil records, the shape and size of specimens, and more. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven variables as independent variables and two dependent variables. Statistical modelling such as analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) was used in the analysis. The results showed differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.
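The one-way ANOVA step used in such morphometric comparisons can be sketched from scratch. The skull-length samples below are hypothetical, invented for illustration, not the study's wolf data:

```python
import numpy as np

def one_way_anova(groups):
    """F statistic and degrees of freedom for a one-way ANOVA."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical skull-length samples (mm) for two wolf populations.
rocky = np.array([240.0, 245.0, 250.0, 238.0, 247.0])
arctic = np.array([255.0, 260.0, 252.0, 258.0, 262.0])
f, df1, df2 = one_way_anova([rocky, arctic])
```

A large F relative to the F(df1, df2) distribution indicates a group difference; MANOVA extends the same idea to several dependent variables at once.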
Qiao, Yuanhua; Keren, Nir; Mannan, M Sam
2009-08-15
Risk assessment and management of transportation of hazardous materials (HazMat) require the estimation of accident frequency. This paper presents a methodology to estimate hazardous materials transportation accident frequency by utilizing publicly available databases and expert knowledge. The estimation process addresses route-dependent and route-independent variables. Negative binomial regression is applied to an analysis of the Department of Public Safety (DPS) accident database to derive basic accident frequency as a function of route-dependent variables, while the effects of route-independent variables are modeled by fuzzy logic. The integrated methodology provides the basis for an overall transportation risk analysis, which can be used later to develop a decision support system.
Hsu, Chiu-Hsieh; Li, Yisheng; Long, Qi; Zhao, Qiuhong; Lance, Peter
2011-01-01
In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial. PMID:22065985
ERIC Educational Resources Information Center
Anderson, Carolyn J.; Verkuilen, Jay; Peyton, Buddy L.
2010-01-01
Survey items with multiple response categories and multiple-choice test questions are ubiquitous in psychological and educational research. We illustrate the use of log-multiplicative association (LMA) models that are extensions of the well-known multinomial logistic regression model for multiple dependent outcome variables to reanalyze a set of…
NASA Astrophysics Data System (ADS)
Xu, Zexuan; Hu, Bill
2016-04-01
Dual-permeability karst aquifers, in which porous media and conduit networks have significantly different hydrological characteristics, are widely distributed in the world. Discrete-continuum numerical models, such as MODFLOW-CFP and CFPv2, have been verified as appropriate approaches for simulating groundwater flow and solute transport in karst hydrogeology. On the other hand, seawater intrusion associated with contamination of fresh groundwater resources has been observed and investigated in a number of coastal aquifers, especially under conditions of sea level rise. Density-dependent numerical models, including SEAWAT, are able to quantitatively evaluate seawater/freshwater interaction processes. A numerical model of variable-density flow and solute transport - conduit flow process (VDFST-CFP) is developed to provide a better description of seawater intrusion and submarine groundwater discharge in a coastal karst aquifer with conduits. The coupled discrete-continuum VDFST-CFP model applies the Darcy-Weisbach equation to simulate non-laminar groundwater flow in the conduit system, which is conceptualized and discretized as pipes, while the Darcy equation is used in the continuum porous media. Density-dependent groundwater flow and solute transport equations with appropriate density terms in both the conduit and porous media systems are derived and numerically solved using a standard finite difference method with an implicit iteration procedure. Synthetic horizontal and vertical benchmarks are created to validate the newly developed VDFST-CFP model by comparison with other numerical models: the variable-density SEAWAT, the coupled constant-density groundwater flow and solute transport MODFLOW/MT3DMS, and the discrete-continuum CFPv2/UMT3D models. The VDFST-CFP model improves the simulation of density-dependent seawater/freshwater mixing processes and exchanges between conduit and matrix.
Continuum numerical models greatly overestimated the flow rate under turbulent flow conditions, whereas discrete-continuum models provide more accurate results. Parameter sensitivity analysis indicates that conduit diameter, friction factor, matrix hydraulic conductivity, and porosity are important parameters that significantly affect variable-density flow and solute transport simulation. The pros and cons of the model assumptions, conceptual simplifications, and numerical techniques in VDFST-CFP are discussed. In general, the development of the VDFST-CFP model is an innovation in numerical modeling methodology and could be applied to quantitatively evaluate seawater/freshwater interaction in coastal karst aquifers. Keywords: Discrete-continuum numerical model; Variable density flow and transport; Coastal karst aquifer; Non-laminar flow
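The contrast between laminar matrix flow (Darcy's law) and turbulent conduit flow (Darcy-Weisbach) can be illustrated with back-of-the-envelope numbers. All parameter values below are hypothetical, not calibrated to any aquifer:

```python
import math

# Matrix (laminar): Darcy's law, q = -K * dh/dx
K = 1e-4            # hydraulic conductivity (m/s), hypothetical
dh_dx = -0.01       # dimensionless head gradient
q_matrix = -K * dh_dx            # specific discharge (m/s)

# Conduit (turbulent): Darcy-Weisbach head loss h_f = f*(L/D)*v^2/(2g),
# solved for the mean velocity v given a total head loss dh over length L.
f = 0.03            # friction factor, hypothetical
D = 1.0             # conduit diameter (m)
g = 9.81            # gravitational acceleration (m/s^2)
L, dh = 1000.0, 10.0             # conduit length (m) and head loss (m)
v_conduit = math.sqrt(2.0 * g * D * dh / (f * L))
```

The conduit velocity comes out several orders of magnitude above the matrix specific discharge, which is why treating conduits as equivalent porous media overestimates matrix-style flow laws' applicability, as the abstract notes.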
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keser, Saniye; Duzgun, Sebnem; Department of Geodetic and Geographic Information Technologies, Middle East Technical University, 06800 Ankara
Highlights: (1) Spatial autocorrelation exists in municipal solid waste generation rates for different provinces in Turkey. (2) Traditional non-spatial regression models may not provide sufficient information for better solid waste management. (3) Unemployment rate is a global variable that significantly impacts the waste generation rates in Turkey. (4) Significances of global parameters may diminish at local scale for some provinces. (5) GWR model can be used to create clusters of cities for solid waste management. - Abstract: In studies focusing on the factors that impact solid waste generation habits and rates, the potential spatial dependency in solid waste generation data is not considered in relating the waste generation rates to its determinants. In this study, spatial dependency is taken into account in the determination of the significant socio-economic and climatic factors that may be of importance for the municipal solid waste (MSW) generation rates in different provinces of Turkey. Simultaneous spatial autoregression (SAR) and geographically weighted regression (GWR) models are used for the spatial data analyses. Similar to ordinary least squares regression (OLSR), regression coefficients are global in the SAR model. In other words, the effect of a given independent variable on a dependent variable is valid for the whole country. Unlike OLSR or SAR, GWR reveals the local impact of a given factor (or independent variable) on the waste generation rates of different provinces. Results show that provinces within closer neighborhoods have similar MSW generation rates. On the other hand, this spatial autocorrelation is not very high for the explanatory variables considered in the study. OLSR and SAR models have similar regression coefficients. GWR is useful to indicate the local determinants of MSW generation rates.
The GWR model can be utilized to plan waste management activities at local scale, including waste minimization, collection, treatment, and disposal. At global scale, the MSW generation rates in Turkey are significantly related to unemployment rate and asphalt-paved roads ratio. Yet, significances of these variables may diminish at local scale for some provinces. At local scale, different factors may be important in affecting MSW generation rates.
a New Framework for Characterising Simulated Droughts for Future Climates
NASA Astrophysics Data System (ADS)
Sharma, A.; Rashid, M.; Johnson, F.
2017-12-01
Significant attention has been focussed on metrics for quantifying drought. Less attention has been given to the unsuitability of current metrics for quantifying drought in a changing climate, due to the clear non-stationarity in potential and actual evapotranspiration well into the future (Asadi Zarch et al., 2015). This talk presents a new basis for simulating drought designed specifically for use with climate model simulations. Given the known uncertainty of climate model rainfall simulations, along with their inability to represent low-frequency variability attributes, the approach here adopts a predictive model for drought using selected atmospheric indicators. This model is based on a wavelet decomposition of relevant atmospheric predictors to filter out less relevant frequencies and formulate a better characterisation of the drought metric chosen as response. Once ascertained using observed precipitation and associated atmospheric variables, these can be formulated from GCM simulations using a multivariate bias correction tool (Mehrotra and Sharma, 2016) that accounts for low-frequency variability, and a regression tool that accounts for nonlinear dependence (Sharma and Mehrotra, 2014). Use of only the relevant frequencies, as well as the corrected representation of cross-variable dependence, allows greater accuracy in characterising observed drought from GCM simulations. Using simulations from a range of GCMs across Australia, we show here that this new method offers considerable advantages in representing drought compared to traditionally followed alternatives that rely on modelled rainfall instead. References: Asadi Zarch, M. A., B. Sivakumar, and A. Sharma (2015), Droughts in a warming climate: A global assessment of Standardized Precipitation Index (SPI) and Reconnaissance Drought Index (RDI), Journal of Hydrology, 526, 183-195. Mehrotra, R., and A. Sharma (2016), A Multivariate Quantile-Matching Bias Correction Approach with Auto- and Cross-Dependence across Multiple Time Scales: Implications for Downscaling, Journal of Climate, 29(10), 3519-3539. Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 50, 650-660, doi:10.1002/2013WR013845.
ERIC Educational Resources Information Center
McMicken, Betty; Vento-Wilson, Margaret; Von Berg, Shelley; Rogers, Kelly
2014-01-01
This research examined cineradiographic films (CRF) of articulatory movements in a person with congenital aglossia (PWCA) during speech production of four phrases. Pearson correlations and a multiple regression model investigated co-variation of independent variables, positions of mandible and hyoid; and pseudo-tongue-dependent variables,…
Estimation of Latent Group Effects: Psychometric Technical Report No. 2.
ERIC Educational Resources Information Center
Mislevy, Robert J.
Conventional methods of multivariate normal analysis do not apply when the variables of interest are not observed directly but must be inferred from fallible or incomplete data. For example, responses to mental test items may depend upon latent aptitude variables, which are modeled in turn as functions of demographic effects in the population. A…
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each case with interactions included. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort sampling. Pseudolikelihood with calibrated weights yielded more efficient estimators than plain pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
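The abstract mentions a general method to generate survival times with time-varying covariates. One standard construction, not necessarily the authors' exact method, inverts the cumulative hazard of a piecewise-constant hazard with a binary covariate that switches on at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_survival_time(h0, beta, t_switch, rng):
    """Inversion sampling under a piecewise-constant hazard:
    h(t) = h0 for t < t_switch, and h0*exp(beta) afterwards
    (a binary time-dependent covariate that switches on at t_switch).
    Solve H(T) = E where E ~ Exponential(1)."""
    target = -np.log(rng.uniform())          # exponential(1) draw
    if target < h0 * t_switch:
        return target / h0                   # event before the switch
    return t_switch + (target - h0 * t_switch) / (h0 * np.exp(beta))

h0, beta, t_switch = 0.1, 1.0, 5.0           # hypothetical values
times = np.array([draw_survival_time(h0, beta, t_switch, rng)
                  for _ in range(20000)])
```

Before the switch the times behave like an Exponential(h0) sample, so the fraction surviving past t_switch should be close to exp(-h0*t_switch).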
On Direction of Dependence in Latent Variable Contexts
ERIC Educational Resources Information Center
von Eye, Alexander; Wiedermann, Wolfgang
2014-01-01
Approaches to determining direction of dependence in nonexperimental data are based on the relation between higher-than second-order moments on one side and correlation and regression models on the other. These approaches have experienced rapid development and are being applied in contexts such as research on partner violence, attention deficit…
A System's View of E-Learning Success Model
ERIC Educational Resources Information Center
Eom, Sean B.; Ashill, Nicholas J.
2018-01-01
The past several decades of e-learning empirical research have advanced our understanding of the effective management of critical success factors (CSFs) of e-learning. Meanwhile, measures of dependent and independent variables have proliferated and become over-elaborated. We argue that a significant reduction in dependent and independent variables…
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
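The distinction the abstract draws between confidence intervals (parameter uncertainty only) and prediction intervals (parameter uncertainty plus random error in the dependent variable) can be shown with a minimal Monte Carlo sketch. The one-parameter exponential model and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical nonlinear model output g(k) = exp(-k * t) evaluated at t = 2,
# with parameter k drawn uniformly over its plausible extreme range [0.1, 0.5].
t = 2.0
k_samples = rng.uniform(0.1, 0.5, 100_000)
outputs = np.exp(-k_samples * t)

# 95% confidence interval: parameter uncertainty only.
conf_lo, conf_hi = np.quantile(outputs, [0.025, 0.975])

# 95% prediction interval: add random error in the dependent variable.
noise = rng.normal(0.0, 0.05, outputs.size)
pred_lo, pred_hi = np.quantile(outputs + noise, [0.025, 0.975])
```

As in the hypothetical example of the abstract, adding random errors in the dependent variable widens the prediction interval relative to the confidence interval.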
[Bioacoustic of the advertisement call of Ceratophrys cranwelli (Anura: Ceratophryidae)].
Valetti, Julián Alonso; Salas, Nancy Edith; Martino, Adolfo Ludovico
2013-03-01
The advertisement call plays an important role in the life history of anuran amphibians, mainly during the breeding season. Call features represent an important character for discriminating species, and sound emissions are very effective in assuring or reinforcing genetic incompatibility, especially in the case of sibling species. Since frogs are ectotherms, the acoustic properties of their calls vary with temperature. In this study, we described the advertisement call of C. cranwelli, quantifying the temperature effect on its components. The acoustic emissions were recorded during 2007 using a Sony TCD-100 DAT recorder with a Sony ECM-MS907 stereo microphone and TDK DAT-RGX 60 tape. As males emit their calls floating in temporary ponds, water temperatures were registered after recording the advertisement calls with a TES 1300 digital thermometer (±0.1 °C). Altogether, 54 calls from 18 males were analyzed. The temporal variables of each advertisement call were measured using oscillograms and sonograms, and the analyses of dominant frequency were performed using a spectrogram. Multiple correlation analysis was used to identify the temperature-dependent acoustic variables, and the temperature effect on these variables was quantified using linear regression models. The advertisement call of C. cranwelli consists of a single pulse group. Call duration, pulse duration, and pulse interval decreased with temperature, whereas the pulse rate increased with temperature. The temperature-dependent variables were standardized at 25 °C according to the linear regression model obtained. The acoustic variables that correlated with temperature are those whose emission depends on the laryngeal muscles, and temperature constrains the contractile properties of muscles.
Our results indicated that temperature explains an important fraction of the variability in some acoustic variables (79% in the pulse rate), and demonstrated the importance of considering the effect of temperature on acoustic components. The results also suggest that acoustic variables show geographic variation when our data are compared with those of previous works.
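The temperature standardization described above can be sketched as an OLS-based correction to a reference temperature of 25 °C. All data here are simulated for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical calls: pulse rate (pulses/s) rising with water temperature (°C).
temp = rng.uniform(18.0, 30.0, 54)
pulse_rate = 20.0 + 3.0 * temp + rng.normal(0.0, 2.0, 54)

# Ordinary least squares fit: pulse_rate = a + b * temp
A = np.column_stack([np.ones_like(temp), temp])
(a, b), *_ = np.linalg.lstsq(A, pulse_rate, rcond=None)

# Standardize each call to a common 25 °C using the fitted slope.
standardized = pulse_rate + b * (25.0 - temp)
```

After the correction the standardized variable is uncorrelated with temperature, which is the point of reporting calls at a common reference temperature.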
Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?
Delhey, Kaspar; Peters, Anne
2008-01-01
Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. 
Increased variability of known sexually-selected/quality-indicating colours in the studied species provides support to the predictions of sexual selection theory, but the lack of increased overall variability in males or in dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766
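As a rough stand-in for such a colour-variability index (the authors' actual index is based on receptor-noise-scaled discriminability in avian colour space, which is not reproduced here), one can compare mean pairwise distances between colour loci; the colour coordinates below are invented:

```python
import numpy as np

def mean_pairwise_distance(points):
    """Variability index: mean pairwise Euclidean distance between colour
    points in a perceptual colour space (a simplified stand-in for the
    receptor-noise-scaled distances used in studies like this one)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dists = [np.linalg.norm(points[i] - points[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Hypothetical colour loci (3-D chromaticity coordinates) for two patches:
# an ornamental patch and a non-ornamental background patch.
ornament = [[0.1, 0.2, 0.7], [0.3, 0.1, 0.6], [0.2, 0.4, 0.4]]
plumage_bg = [[0.30, 0.30, 0.40], [0.31, 0.29, 0.40], [0.29, 0.31, 0.40]]
```

Under the paper's prediction (a), the ornamental patch would show the larger index, as it does with these invented numbers.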
Bounds on internal state variables in viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1993-01-01
A typical viscoplastic model will introduce up to three types of internal state variables in order to properly describe transient material behavior; they are as follows: the back stress, the yield stress, and the drag strength. Different models employ different combinations of these internal variables--their selection and description of evolution being largely dependent on application and material selection. Under steady-state conditions, the internal variables cease to evolve and therefore become related to the external variables (stress and temperature) through simple functional relationships. A physically motivated hypothesis is presented that links the kinetic equation of viscoplasticity with that of creep under steady-state conditions. From this hypothesis one determines how the internal variables relate to one another at steady state, but most importantly, one obtains bounds on the magnitudes of stress and back stress, and on the yield stress and drag strength.
Developing a theoretical framework for complex community-based interventions.
Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana
2014-01-01
Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions, using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms by which the independent variables lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV >= 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002
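The random-lot setup described above can be sketched by simulating lots with random intercepts (value at time zero) and random degradation slopes, then estimating each lot's slope by OLS. All numbers are invented, and this uses per-lot least squares rather than the paper's joint maximum likelihood fit:

```python
import numpy as np

rng = np.random.default_rng(5)

# Real-time stability test: several lots, each with its own random
# intercept and random degradation rate (the two variability sources).
n_lots = 6
times = np.array([0.0, 3.0, 6.0, 9.0, 12.0])   # months
slope_mean, slope_cv = -0.50, 0.10             # 10% lot-to-lot CV on the rate
intercepts = rng.normal(100.0, 0.5, n_lots)
slopes = rng.normal(slope_mean, abs(slope_mean) * slope_cv, n_lots)

# Per-lot OLS slope estimates from noisy assay measurements.
est_slopes = []
for i in range(n_lots):
    y = intercepts[i] + slopes[i] * times + rng.normal(0.0, 0.3, times.size)
    est_slopes.append(np.polyfit(times, y, 1)[0])   # fitted degradation rate
est_slopes = np.array(est_slopes)
```

Raising `slope_cv` toward the 8% threshold and beyond spreads the per-lot estimates so widely that a single pooled interval stops representing individual lots, which is the paper's point.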
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
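The power-law dependence of the standard deviation on size can be illustrated by simulating Laplace-distributed growth rates whose spread shrinks with size and recovering the exponent from a log-log fit of binned standard deviations. The value β = 0.14 mirrors the paper's wage result, but the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic units: size S spans four orders of magnitude, and the standard
# deviation of the growth rate R scales as sigma(S) = S**(-beta).
beta_true, n = 0.14, 20_000
S = 10.0 ** rng.uniform(0.0, 4.0, n)
# Laplace scale b gives std = b*sqrt(2), so divide by sqrt(2) to set the std.
R = rng.laplace(0.0, (S ** -beta_true) / np.sqrt(2.0), n)

# Bin by size, compute sigma(R) per bin, and fit the scaling exponent.
edges = np.logspace(0.0, 4.0, 11)
sizes, sigmas = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (S >= lo) & (S < hi)
    sizes.append(S[mask].mean())
    sigmas.append(R[mask].std())
slope = np.polyfit(np.log(sizes), np.log(sigmas), 1)[0]
beta_hat = -slope
```

The recovered `beta_hat` matches the input exponent, mirroring the paper's procedure of reading β off a log-log plot of σ(R) against average size.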
NASA Astrophysics Data System (ADS)
Fernandez-del-Rincon, A.; Garcia, P.; Diez-Ibarbia, A.; de-Juan, A.; Iglesias, M.; Viadero, F.
2017-02-01
Gear transmissions remain one of the most complex mechanical systems from the point of view of noise and vibration behavior. Research on gear modeling, aimed at obtaining models capable of accurately reproducing the dynamic behavior of real gear transmissions, has spread over the last decades. Most of these models, although useful for design stages, often include simplifications that impede their application for condition monitoring purposes. Trying to fill this gap, the model presented in this paper allows us to simulate gear transmission dynamics including most of the features usually neglected by state-of-the-art models. This work presents a model capable of considering simultaneously the internal excitations due to the variable meshing stiffness (including the coupling among successive tooth pairs in contact, the non-linearity linked with the contacts between surfaces, and the dissipative effects), and those excitations that are a consequence of the bearing variable compliance (including clearances or pre-loads). The model can also simulate gear dynamics in a realistic torque-dependent scenario. The proposed model combines a hybrid formulation for the calculation of meshing forces with a non-linear variable compliance approach for bearings. Meshing forces are obtained by means of a double approach which combines numerical and analytical aspects. The methodology used provides a detailed description of the meshing forces, allowing their calculation even when the gear center distance is modified due to shaft and bearing flexibilities, which are unavoidable in real transmissions. On the other hand, forces at bearing level were obtained considering a variable number of supporting rolling elements, depending on the applied load and clearances. Both formulations have been developed and applied to the simulation of the vibration of a sample transmission, focusing attention on the transmitted load, friction meshing forces and bearing preloads.
Bayesian Unimodal Density Regression for Causal Inference
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2011-01-01
Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, can be bounded tightly and rigorously.
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering: process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is particularly important in continuous hot-dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a first training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications it is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
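The logistic regression described above models a binary outcome (conforming vs. nonconforming coil) as a function of process variables. A minimal sketch of the technique, fit by stochastic gradient descent on made-up toy data (one hypothetical process variable, not the study's five), might look like:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit w, b in P(y=1|x) = sigmoid(w.x + b) by stochastic gradient
    descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the linear predictor
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

# Hypothetical data: coils whose process variable exceeds ~2.0 tend to be
# nonconforming (label 1)
X = [[0.5], [1.0], [1.5], [2.5], [3.0], [3.5]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
p_low = sigmoid(w[0] * 1.0 + b)   # predicted P(nonconforming) for a low value
p_high = sigmoid(w[0] * 3.0 + b)  # predicted P(nonconforming) for a high value
```

In practice a library routine (a generalized linear model fit by maximum likelihood) would be used; the gradient-descent loop above is only meant to expose the mechanics.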
Fabric and connectivity as field descriptors for deformations in granular media
NASA Astrophysics Data System (ADS)
Wan, Richard; Pouragha, Mehdi
2015-01-01
Granular materials involve microphysics across various scales, giving rise to behaviours distinctive of geomaterials, such as steady states, plastic limit states, non-associativity of plastic flow, and instability of homogeneous deformations through strain localization. Incorporating such micro-scale characteristics is one of the biggest challenges in the constitutive modelling of granular materials, especially when the micro-variables may be interdependent. With this motivation, we use two micro-variables, coordination number and fabric anisotropy, computed from a tessellation of the granular material to describe its state at the macroscopic level. In order to capture functional dependencies between micro-variables, the correlation between coordination number and fabric anisotropy limits is herein formulated at the particle level rather than in an average sense. This is the essence of the proposed work, which investigates the evolution of the coordination number (connectivity) distribution and the contact-normal anisotropy distribution with deformation history, and their inter-dependencies, through discrete element modelling in two dimensions. These results enter as probability distribution functions into homogenization expressions during upscaling to a continuum constitutive model, using tessellation as an abstract representation of the granular system. The end product is a micro-mechanically inspired continuum model with both coordination number and fabric anisotropy as underlying micro-variables incorporated into a plasticity flow rule. The derived plastic potential bears a striking resemblance to cam-clay or stress-dilatancy-type yield surfaces used in soil mechanics.
Chhabria, Karishma; Chakravarthy, V Srinivasa
2016-01-01
The motivation for developing simple minimal models of the neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system using 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model of the neuro-glio-vascular system, it poses severe restrictions on scaling up to network levels. By contrast, low-dimensional models are convenient devices for simulating large networks and also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system that takes neural firing rate as input and returns an "energy" variable (analogous to ATP) as output. To this end, we present two models: a biophysical neuro-energy model (Model 1, with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and a dynamic threshold model (Model 2, with three variables), depicting the dependence of the neural firing threshold on the ATP dynamics. Both models show different firing regimes, such as continuous spiking, phasic, and tonic bursting, depending on the ATP production coefficient, ɛp, and the external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp can modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ɛp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed "neuron-energy" unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram. 
Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies, such as non-invasive brain stimulation for stroke patients.
Porfirio, Luciana L.; Harris, Rebecca M. B.; Lefroy, Edward C.; Hugh, Sonia; Gould, Susan F.; Lee, Greg; Bindoff, Nathaniel L.; Mackey, Brendan
2014-01-01
Choice of variables, climate models and emissions scenarios all influence the results of species distribution models under future climatic conditions. However, an overview of applied studies suggests that the uncertainty associated with these factors is not always appropriately incorporated, or even considered. We examine the effects that the choice of variables, climate models and emissions scenarios can have on future species distribution models using two endangered species: a short-lived invertebrate (the Ptunarra Brown Butterfly) and a long-lived paleo-endemic tree (King Billy Pine). We show the range of projected distributions that results from different variable selections, climate models and emissions scenarios. The extent to which results are affected by these choices depends on the characteristics of the species modelled, but all three choices have the potential to substantially alter conclusions about the impacts of climate change. We discuss implications for conservation planning and management, and provide recommendations to conservation practitioners on variable selection and on accommodating uncertainty when using future climate projections in species distribution models. PMID:25420020
Rand, Miya K; Shimansky, Y P; Hossain, Abul B M I; Stelmach, George E
2010-11-01
Based on an assumption of movement control optimality in reach-to-grasp movements, we have recently developed a mathematical model of transport-aperture coordination (TAC), according to which the hand-target distance is a function of hand velocity and acceleration, aperture magnitude, and aperture velocity and acceleration (Rand et al. in Exp Brain Res 188:263-274, 2008). Reach-to-grasp movements were performed by young adults under four different reaching speeds and two different transport distances. The residual error of fitting the above model to data across different trials and subjects was minimal for the aperture-closure phase, but much greater for the aperture-opening phase, indicating a considerable difference in TAC variability between those phases. This study's goal was to identify the main reasons for that difference and to obtain insights into the control strategy of reach-to-grasp movements. TAC variability within the aperture-opening phase of a single trial was found to be minimal, indicating that the TAC variability between trials was not due to execution noise, but rather resulted from inter-trial and inter-subject variability in the motor plan. At the same time, the dependence of the extent of trial-to-trial variability of TAC in that phase on the speed of hand transport was sharply inconsistent with the concept of speed-accuracy trade-off: the lower the speed, the larger the variability. Conversely, the dependence of the extent of TAC variability in the aperture-closure phase on hand transport speed was consistent with that concept. Taking into account recent evidence that the cost of neural information processing is substantial for movement planning, the dependence of TAC variability in the aperture-opening phase on task performance conditions suggests that it is not movement time that the CNS saves in that phase, but the cost of the neuro-computational resources and metabolic energy required for TAC regulation in that phase. 
Thus, the CNS performs a trade-off between that cost and TAC regulation accuracy. It is further discussed that such trade-off is possible because, due to a special control law that governs optimal switching from aperture opening to aperture closure, the inter-trial variability of the end of aperture opening does not affect the high accuracy of TAC regulation in the subsequent aperture-closure phase.
Orruño, Estibalitz; Gagnon, Marie Pierre; Asua, José; Ben Abdeljelil, Anis
2011-01-01
We examined the main factors affecting the intention of physicians to use teledermatology using a modified Technology Acceptance Model (TAM). The investigation was carried out during a teledermatology pilot study conducted in Spain. A total of 276 questionnaires were sent to physicians by email and 171 responded (62%). Cronbach's alpha was acceptably high for all constructs. Theoretical variables were well correlated with each other and with the dependent variable (Intention to Use). Logistic regression indicated that the original TAM model was good at predicting physicians' intention to use teledermatology and that the variables Perceived Usefulness and Perceived Ease of Use were both significant (odds ratios of 8.4 and 7.4, respectively). When other theoretical variables were added, the model was still significant and it also became more powerful. However, the only significant predictor in the modified model was Facilitators with an odds ratio of 9.9. Thus the TAM was good at predicting physicians' intention to use teledermatology. However, the most important variable was the perception of Facilitators to using the technology (e.g. infrastructure, training and support).
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
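The method of least squares mentioned in the abstract has a closed form in the simple (single-predictor) case: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the sample means. A small illustration with made-up numbers:

```python
def least_squares(x, y):
    """Closed-form simple linear regression: the slope and intercept that
    minimize the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Toy data lying exactly on y = 2x + 1
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]
slope, intercept = least_squares(x, y)  # → (2.0, 1.0)
```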
Hydrodynamic Aspects of Particle Clogging in Porous Media
MAYS, DAVID C.; HUNT, JAMES R.
2010-01-01
Data from 6 filtration studies, representing 43 experiments, are analyzed with a simplified version of the single-parameter O’Melia and Ali clogging model. The model parameter displays a systematic dependence on fluid velocity, which was an independent variable in each study. A cake filtration model also explains the data from one filtration study by varying a single, velocity-dependent parameter, highlighting that clogging models, because they are empirical, are not unique. Limited experimental data indicate exponential depth dependence of particle accumulation, whose impact on clogging is quantified with an extended O’Melia and Ali model. The resulting two-parameter model successfully describes the increased clogging that is always observed in the top segment of a filter. However, even after accounting for particle penetration, the two-parameter model suggests that a velocity-dependent parameter representing deposit morphology must also be included to explain the data. Most of the experimental data are described by the single-parameter O’Melia and Ali model, and the model parameter is correlated to the collector Peclet number. PMID:15707058
Modeling ecological traps for the control of feral pigs
Dexter, Nick; McLeod, Steven R
2015-01-01
Ecological traps are habitat sinks that are preferred by dispersing animals but have higher mortality or reduced fecundity compared to source habitats. Theory suggests that if mortality rates are sufficiently high, then ecological traps can result in extinction. An ecological trap may be created when pest animals are controlled in one area, but not in another area of equal habitat quality, and when there is density-dependent immigration from the high-density uncontrolled area to the low-density controlled area. We used a logistic population model to explore how varying the proportion of habitat controlled, the control mortality rate, and the strength of density-dependent immigration for feral pigs could affect long-term population abundance and time to extinction. Increasing the control mortality rate, the proportion of habitat controlled, and the strength of density-dependent immigration all decreased abundance both within and outside the area controlled. At higher levels of these parameters, extinction was achieved for feral pigs. We extended the analysis with a more complex stochastic, interactive model of feral pig dynamics in the Australian rangelands to examine how the same variables as in the logistic model affected long-term abundance in the controlled and uncontrolled areas and time to extinction. Compared to the logistic model of feral pig dynamics, the stochastic interactive model predicted lower abundances and extinction at lower control mortalities and proportions of habitat controlled. To improve the realism of the stochastic interactive model, we substituted fixed mortality rates with a density-dependent control mortality function, empirically derived from helicopter shooting exercises in Australia. Compared to the stochastic interactive model with fixed mortality rates, the model with the density-dependent control mortality function did not predict as substantial a decline in abundance in controlled or uncontrolled areas, nor extinction, for any combination of variables. 
These models demonstrate that pest eradication is theoretically possible without the pest being controlled throughout its range because of density-dependent immigration into the area controlled. The stronger the density-dependent immigration, the better the overall control in controlled and uncontrolled habitat combined. However, the stronger the density-dependent immigration, the poorer the control in the area controlled. For feral pigs, incorporating environmental stochasticity improves the prospects for eradication, but adding a realistic density-dependent control function eliminates these prospects. PMID:26045954
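A crude sketch of the kind of logistic model with density-dependent immigration described above might couple two patches as follows (all parameter values are illustrative placeholders, not estimates for feral pigs):

```python
def simulate(r=0.3, K=100.0, control=0.4, imm=0.2, years=200):
    """Two-patch logistic model: patch C is controlled (extra mortality),
    patch U is not; immigration flows from U into C in proportion to the
    density difference, a crude stand-in for density-dependent dispersal.
    Parameter values are illustrative only."""
    Nc, Nu = K, K
    for _ in range(years):
        growth_c = r * Nc * (1 - Nc / K)   # logistic growth, controlled patch
        growth_u = r * Nu * (1 - Nu / K)   # logistic growth, uncontrolled patch
        flow = imm * max(Nu - Nc, 0.0)     # dispersal into the low-density patch
        Nc = max(Nc + growth_c - control * Nc + flow, 0.0)
        Nu = max(Nu + growth_u - flow, 0.0)
    return Nc, Nu

Nc, Nu = simulate()
```

With the control mortality exceeding the intrinsic growth rate, the controlled patch is sustained only by immigration, which in turn drains the uncontrolled patch below carrying capacity, mirroring the qualitative behaviour described in the abstract.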
Computational motor control: feedback and accuracy.
Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel
2008-02-01
Speed/accuracy trade-off is a ubiquitous phenomenon in motor behaviour, which has been ascribed to the presence of signal-dependent noise (SDN) in motor commands. Although this explanation can provide a quantitative account of many aspects of motor variability, including Fitts' law, the fact that this law is frequently violated, e.g. during the acquisition of new motor skills, remains unexplained. Here, we describe a principled approach to the influence of noise on motor behaviour, in which motor variability results from the interplay between sensory and motor execution noises in an optimal feedback-controlled system. In this framework, we first show that Fitts' law arises due to signal-dependent motor noise (SDN(m)) when sensory (proprioceptive) noise is low, e.g. under visual feedback. Then we show that the terminal variability of non-visually guided movement can be explained by the presence of signal-dependent proprioceptive noise. Finally, we show that movement accuracy can be controlled by opposite changes in signal-dependent sensory (SDN(s)) and SDN(m), a phenomenon that could be ascribed to muscular co-contraction. As the model also explains kinematics, kinetics, muscular and neural characteristics of reaching movements, it provides a unified framework to address motor variability.
A viscoelastic damage rheology and rate- and state-dependent friction
NASA Astrophysics Data System (ADS)
Lyakhovsky, Vladimir; Ben-Zion, Yehuda; Agnon, Amotz
2005-04-01
We analyse the relations between a viscoelastic damage rheology model and rate- and state-dependent (RS) friction. Both frameworks describe brittle deformation, although the former models localization zones in a deforming volume while the latter is associated with sliding on existing surfaces. The viscoelastic damage model accounts for evolving elastic properties and inelastic strain. The evolving elastic properties are related quantitatively to a damage state variable representing the local density of microcracks. Positive and negative changes of the damage variable lead, respectively, to degradation and recovery of the material in response to loading. A model configuration having an existing narrow zone with localized damage produces for appropriate loading and temperature-pressure conditions an overall cyclic stick-slip motion compatible with a frictional response. Each deformation cycle (limit cycle) can be divided into healing and weakening periods associated with decreasing and increasing damage, respectively. The direct effect of the RS friction and the magnitude of the frictional parameter a are related to material strengthening with increasing rate of loading. The strength and residence time of asperities (model elements) in the weakening stage depend on the rates of damage evolution and accumulation of irreversible strain. The evolutionary effect of the RS friction and overall change in the friction parameters (a-b) are controlled by the duration of the healing period and asperity (element) strengthening during this stage. For a model with spatially variable properties, the damage rheology reproduces the logarithmic dependency of the steady-state friction coefficient on the sliding velocity and the normal stress. The transition from a velocity strengthening regime to a velocity weakening one can be obtained by varying the rate of inelastic strain accumulation and keeping the other damage rheology parameters fixed. 
The developments unify previous damage rheology results on deformation localization leading to formation of new fault zones with detailed experimental results on frictional sliding. The results provide a route for extending the formulation of RS friction into a non-linear continuum mechanics framework.
NASA Astrophysics Data System (ADS)
Luo, X.; Hong, Y.; Lei, X.; Leung, L. R.; Li, H. Y.; Getirana, A.
2017-12-01
As an essential component of Earth system modeling, continental-scale river routing plays an important role in applications of Earth system models, such as evaluating the impacts of global change on water resources and flood hazards. Streamflow timing, which depends on the modeled flow velocities, can be an important aspect of the model results. River flow velocities have been estimated using Manning's equation, in which the Manning roughness coefficient is a key and sensitive parameter. In some early continental-scale studies, the Manning coefficient was determined with simplified methods, such as using a constant value for the entire basin. However, large spatial variability is expected in the Manning coefficients of the numerous channels composing the river network in distributed continental-scale hydrologic modeling. In an application of a continental-scale river routing model to the Amazon Basin, we use spatially varying Manning coefficients dependent on channel size, attempting to represent the dominant spatial variability of the Manning coefficients. Based on comparisons of simulation results with in situ streamflow records and remotely sensed river stages, we investigate the comparatively optimal Manning coefficients and explicitly demonstrate the advantages of using spatially varying Manning coefficients. The understanding obtained in this study could be helpful in the modeling of surface hydrology at regional to continental scales.
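Manning's equation, on which the routing model's flow velocities depend, can be written v = (1/n) R^(2/3) S^(1/2) in SI units. A small sketch showing the sensitivity to the roughness coefficient n (the channel geometry values are arbitrary):

```python
def manning_velocity(n, R, S):
    """Manning's equation (SI units): v = (1/n) * R^(2/3) * S^(1/2),
    where n is the roughness coefficient, R the hydraulic radius (m),
    and S the channel slope (dimensionless)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# Same channel, two roughness values: the smoother channel flows faster
v_smooth = manning_velocity(n=0.03, R=2.0, S=1e-4)
v_rough = manning_velocity(n=0.05, R=2.0, S=1e-4)
```

Because velocity scales as 1/n, even modest spatial variation in n across a basin-wide channel network translates directly into variation in simulated streamflow timing.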
Mao, X.; Prommer, H.; Barry, D.A.; Langevin, C.D.; Panteleit, B.; Li, L.
2006-01-01
PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable-density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. © 2004 Elsevier Ltd. All rights reserved.
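The split-operator approach used to couple PHREEQC-2 and SEAWAT alternates transport and reaction sub-steps within each time step. A toy 1-D analogue (pure advection followed by first-order decay; a conceptual sketch, not PHWAT's actual numerics) is:

```python
import math

def step_split(c, velocity_cells, k, dt):
    """One split-operator step on a 1-D concentration array:
    (1) transport sub-step: shift the field by `velocity_cells` grid cells
        (pure advection with a zero-inflow boundary),
    (2) reaction sub-step: apply first-order decay to the transported field.
    A toy analogue of coupling a transport code with a reaction code."""
    # transport sub-step
    advected = [0.0] * velocity_cells + c[:len(c) - velocity_cells]
    # reaction sub-step
    decay = math.exp(-k * dt)
    return [ci * decay for ci in advected]

c = [0.0, 1.0, 0.0, 0.0, 0.0]
c = step_split(c, velocity_cells=1, k=0.1, dt=1.0)
```

The appeal of the split-operator design is modularity: each sub-step can be handled by a dedicated, independently verified code, at the cost of a splitting error that shrinks with the time step.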
Change in the magnitude and mechanisms of global temperature variability with warming
Brown, Patrick T.; Ming, Yi; Li, Wenhong; Hill, Spencer A.
2017-01-01
Natural unforced variability in global mean surface air temperature (GMST) can mask or exaggerate human-caused global warming, and thus a complete understanding of this variability is highly desirable. Significant progress has been made in elucidating the magnitude and physical origins of present-day unforced GMST variability, but it has remained unclear how such variability may change as the climate warms. Here we present modeling evidence that indicates that the magnitude of low-frequency GMST variability is likely to decline in a warmer climate and that its generating mechanisms may be fundamentally altered. In particular, a warmer climate results in lower albedo at high latitudes, which yields a weaker albedo feedback on unforced GMST variability. These results imply that unforced GMST variability is dependent on the background climatological conditions, and thus climate model control simulations run under perpetual preindustrial conditions may have only limited relevance for understanding the unforced GMST variability of the future. PMID:29391875
Takagishi, Yukihiro; Sakata, Masatsugu; Kitamura, Toshinori
2011-09-01
This longitudinal study was undertaken to clarify the relationships among self-esteem, interpersonal dependency, and depression, focusing on the trait and state components of interpersonal dependency and depression. In a sample of 466 working people, self-esteem, interpersonal dependency, job stressors, and depression were assessed at two points in time. A structural equation model (SEM) was created to differentiate the trait components of interpersonal dependency and depression from the state components. The model revealed that self-esteem influenced trait interpersonal dependency and trait depression but not state interpersonal dependency or state depression. Setting a latent variable as a trait component to differentiate trait from state in interpersonal dependency and depression in SEM was found to be effective both statistically and clinically. © 2011 Wiley Periodicals, Inc.
Bi, Yi-an; Qiu, Xi; Rotter, Charles J; Kimoto, Emi; Piotrowski, Mary; Varma, Manthena V; Ei-Kattan, Ayman F; Lai, Yurong
2013-11-01
Hepatic uptake transport is often the rate-determining step in the systemic clearance of drugs. The ability to predict uptake clearance and to determine the contribution of individual transporters to overall hepatic uptake is therefore critical in assessing the potential pharmacokinetic and pharmacodynamic variability associated with drug-drug interactions and pharmacogenetics. The present study revisited the interaction of statin drugs, including pitavastatin, fluvastatin and rosuvastatin, with the sodium-dependent taurocholate co-transporting polypeptide (NTCP) using gene transfected cell models. In addition, the uptake clearance and the contribution of NTCP to the overall hepatic uptake were assessed using in vitro hepatocyte models. Then NTCP protein expression was measured by a targeted proteomics transporter quantification method to confirm the presence and stability of NTCP expression in suspended and cultured hepatocyte models. It was concluded that NTCP-mediated uptake contributed significantly to active hepatic uptake in hepatocyte models for all three statins. However, the contribution of NTCP-mediated uptake to the overall active hepatic uptake was compound-dependent and varied from about 24% to 45%. Understanding the contribution of individual transporter proteins to the overall hepatic uptake and its functional variability when other active hepatic uptake pathways are interrupted could improve the current prediction practice used to assess the pharmacokinetic variability due to drug-drug interactions, pharmacogenetics and physiopathological conditions in humans. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Bertin, Daniel
2017-02-01
An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves on previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces governed by variable shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters under different input data highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, suggesting that they should be included in every ballistic model. On the other hand, virtual mass forces can be discarded, since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10^6 simulations were performed, with nine ejection parameters sampled by a Latin-hypercube approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also produced to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard-prevention tool for ground and aerial transit near restless volcanoes.
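A much-simplified 2-D analogue of the ballistic model described above (quadratic drag with a horizontal wind, forward-Euler integration; all parameter values are illustrative, and the shape-dependent drag coefficient is reduced to a constant) can show why the wind field matters:

```python
import math

def trajectory_range(v0, angle_deg, cd=1.0, rho=1.0, area=0.05, mass=10.0,
                     wind=0.0, dt=0.01):
    """2-D ballistic trajectory with quadratic drag and a horizontal wind,
    integrated with forward Euler until the projectile returns to launch
    height. Returns the horizontal range. Parameters are placeholders."""
    g = 9.81
    theta = math.radians(angle_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x, y = 0.0, 0.0
    while True:
        # drag acts against the velocity relative to the air (wind frame)
        rvx, rvy = vx - wind, vy
        speed = math.hypot(rvx, rvy)
        k = 0.5 * rho * cd * area / mass
        ax = -k * speed * rvx
        ay = -g - k * speed * rvy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
        if y <= 0.0 and vy < 0.0:
            return x

# A tailwind reduces the relative airspeed and hence the drag on the
# horizontal motion, extending the range relative to still air
r_still = trajectory_range(100.0, 45.0, wind=0.0)
r_tail = trajectory_range(100.0, 45.0, wind=10.0)
```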
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackman, C.H.; Douglass, A.R.; Chandra, S.; Stolarski, R.S.
1991-03-20
Eight years of NMC (National Meteorological Center) temperature and SBUV (solar backscattered ultraviolet) ozone data were used to calculate the monthly mean heating rates and residual circulation for use in a two-dimensional photochemical model, in order to examine the interannual variability of modeled ozone. Fairly good correlations were found in the interannual behavior of modeled and measured SBUV ozone in the upper stratosphere at middle to low latitudes, where temperature-dependent photochemistry is thought to dominate ozone behavior. The calculated total ozone is found to be more sensitive to the interannual residual circulation changes than to the interannual temperature changes. The magnitude of the modeled ozone variability is similar to the observed variability, but the observed and modeled year-to-year deviations are mostly uncorrelated. The large component of the observed total ozone variability at low latitudes due to the quasi-biennial oscillation (QBO) is not seen in the modeled total ozone, as only a small QBO signal is present in the heating rates, temperatures, and monthly mean residual circulation. Large interannual changes in tropospheric dynamics are believed to influence the interannual variability in total ozone, especially at middle and high latitudes. Since these tropospheric changes and most of the QBO forcing are not included in the model formulation, it is not surprising that the interannual variability in total ozone is not well represented in the model computations.
NASA Astrophysics Data System (ADS)
Botsford, L. W.; Moloney, C. L.; Hastings, A.; Largier, J. L.; Powell, T. M.; Higgins, K.; Quinn, J. F.
We synthesize the results of several modelling studies that address the influence of variability in larval transport and survival on the dynamics of marine metapopulations distributed along a coast. Two important benthic invertebrates in the California Current System (CCS), the Dungeness crab and the red sea urchin, are used as examples of the way in which physical oceanographic conditions can influence stability, synchrony and persistence of meroplanktonic metapopulations. We first explore population dynamics of subpopulations and metapopulations. Even without environmental forcing, isolated local subpopulations with density-dependence can vary on time scales roughly twice the generation time at high adult survival, shifting to annual time scales at low survivals. The high frequency behavior is not seen in models of the Dungeness crab, because of their high adult survival rates. Metapopulations with density-dependent recruitment and deterministic larval dispersal fluctuate in an asynchronous fashion. Along the coast, abundance varies on spatial scales which increase with dispersal distance. Coastwide, synchronous, random environmental variability tends to synchronize these metapopulations. Climate change could cause a long-term increase or decrease in mean larval survival, which in this model leads to greater synchrony or extinction respectively. Spatially managed metapopulations of red sea urchins go extinct when distances between harvest refugia become greater than the scale of larval dispersal. All assessments of population dynamics indicate that metapopulation behavior in general depends critically on the temporal and spatial nature of larval dispersal, which is largely determined by physical oceanographic conditions. We therefore explore physical influences on larval dispersal patterns. 
Observed trends in temperature and salinity applied to laboratory-determined responses indicate that natural variability in temperature and salinity can lead to variability in larval development period on interannual (50%), intra-annual (20%) and latitudinal (200%) scales. Variability in development period significantly influences larval survival and, thus, net transport. Larval drifters that undertake diel vertical migration in a primitive equation model of coastal circulation (SPEM) demonstrate the importance of vertical migration in determining horizontal transport. Empirically derived estimates of the effects of wind forcing on larval transport of vertically migrating larvae (wind drift when near the surface and Ekman transport below the surface) match cross-shelf distributions in 4 years of existing larval data. We use a one-dimensional advection-diffusion model, which includes intra-annual timing of cross-shelf flows in the CCS, to explore the combined effects on settlement: (1) temperature- and salinity-dependent development and survival rates and (2) possible horizontal transport due to vertical migration of crab larvae. Natural variability in temperature, wind forcing, and the timing of the spring transition can cause the observed variability in recruitment. We conclude that understanding the dynamics of coastally distributed metapopulations in response to physically-induced variability in larval dispersal will be a critical step in assessing the effects of climate change on marine populations.
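The one-dimensional advection-diffusion model used above for cross-shelf transport can be sketched with a simple explicit finite-difference scheme. This is an illustrative toy: the grid spacing, velocity, and diffusivity are chosen arbitrarily, not taken from the study:

```python
# Sketch of 1-D advection-diffusion, dC/dt = -u dC/dx + K d2C/dx2,
# with upwind advection (u > 0) and an explicit time step.
def step(C, u, K, dx, dt):
    n = len(C)
    new = C[:]  # boundary cells stay fixed at zero
    for i in range(1, n - 1):
        adv = -u * (C[i] - C[i - 1]) / dx
        dif = K * (C[i + 1] - 2 * C[i] + C[i - 1]) / dx ** 2
        new[i] = C[i] + dt * (adv + dif)
    return new

# Hypothetical cross-shelf grid: a unit larval patch at cell 10,
# advected offshore while spreading diffusively.
C = [0.0] * 100
C[10] = 1.0
for _ in range(200):
    C = step(C, u=0.5, K=0.1, dx=1.0, dt=0.5)
```

The chosen time step satisfies the explicit stability constraints (u*dt/dx = 0.25, K*dt/dx^2 = 0.05), so the patch stays non-negative and mass-conserving away from the boundaries.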
Motivational and psychological correlates of bodybuilding dependence
EMINI, NEIM N.; BOND, MALCOLM J.
2014-01-01
Abstract Background and aims: Exercise may become physically and psychologically maladaptive if taken to extremes. One example is the dependence reported by some individuals who engage in weight training. The current study explored potential psychological, motivational, emotional and behavioural concomitants of bodybuilding dependence, with a particular focus on motives for weight training. Using a path analysis paradigm, putative causal models sought to explain associations among key study variables. Methods: A convenience sample of 101 men aged between 18 and 67 years was assembled from gymnasia in Adelaide, South Australia. Active weight trainers voluntarily completed a questionnaire that included measures of bodybuilding dependence (social dependency, training dependency, and mastery), anger, hostility and aggression, stress and motivations for weight training. Results: Three motives for weight training were identified: mood control, physique anxiety and personal challenge. Of these, personal challenge and mood control were the most directly salient to dependence. Social dependency was particularly relevant to personal challenge, whereas training dependency was associated with both personal challenge and mood control. Mastery demonstrated a direct link with physique anxiety, thus reflecting a unique component of exercise dependence. Conclusions: While it was not possible to determine causality with the available data, the joint roles of variables that influence, or are influenced by, bodybuilding dependence are identified. Results highlight unique motivations for bodybuilding and suggest that dependence could be a result of, and way of coping with, stress manifesting as aggression. A potential framework for future research is provided through the demonstration of plausible causal linkages among these variables. PMID:25317342
A Linear Model of Phase-Dependent Power Correlations in Neuronal Oscillations
Eriksson, David; Vicente, Raul; Schmidt, Kerstin
2011-01-01
Recently, it has been suggested that effective interactions between two neuronal populations are supported by the phase difference between the oscillations in these two populations, a hypothesis referred to as “communication through coherence” (CTC). Experimental work quantified effective interactions by means of the power correlations between the two populations, where power was calculated on the local field potential and/or multi-unit activity. Here, we present a linear model of interacting oscillators that accounts for the phase dependency of the power correlation between the two populations and that can be used as a reference for detecting non-linearities such as gain control. In the experimental analysis, trials were sorted according to the coupled phase difference of the oscillators while the putative interaction between oscillations was taking place. Taking advantage of the modeling, we further studied the dependency of the power correlation on the uncoupled phase difference, connection strength, and topology. Since the uncoupled phase difference, i.e., the phase relation before the effective interaction, is the causal variable in the CTC hypothesis we also describe how power correlations depend on that variable. For uni-directional connectivity we observe that the width of the uncoupled phase dependency is broader than for the coupled phase. Furthermore, the analytical results show that the characteristics of the phase dependency change when a bidirectional connection is assumed. The width of the phase dependency indicates which oscillation frequencies are optimal for a given connection delay distribution. We propose that a certain width enables a stimulus-contrast dependent extent of effective long-range lateral connections. PMID:21808618
Spectral Behavior of a Linearized Land-Atmosphere Model: Applications to Hydrometeorology
NASA Astrophysics Data System (ADS)
Gentine, P.; Entekhabi, D.; Polcher, J.
2008-12-01
The present study develops an improved version of the linearized land-atmosphere model first introduced by Lettau (1951). This model is used to investigate the spectral response of land-surface variables to a daily forcing of incoming radiation at the land surface. An analytical solution of the problem is found in the form of temporal Fourier series and gives the atmospheric boundary-layer and soil profiles of state variables (potential temperature, specific humidity, sensible and latent heat fluxes). Moreover, the spectral dependency of surface variables is expressed as a function of land-surface parameters (friction velocity, vegetation height, aerodynamic resistance, stomatal conductance). This original approach has several advantages. First, the model requires little input data to work and perform well: only time series of incoming radiation at the land surface and of mean specific humidity and temperature at any given height. Because these inputs are widely available over the globe, the model can easily be run and tested under various conditions. The model will also help analyse the diurnal shape and frequency dependency of surface variables and soil-ABL profiles. In particular, strong emphasis is placed on explaining and predicting the diurnal shapes of the Evaporative Fraction (EF) and the Bowen ratio. EF is shown to remain constant over the day under restrictive conditions: fair and dry weather, with strong solar radiation and no clouds. Moreover, the EF pseudo-constant value is derived as a function of surface parameters, such as aerodynamic resistance and stomatal conductance. Application of the model to the design of remote-sensing tools, according to the temporal resolution of the sensor, will also be discussed. Finally, possible extensions and improvements of the model will be discussed.
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.
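The confounder adjustment described above can be illustrated with a small sketch: the crude (unadjusted) exposure effect differs from the adjusted one, which here is recovered via the residual-on-residual (Frisch-Waugh-Lovell) formulation of multiple linear regression. All data and coefficients are synthetic, invented for the example:

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(1)
confounder = [rng.gauss(0, 1) for _ in range(500)]         # e.g. age
exposure = [c + rng.gauss(0, 1) for c in confounder]       # associated with confounder
outcome = [2 * e + 3 * c + rng.gauss(0, 0.5)
           for e, c in zip(exposure, confounder)]          # true exposure effect = 2

crude = ols_slope(exposure, outcome)   # biased: picks up the confounder path (~3.5)

# Adjusted effect: regress residuals of outcome-on-confounder on
# residuals of exposure-on-confounder (Frisch-Waugh-Lovell).
b_ce = ols_slope(confounder, exposure)
b_cy = ols_slope(confounder, outcome)
rx = [e - b_ce * c for e, c in zip(exposure, confounder)]
ry = [y - b_cy * c for y, c in zip(outcome, confounder)]
adjusted = ols_slope(rx, ry)           # recovers the true effect (~2)
```

The adjusted slope is what a multiple regression with both exposure and confounder as covariates would report for the exposure term.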
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Geometry and Kinematics of Fault-Propagation Folds with Variable Interlimb Angles
NASA Astrophysics Data System (ADS)
Dhont, D.; Jabbour, M.; Hervouet, Y.; Deroin, J.
2009-12-01
Fault-propagation folds are common features in foreland basins and fold-and-thrust belts. Several conceptual models have been proposed to account for their geometry and kinematics. It is generally accepted that the shape of fault-propagation folds depends directly on both the amount of displacement along the basal decollement level and the dip angle of the ramp. Among these, the variable interlimb angle model proposed by Mitra (1990) is based on a folding kinematics that is able to explain open and close natural folds. However, the application of this model is limited because the geometric evolution and thickness variation of the fold directly depend on imposed parameters such as the maximal value of the ramp height. Here, we use the ramp and interlimb angles as input data to develop a forward fold model accounting for thickness variations in the forelimb. The relationships between fold amplitude and fold wavelength are subsequently applied to build balanced geologic cross-sections from surface parameters only, and to propose a kinematic restoration of the folding through time. We considered three natural examples to validate the variable interlimb angle model. Observed thickness variations in the forelimb of the Turner Valley anticline in the Alberta foothills of Canada precisely correspond to the theoretical values proposed by our model. Deep reconstruction of the Alima anticline in the southern Tunisian Atlas implies that the decollement level is localized in the Triassic-Liassic series, as highlighted by seismic imaging. Our kinematic reconstruction of the Ucero anticline in the Spanish Castilian mountains is also in agreement with the anticline geometry derived from two cross-sections. 
The variable interlimb angle model implies that the fault-propagation fold can be symmetric, normally asymmetric (with a greater dip in the forelimb than in the backlimb), or reversely asymmetric (with a greater dip in the backlimb), depending on the amount of shortening. The model also makes it possible (i) to easily explain folds with a wide variety of geometries; (ii) to understand the deep architecture of anticlines; and (iii) to deduce the kinematic evolution of folding through time. Mitra, S., 1990, Fault-propagation folds: geometry, kinematic evolution, and hydrocarbon traps. AAPG Bulletin, v. 74, no. 6, p. 921-945.
NASA Astrophysics Data System (ADS)
Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.
2017-11-01
Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.
Factors associated with fear of falling in people with Parkinson’s disease
2014-01-01
Background This study aimed to comprehensively investigate potential contributing factors to fear of falling (FOF) among people with idiopathic Parkinson’s disease (PD). Methods The study included 104 people with PD. Mean (SD) age and PD-duration were 68 (9.4) and 5 (4.2) years, respectively, and the participants’ PD-symptoms were relatively mild. FOF (the dependent variable) was investigated with the Swedish version of the Falls Efficacy Scale, i.e. FES(S). The first multiple linear regression model replicated a previous study, and independent variables targeted: walking difficulties in daily life; freezing of gait; dyskinesia; fatigue; need of help in daily activities; age; PD-duration; history of falls/near falls and pain. Model II also included the following clinically assessed variables: motor symptoms, cognitive functions, gait speed, dual-task difficulties and functional balance performance as well as reactive postural responses. Results Both regression models showed that the strongest contributing factor to FOF was walking difficulties, explaining 60% and 64% of the variance in FOF-scores, respectively. Other significant independent variables in both models were needing help from others in daily activities and fatigue. Functional balance was the only clinical variable contributing additional significant information to model I, increasing the explained variance from 66% to 73%. Conclusions The results imply that one should primarily target walking difficulties in daily life in order to reduce FOF in people mildly affected by PD. This finding applies even when considering a broad variety of aspects not previously considered in PD-studies targeting FOF. Functional balance performance, dependence in daily activities, and fatigue were also independently associated with FOF, but to a lesser extent. Longitudinal studies are warranted to gain an increased understanding of predictors of FOF in PD and who is at risk of developing FOF. PMID:24456482
Modeling of a latent fault detector in a digital system
NASA Technical Reports Server (NTRS)
Nagel, P. M.
1978-01-01
Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection, and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use, and has little direct dependence on such variables as fault mode, number set, degree of branching and program length. A model is discussed which employs an analogy with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
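The balls-in-an-urn analogy can be sketched as a geometric-trial simulation: a latent fault is detected when an instruction sensitive to it happens to be executed, so a program exercising a larger sensitive instruction subset detects faster. The detection probabilities below are hypothetical, chosen only to illustrate the effect:

```python
import random

def detection_latency(sensitive_frac, rng):
    """Number of instruction executions until a fault-sensitive
    instruction is drawn from the 'urn' (a geometric trial)."""
    t = 1
    while rng.random() >= sensitive_frac:
        t += 1
    return t

rng = random.Random(42)
# A program whose computation uses a larger sensitive subset (10% of
# executed instructions vs. 2%) has a much shorter mean latency.
lat_small = [detection_latency(0.02, rng) for _ in range(5000)]
lat_large = [detection_latency(0.10, rng) for _ in range(5000)]
mean_small = sum(lat_small) / len(lat_small)   # near 1/0.02 = 50
mean_large = sum(lat_large) / len(lat_large)   # near 1/0.10 = 10
```

The mean latency of a geometric trial is the reciprocal of the per-execution detection probability, matching the intuition that latency is governed by the instruction subset and its frequency of use.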
Clark, R; Filinson, R
1991-01-01
This study examines the determinants of spending on social security programs. We draw predictions from industrialism and dependency theories for the explanation of social security programs. The explanations are tested with data on seventy-five nations, representative of core, semiperipheral and peripheral nations. Industrialization variables such as the percentage of older adults and economic productivity have strong effects in models involving all nations, as does multinational corporate (MNC) penetration in extraction, particularly when region is controlled; such penetration is negatively associated with spending on social security. We then look at industrialism and dependency effects for peripheral and non-core nations alone. The effects of all industrialization variables, except economic productivity, appear insignificant for peripheral nations, while the effects of region and multinational corporate penetration in extractive and agricultural industries appear significant. Models involving all non-core nations (peripheral and semi-peripheral) look more like models for all nations than for peripheral nations alone.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
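The spike count (noise) correlation rSC underlying this study is the Pearson correlation of trial-by-trial spike counts between two neurons. A minimal sketch with synthetic data, in which a shared gain fluctuation induces a positive correlation between a hypothetical V1 unit and MT unit (the generative model here is invented for illustration, not the paper's):

```python
import math
import random

def noise_correlation(counts_a, counts_b):
    """Pearson correlation of trial-by-trial spike counts (r_SC)."""
    n = len(counts_a)
    ma, mb = sum(counts_a) / n, sum(counts_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(counts_a, counts_b)) / n
    sa = math.sqrt(sum((a - ma) ** 2 for a in counts_a) / n)
    sb = math.sqrt(sum((b - mb) ** 2 for b in counts_b) / n)
    return cov / (sa * sb)

# Synthetic trials: a shared gain signal plus private noise per unit
rng = random.Random(0)
shared = [rng.gauss(0, 1) for _ in range(400)]
v1_counts = [20 + 3 * s + rng.gauss(0, 2) for s in shared]
mt_counts = [15 + 3 * s + rng.gauss(0, 2) for s in shared]
r_sc = noise_correlation(v1_counts, mt_counts)
```

With shared variance 9 and private variance 4 per unit, the expected correlation is 9/13 ≈ 0.69; stimulus- or attention-dependent changes in rSC correspond to changes in how much of this variance is shared.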
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Copula-based model for rainfall and El-Niño in Banyuwangi Indonesia
NASA Astrophysics Data System (ADS)
Caraka, R. E.; Supari; Tahmid, M.
2018-04-01
Modelling, describing and measuring the dependence structure between different random events is at the very heart of statistics, and a broad variety of dependence concepts has been developed. Most often, practitioners rely only on the linear correlation to describe the degree of dependence between two or more variables; an approach that can lead to quite misleading conclusions, as this measure captures only linear relationships. Copulas go beyond such dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a set of El-Niño data for Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in modelling rainfall and capturing the El Niño phenomenon in Banyuwangi, East Java, Indonesia. It was also found that the SSTs of the Nino3, Nino4, and Nino3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
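As a sketch of how an Archimedean copula generates dependent uniform margins, the following draws from a Clayton copula by conditional inversion. Clayton is one member of the Archimedean family; the paper does not state its fitted family or parameter, so the θ used here is purely illustrative:

```python
import random

def clayton_sample(theta, n, seed=0):
    """Draw n pairs (u, v) from a Clayton copula (theta > 0) via
    conditional inversion: v = C^{-1}(w | u) for independent uniforms u, w."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u = 1.0 - rng.random()   # uniform on (0, 1]
        w = 1.0 - rng.random()
        v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
        pairs.append((u, v))
    return pairs

# theta = 2 corresponds to Kendall's tau = theta / (theta + 2) = 0.5
pairs = clayton_sample(2.0, 2000)
```

Each margin stays uniform, while the copula alone carries the dependence; transforming (u, v) through the inverse CDFs of, say, rainfall and an SST index would yield correlated variates with those margins.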
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
ERIC Educational Resources Information Center
Duran, Erol
2013-01-01
In this study, a survey model was used to investigate the effect of printed and electronic texts on the reading comprehension levels of teacher candidates. While the dependent variable of the research comprises the teacher candidates' levels of understanding, the independent variable comprises the departments of the teacher candidates, types of…
Effectiveness of the Touch Math Technique in Teaching Basic Addition to Children with Autism
ERIC Educational Resources Information Center
Yikmis, Ahmet
2016-01-01
This study aims to reveal whether the touch math technique is effective in teaching basic addition to children with autism. The dependent variable of this study is the children's skills to solve addition problems correctly, whereas teaching with the touch math technique is the independent variable. Among the single-subject research models, a…
ERIC Educational Resources Information Center
Morcol, Goktug; McLaughlin, Gerald W.
1990-01-01
The study proposes using path analysis and residual plotting as methods supporting environmental scanning in strategic planning for higher education institutions. Path models of three levels of independent variables are developed. Dependent variables measuring applications and enrollments at Virginia Polytechnic Institute and State University are…
Escaping Poverty: Rural Low-Income Mothers' Opportunity to Pursue Post-Secondary Education
ERIC Educational Resources Information Center
Woodford, Michelle; Mammen, Sheila
2010-01-01
Using human capital theory, this paper identifies the factors that may affect the opportunity for rural low-income mothers to pursue post-secondary education or training in order to escape poverty. Dependent variables used in the logistic regression model included micro-level household variables as well as the effects of state-wide welfare…
NASA Astrophysics Data System (ADS)
Mallick, Labani; Dewangan, Gulab chand; Misra, Ranjeev
2016-07-01
The broadband energy spectra of Active Galactic Nuclei (AGN) are very complex in nature, with contributions from many ingredients: accretion disk, corona, jets, broad-line region (BLR), narrow-line region (NLR) and Compton-thick absorbing cloud or torus. The complexity of broadband AGN spectra gives rise to mean spectral model degeneracy; e.g., there are competing models for the broad feature near 5-7 keV in terms of blurred reflection and complex absorption. In order to overcome this energy spectral model degeneracy, the most reliable approach is to study the RMS (root mean square) variability spectrum, which connects the energy spectrum with temporal variability. The origin of the variability could be pivoting of the primary continuum, reflection and/or absorption. In this work, we study the energy-dependent variability of AGN by developing theoretical RMS spectral models in ISIS (Interactive Spectral Interpretation System) for different input energy spectra. In this talk, I would like to present results of RMS spectral modelling for a few radio-loud and radio-quiet AGN observed by XMM-Newton, Suzaku, NuSTAR and ASTROSAT, and to probe the dichotomy between these two classes of AGN.
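An RMS spectrum is typically built by computing, in each energy band, the fractional RMS variability of the light curve: the excess of the observed variance over the measurement-noise variance, normalized by the mean flux. A minimal sketch with a synthetic light curve follows; the standard X-ray timing definition of F_var is assumed, and all numbers are illustrative:

```python
import math

def fractional_rms(fluxes, errors):
    """Fractional RMS variability amplitude F_var for one energy band:
    sqrt(max(sample variance - mean squared error, 0)) / mean flux."""
    n = len(fluxes)
    mean = sum(fluxes) / n
    var = sum((f - mean) ** 2 for f in fluxes) / (n - 1)
    mean_err2 = sum(e ** 2 for e in errors) / n
    excess = var - mean_err2          # intrinsic (noise-subtracted) variance
    return math.sqrt(max(excess, 0.0)) / mean

# Illustrative light curve: sinusoidal variability plus small errors
fluxes = [100 + 10 * math.sin(0.3 * i) for i in range(200)]
errors = [1.0] * 200
fvar = fractional_rms(fluxes, errors)   # roughly 0.07
```

Repeating this per energy band, and comparing the resulting F_var(E) curve with the predictions of pivoting, reflection, or absorption scenarios, is what connects the energy spectrum to the variability.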
Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej
2015-01-01
The leading method for predicting the effectiveness of growth hormone (GH) therapy is multiple linear regression (MLR) models. To the best of our knowledge, we are the first to apply artificial neural networks (ANNs) to this problem. ANNs require no assumptions about the functions linking independent and dependent variables. The aim of this study is to compare ANN and MLR models of GH therapy effectiveness. The analysis comprised data from 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, and IGF-I concentration. The output variable was FH. For the testing dataset, the MLR model predicted FH SDS with an average error (RMSE) of 0.64 SD, explaining 34.3% of its variability; an ANN model derived from the same pre-processed data predicted FH SDS with RMSE 0.60 SD, explaining 42.0% of its variability; an ANN model derived from raw data predicted FH with RMSE 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANNs seem to be a valuable tool for predicting GH treatment effectiveness, especially since they can be applied to raw clinical data.
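The two metrics reported above, RMSE and percent of variability explained, can be computed directly from observed and predicted final heights. A minimal stdlib sketch; the toy SDS values below are illustrative, not the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def variance_explained(y_true, y_pred):
    """Fraction of the variance in y_true accounted for by the predictions (R^2)."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# Toy final-height SDS values -- illustrative only, not the study's data.
observed = [-1.2, -0.5, 0.1, -0.8, 0.4]
predicted = [-1.0, -0.7, 0.0, -0.5, 0.2]
error = rmse(observed, predicted)              # in SD units, like the reported 0.60-0.64
explained = variance_explained(observed, predicted)
```

Either an MLR or an ANN predictor can be scored this way on a held-out testing set, which is how the two model families are compared in the abstract.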
ERIC Educational Resources Information Center
Fritz, Robert L.
A study examined the association between field-dependence and its related information processing characteristics, and educational cognitive style as a model of conative influence. Data were collected from 145 secondary marketing education students in northern Georgia during spring 1991. Descriptive statistics, Pearson product moment correlations,…
Radiation and polarization signatures of the 3D multizone time-dependent hadronic blazar model
Zhang, Haocheng; Diltz, Chris; Bottcher, Markus
2016-09-23
We present a newly developed time-dependent three-dimensional multizone hadronic blazar emission model. By coupling a Fokker–Planck-based lepto-hadronic particle evolution code, 3DHad, with a polarization-dependent radiation transfer code, 3DPol, we are able to study the time-dependent radiation and polarization signatures of a hadronic blazar model for the first time. Our current code is limited to parameter regimes in which the hadronic γ-ray output is dominated by proton synchrotron emission, neglecting pion production. Our results demonstrate that the time-dependent flux and polarization signatures are generally dominated by the relation between the synchrotron cooling and the light-crossing timescale, which is largely independent of the exact model parameters. We find that unlike the low-energy polarization signatures, which can vary rapidly in time, the high-energy polarization signatures appear stable. Lastly, future high-energy polarimeters may be able to distinguish such signatures from the lower and more rapidly variable polarization signatures expected in leptonic models.
Exploiting Data Missingness in Bayesian Network Modeling
NASA Astrophysics Data System (ADS)
Rodrigues de Morais, Sérgio; Aussem, Alex
This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contributes to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
Adam, Mary B.
2014-01-01
We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544
Multi-band implications of external-IC flares
NASA Astrophysics Data System (ADS)
Richter, Stephan; Spanier, Felix
2015-02-01
Very fast variability on scales of minutes is regularly observed in blazars. The assumption that these flares emerge from the dominant emission zone of the very high energy (VHE) radiation within the jet challenges current acceleration and radiation models. In this work we use a spatially resolved and time-dependent synchrotron self-Compton (SSC) model that includes the full time dependence of Fermi-I acceleration. We use the (apparent) orphan γ-ray flare of Mrk 501 during MJD 54952 and test various flare scenarios against the observed data. We find that a rapidly variable external radiation field reproduces the high-energy lightcurve best. However, the effect of the strong inverse Compton (IC) cooling on other bands, together with the X-ray observations, constrains the parameters to rather extreme ranges. Other scenarios, in turn, would require parameters that are even more extreme, or stronger physical constraints on the rise and decay of the source of the variability, which might contradict constraints derived from the size of the black hole's ergosphere.
NASA Astrophysics Data System (ADS)
Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.
2013-12-01
There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which adds uncertainty that depends on sampling frequency and on how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method, which combines the regression-model method for estimating concentrations with linear interpolation for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be those derived using the composite method at the highest sampling frequency.
We also evaluated the importance of sampling frequency and estimation method for flux estimate uncertainty; flux uncertainty depended on the variability characteristics of each solute and varied across reporting periods (e.g., the 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches depended on how much of the variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in the uncertainty of stream flux estimates for solutes with particular variability characteristics. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
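The composite method described above can be sketched in a few lines: take the regression model's concentration estimate and correct it with the model residuals linearly interpolated between the bracketing samples. Function and variable names here are illustrative, not from the study's code:

```python
def interp(x, x0, y0, x1, y1):
    """Linear interpolation between two known points (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def composite_concentration(t, reg_model, samples):
    """Composite-method concentration at time t: the regression estimate
    plus a residual correction linearly interpolated between the two
    bracketing samples. `samples` is a time-sorted list of
    (time, observed_concentration) pairs; `reg_model` maps time to a
    modeled concentration."""
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            r0 = c0 - reg_model(t0)   # residual at the earlier sample
            r1 = c1 - reg_model(t1)   # residual at the later sample
            return reg_model(t) + interp(t, t0, r0, t1, r1)
    raise ValueError("t lies outside the sampled period")
```

At a sample time the correction forces the estimate to honor the observed concentration exactly; between samples it blends the regression and interpolation approaches, which is why the composite method serves as the benchmark here.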
Video Self-Modeling and Improving Oral Reading Fluency
ERIC Educational Resources Information Center
Chandler, Wanda Gail
2012-01-01
Self-modeling can take different forms but is described as a process where one observes one's own successful behavior and learns from it without dependence on any particular medium. In this study, two separate experiments were conducted to evaluate a video self-modeling (VSM) feedforward intervention. VSM feedforward (independent variable, IV),…
A Survival Model for Shortleaf Pine Trees Growing in Uneven-Aged Stands
Thomas B. Lynch; Lawrence R. Gering; Michael M. Huebschmann; Paul A. Murphy
1999-01-01
A survival model for shortleaf pine (Pinus echinata Mill.) trees growing in uneven-aged stands was developed using data from permanently established plots maintained by an industrial forestry company in western Arkansas. Parameters were fitted to a logistic regression model with a Bernoulli dependent variable in which "0" represented...
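A logistic regression of this kind maps a linear predictor (e.g., tree size) onto the probability that the Bernoulli outcome equals 1. A minimal sketch with made-up coefficients, not the fitted values from the study:

```python
import math

def survival_probability(dbh_cm, coefs=(-1.0, 0.15)):
    """Logistic survival model: P(survive) = 1 / (1 + exp(-(b0 + b1 * x))).
    The coefficients here are illustrative placeholders; in the study they
    were fitted to permanent-plot remeasurement data."""
    b0, b1 = coefs
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * dbh_cm)))
```

With a positive slope, larger trees get higher predicted survival, and the prediction always stays in (0, 1), which is what makes the logistic form suitable for a 0/1 dependent variable.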
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
A pocket model for aluminum agglomeration in composite propellants
NASA Technical Reports Server (NTRS)
Cohen, N. S.
1981-01-01
This paper presents a model for estimating the fraction of aluminum powder that will form agglomerates at the surface of deflagrating composite propellants. The basic idea is that the fraction agglomerated depends upon the amount of aluminum that melts within effective binder pocket volumes framed by oxidizer particles. The effective pocket depends upon the ability of the ammonium perchlorate particles to encapsulate the aluminum and provide a local temperature sufficient to ignite it. Model results are discussed in the light of data showing the effects of propellant formulation variables and pressure.
Uniform modeling of bacterial colony patterns with varying nutrient and substrate
NASA Astrophysics Data System (ADS)
Schwarcz, Deborah; Levine, Herbert; Ben-Jacob, Eshel; Ariel, Gil
2016-04-01
Bacteria develop complex patterns depending on growth conditions. For example, Bacillus subtilis exhibits five different patterns depending on substrate hardness and nutrient concentration. We present a unified integro-differential model that reproduces the entire experimentally observed morphology diagram at varying nutrient concentrations and substrate hardness. The model allows a comprehensive and quantitative comparison between experimental and numerical variables and parameters, such as colony growth rate, nutrient concentration and diffusion constants. As a result, the role of the different physical mechanisms underlying and regulating the growth of the colony can be evaluated.
Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.
2013-01-01
In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.
NASA Astrophysics Data System (ADS)
Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.
2017-12-01
In this study, we present an internal state variable (ISV) constitutive model developed to describe static and dynamic recrystallization and grain size progression in a unified manner. The method accurately captures temperature, pressure and strain-rate effects on recrystallization and grain size. Because this ISV approach treats dislocation density, the volume fraction of recrystallization and grain size as internal variables, the model can simultaneously track their history during deformation with unprecedented realism. Based on this deformation history, the method can capture realistic mechanical properties, such as stress-strain behavior, in the microstructure-mechanical property relationship. Both the transient grain size during deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallization volume fraction. Furthermore, because the model can simultaneously handle plasticity and creep behaviors (unified creep-plasticity), the mechanisms related to dislocation dynamics (static recovery or diffusion creep, dynamic recovery or dislocation creep, and hardening) can also be captured. To model these comprehensive mechanical behaviors, the mathematical formulation includes elasticity to evaluate yield stress, work hardening to treat plasticity, creep, and the unified recrystallization and grain size progression. Because pressure sensitivity is especially important for mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model the pressure-dependent yield stress, while using pressure-dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals acquired from the literature. We also calibrated it against experimental data for metals to show the general applicability of our model.
Understanding of realistic mantle dynamics can only be acquired once the various deformation regimes and mechanisms are comprehensively modeled. The results of this study demonstrate that this ISV model is a good modeling candidate to help reveal the realistic dynamics of the Earth's mantle.
NASA Technical Reports Server (NTRS)
Haisler, W. E.
1983-01-01
An uncoupled constitutive model for predicting the transient response of thermal- and rate-dependent inelastic material behavior was developed. The uncoupled model assumes that there is a temperature below which the total strain consists essentially of elastic and rate-insensitive inelastic strains only; above this temperature, the rate-dependent inelastic strain (creep) dominates. The rate-insensitive inelastic strain component is modelled in an incremental form with a yield function, flow rule and hardening law. Revisions to the hardening rule permit the model to predict temperature-dependent kinematic-isotropic hardening behavior, cyclic saturation, asymmetric stress-strain response upon stress reversal, and a variable Bauschinger effect. The rate-dependent inelastic strain component is modelled using a rate equation in terms of back stress, drag stress and an exponent n as functions of temperature and strain. A sequence of hysteresis loops and relaxation tests is utilized to define the rate-dependent inelastic strain rate. The model has been evaluated by comparison with experiments involving various thermal and mechanical load histories on 5086 aluminum alloy, 304 stainless steel and Hastelloy X.
A THREE-DIMENSIONAL AIR FLOW MODEL FOR SOIL VENTING: SUPERPOSITION OF ANALYTICAL FUNCTIONS
A three-dimensional computer model was developed for the simulation of the soil-air pressure distribution at steady state and specific discharge vectors during soil venting with multiple wells in unsaturated soil. The Kirchhoff transformation of dependent variables and coordinate...
Green-Naghdi dynamics of surface wind waves in finite depth
NASA Astrophysics Data System (ADS)
Manna, M. A.; Latifi, A.; Kraenkel, R. A.
2018-04-01
The Miles quasi-laminar theory of wave generation by wind in finite depth h is presented. In this context, the fully nonlinear Green-Naghdi model equation is derived for the first time. This model equation is obtained by the non-perturbative Green-Naghdi approach, coupling a nonlinear evolution of water waves with the atmospheric dynamics, which works as in the classic Miles theory. A depth-dependent and wind-dependent wave growth rate γ is drawn from the dispersion relation of the Green-Naghdi model coupled with the atmospheric dynamics. Different values of the dimensionless water depth parameter δ = gh/U₁², with g the gravity and U₁ a characteristic wind velocity, produce two families of growth rate γ as a function of the dimensionless theoretical wave age c₀: a family of γ with h constant and U₁ variable, and another family of γ with U₁ constant and h variable. The allowed minimum and maximum values of γ in this model are exhibited.
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †
Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.
2016-01-01
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects, when the dimension-wise residual terms are assumed uncorrelated. Using the EM algorithm, we derive more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate and multivariate multilevel type, respectively, the usefulness of the test is illustrated.
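The likelihood-ratio test used here compares the maximized log-likelihoods of the joint model with and without the cross-variable random-effect correlations; twice the difference is referred to a chi-square distribution with degrees of freedom equal to the number of correlations constrained to zero. A minimal sketch (3.841 is the standard 5% chi-square cutoff for one degree of freedom; the log-likelihood values in the usage note are made up):

```python
def likelihood_ratio_statistic(loglik_reduced, loglik_full):
    """LR statistic: 2 * (log-likelihood of the full model minus that of
    the reduced model with the random-effect correlation fixed at zero)."""
    return 2.0 * (loglik_full - loglik_reduced)

# 0.05 critical value of the chi-square distribution with df = 1
# (one random-effect correlation constrained).
CHI2_CRIT_1DF_05 = 3.841

def correlation_significant(loglik_reduced, loglik_full):
    """True if the joint model fits significantly better, i.e. modeling the
    two dependent variables jointly is worthwhile at the 5% level."""
    return likelihood_ratio_statistic(loglik_reduced, loglik_full) > CHI2_CRIT_1DF_05
```

For example, `correlation_significant(-105.0, -102.0)` is True (statistic 6.0 exceeds 3.841), while `correlation_significant(-105.0, -104.5)` is False.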
Environmental influences on the ¹³⁷Cs kinetics of the yellow-bellied turtle (Trachemys scripta)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, E.L.; Brisbin, L.I. Jr.
1996-02-01
Assessments of ecological risk require accurate predictions of contaminant dynamics in natural populations. However, simple deterministic models that assume constant uptake rates and elimination fractions may compromise both their ecological realism and their general application to animals with variable metabolism or diets. In particular, the temperature dependence of metabolic rates characteristic of ectotherms may lead to significant differences between observed and predicted contaminant kinetics. We examined the influence of a seasonally variable thermal environment on predicting the uptake and annual cycling of contaminants by ectotherms, using a temperature-dependent model of ¹³⁷Cs kinetics in free-living yellow-bellied turtles, Trachemys scripta. We compared predictions from this model with those of deterministic negative exponential and flexibly shaped Richards sigmoidal models. Concentrations of ¹³⁷Cs in a population of this species in Pond B, a radionuclide-contaminated nuclear reactor cooling reservoir, and ¹³⁷Cs uptake by uncontaminated turtles held captive in Pond B for 4 yr confirmed both the pattern of uptake and the equilibrium concentrations predicted by the temperature-dependent model. Almost 90% of the variance in the predicted time-integrated ¹³⁷Cs concentration was explainable by linear relationships with model parameters. The model was also relatively insensitive to uncertainties in the estimates of ambient temperature, suggesting that adequate estimates of temperature-dependent ingestion and elimination may require relatively few measurements of ambient conditions at sites of interest. Analyses of Richards sigmoidal models of ¹³⁷Cs uptake indicated significant differences from a negative exponential trajectory in the 1st yr after the turtles' release into Pond B. 76 refs., 7 figs., 5 tabs.
Miller, Tom E X
2007-07-01
1. It is widely accepted that density-dependent processes play an important role in most natural populations. However, persistent challenges in our understanding of density-dependent population dynamics include evaluating the shape of the relationship between density and demographic rates (linear, concave, convex), and identifying extrinsic factors that can mediate this relationship. 2. I studied the population dynamics of the cactus bug Narnia pallidicornis on host plants (Opuntia imbricata) that varied naturally in relative reproductive effort (RRE, the proportion of meristems allocated to reproduction), an important plant quality trait. I manipulated per-plant cactus bug densities, quantified subsequent dynamics, and fit stage-structured models to the experimental data to ask if and how density influences demographic parameters. 3. In the field experiment, I found that populations with variable starting densities quickly converged upon similar growth trajectories. In the model-fitting analyses, the data strongly supported a model that defined the juvenile cactus bug retention parameter (joint probability of surviving and not dispersing) as a nonlinear decreasing function of density. The estimated shape of this relationship shifted from concave to convex with increasing host-plant RRE. 4. The results demonstrate that host-plant traits are critical sources of variation in the strength and shape of density dependence in insects, and highlight the utility of integrated experimental-theoretical approaches for identifying processes underlying patterns of change in natural populations.
Navaee-Ardeh, S; Mohammadi-Rovshandeh, J; Pourjoozi, M
2004-03-01
A normalized design was used to examine the influence of independent variables (alcohol concentration, cooking time and temperature) in the catalytic soda-ethanol pulping of rice straw on various mechanical properties (breaking length, burst, tear index and folding endurance) of paper sheets obtained from each pulping process. An equation for each dependent variable as a function of the cooking (independent) variables was obtained by multiple non-linear regression using the least-squares method in MATLAB to develop empirical models. The ranges of alcohol concentration, cooking time and temperature were 40-65% (w/w), 150-180 min and 195-210 degrees C, respectively. Three-dimensional graphs of the dependent variables were also plotted against the independent variables. The optimum values of breaking length, burst index, tear index and folding endurance were 4683.7 (m), 30.99 (kN/g), 376.93 (mN m2/g) and 27.31, respectively. A short cooking time (150 min), high ethanol concentration (65%) and high temperature (210 degrees C) could be used to produce papers with a suitable burst and tear index, whereas for papers with the best breaking length and folding endurance a low temperature (195 degrees C) was desirable. Differences between the optimum values of the dependent variables obtained by the normalized design and the experimental data were less than 20%.
Bulk canopy resistance: Modeling for the estimation of actual evapotranspiration of maize
NASA Astrophysics Data System (ADS)
Gharsallah, O.; Corbari, C.; Mancini, M.; Rana, G.
2009-04-01
Due to the scarcity of water resources, the correct evaluation of water losses from crops as evapotranspiration (ET) is very important in irrigation management. This work presents a model for estimating actual evapotranspiration at hourly and daily scales for a maize crop grown under well-watered conditions in the Lombardia Region (North Italy). Maize is a difficult crop to model from the soil-canopy-atmosphere point of view because of its very complex architecture and considerable height. The present ET model is based on the Penman-Monteith equation, using the Katerji and Perrier approach for modelling the variable canopy resistance (rc). In fact, rc is a primary factor in the evapotranspiration process and needs to be accurately estimated. Furthermore, ET also has an aerodynamic component, so it depends on multiple factors such as meteorological variables and crop water status. The proposed approach takes the form of a linear model in which rc depends on climate variables and aerodynamic resistance [rc/ra = f(r*/ra)], where ra is the aerodynamic resistance, a function of wind speed and crop height, and r* is called the "critical" or "climatic" resistance. Here, under a humid climate, the model has been applied with good results at both hourly and daily scales. The good accuracy achieved in this study shows that the model works well; its estimates are clearly more accurate than those obtained using the widely known standard FAO 56 method for well-watered and stressed crops.
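The Katerji-Perrier approach models the resistance ratio as a linear function, rc/ra = a·(r*/ra) + b. A minimal sketch; the slope and intercept below are placeholders, since in practice they are calibrated per crop against measured ET:

```python
def canopy_resistance(r_star, r_a, a=0.8, b=1.2):
    """Katerji-Perrier-type bulk canopy resistance (s/m):
    rc/ra = a * (r*/ra) + b, solved for rc.
    `r_star` is the critical ("climatic") resistance, `r_a` the
    aerodynamic resistance; `a` and `b` are illustrative placeholder
    coefficients, not calibrated values for maize."""
    return r_a * (a * (r_star / r_a) + b)
```

The resulting rc can then be substituted into the Penman-Monteith equation in place of a fixed canopy resistance, which is what lets the model track hourly variation in climatic conditions.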
Applying causal mediation analysis to personality disorder research.
Walters, Glenn D
2018-01-01
This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
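The nonparametric bootstrap recommended above for the indirect effect can be sketched with the stdlib alone: resample cases with replacement, recompute the product of the two path slopes each time, and take percentile bounds. This is a simplified illustration (a full mediation analysis would estimate the b path with X partialled out and include control variables):

```python
import random

def slope(x, y):
    """OLS slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b, where
    a is the slope of the mediator M on X and b the slope of Y on M."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample cases
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        effects.append(slope(xs, ms) * slope(ms, ys))
    effects.sort()
    return effects[int(0.025 * n_boot)], effects[int(0.975 * n_boot) - 1]
```

Mediation is supported at the 5% level when the resulting interval excludes zero, which is the significance-evaluation step the article emphasizes.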
Climate-driven vital rates do not always mean climate-driven population.
Tavecchia, Giacomo; Tenan, Simone; Pradel, Roger; Igual, José-Manuel; Genovart, Meritxell; Oro, Daniel
2016-12-01
Current climatic changes have increased the need to forecast population responses to climate variability. A common approach to address this question is through models that project the current population state using the functional relationship between demographic rates and climatic variables. We argue that this approach can lead to erroneous conclusions when interpopulation dispersal is not considered. We found that immigration can release the population from climate-driven trajectories even when local vital rates are climate dependent. We illustrated this using individual-based data on a trans-equatorial migratory seabird, the Scopoli's shearwater Calonectris diomedea, in which the variation of vital rates has been associated with large-scale climatic indices. We compared the population annual growth rate λᵢ, estimated using local climate-driven parameters, with ρᵢ, a population growth rate directly estimated from individual information that accounts for immigration. While λᵢ varied as a function of climatic variables, reflecting the climate-dependent parameters, ρᵢ did not, indicating that dispersal decouples the relationship between population growth and climate variables from that between climatic variables and vital rates. Our results suggest caution when assessing demographic effects of climatic variability, especially in open populations of very mobile organisms such as fish, marine mammals, bats, or birds. When a population model cannot be validated or is not detailed enough, ignoring immigration might lead to misleading climate-driven projections. © 2016 John Wiley & Sons Ltd.
Teclaw, Robert; Osatuke, Katerine; Fishman, Jonathan; Moore, Scott C; Dyrenforth, Sue
2014-01-01
This study estimated the relative influence of age/generation and tenure on job satisfaction and workplace climate perceptions. Data from the 2004, 2008, and 2012 Veterans Health Administration All Employee Survey (sample sizes >100 000) were examined in general linear models, with demographic characteristics simultaneously included as independent variables. Ten dependent variables represented a broad range of employee attitudes. Age/generation and tenure effects were compared through partial η² (95% confidence interval), P value of the F statistic, and overall model R². Demographic variables taken together were only weakly related to employee attitudes, accounting for less than 10% of the variance. Consistently across survey years, for all dependent variables, age and age-squared had very weak to no effects, whereas tenure and tenure-squared had meaningfully greater partial η² values. Except for 1 independent variable in 1 year, none of the partial η² confidence intervals for age and age-squared overlapped those of tenure and tenure-squared. Much has been made in the popular and professional press of the importance of generational differences in workplace attitudes. Empirical studies have been contradictory and therefore inconclusive. The findings reported here suggest that age/generational differences might not influence employee perceptions to the extent that human resource and management practitioners have been led to believe.
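The effect-size comparison above rests on partial η², which for a regression-style model is SS_effect / (SS_effect + SS_error). A minimal NumPy sketch of that computation on synthetic data; the predictor names, coefficients, and sample size are illustrative assumptions, not values from the study:

```python
import numpy as np

def partial_eta_squared(X_full, X_reduced, y):
    """Partial eta^2 for the effect dropped between X_full and X_reduced:
    SS_effect / (SS_effect + SS_error_full)."""
    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    ss_err_full = rss(X_full, y)
    ss_effect = rss(X_reduced, y) - ss_err_full  # extra SS attributable to the effect
    return ss_effect / (ss_effect + ss_err_full)

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(size=n)
tenure = rng.normal(size=n)
# Synthetic satisfaction score driven mainly by tenure, as the study reports.
y = 0.05 * age + 0.60 * tenure + rng.normal(size=n)

ones = np.ones(n)
X_full = np.column_stack([ones, age, tenure])
eta_age = partial_eta_squared(X_full, np.column_stack([ones, tenure]), y)
eta_tenure = partial_eta_squared(X_full, np.column_stack([ones, age]), y)
```

With this setup, `eta_tenure` dominates `eta_age`, mirroring the pattern of the survey results.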
State-variable theories for nonelastic deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.Y.
The various concepts of mechanical equation of state for nonelastic deformation in crystalline solids, originally proposed for plastic deformation, have recently been extended to describe additional phenomena such as anelastic and microplastic deformation, including the Bauschinger effect. It has been demonstrated that it is possible to predict, based on current state variables in a unified way, the mechanical response of a material under an arbitrary loading. Thus, if the evolution laws of the state variables are known, one can describe the behavior of a material for a thermal-mechanical path of interest, for example, during constant load (or stress) creep, without relying on specialized theories. Some of the existing theories of mechanical equation of state for nonelastic deformation are reviewed. The establishment of useful forms of mechanical equation of state has to depend on extensive experimentation, in the same way as in the development of, for example, the ideal gas law. Recent experimental efforts are also reviewed. It has been possible to develop state-variable deformation models based on experimental findings and apply them to creep, cyclic deformation, and other time-dependent deformation. Attempts are being made to correlate the material parameters of the state-variable models with the microstructure of a material. 24 figures.
A Model for Pharmacological Research-Treatment of Cocaine Dependence
Montoya, Ivan D.; Hess, Judith M.; Preston, Kenzie L.; Gorelick, David A.
2008-01-01
Major problems for research on pharmacological treatments for cocaine dependence are lack of comparability of results from different treatment research programs and poor validity and/or reliability of results. Double-blind, placebo-controlled, random assignment, experimental designs, using standard intake and assessment procedures help to reduce these problems. Cessation or reduction of drug use and/or craving, retention in treatment, and medical and psychosocial improvement are some of the outcome variables collected in treatment research programs. A model to be followed across different outpatient clinical trials for pharmacological treatment of cocaine dependence is presented here. This model represents an effort to standardize data collection to make results more valid and comparable. PMID:8749725
NASA Technical Reports Server (NTRS)
Pilewskie, P.; Rabbette, M.; Bergstrom, R.; Marquez, J.; Schmid, B.; Russell, P. B.
2000-01-01
Moderate resolution spectra of the downwelling solar irradiance at the ground in north central Oklahoma were measured during the Department of Energy Atmospheric Radiation Measurement Program Intensive Observation Period in the fall of 1997. Spectra obtained under cloud-free conditions were compared with calculations using a coarse resolution radiative transfer model to examine the dependency of model-measurement bias on water vapor. It was found that the bias was highly correlated with water vapor and increased at a rate of 9 Wm(exp -2) per cm of water. The source of the discrepancy remains undetermined because of the complex dependencies of other variables, most notably aerosol optical depth, on water vapor.
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) in {0, 1} based on the second-moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν∥ = 2.
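The second-moment correlation length can be estimated directly from a binary time series. A sketch under one common convention (ξ² = Σ t²C(t) / Σ C(t), summed over t ≥ 1), demonstrated on a simple two-state Markov chain with known exponential autocorrelation decay rather than the paper's cascade model:

```python
import numpy as np

def autocorrelation(x, tmax):
    """Empirical autocorrelation C(t), normalised so that C(0) = 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = x @ x / len(x)
    return np.array([(x[:len(x) - t] @ x[t:]) / ((len(x) - t) * var)
                     for t in range(tmax + 1)])

def second_moment_length(C):
    """Second-moment correlation length: xi^2 = sum_t t^2 C(t) / sum_t C(t), t >= 1."""
    t = np.arange(1, len(C))
    return np.sqrt((t**2 @ C[1:]) / C[1:].sum())

# Two-state Markov chain with flip probability f: C(t) decays as (1 - 2f)^t.
rng = np.random.default_rng(1)
f, n = 0.1, 200_000
flips = rng.random(n) < f
x = np.cumsum(flips) % 2  # state alternates at each flip
C = autocorrelation(x, tmax=50)
xi = second_moment_length(C)
```

For f = 0.1 the lag-1 autocorrelation is about 0.8 and ξ lands near its analytic value of roughly 6.7.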
Spatial analysis of highway incident durations in the context of Hurricane Sandy.
Xie, Kun; Ozbay, Kaan; Yang, Hong
2015-01-01
The objectives of this study are (1) to develop an incident duration model that can account for the spatial dependence of duration observations, and (2) to investigate the impacts of a hurricane on incident duration. Highway incident data from New York City and its surrounding regions before and after Hurricane Sandy were used for the study. Moran's I statistics confirmed that durations of neighboring incidents were spatially correlated. Moreover, Lagrange multiplier tests suggested that the spatial dependence should be captured in a spatial lag specification. A spatial error model, a spatial lag model, and a standard model without consideration of spatial effects were developed. The spatial lag model is found to outperform the others by capturing the spatial dependence of incident durations via a spatially lagged dependent variable. It was further used to assess the effects of hurricane-related variables on incident duration. The results show that incidents during and after the hurricane are expected to have 116.3% and 79.8% longer durations, respectively, than those occurring in regular conditions. However, no significant increase in incident duration is observed in the evacuation period before Sandy's landfall. Results of temporal stability tests further confirm significant changes in incident duration patterns during and after the hurricane. These findings can provide insights to aid in the development of hurricane evacuation plans and emergency management strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
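In a spatial lag specification, the dependent variable appears on the right-hand side through a spatial weights matrix: y = ρWy + Xβ + ε, with reduced form y = (I − ρW)⁻¹(Xβ + ε). A hedged sketch with a hypothetical ring-neighbourhood weights matrix (not the study's incident network), showing that durations generated this way exhibit positive spatial autocorrelation as measured by Moran's I:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Row-standardised ring weights: each site's neighbours are the two adjacent sites.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rho, beta = 0.6, 1.5
x = rng.normal(size=n)      # a hypothetical covariate, e.g. hurricane-period score
eps = rng.normal(size=n)
# Reduced form of the spatial lag model y = rho*W*y + x*beta + eps:
y = np.linalg.solve(np.eye(n) - rho * W, x * beta + eps)

def morans_i(values, W):
    """Moran's I: (n / sum(W)) * (z' W z) / (z' z) for centred values z."""
    z = values - values.mean()
    return (len(values) / W.sum()) * (z @ W @ z) / (z @ z)

I = morans_i(y, W)
```

The simulated durations show strongly positive Moran's I, while the raw noise `eps` does not, which is the pattern that motivates a spatial lag rather than a standard model.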
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperatures in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. 
Correlation coefficients among the optimised model parameters and total precipitation P, mean temperature T, and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. No other correlations are statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore be attributed instead to errors in the data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 away from 0 indicates variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects: the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine. PMID:23750070
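For the random intercept model, the idea can be illustrated with moment-based variance component estimates: the share of the dependent variable's variance attributable to the random effects is τ²/(τ² + σ²). This is a loose numerical sketch in the spirit of R_r^2 on simulated balanced data, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(3)
g, m = 200, 25                  # number of groups and observations per group
tau2, sigma2 = 2.0, 1.0         # true random-intercept and residual variances
b = rng.normal(scale=np.sqrt(tau2), size=g)
y = b[:, None] + rng.normal(scale=np.sqrt(sigma2), size=(g, m))

# Method-of-moments variance components for the balanced random-intercept model:
sigma2_hat = y.var(axis=1, ddof=1).mean()               # pooled within-group variance
tau2_hat = y.mean(axis=1).var(ddof=1) - sigma2_hat / m  # between-group variance

# Share of variance attributable to the random effects (approx. 2/3 here).
r2_random = tau2_hat / (tau2_hat + sigma2_hat)
```

With τ² = 2 and σ² = 1 the estimated share lands near 2/3, indicating strong random effects that should not be dropped from the model.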
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects river morphology and the local habitat, and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. In this work, a statistical model for evaluating the probability of erosion from a series of independent local variables is developed using logistic regression. The main variables affecting erosion are vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, river bank slope, riverbed slope, cross-section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. A logistic regression model is then formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated using maximum likelihood estimation. 
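As a sketch of the approach described above, a logistic model for a binary erosion indicator can be fitted by maximizing the log-likelihood, here with plain gradient ascent in NumPy on synthetic data. The two predictors (bank slope and cross-section width) and all coefficients are hypothetical illustrations, not the Koiliaris values:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
slope = rng.uniform(0, 1, n)   # hypothetical normalised bank slope
width = rng.uniform(0, 1, n)   # hypothetical normalised cross-section width
# Synthetic truth: steeper, narrower banks erode more often.
logit = -1.0 + 4.0 * slope - 3.0 * width
erosion = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Maximum likelihood fit by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), slope, width])
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (erosion - p) / n   # average score (gradient) step

prob = 1 / (1 + np.exp(-X @ beta))  # fitted erosion probability at each site
```

The fitted coefficients recover the signs of the generating model: a positive slope effect and a negative width effect on the probability of erosion.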
The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested (Atkinson et al. 2003). The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. The aim is to determine the probability of erosion along the Koiliaris' riverbanks considering a series of independent geomorphological and/or hydrological variables. Data for the river bank slope and for the river cross-section width are available at ten locations along the river. The riverbank has indications of erosion at six of the ten locations, while four have remained stable. Based on a recent work, measurements for the two independent variables and data regarding bank stability are available at eight different locations along the river. These locations were used as validation points for the proposed statistical model. The results show a very close agreement between the observed erosion indications and the statistical model, as the probability of erosion was accurately predicted at seven out of the eight locations. The next step is to apply the model at more locations along the riverbanks. In November 2013, stakes were inserted at selected locations in order to be able to identify the presence or absence of erosion after the winter period. In April 2014 the presence or absence of erosion will be identified and the model results will be compared to the field data. Our intent is to extend the model by increasing the number of independent variables in order to identify the key factors favouring erosion along the Koiliaris River. We aim at developing an easy-to-use statistical tool that will provide a quantified measure of the erosion probability along the riverbanks, which could consequently be used to prevent erosion and flooding events. Atkinson, P. M., German, S. E., Sear, D. A. and Clark, M. J. 2003. 
Exploring the relations between riverbank erosion and geomorphological controls using geographically weighted logistic regression. Geographical Analysis, 35 (1), 58-82. Luppi, L., Rinaldi, M., Teruggi, L. B., Darby, S. E. and Nardi, L. 2009. Monitoring and numerical modelling of riverbank erosion processes: A case study along the Cecina River (central Italy). Earth Surface Processes and Landforms, 34 (4), 530-546. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
Modeling mountain pine beetle habitat suitability within Sequoia National Park
NASA Astrophysics Data System (ADS)
Nguyen, Andrew
Understanding significant changes in climate and their effects on timber resources can help forest managers make better decisions regarding the preservation of natural resources and land management. These changes may alter natural ecosystems dependent on historical and current climate conditions. Increasing mountain pine beetle (MPB) outbreaks within the southern Sierra Nevada are the result of these alterations. This study seeks to better understand MPB behavior within Sequoia National Park (SNP) and to model its current and future habitat distribution. Variables contributing to MPB spread are vegetation stress, soil moisture, temperature, precipitation, disturbance, and the presence of Ponderosa (Pinus ponderosa) and Lodgepole (Pinus contorta) pine trees. These variables were obtained from various modeled, in situ, and remotely sensed sources. A generalized additive model (GAM) was used to calculate the statistical significance of each variable contributing to MPB spread and to create maps identifying habitat suitability. Results indicate vegetation stress and forest disturbance to be the variables most indicative of MPB spread. Additionally, the model was able to detect habitat suitability of MPB with 45% accuracy, suggesting that a geospatially driven modeling approach can be used to delineate potential MPB spread within SNP.
Local Infrasound Variability Related to In Situ Atmospheric Observation
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas
2018-04-01
Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on the source inversion uncertainty has never been accounted for due to the lack of quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation through repeated explosion experiments with a dense acoustic network and in situ atmospheric measurement. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to assess the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a nonnegligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role of local turbulence.
Range expansion through fragmented landscapes under a variable climate
Bennie, Jonathan; Hodgson, Jenny A; Lawson, Callum R; Holloway, Crispin TR; Roy, David B; Brereton, Tom; Thomas, Chris D; Wilson, Robert J
2013-01-01
Ecological responses to climate change may depend on complex patterns of variability in weather and local microclimate that overlay global increases in mean temperature. Here, we show that high-resolution temporal and spatial variability in temperature drives the dynamics of range expansion for an exemplar species, the butterfly Hesperia comma. Using fine-resolution (5 m) models of vegetation surface microclimate, we estimate the thermal suitability of 906 habitat patches at the species' range margin for 27 years. Population and metapopulation models that incorporate this dynamic microclimate surface improve predictions of observed annual changes to population density and patch occupancy dynamics during the species' range expansion from 1982 to 2009. Our findings reveal how fine-scale, short-term environmental variability drives rates and patterns of range expansion through spatially localised, intermittent episodes of expansion and contraction. Incorporating dynamic microclimates can thus improve models of species range shifts at spatial and temporal scales relevant to conservation interventions. PMID:23701124
Nikčević, Ana V; Alma, Leyla; Marino, Claudia; Kolubinski, Daniel; Yılmaz-Samancı, Adviye Esin; Caselli, Gabriele; Spada, Marcantonio M
2017-11-01
Both positive smoking outcome expectancies and metacognitions about smoking have been found to be positively associated with cigarette use and nicotine dependence. The goal of this study was to test a model including nicotine dependence and number of daily cigarettes as dependent variables, anxiety and depression as independent variables, and smoking outcome expectancies and metacognitions about smoking as mediators between the independent and dependent variables. The sample consisted of 524 self-declared smokers who scored 3 or above on the Fagerstrom Test for Nicotine Dependence (FTND: Uysal et al., 2004). Anxiety was not associated with either cigarette use or nicotine dependence but was positively associated with all mediators with the exception of stimulation state enhancement and social facilitation. Depression, on the other hand, was found to be positively associated with nicotine dependence (and very weakly with cigarette use) but was not associated with either smoking outcome expectancies or metacognitions about smoking. Only one smoking outcome expectancy (negative affect reduction) was found to be positively associated with nicotine dependence but not with cigarette use. Furthermore, one smoking outcome expectancy (negative social impression) was found to be positively associated with cigarette use (but not with nicotine dependence). All metacognitions about smoking were found to be positively associated with nicotine dependence. Moreover, negative metacognitions about uncontrollability were found to be positively associated with cigarette use. Metacognitions about smoking appear to be a stronger mediator than smoking outcome expectancies in the relationship between negative affect and cigarette use/nicotine dependence. The implications of these findings are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that anthropogenic greenhouse gas and aerosol radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G
2014-09-01
To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of fish-based extrudates, response surface methodology was adopted in this study. The experiments were designed using a five-level, four-factor central composite design. Analysis of variance was carried out to study the main and interaction effects of the various factors, and regression analysis was carried out to explain the variability. A second-order model in the coded variables was fitted for each response. Response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted model was confirmed as adequate for both dependent variables. The highest organoleptic score was obtained with the combination of temperature 110 °C, screw speed 480 rpm, moisture 18 %, and fish flour 20 %.
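The second-order model in coded variables is an ordinary quadratic polynomial fitted by least squares. A hedged two-factor sketch (the study used four factors; the response values here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
# Coded factor levels on a grid (-1 to +1; a full CCD would add axial points).
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
# Hypothetical expansion-ratio response with curvature and an interaction term.
y = (3.0 + 0.5 * x1 - 0.8 * x2 - 0.4 * x1**2 - 0.2 * x2**2
     + 0.3 * x1 * x2 + rng.normal(0, 0.05, x1.size))

# Second-order (quadratic) model in the coded variables, fitted by least squares.
D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
```

The fitted coefficients recover the generating values closely, which is the basis for the response surface plots described above.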
Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi
2018-06-02
Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy to develop tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance in water-deficit and normal conditions was considered as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitted models using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities. The model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main (additive and non-additive) and epistatic effects. The results demonstrate that breeding methods such as recurrent selection with a subsequent pedigree method, and hybrid production, can be useful to improve grain yield.
Rand, Miya K; Shimansky, Yury P
2013-03-01
A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements was developed in our previous studies. The utilization of that model for data analysis made it possible, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of a lower extent of control optimality. To successfully grasp the target object, the CNS increases precision demand during the final phase, resulting in a higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between the X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating a higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. 
The paper further discusses the relationship between motor variability and XYC variability.
NASA Astrophysics Data System (ADS)
Dieppois, B.; Pohl, B.; Eden, J.; Crétat, J.; Rouault, M.; Keenlyside, N.; New, M. G.
2017-12-01
The water management community has hitherto neglected or underestimated many of the uncertainties in climate impact scenarios, in particular, uncertainties associated with decadal climate variability. Uncertainty in state-of-the-art global climate models (GCMs) is time-scale-dependent, e.g. stronger at decadal than at interannual timescales, in response to the different parameterizations and to internal climate variability. In addition, non-stationarity in statistical downscaling is widely recognized as a key problem, in which the time-scale dependency of predictors plays an important role. As with global climate modelling, therefore, the selection of downscaling methods must proceed with caution to avoid unintended consequences of over-correcting the noise in GCMs (e.g. interpreting internal climate variability as a model bias). GCM outputs from the Coupled Model Intercomparison Project 5 (CMIP5) have therefore first been selected based on their ability to reproduce southern African summer rainfall variability and its teleconnections with Pacific sea-surface temperature across the dominant timescales. In observations, southern African summer rainfall has recently been shown to exhibit significant periodicities at the interannual (2-8 years), quasi-decadal (8-13 years) and inter-decadal (15-28 years) timescales, which can be interpreted as the signature of ENSO, the IPO, and the PDO over the region. Most CMIP5 GCMs underestimate southern African summer rainfall variability and its teleconnections with Pacific SSTs at these three timescales. In addition, according to a more in-depth analysis of historical and pi-control runs, this bias might result from internal climate variability in some of the CMIP5 GCMs, suggesting potential for bias-corrected prediction based on empirical statistical downscaling. 
A multi-timescale regression-based downscaling procedure, which determines the predictors across the different timescales, has thus been used to simulate southern African summer rainfall. This multi-timescale procedure shows much better skill in simulating decadal-scale variability than commonly used statistical downscaling approaches.
Idealized models of the joint probability distribution of wind speeds
NASA Astrophysics Data System (ADS)
Monahan, Adam H.
2018-05-01
The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
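The bivariate Weibull construction described above can be sketched numerically. The sketch below assumes illustrative values for the component correlation and the Weibull shape parameter; it builds correlated speeds from isotropic, mean-zero Gaussian components and applies the power-law transform.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, k = 50_000, 0.6, 2.2   # sample size, component correlation, Weibull shape (assumed)

# Correlated, isotropic, mean-zero Gaussian wind components at two sites:
# each velocity component at site 1 correlates with its counterpart at site 2.
cov = np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # zonal components (site 1, site 2)
v = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # meridional components

s1, s2 = np.hypot(u[:, 0], v[:, 0]), np.hypot(u[:, 1], v[:, 1])  # Rayleigh speeds

# A power-law transform s**(2/k) turns a Rayleigh speed into a Weibull(k) one,
# so the pair (w1, w2) has a bivariate Weibull dependence structure.
w1, w2 = s1 ** (2.0 / k), s2 ** (2.0 / k)

r = np.corrcoef(w1, w2)[0, 1]
print(f"speed correlation: {r:.2f}")
```

With a component correlation of 0.6, the induced speed correlation is markedly weaker, illustrating why correlation between speeds alone understates the dependence of the underlying velocity components.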
Ali, S. M.; Mehmood, C. A.; Khan, B.; Jawad, M.; Farid, U.; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In the smart grid paradigm, consumer demands are random and time-dependent, following stochastic probabilities. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic and stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we stochastically and statistically analyzed the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presented a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probability outcome of the utility revenues is based on the varying consumer demands data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
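The SMC estimate of the revenue distribution can be sketched as follows. All numbers (tariff, demand means, hour-to-hour covariance) are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
hours, n_sims = 24, 10_000            # hours per day, Monte Carlo sample size
fixed_charge, rate = 50.0, 0.12       # hypothetical tariff: fixed ($/day), variable ($/kWh)

# Time-dependent mean demand (kWh) with an AR(1)-style covariance between hours,
# standing in for the multivariate Gaussian (MVGDF-type) demand model.
mean = 30.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, hours))
lag = np.abs(np.subtract.outer(np.arange(hours), np.arange(hours)))
cov = 16.0 * 0.8 ** lag               # consumption in nearby hours is correlated

# Standard Monte Carlo: sample daily demand profiles, map each to utility revenue.
demand = rng.multivariate_normal(mean, cov, size=n_sims).clip(min=0.0)
revenue = fixed_charge + rate * demand.sum(axis=1)

print(f"mean daily revenue: {revenue.mean():.2f}, std: {revenue.std():.2f}")
```

The sampled revenue distribution then supports the kind of fixed-versus-variable revenue analysis the abstract describes.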
Global growth and stability of agricultural yield decrease with pollinator dependence
Garibaldi, Lucas A.; Aizen, Marcelo A.; Klein, Alexandra M.; Cunningham, Saul A.; Harder, Lawrence D.
2011-01-01
Human welfare depends on the amount and stability of agricultural production, as determined by crop yield and cultivated area. Yield increases asymptotically with the resources provided by farmers’ inputs and environmentally sensitive ecosystem services. Declining yield growth with increased inputs prompts conversion of more land to cultivation, but at the risk of eroding ecosystem services. To explore the interdependence of agricultural production and its stability on ecosystem services, we present and test a general graphical model, based on Jensen's inequality, of yield–resource relations and consider implications for land conversion. For the case of animal pollination as a resource influencing crop yield, this model predicts that incomplete and variable pollen delivery reduces yield mean and stability (inverse of variability) more for crops with greater dependence on pollinators. Data collected by the Food and Agriculture Organization of the United Nations during 1961–2008 support these predictions. Specifically, crops with greater pollinator dependence had lower mean and stability in relative yield and yield growth, despite global yield increases for most crops. Lower yield growth was compensated by increased land cultivation to enhance production of pollinator-dependent crops. Area stability also decreased with pollinator dependence, as it correlated positively with yield stability among crops. These results reveal that pollen limitation hinders yield growth of pollinator-dependent crops, decreasing temporal stability of global agricultural production, while promoting compensatory land conversion to agriculture. Although we examined crop pollination, our model applies to other ecosystem services for which the benefits to human welfare decelerate as the maximum is approached. PMID:21422295
Ordinal regression models to describe tourist satisfaction with Sintra's world heritage
NASA Astrophysics Data System (ADS)
Mouriño, Helena
2013-10-01
In Tourism Research, ordinal regression models are becoming a very powerful tool for modelling the relationship between an ordinal response variable and a set of explanatory variables. In August and September 2010, we conducted a pioneering Tourist Survey in Sintra, Portugal. The data were obtained by face-to-face interviews at the entrances of the Palaces and Parks of Sintra. The work developed in this paper focuses on two main points: tourists' perception of the entrance fees and their overall level of satisfaction with this heritage site. For attaining these goals, ordinal regression models were developed. We concluded that tourists' nationality was the only significant variable for describing the perception of the admission fees. Also, Sintra's image among tourists depends not only on their nationality, but also on previous knowledge about Sintra's World Heritage status.
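A minimal proportional-odds (cumulative logit) fit of the kind used for such survey responses can be sketched as follows; the predictor and the three ordered satisfaction categories are simulated, not Sintra survey data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=n)                   # a single standardized predictor (illustrative)
latent = 1.5 * x + rng.logistic(size=n)  # latent satisfaction with logistic errors
y = np.digitize(latent, [-1.0, 1.0])     # three ordered categories: 0, 1, 2

def nll(params):
    """Negative log-likelihood of the proportional-odds model."""
    beta, t1, log_gap = params
    cuts = np.array([t1, t1 + np.exp(log_gap)])   # strictly increasing thresholds
    eta = x * beta
    cum = expit(cuts[None, :] - eta[:, None])     # P(Y <= j | x), shape (n, 2)
    cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    probs = np.diff(cum, axis=1)                  # per-category probabilities, (n, 3)
    return -np.sum(np.log(probs[np.arange(n), y] + 1e-12))

fit = minimize(nll, x0=[1.0, -1.0, 0.5], method="Nelder-Mead")
beta_hat, t1_hat = fit.x[0], fit.x[1]
print(f"estimated slope: {beta_hat:.2f} (true 1.5)")
```

A single slope shared across all cumulative logits is the proportional-odds assumption; with categorical predictors such as nationality, x would be replaced by dummy-coded columns.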
Ecology and the ratchet of events: climate variability, niche dimensions, and species distributions
Jackson, Stephen T.; Betancourt, Julio L.; Booth, Robert K.; Gray, Stephen T.
2009-01-01
Climate change in the coming centuries will be characterized by interannual, decadal, and multidecadal fluctuations superimposed on anthropogenic trends. Predicting ecological and biogeographic responses to these changes constitutes an immense challenge for ecologists. Perspectives from climatic and ecological history indicate that responses will be laden with contingencies, resulting from episodic climatic events interacting with demographic and colonization events. This effect is compounded by the dependency of environmental sensitivity upon life-stage for many species. Climate variables often used in empirical niche models may become decoupled from the proximal variables that directly influence individuals and populations. Greater predictive capacity, and more-fundamental ecological and biogeographic understanding, will come from integration of correlational niche modeling with mechanistic niche modeling, dynamic ecological modeling, targeted experiments, and systematic observations of past and present patterns and dynamics.
Importance of vesicle release stochasticity in neuro-spike communication.
Ramezani, Hamideh; Akan, Ozgur B
2017-07-01
The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.
Geodesic least squares regression on information manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead, they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions, and we apply this to scaling laws in magnetic confinement fusion.
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.
Models of subjective response to in-flight motion data
NASA Technical Reports Server (NTRS)
Rudrapatna, A. N.; Jacobson, I. D.
1973-01-01
Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.
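The second model's frequency-weighted rms can be sketched as follows; the Gaussian weighting curve and the comfort coefficients are hypothetical stand-ins for the Rustenburg-type human response function and the fitted linear model.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, T = 100.0, 60.0                      # sample rate (Hz) and record length (s)
t = np.arange(0.0, T, 1.0 / fs)
# Vertical acceleration (g): a 1.5 Hz motion component plus broadband noise.
accel = 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)

rms = np.sqrt(np.mean(accel ** 2))       # plain rms, as in the first model

# Frequency-weighted rms: filter the spectrum with a human-response weighting
# (a hypothetical curve peaking near 1 Hz, standing in for the Rustenburg function).
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
weight = np.exp(-0.5 * (freqs - 1.0) ** 2)
weighted = np.fft.irfft(np.fft.rfft(accel) * weight, n=t.size)
weighted_rms = np.sqrt(np.mean(weighted ** 2))

comfort = 2.0 + 11.9 * weighted_rms      # illustrative linear comfort model
print(f"rms={rms:.3f} g, weighted rms={weighted_rms:.3f} g, comfort~{comfort:.1f}")
```

In the full model each of the six degrees of freedom would get its own weighting function, and the weighted rms values would enter the linear comfort regression as separate terms.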
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 C.
Gans, Kim M.; Risica, Patricia Markham; Kirtania, Usree; Jennings, Alishia; Strolla, Leslie O.; Steiner-Asiedu, Matilda; Hardy, Norma; Lasater, Thomas M.
2009-01-01
Objective: To describe the dietary behaviors of Black women who enrolled in the SisterTalk weight control study. Design: Baseline data collected via telephone survey and in-person screening. Setting: Boston, MA and surrounding areas. Participants: A total of 461 Black women completed the baseline. Variables Measured: Measured height and weight; self-reported demographics, risk factors, and dietary variables including fat-related eating behaviors, food portion size, and fruit, vegetable, and beverage intake. Analysis: Descriptive analyses for demographic, risk factor and dietary variables; ANOVA models with Food Habits Questionnaire (FHQ) scores as the dependent variable and demographic categories as the independent variables; ANOVA models with individual FHQ item scores as the dependent variable and ethnic identification as the independent variable. Results: The data indicate a low prevalence of many fat-lowering behaviors. More than 60% reported eating fewer than five servings of fruits and vegetables per day. Self-reported portion sizes were large for most foods. Older age, being born outside the US, living without children, and being retired were significantly associated with a higher prevalence of fat-lowering behaviors. The frequency of specific fat-lowering behaviors and portion size also differed by ethnic identification. Conclusions and Implications: The findings support the need for culturally appropriate interventions to improve the dietary intake of Black Americans. Further studies should examine the dietary habits, food preparation methods, and portion sizes of diverse groups of Black women and how such habits may differ by demographics. PMID:19161918
Perry, Brea L.; Pescosolido, Bernice A.; Bucholz, Kathleen; Edenberg, Howard; Kramer, John; Kuperman, Samuel; Schuckit, Marc Alan; Nurnberger, John I.
2015-01-01
Gender-moderated gene–environment interactions are rarely explored, raising concerns about inaccurate specification of etiological models and inferential errors. The current study examined the influence of gender, negative and positive daily life events, and GABRA2 genotype (SNP rs279871) on alcohol dependence, testing two- and three-way interactions between these variables using multilevel regression models fit to data from 2,281 White participants in the Collaborative Study on the Genetics of Alcoholism. Significant direct effects of variables of interest were identified, as well as gender-specific moderation of genetic risk on this SNP by social experiences. Higher levels of positive life events were protective for men with the high-risk genotype, but not among men with the low-risk genotype or women, regardless of genotype. Our findings support the disinhibition theory of alcohol dependence, suggesting that gender differences in social norms, constraints and opportunities, and behavioral undercontrol may explain men and women’s distinct patterns of association. PMID:23974430
ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.
van Breukelen, Gerard J P
2013-11-01
The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
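Lord's paradox is easy to reproduce by simulation: with preexisting groups that differ at pretest, and a pretest measured with error, ANCOVA and CHANGE disagree even when no true change occurs in either group. A minimal sketch under assumed variances:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
group = np.repeat([0, 1], n)                  # two preexisting (nonrandomized) groups
true = rng.normal(0.0, 1.0, 2 * n) + group    # groups differ in true score; no change over time
pre = true + rng.normal(0.0, 1.0, 2 * n)      # pretest = true score + measurement error
post = true + rng.normal(0.0, 1.0, 2 * n)     # posttest: same true score, fresh error

# CHANGE: difference score as the dependent variable.
diff = post - pre
change_effect = diff[group == 1].mean() - diff[group == 0].mean()

# ANCOVA: posttest on pretest plus group, by ordinary least squares.
X = np.column_stack([np.ones(2 * n), pre, group])
ancova_effect = np.linalg.lstsq(X, post, rcond=None)[0][2]

print(f"CHANGE group effect: {change_effect:.3f}")  # near zero
print(f"ANCOVA group effect: {ancova_effect:.3f}")  # clearly nonzero
```

Because the pretest is unreliable, its within-group regression slope is below 1, so ANCOVA attributes part of the preexisting group difference to a "group effect": exactly the nonrandomized situation in which the literature calls ANCOVA questionable.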
Evaluation of Deep Learning Models for Predicting CO2 Flux
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P.; Frankel, D.
2017-12-01
Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability and complexity of interactions between physical variables such as wind and temperature, and indirect variables like latent heat, sensible heat, etc. We evaluate two deep learning models, feed-forward and recurrent neural network models, to learn how they each respond to the physical measurements and the time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of choosing the surface variables and model hyper-parameters on the accuracy and predictions of surface flux; c) assessing the applicability of the neural network models for estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network performance using IBM Power AI deep learning software and packages on the IBM Minsky system.
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. 
The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives of improvements are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
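The two-step logic of univariate marginal correction followed by rank reordering can be sketched as below. This is a stylized, Schaake-shuffle-like illustration of the rank-resampling idea, not the full R2D2 algorithm (which, for instance, lets the model drive the temporal sequencing through a pivot dimension); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Stylized reference (observations) and biased model output for two variables,
# e.g. temperature and precipitation at one site (all values illustrative).
ref = rng.multivariate_normal([15.0, 3.0], [[4.0, 1.2], [1.2, 1.0]], size=n)
model = rng.multivariate_normal([12.0, 5.0], [[9.0, -0.5], [-0.5, 4.0]], size=n)

def quantile_map(sim, obs):
    """Univariate quantile mapping: give sim the marginal distribution of obs."""
    ranks = sim.argsort().argsort()
    return np.sort(obs)[ranks]

# Step 1: univariate BC corrects the marginals but keeps the model's dependence.
corrected = np.column_stack([quantile_map(model[:, j], ref[:, j]) for j in range(2)])
r_before = np.corrcoef(corrected[:, 0], corrected[:, 1])[0, 1]

# Step 2 (rank-resampling idea): reorder each corrected series so its ranks
# follow the reference rank structure, restoring inter-variable dependence.
ref_ranks = ref.argsort(axis=0).argsort(axis=0)
shuffled = np.column_stack([np.sort(corrected[:, j])[ref_ranks[:, j]] for j in range(2)])
r_after = np.corrcoef(shuffled[:, 0], shuffled[:, 1])[0, 1]
r_ref = np.corrcoef(ref[:, 0], ref[:, 1])[0, 1]
print(f"correlation: reference {r_ref:.2f}, after 1d-BC {r_before:.2f}, after shuffle {r_after:.2f}")
```

The printed correlations show the point made above: the univariate step reproduces the model's dependence, while the rank-reordering step recovers the reference dependence structure.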
Predicting Use of Nurse Care Coordination by Older Adults With Chronic Conditions.
Vanderboom, Catherine E; Holland, Diane E; Mandrekar, Jay; Lohse, Christine M; Witwer, Stephanie G; Hunt, Vicki L
2017-07-01
To be effective, nurse care coordination must be targeted at individuals who will use the service. The purpose of this study was to identify variables that predicted use of care coordination by primary care patients. Data on the potential predictor variables were obtained from patient interviews, the electronic health record, and an administrative database of 178 adults eligible for care coordination. Use of care coordination was obtained from an administrative database. A multivariable logistic regression model was developed using a bootstrap sampling approach. Variables predicting use of care coordination were dependence in both activities of daily living (ADL) and instrumental activities of daily living (IADL; odds ratio [OR] = 5.30, p = .002), independent for ADL but dependent for IADL (OR = 2.68, p = .01), and number of prescription medications (OR = 1.12, p = .002). Consideration of these variables may improve identification of patients to target for care coordination.
2014-01-01
Background: Protein sites evolve at different rates due to functional and biophysical constraints. It is usually considered that the main structural determinant of a site's rate of evolution is its Relative Solvent Accessibility (RSA). However, a recent comparative study has shown that the main structural determinant is the site's Local Packing Density (LPD). LPD is related to dynamical flexibility, which has also been shown to correlate with sequence variability. Our purpose is to investigate the mechanism that connects a site's LPD with its rate of evolution. Results: We consider two models: an empirical Flexibility Model and a mechanistic Stress Model. The Flexibility Model postulates a linear increase of site-specific rate of evolution with dynamical flexibility. The Stress Model, introduced here, models mutations as random perturbations of the protein's potential energy landscape, for which we use simple Elastic Network Models (ENMs). To account for natural selection we assume a single active conformation and use basic statistical physics to derive a linear relationship between site-specific evolutionary rates and the local stress of the mutant's active conformation. We compare both models on a large and diverse dataset of enzymes. In a protein-by-protein study we found that the Stress Model outperforms the Flexibility Model for most proteins. Pooling all proteins together we show that the Stress Model is strongly supported by the total weight of evidence. Moreover, it accounts for the observed nonlinear dependence of sequence variability on flexibility. Finally, when mutational stress is controlled for, there is very little remaining correlation between sequence variability and dynamical flexibility. Conclusions: We developed a mechanistic Stress Model of evolution according to which the rate of evolution of a site is predicted to depend linearly on the local mutational stress of the active conformation. 
Such local stress is proportional to LPD, so that this model explains the relationship between LPD and evolutionary rate. Moreover, the model also accounts for the nonlinear dependence between evolutionary rate and dynamical flexibility. PMID:24716445
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied for transformed variables. The post-processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. co-variances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding-window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in original space using cross-validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding-window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying dependency between lead times, and lead-time-varying dependency between catchments, are captured. 
Two criteria were used for evaluating the added value of the dependency model. The first was the Energy score (ES), a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to the correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
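The Energy score referred to above has a standard ensemble estimator; a minimal sketch with toy five-catchment inflow forecasts (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def energy_score(ens, obs):
    """Energy score of an m-member ensemble, shape (m, d), against one observed
    vector of shape (d,). A multivariate generalization of the CRPS; lower is better."""
    term1 = np.linalg.norm(ens - obs, axis=1).mean()
    term2 = np.linalg.norm(ens[:, None, :] - ens[None, :, :], axis=2).mean()
    return term1 - 0.5 * term2

# Toy inflow observation for five catchments and two 100-member ensembles.
obs = np.array([1.0, 1.1, 0.9, 1.2, 1.0])
ens_sharp = obs + 0.1 * rng.normal(size=(100, 5))          # well-centered ensemble
ens_biased = obs + 1.0 + 0.1 * rng.normal(size=(100, 5))   # systematically biased

es_sharp, es_biased = energy_score(ens_sharp, obs), energy_score(ens_biased, obs)
print(f"ES sharp: {es_sharp:.3f}, ES biased: {es_biased:.3f}")
```

ES clearly penalizes bias and spread, but, as the abstract notes, it is comparatively insensitive to the covariance structure, which is why the CRPS of accumulated flows was the sharper diagnostic for the dependency model.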
Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.
2003-01-01
Logistic regression was used to predict the probability of debris flows occurring in areas recently burned by wildland fires. Multiple logistic regression is conceptually similar to multiple linear regression because statistical relations between one dependent variable and several independent variables are evaluated. In logistic regression, however, the dependent variable is transformed to a binary variable (debris flow did or did not occur), and the actual probability of the debris flow occurring is statistically modeled. Data from 399 basins located within 15 wildland fires that burned during 2000-2002 in Colorado, Idaho, Montana, and New Mexico were evaluated. More than 35 independent variables describing the burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows were delineated from National Elevation Data using a Geographic Information System (GIS). (2) Data describing the burn severity, geology, land surface gradient, rainfall, and soil properties were determined for each basin. These data were then downloaded to a statistics software package for analysis using logistic regression. (3) Relations between the occurrence/non-occurrence of debris flows and burn severity, geology, land surface gradient, rainfall, and soil properties were evaluated and several preliminary multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combination produced the most effective model. The multivariate model that best predicted the occurrence of debris flows was selected. (4) The multivariate logistic regression model was entered into a GIS, and a map showing the probability of debris flows was constructed. 
The most effective model incorporates the percentage of each basin with slope greater than 30 percent, the percentage of land burned at medium and high burn severity in each basin, particle size sorting, average storm intensity (millimeters per hour), soil organic matter content, soil permeability, and soil drainage. The results of this study demonstrate that logistic regression is a valuable tool for predicting the probability of debris flows occurring in recently burned landscapes.
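A minimal sketch of step (3) above, fitting a logistic model of debris-flow occurrence by maximum likelihood via gradient ascent (the two predictors and the tiny dataset here are synthetic stand-ins, not the study's 399-basin data):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit P(debris flow) = 1 / (1 + exp(-(b0 + X.b))) by gradient ascent
    on the log-likelihood. X: (n, p) predictors, y: (n,) 0/1 outcomes."""
    X1 = np.c_[np.ones(len(X)), X]            # prepend intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))  # current probabilities
        beta += lr * X1.T @ (y - p) / len(y)  # average log-likelihood gradient
    return beta

def predict_prob(beta, X):
    X1 = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-X1 @ beta))
```

With predictors such as steep-slope fraction and burn-severity fraction, the fitted probabilities can then be mapped over basins in a GIS, as in step (4).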
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Allmendinger, M.; Gnann, S.; Heisserer, T.; Bárdossy, A.
2017-12-01
The basic problem of geostatistics is to estimate the primary variable (e.g. groundwater quality, nitrate) at an unsampled location based on point measurements at locations in the vicinity. Typically, models are used that describe the spatial dependence based on the geometry of the observation network. This presentation demonstrates methods that additionally take the following properties into account: the statistical distribution of the measurements, a different degree of dependence in different quantiles, censored measurements, the composition of categorical additional information in the neighbourhood (exhaustive secondary information), and the spatial dependence of a dependent secondary variable, possibly measured with a different observation network (non-exhaustive secondary data). Two modelling approaches are demonstrated individually and combined: The non-stationarity in the marginal distribution is accounted for by locally mixed distribution functions that depend on the composition of the categorical variable in the neighbourhood of each interpolation location. This methodology is currently being implemented for operational use at the environmental state agency of Baden-Württemberg. An alternative to co-Kriging in copula space with an arbitrary number of secondary parameters is presented: The method performs better than traditional techniques if the primary variable is undersampled and does not produce erroneous negative estimates. Moreover, the quality of the uncertainty estimates is much improved. The worth of the secondary information is thoroughly evaluated. The improved geostatistical hydrogeological models are being analyzed using measurements of a large observation network (approximately 2,500 measurement locations) in the state of Baden-Württemberg (approximately 36,000 km2). Typical groundwater quality parameters such as nitrate, chloride, barium, atrazine, and desethylatrazine are being assessed, cross-validated, and compared with traditional geostatistical methods. 
The secondary information of land use is available on a 30m x 30m raster. We show that the presented methods are not only better estimators (e.g. in the sense of an average quadratic error), but exhibit a much more realistic structure of the uncertainty and hence are improvements compared to existing methods.
A graphical vector autoregressive modelling approach to the analysis of electronic diary data
2010-01-01
Background In recent years, electronic diaries have been increasingly used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitable constrained (graphical) VAR models. Results The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships for the two subgroups between eating behaviour, depression, anxiety and eating control are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patients' dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research. PMID:20359333
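The core of such an analysis can be sketched as a least-squares fit of a first-order VAR, whose coefficient matrix encodes the directed temporal dependencies drawn in a path diagram (variable names are illustrative; the paper's search over graphically constrained models is not reproduced here):

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of a first-order VAR: y_t = c + A y_{t-1} + e_t.
    Y: (T, k) multivariate time series (e.g. diary ratings of eating
    behaviour, depression, anxiety, eating control).
    Returns intercept c (k,) and coefficient matrix A (k, k); entries of A
    near zero correspond to absent directed edges in the path diagram."""
    Z = np.c_[np.ones(len(Y) - 1), Y[:-1]]          # lagged regressors
    B, *_ = np.linalg.lstsq(Z, Y[1:], rcond=None)   # multivariate OLS
    return B[0], B[1:].T
```

In a graphical VAR search, coefficients would additionally be constrained to zero according to candidate graphs and the fits compared.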
Zhang, Fan; Luo, Wensui; Parker, Jack C; Spalding, Brian P; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua
2008-11-01
Many geochemical reactions that control aqueous metal concentrations are directly affected by solution pH. However, changes in solution pH are strongly buffered by various aqueous phase and solid phase precipitation/dissolution and adsorption/desorption reactions. The ability to predict acid-base behavior of the soil-solution system is thus critical to predicting metal transport under variable pH conditions. This study was undertaken to develop a practical generic geochemical modeling approach to predict aqueous and solid phase concentrations of metals and anions during conditions of acid or base additions. The method of Spalding and Spalding was utilized to model soil buffer capacity and pH-dependent cation exchange capacity by treating aquifer solids as a polyprotic acid. To simulate the dynamic and pH-dependent anion exchange capacity, the aquifer solids were simultaneously treated as a polyprotic base controlled by mineral precipitation/dissolution reactions. An equilibrium reaction model that describes aqueous complexation, precipitation, sorption and soil buffering with pH-dependent ion exchange was developed using HydroGeoChem v5.0 (HGC5). Comparison of model results with experimental titration data of pH, Al, Ca, Mg, Sr, Mn, Ni, Co, and SO4(2-) for contaminated sediments indicated close agreement, suggesting that the model could potentially be used to predict the acid-base behavior of the sediment-solution system under variable pH conditions.
ERIC Educational Resources Information Center
McMurray, Bob; Jongman, Allard
2011-01-01
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the…
Katherine J. Elliott; Barton D. Clinton
1993-01-01
Allometric equations were developed to predict aboveground dry weight of herbaceous and woody species on prescribed-burned sites in the Southern Appalachians. Best-fit least-squares regression models were developed using diameter, height, or both, as the independent variables and dry weight as the dependent variable. Coefficients of determination for the selected total...
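Allometric biomass equations of this kind are commonly fit on log-log axes, since W = k * D^b becomes linear after taking logarithms; a minimal sketch with made-up coefficients (not those of the study):

```python
import numpy as np

def fit_allometric(diameter, dry_weight):
    """Fit W = exp(a) * D**b by least squares on ln(W) = a + b ln(D)."""
    b, a = np.polyfit(np.log(diameter), np.log(dry_weight), 1)
    return a, b

def predict_weight(a, b, diameter):
    """Back-transform to predict dry weight from diameter."""
    return np.exp(a) * np.asarray(diameter, dtype=float) ** b
```

Height can be added as a second regressor in the same way when both dimensions are measured.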
The Impact of Household Heads' Education Levels on the Poverty Risk: The Evidence from Turkey
ERIC Educational Resources Information Center
Bilenkisi, Fikret; Gungor, Mahmut Sami; Tapsin, Gulcin
2015-01-01
This study aims to analyze the relationship between the education levels of household heads and the poverty risk of households in Turkey. The logistic regression models have been estimated with the poverty risk of a household as a dependent variable and a set of educational levels as explanatory variables for all households. There are subgroups of…
Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation
NASA Technical Reports Server (NTRS)
Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred
2015-01-01
To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable material property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as useful starting conditions for further numerical studies, or can serve as design rules for lab-built couples. The analytic modeling considers two cases and accounts for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows for insight into the influence of temperature dependence on different material properties. The variable property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady state analysis. The work assists in designing couples for optimal performance and in material selection.
Indirect costs of teaching in Canadian hospitals.
MacKenzie, T A; Willan, A R; Cox, M A; Green, A
1991-01-01
We sought to determine whether there are indirect costs of teaching in Canadian hospitals. To examine cost differences between teaching and nonteaching hospitals we estimated two cost functions: cost per case and cost per patient-day (dependent variables). The independent variables were number of beds, occupancy rate, teaching ratio (number of residents and interns per 100 beds), province, urbanicity (the population density of the county in which the hospital was situated) and wage index. Within each hospital we categorized a random sample of patient discharges according to case mix and severity of illness using age and standard diagnosis and procedure codes. Teaching ratio and case severity were each highly correlated positively with the dependent variables. The other variables that led to higher costs in teaching hospitals were wage rates and number of beds. Our regression model could serve as the basis of a reimbursement system, adjusted for severity and teaching status, particularly in provinces moving toward introducing case-weighting mechanisms into their payment model. Even if teaching hospitals were paid more than nonteaching hospitals because of the difference in the severity of illness there should be an additional allowance to cover the indirect costs of teaching. PMID:1898870
NASA Astrophysics Data System (ADS)
Kunii, M.; Ito, K.; Wada, A.
2015-12-01
An ensemble Kalman filter (EnKF) using a regional mesoscale atmosphere-ocean coupled model was developed to represent the uncertainties of sea surface temperature (SST) in ensemble data assimilation strategies. The system was evaluated through data assimilation cycle experiments over a one-month period from July to August 2014, during which a tropical cyclone as well as severe rainfall events occurred. The results showed that the data assimilation cycle with the coupled model could reproduce SST distributions realistically even without updating SST and salinity during the data assimilation cycle. Therefore, atmospheric variables and radiation applied as a forcing to ocean models can control oceanic variables to some extent in the current data assimilation configuration. However, investigations of the forecast error covariance estimated in EnKF revealed that the correlation between atmospheric and oceanic variables could possibly lead to less flow-dependent error covariance for atmospheric variables owing to the difference in the time scales between atmospheric and oceanic variables. A verification of the analyses showed positive impacts of applying the ocean model to EnKF on precipitation forecasts. The use of EnKF with the coupled model system captured intensity changes of a tropical cyclone better than it did with an uncoupled atmosphere model, even though the impact on the track forecast was negligibly small.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Real-time plasma control in a dual-frequency, confined plasma etcher
NASA Astrophysics Data System (ADS)
Milosavljević, V.; Ellingboe, A. R.; Gaman, C.; Ringwood, J. V.
2008-04-01
The physics issues of developing model-based control of plasma etching are presented. A novel methodology for incorporating real-time model-based control of plasma processing systems is developed. The methodology is developed for control of two dependent variables (ion flux and chemical densities) by two independent controls (27 MHz power and O2 flow). A phenomenological physics model of the nonlinear coupling between the independent controls and the dependent variables of the plasma is presented. By using a design of experiment, the functional dependencies of the response surface are determined. In conjunction with the physical model, the dependencies are used to deconvolve the sensor signals onto the control inputs, allowing compensation of the interaction between control paths. The compensated sensor signals and compensated set-points are then used as inputs to proportional-integral-derivative controllers to adjust radio frequency power and oxygen flow to yield the desired ion flux and chemical density. To illustrate the methodology, model-based real-time control is realized in a commercial semiconductor dielectric etch chamber. The dual-radio-frequency symmetric diode operates with typical commercial fluorocarbon feed-gas mixtures (Ar/O2/C4F8). Key parameters for dielectric etching are known to include ion flux to the surface and surface flux of oxygen-containing species. Control is demonstrated using diagnostics of electrode-surface ion current, and chemical densities of O, O2, and CO measured by optical emission spectrometry and/or mass spectrometry. Using our model-based real-time control, the set-point tracking accuracy to changes in chemical species density and ion flux is enhanced.
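The final control stage described above can be sketched as static decoupling of interacting sensor signals followed by per-loop PID control. This is a generic sketch under stated assumptions: the gain matrix, controller gains, and first-order plant below are illustrative, not the etcher's calibrated model.

```python
import numpy as np

class PID:
    """Textbook proportional-integral-derivative controller (a sketch,
    not the authors' implementation)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def decouple(sensor, gain_matrix):
    """Map interacting sensor readings onto compensated per-loop signals by
    inverting an (assumed known) steady-state gain matrix, so each PID loop
    sees mostly 'its own' variable."""
    return np.linalg.solve(gain_matrix, sensor)
```

In the paper's scheme, the two compensated loops would drive 27 MHz power and O2 flow toward the desired ion flux and chemical density.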
Spatio-temporal statistical models for river monitoring networks.
Clement, L; Thas, O; Vanrolleghem, P A; Ottoy, J P
2006-01-01
When introducing new wastewater treatment plants (WWTP), investors and policy makers often want to know if there indeed is a beneficial effect of the installation of a WWTP on the river water quality. Such an effect can be established in time as well as in space. Since both temporal and spatial components affect the output of a monitoring network, their dependence structure has to be modelled. River water quality data typically come from a river monitoring network for which the spatial dependence structure is unidirectional. Thus the traditional spatio-temporal models are not appropriate, as they cannot take advantage of this directional information. In this paper, a state-space model is presented in which the spatial dependence of the state variable is represented by a directed acyclic graph, and the temporal dependence by a first-order autoregressive process. The state-space model is extended with a linear model for the mean to estimate the effect of the activation of a WWTP on the dissolved oxygen concentration downstream.
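The dependence structure described, unidirectional spatial dependence along a directed acyclic river graph combined with first-order autoregression in time, can be sketched as a simulation (station names, coefficients, and noise level are hypothetical, and the estimation machinery of the state-space model is not reproduced):

```python
import numpy as np

def simulate_river_states(parents, phi, rho, T, sigma=0.1, seed=0):
    """Simulate a latent water-quality state on a river network.
    parents: dict station -> list of upstream stations (a DAG), listed in
    topological order (upstream before downstream);
    phi: AR(1) coefficient in time; rho: weight on upstream states."""
    rng = np.random.default_rng(seed)
    stations = list(parents)
    x = {s: np.zeros(T) for s in stations}
    for t in range(1, T):
        for s in stations:
            up = np.mean([x[p][t] for p in parents[s]]) if parents[s] else 0.0
            x[s][t] = phi * x[s][t - 1] + rho * up + sigma * rng.standard_normal()
    return x
```

A WWTP effect would enter as a shift in the mean of the state downstream of the plant, which is what the paper's linear model for the mean estimates.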
Sean A. Parks; Marc-Andre Parisien; Carol Miller
2011-01-01
We examined the scale-dependent relationship between spatial fire likelihood or burn probability (BP) and some key environmental controls in the southern Sierra Nevada, California, USA. Continuous BP estimates were generated using a fire simulation model. The correspondence between BP (dependent variable) and elevation, ignition density, fuels and aspect was evaluated...
NASA Astrophysics Data System (ADS)
Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team
2003-04-01
Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal of developing a model describing the seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer, expressed as a fraction of field capacity, was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, the analysis showed that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration with a mean absolute error of 0.82 µmol m-2 s-1. The parameterised model exhibits the following principal properties: 1) At a relative amount of upper-layer soil water of 16% of field capacity, half-maximal soil respiration rates are reached. 2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly time-scale we employed the approach by Raich et al. (2002, Global Change Biol. 
8, 800-812) that used monthly precipitation and air temperature to globally predict soil respiration (T&P-model). While this model was able to explain some of the month-to-month variability of soil respiration, it failed to capture the inter-site variability, regardless of whether the original or a new optimized model parameterization was used. In both cases, the residuals were strongly related to maximum site leaf area index. Thus, for a monthly time scale we developed a simple T&P&LAI-model that includes leaf area index as an additional predictor of soil respiration. This extended but still simple model performed nearly as well as the more detailed time-step model and explained 50% of the overall and 65% of the site-to-site variability. Consequently, better estimates of globally distributed soil respiration should be obtained with the new model driven by satellite estimates of leaf area index.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
X-Ray Spectral Variability Signatures of Flares in BL Lac Objects
NASA Technical Reports Server (NTRS)
Boettcher, Markus; Chiang, James; White, Nicholas E. (Technical Monitor)
2002-01-01
We present a detailed parameter study of the time-dependent electron injection and kinematics and the self-consistent radiation transport in jets of intermediate and low-frequency peaked BL Lac objects. Using a time-dependent, combined synchrotron-self-Compton and external-Compton jet model, we study the influence of variations of several essential model parameters, such as the electron injection compactness, the relative contribution of synchrotron to external soft photons to the soft photon compactness, the electron-injection spectral index, and the details of the time profiles of the electron injection episodes giving rise to flaring activity. In the analysis of our results, we focus on the expected X-ray spectral variability signatures in a region of parameter space particularly well suited to reproduce the broadband spectral energy distributions of intermediate and low-frequency peaked BL Lac objects. We demonstrate that SSC- and external-Compton dominated models for the gamma-ray emission from blazars produce significantly different signatures in the X-ray variability, in particular in the soft X-ray light curves and the spectral hysteresis at soft X-ray energies, which can be used as a powerful diagnostic to unveil the nature of the high-energy emission from BL Lac objects.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we assume that these terms are used only by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "x" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R2) indicates the importance of the independent variables in the outcome.
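The quantities defined above (intercept "a", slope "b", and R2) can be computed directly from their textbook formulas; a minimal sketch:

```python
def linear_regression(x, y):
    """Fit Y = a + bx by least squares; return intercept a, slope b, and R2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                      # regression coefficient (slope)
    a = my - b * mx                    # intercept: value of Y when x = 0
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot           # coefficient of determination
    return a, b, r2
```

For the perfectly linear data y = 1 + 2x, the fit returns a = 1, b = 2, and R2 = 1.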
Rate dependent fractionation of sulfur isotopes in through-flowing systems
NASA Astrophysics Data System (ADS)
Giannetta, M.; Sanford, R. A.; Druhan, J. L.
2017-12-01
The fidelity of reactive transport models in quantifying microbial activity in the subsurface is often improved through the use of stable isotopes. However, the accuracy of current predictions for microbially mediated isotope fractionation within open through-flowing systems typically depends on nutrient availability. This disparity arises from the common application of a single 'effective' fractionation factor assigned to a given system, despite extensive evidence for variability in the fractionation factor between eutrophic environments and many naturally occurring, nutrient-limited environments. Here, we demonstrate a reactive transport model with the capacity to simulate a variable fractionation factor over a range of microbially mediated reduction rates and constrain the model with experimental data for nutrient-limited conditions. Two coupled isotope-specific Monod rate laws for 32S and 34S, constructed to quantify microbial sulfate reduction and predict associated S isotope partitioning, were parameterized using a series of batch reactor experiments designed to minimize microbial growth. In the current study, we implement these parameterized isotope-specific rate laws within an open, through-flowing system to predict variable fractionation with distance as a function of sulfate reduction rate. These predictions are tested through a supporting laboratory experiment consisting of a flow-through column packed with homogeneous porous media inoculated with the same species of sulfate-reducing bacteria used in the previous batch reactors, Desulfovibrio vulgaris. The collective results of the batch reactor and flow-through column experiments support a significant improvement for S isotope predictions in isotope-sensitive multi-component reactive transport models through treatment of rate-dependent fractionation. 
Such an update to the model will better equip reactive transport software for isotope-informed characterization of microbial activity within energy- and nutrient-limited environments.
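A sketch of the modelling idea: isotope-specific Monod rate laws for 32S and 34S integrated along a plug-flow path (so distance plays the role of time), which makes the residual sulfate progressively heavier at a rate that depends on the local reduction rate. All parameter values below are illustrative, not the paper's fitted ones.

```python
import numpy as np

def simulate_column(c32_0, c34_0, vmax=1.0, k=0.5, alpha_int=0.985,
                    n_steps=1000, dt=0.01):
    """Explicit-Euler integration of coupled isotope-specific Monod laws:
    r_i = vmax_i * c_i / (k + c32 + c34), with vmax34 = alpha_int * vmax,
    alpha_int < 1 being the intrinsic fractionation factor.
    Returns the per-mil shift of the 34S/32S ratio relative to the inlet."""
    c32, c34 = float(c32_0), float(c34_0)
    r0 = c34_0 / c32_0
    delta = []
    for _ in range(n_steps):
        ctot = c32 + c34
        c32 -= dt * vmax * c32 / (k + ctot)
        c34 -= dt * alpha_int * vmax * c34 / (k + ctot)
        delta.append((c34 / c32 / r0 - 1.0) * 1e3)
    return np.array(delta)
```

Because the two rate laws share the Monod saturation term, the effective fractionation varies with concentration and reduction rate rather than being a fixed constant.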
Hamer, R D; Nicholas, S C; Tranchina, D; Liebman, P A; Lamb, T D
2003-10-01
Single-photon responses (SPRs) in vertebrate rods are considerably less variable than expected if isomerized rhodopsin (R*) inactivated in a single, memoryless step, and no other variability-reducing mechanisms were available. We present a new stochastic model, the core of which is the successive ratcheting down of R* activity, and a concomitant increase in the probability of quenching of R* by arrestin (Arr), with each phosphorylation of R* (Gibson, S.K., J.H. Parkes, and P.A. Liebman. 2000. Biochemistry. 39:5738-5749.). We evaluated the model by means of Monte-Carlo simulations of dim-flash responses, and compared the response statistics derived from them with those obtained from empirical dim-flash data (Whitlock, G.G., and T.D. Lamb. 1999. Neuron. 23:337-351.). The model accounts for four quantitative measures of SPR reproducibility. It also reproduces qualitative features of rod responses obtained with altered nucleotide levels, and thus contradicts the conclusion that such responses imply that phosphorylation cannot dominate R* inactivation (Rieke, F., and D.A. Baylor. 1998a. Biophys. J. 75:1836-1857; Field, G.D., and F. Rieke. 2002. Neuron. 35:733-747.). Moreover, the model is able to reproduce the salient qualitative features of SPRs obtained from mouse rods that had been genetically modified with specific pathways of R* inactivation or Ca2+ feedback disabled. We present a theoretical analysis showing that the variability of the area under the SPR estimates the variability of integrated R* activity, and can provide a valid gauge of the number of R* inactivation steps. We show that there is a heretofore unappreciated tradeoff between variability of SPR amplitude and SPR duration that depends critically on the kinetics of inactivation of R* relative to the net kinetics of the downstream reactions in the cascade. 
Because of this dependence, neither the variability of SPR amplitude nor duration provides a reliable estimate of the underlying variability of integrated R* activity, and cannot be used to estimate the minimum number of R* inactivation steps. We conclude that multiple phosphorylation-dependent decrements in R* activity (with Arr-quench) can confer the observed reproducibility of rod SPRs; there is no compelling need to invoke a long series of non-phosphorylation dependent state changes in R* (as in Rieke, F., and D.A. Baylor. 1998a. Biophys. J. 75:1836-1857; Field, G.D., and F. Rieke. 2002. Neuron. 35:733-747.). Our analyses, plus data and modeling of others (Rieke, F., and D.A. Baylor. 1998a. Biophys. J. 75:1836-1857; Field, G.D., and F. Rieke. 2002. Neuron. 35:733-747.), also argue strongly against either feedback (including Ca2+-feedback) or depletion of any molecular species downstream to R* as the dominant cause of SPR reproducibility.
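The variability-reduction mechanism at the heart of the model, R* inactivating through several sequential steps rather than a single memoryless one, can be illustrated with a small Monte-Carlo sketch. This is deliberately simplified to equal-mean exponential stages (the actual model ratchets R* activity down with each phosphorylation and includes arrestin quench); it shows the generic statistical effect that the coefficient of variation of integrated activity falls roughly as 1/sqrt(n).

```python
import numpy as np

def integrated_activity_cv(n_steps, n_trials=20000, seed=1):
    """Monte-Carlo CV of R* lifetime when inactivation proceeds through
    n_steps memoryless (exponential) decrements of equal mean, with the
    total mean lifetime held fixed at 1. A single-step quench gives CV = 1;
    more steps give a gamma-distributed lifetime with CV = 1/sqrt(n_steps)."""
    rng = np.random.default_rng(seed)
    lifetimes = rng.exponential(1.0 / n_steps,
                                size=(n_trials, n_steps)).sum(axis=1)
    return lifetimes.std() / lifetimes.mean()
```

This is the sense in which the variability of the area under the SPR can gauge the number of inactivation steps, subject to the amplitude/duration tradeoff the authors describe.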
EMG-based speech recognition using hidden markov models with global control variables.
Lee, Ki-Seung
2008-03-01
It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.
Soliton and periodic solutions for time-dependent coefficient non-linear equation
NASA Astrophysics Data System (ADS)
Guner, Ozkan
2016-01-01
In this article, we establish exact solutions for the generalized (3+1)-dimensional variable coefficient Kadomtsev-Petviashvili (GVCKP) equation. Using a solitary wave ansatz in terms of ? functions and the modified sine-cosine method, we find exact analytical bright soliton solutions and exact periodic solutions for the considered model. The physical parameters in the soliton solutions are obtained as functions of the dependent model coefficients. The effectiveness and reliability of the method are shown by its application to the GVCKP equation.
Growth Modeling with Nonignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial
ERIC Educational Resources Information Center
Muthen, Bengt; Asparouhov, Tihomir; Hunter, Aimee M.; Leuchter, Andrew F.
2011-01-01
This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points as with the standard missing at random assumption, but also on latent…
Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.
2014-01-01
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
ERIC Educational Resources Information Center
Bryan, Frank M.
Variations in the level of female political participation were examined in the context of the "standard" model of political participation (higher socioeconomic status, urbanism, living at society's center, increased participation) and the "decline of community" model (decreased group membership, increased mobility, decline of…
Predicting tidal currents in San Francisco Bay using a spectral model
Burau, Jon R.; Cheng, Ralph T.
1988-01-01
This paper describes the formulation of a spectral (or frequency based) model which solves the linearized shallow water equations. To account for highly variable basin bathymetry, spectral solutions are obtained using the finite element method which allows the strategic placement of the computation points in the specific areas of interest or in areas where the gradients of the dependent variables are expected to be large. Model results are compared with data using simple statistics to judge overall model performance in the San Francisco Bay estuary. Once the model is calibrated and verified, prediction of the tides and tidal currents in San Francisco Bay is accomplished by applying astronomical tides (harmonic constants deduced from field data) at the prediction time along the model boundaries.
Improved chemometric methodologies for the assessment of soil carbon sequestration mechanisms
NASA Astrophysics Data System (ADS)
Jiménez-González, Marco A.; Almendros, Gonzalo; Álvarez, Ana M.; González-Vila, Francisco J.
2016-04-01
The factors involved in soil C sequestration, reflected in the highly variable organic matter content of soils, are not yet well defined. Their identification is therefore crucial for understanding the Earth's biogeochemical cycles and global change. The main objective of this work is to contribute to a better qualitative and quantitative assessment of the mechanisms of organic C sequestration in the soil, using omics-type approaches that do not require detailed knowledge of the structure of the material under study. With this purpose, we carried out a series of chemometric analyses on a set of widely differing soils (35 representative ecosystems). In an exploratory phase, we used multivariate statistical models (e.g., multidimensional scaling, discriminant analysis with automatic backward variable selection…) to analyze arrays of more than 200 independent soil variables (physicochemical, spectroscopic, pyrolytic...) in order to select those factors (descriptors or proxies) that explain most of the total system variance (content and stability of the different C forms). These models showed that the factors determining the stabilization of organic material are greatly dependent on the soil type. In some cases, the molecular structure of organic matter seemed strongly correlated with its resilience, while in other soil types organo-mineral interactions had a significant bearing on the accumulation of selectively preserved C forms. In any case, it was clear that the factors driving the resilience of organic matter are manifold and not mutually exclusive. Consequently, in a second stage, prediction models of soil C content and its biodegradability (measured in laboratory incubation experiments) were built by massive data processing with partial least squares (PLS) regression of Py-GC-MS and Py-MS data.
In some models, PLS was applied to a matrix of 150 independent variables corresponding to major pyrolysis compounds (peak areas) from the 35 whole-soil samples. The variable importance in the projection (VIP) histogram obtained from this treatment (with total C and total mineralization coefficients as dependent variables) illustrated the contribution of the individual compounds to the total inertia of the models (e.g., carbohydrate-derived compounds, methoxyphenols, and specific alkylbenzenes were relevant in explaining the total quality and the biodegradation rates of the organic matter). Further simplified models were also built by direct PLS analysis of the debugged ion matrix calculated by averaging all ions (45-250 amu) over the whole chromatographic area in the 5-60 min range (referred to here as 'rebuilt MS spectra', or 'Py-MS spectra' when obtained by connecting the pyrolyser directly to the MS detector through suitable interfaces). The three approaches converged in indicating that C sequestration behaves as an emergent soil property depending on the complexity of its progressive molecular levels. Most of the total variance is explained by specific assemblages of variables, strongly depending on the soil types. On the other hand, chemical biodiversity (e.g., Shannon indices or coefficients from multivariate data models) behaved as a common background in the prediction models across very different soil types. In fact, assessment of the chemodiversity of the pyrolytic compound assemblages (or of the Py-MS ion data) indicates the extent to which the original biomass has been diagenetically reworked into chaotic structures with non-repeatable units, providing a useful proxy to forecast at least a portion of the total variance in soil organic matter biodegradability.
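As a minimal sketch of the PLS step (the study uses multi-component PLS with VIP scoring; this reduces it to a single latent component, on synthetic column-centred data):

```python
import numpy as np

def pls1_one_component(X, y):
    """One-component PLS1 regression (X and y assumed column-centred).
    Returns coefficients b such that predictions are yhat = X @ b."""
    w = X.T @ y
    w /= np.linalg.norm(w)   # weight vector: direction of max covariance with y
    t = X @ w                # latent scores
    q = (y @ t) / (t @ t)    # y-loading: regress y on the scores
    return w * q             # coefficients back in the original X space
```

For the full multi-component algorithm one would deflate X and repeat; this single-component form already shows how PLS projects many collinear pyrolysis variables onto a latent direction before regressing.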
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stetzel, KD; Aldrich, LL; Trimboli, MS
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction fluxes and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions with dependable error bounds for most internal variables.
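The predict/update structure of the filter can be sketched generically; the scalar extended Kalman filter step below uses hypothetical dynamics, whereas the paper wraps the same recursion around a physics-based reduced-order cell model that is not reproduced here:

```python
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.

    f, h : nonlinear state-transition and measurement functions
    F, H : their derivatives evaluated at the current estimate
    Q, R : process and measurement noise variances
    """
    # predict: propagate the state estimate and its variance
    x_pred = f(x, u)
    P_pred = F(x, u) ** 2 * P + Q
    # update: correct with the measurement z
    Hx = H(x_pred)
    S = Hx ** 2 * P_pred + R              # innovation variance
    K = P_pred * Hx / S                   # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * Hx) * P_pred
    return x_new, P_new
```

In the battery application, x would be the (vector) internal electrochemical state, h the voltage predicted by the reduced-order model, and z the measured terminal voltage.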
NASA Astrophysics Data System (ADS)
Schubert, J.; Sanders, B. F.; Andreadis, K.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission, currently under study by NASA (National Aeronautics and Space Administration) and CNES (Centre National d'Etudes Spatiales), is designed to provide global spatial measurements of surface water properties at resolutions better than 10 m and with centimetric accuracy. The data produced by SWOT will include irregularly spaced point clouds of the water surface height, with point spacings from roughly 2-50 m depending on a point's location within SWOT's swath. This could offer unprecedented insight into the spatial structure of rivers. Features that may be resolved include backwater profiles behind dams, drawdown profiles, uniform flow sections, critical flow sections, and even riffle-pool flow structures. In the event that SWOT scans a river during a major flood, it becomes possible to delineate the limits of the flood as well as the spatial structure of the water surface elevation, yielding insight into the dynamic interaction of channels and flood plains. The Platte River in Nebraska, USA, is a braided river with a width and slope of approximately 100 m and 100 cm/km, respectively. A 1 m resolution Digital Terrain Model (DTM) of the river basin, based on airborne lidar collected during low-flow conditions, was used to parameterize a two-dimensional, variable resolution, unstructured grid, hydrodynamic model that uses 3 m resolution triangles in low flow channels and 10 m resolution triangles in the floodplain. Use of a fine resolution mesh guarantees that local variability in topography is resolved, and after applying the hydrodynamic model, the effects of topographic variability are expressed as variability in the water surface height, depth-averaged velocity and flow depth. Flow is modeled over a reach length of 10 km for multi-day durations to capture both frequent (diurnal variations associated with regulated flow) and infrequent (extreme flooding) flow phenomena. 
Model outputs reveal a number of interesting features, including a high degree of variability in the water depth and velocity and lesser variability in the free-surface profile and river discharge. Hydraulic control sections are also revealed, and shown to depend on flow stage. Reach-averaging of model output is applied to study the macro-scale balance of forces in this system, and the scales at which such a force balance is appropriate. We find that the reach-average slope exhibits a declining reach-length dependence with increasing reach length, up to reach lengths of 1 km. Hence, 1 km appears to be the minimum appropriate length for reach-averaging, and at this scale, a diffusive-wave momentum balance is a reasonable approximation suitable for emerging models of discharge estimation that rely only on SWOT-observable river properties (width, height, slope, etc.).
NASA Astrophysics Data System (ADS)
Malone, A.
2017-12-01
Quantifying mass balance sensitivity to climate change is essential for forecasting glacier evolution and deciphering climate signals embedded in archives of past glacier changes. Ideally, these quantifications result from decades of field measurement, remote sensing, and a hierarchical modeling approach, but in data-sparse regions, such as the Himalayas and the tropical Andes, regional-scale modeling rooted in first principles provides a first-order picture. Previous regional-scale modeling studies have applied a surface energy and mass balance approach in order to quantify equilibrium line altitude sensitivity to climate change. In this study, an expanded regional-scale surface energy and mass balance model is implemented to quantify glacier-wide mass balance sensitivity to climate change for tropical Andean glaciers. Data from the Randolph Glacier Inventory are incorporated, and additional physical processes are included, such as a dynamic albedo and cloud-dependent atmospheric emissivity. The model output agrees well with the limited mass balance records for tropical Andean glaciers. The dominant climate variables driving interannual mass balance variability differ depending on the climate setting. For wet tropical glaciers (annual precipitation >0.75 m yr⁻¹), temperature is the dominant climate variable. Different hypotheses for the processes linking wet tropical glacier mass balance variability to temperature are evaluated. The results support the hypothesis that glacier-wide mass balance on wet tropical glaciers is largely dominated by processes at the lowest elevations, where temperature plays a leading role in energy exchanges. This research also highlights the transient nature of wet tropical glaciers - the vast majority of tropical glaciers and a vital regional water resource - in an anthropogenically warming world.
Durrieu, Sylvie; Gosselin, Frédéric; Herpigny, Basile
2017-01-01
We explored the potential of airborne laser scanner (ALS) data to improve Bayesian models linking biodiversity indicators of the understory vegetation to environmental factors. Biodiversity was studied at plot level and models were built to investigate species abundance for the most abundant plants found on each study site, and for ecological group richness based on light preference. The usual abiotic explanatory factors related to climate, topography and soil properties were used in the models. ALS data, available for two contrasting study sites, were used to provide biotic factors related to forest structure, which was assumed to be a key driver of understory biodiversity. Several ALS variables were found to have significant effects on biodiversity indicators. However, the responses of biodiversity indicators to forest structure variables, as revealed by the Bayesian model outputs, were shown to be dependent on the abiotic environmental conditions characterizing the study areas. Lower responses were observed on the lowland site than on the mountainous site. In the latter, shade-tolerant and heliophilous species richness was impacted by vegetation structure indicators linked to light penetration through the canopy. However, to reveal the full effects of forest structure on biodiversity indicators, forest structure would need to be measured over much wider areas than the plots we assessed. The forest structure surrounding the field plots can evidently impact biodiversity indicators measured at plot level. Various scales were found to be relevant, depending on the biodiversity indicator being modelled and on the ALS variable considered. Finally, our results underline the utility of lidar data in abundance and richness models to characterize forest structure with variables that are difficult to measure in the field, either due to their nature or to the size of the area they relate to. PMID:28902920
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in the measurement of a risk factor introduce a downward bias in the estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
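The slope correction described above can be sketched as follows, assuming classical additive measurement error and, for simplicity, a replicate measurement on every subject (the article's software handles more general reliability designs):

```python
import numpy as np

def corrected_slope(x, y, x_repeat):
    """Correct a simple linear regression slope for regression dilution.

    x        : error-prone risk-factor measurements (main study)
    y        : continuous dependent variable
    x_repeat : replicate measurement of x on the same individuals
               (reliability study)
    """
    # naive (attenuated) OLS slope of y on the observed x
    beta_obs = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    # reliability ratio lambda = var(true) / var(observed),
    # estimated by the correlation between the two replicates
    lam = np.cov(x, x_repeat)[0, 1] / np.sqrt(
        np.var(x, ddof=1) * np.var(x_repeat, ddof=1))
    return beta_obs / lam
```

Dividing the attenuated slope by the reliability ratio undoes the dilution: with a reliability of 0.8, an observed slope of 1.6 corrects to 2.0.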
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Abbot, Dorian S.; Fu, Tzung-May; Jacob, Daniel J.; Chance, Kelly; Kurosu, Thomas P.; Guenther, Alex; Wiedinmyer, Christine; Stanton, Jenny C.; Pilling, Michael J.;
2006-01-01
Quantifying isoprene emissions using satellite observations of the formaldehyde (HCHO) columns is subject to errors involving the column retrieval and the assumed relationship between HCHO columns and isoprene emissions, taken here from the GEOS-CHEM chemical transport model. Here we use a 6-year (1996-2001) HCHO column data set from the Global Ozone Monitoring Experiment (GOME) satellite instrument to (1) quantify these errors, (2) evaluate GOME-derived isoprene emissions with in situ flux measurements and a process-based emission inventory (Model of Emissions of Gases and Aerosols from Nature, MEGAN), and (3) investigate the factors driving the seasonal and interannual variability of North American isoprene emissions. The error in the GOME HCHO column retrieval is estimated to be 40%. We use the Master Chemical Mechanism (MCM) to quantify the time-dependent HCHO production from isoprene, alpha- and beta-pinenes, and methylbutenol and show that only emissions of isoprene are detectable by GOME. The time-dependent HCHO yield from isoprene oxidation calculated by MCM is 20-30% larger than in GEOS-CHEM. GOME-derived isoprene fluxes track the observed seasonal variation of in situ measurements at a Michigan forest site with a -30% bias. The seasonal variation of North American isoprene emissions during 2001 inferred from GOME is similar to MEGAN, with GOME emissions typically 25% higher (lower) at the beginning (end) of the growing season. GOME and MEGAN both show a maximum over the southeastern United States, but they differ in the precise location. The observed interannual variability of this maximum is 20-30%, depending on month. The MEGAN isoprene emission dependence on surface air temperature explains 75% of the month-to-month variability in GOME-derived isoprene emissions over the southeastern United States during May-September 1996-2001.
NASA Astrophysics Data System (ADS)
Haddad, Z. S.; Steward, J. L.; Tseng, H.-C.; Vukicevic, T.; Chen, S.-H.; Hristova-Veleva, S.
2015-06-01
Satellite microwave observations of rain, whether from radar or passive radiometers, depend in a very crucial way on the vertical distribution of the condensed water mass and on the types and sizes of the hydrometeors in the volume resolved by the instrument. This crucial dependence is nonlinear, with different types and orders of nonlinearity that are due to differences in the absorption/emission and scattering signatures at the different instrument frequencies. Because it is not monotone as a function of the underlying condensed water mass, the nonlinearity requires great care in its representation in the observation operator, as the inevitable uncertainties in the numerous precipitation variables are not directly convertible into an additive white uncertainty in the forward calculated observations. In particular, when attempting to assimilate such data into a cloud-permitting model, special care needs to be applied to describe and quantify the expected uncertainty in the observation operator in order not to turn the implicit white additive uncertainty on the input values into complicated biases in the calculated radiances. One approach would be to calculate the means and covariances of the nonlinearly calculated radiances given an a priori joint distribution for the input variables. This would be a very resource-intensive proposal if performed in real time. We propose a representation of the observation operator based on performing this moment calculation off line, with a dimensionality reduction step to allow for the effective calculation of the observation operator and the associated covariance in real time during the assimilation. The approach is applicable to other remotely sensed observations that depend nonlinearly on model variables, including wind vector fields. The approach has been successfully applied to the case of tropical cyclones, where the organization of the system helps in identifying the dimensionality-reducing variables.
Hydrologic modeling strategy for the Islamic Republic of Mauritania, Africa
Friedel, Michael J.
2008-01-01
The government of Mauritania is interested in how to maintain hydrologic balance to ensure a long-term stable water supply for minerals-related, domestic, and other purposes. Because of the many complicating and competing natural and anthropogenic factors, hydrologists will perform quantitative analysis with specific objectives and relevant computer models in mind. Whereas various computer models are available for studying water-resource priorities, the success of these models in providing reliable predictions largely depends on the adequacy of the model-calibration process. Predictive analysis helps us evaluate the accuracy and uncertainty associated with simulated dependent variables of our calibrated model. In this report, the hydrologic modeling process is reviewed and a strategy summarized for future Mauritanian hydrologic modeling studies.
Yun, Yeoung-Sang; Park, Jong Moon
2003-08-05
Light-dependent photosynthesis of Chlorella vulgaris was investigated by using a novel photosynthesis measurement system that could cover wide ranges of incident light and cell density and reproduce accurate readings. Various photosynthesis models, which have been reported elsewhere, were classified and/or reformulated based upon the underlying hypotheses of the light dependence of the algal photosynthesis. Four types of models were derived, which contained distinct light-related variables such as the average or local photon flux density (APFD or LPFD) and the average or local photon absorption rate (APAR or LPAR). According to our experimental results, the LPFD and LPAR models could predict the experimental data more accurately although the APFD and APAR models have been widely used for the kinetic study of microalgal photosynthesis. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 83: 303-311, 2003.
Career Commitment: A Reexamination and an Extension.
ERIC Educational Resources Information Center
Goulet, Laurel R.; Singh, Parbudyal
2002-01-01
A model investigating effects on career commitment of job involvement, organizational commitment, and job satisfaction added the variables achievement need, work ethic, and extra-work factors (family involvement, number of dependents). Tested with 228 subjects, the model supported the effects of achievement need and work ethic but not extra-work…
An Epidemiological Approach to Staff Burnout.
ERIC Educational Resources Information Center
Kamis, Edna
This paper describes a conceptual model of staff burnout in terms of independent, intervening and dependent variables. Staff burnout is defined, symptoms are presented, and the epidemiological approach to burnout is discussed. Components of the proposed model, which groups determinants of mental health into three domains, consist of: (1)…
Resources, Instruction, and Research
ERIC Educational Resources Information Center
Cohen, David K.; Raudenbush, Stephen W.; Ball, Deborah Loewenberg
2003-01-01
Many researchers who study the relations between school resources and student achievement have worked from a causal model, which typically is implicit. In this model, some resource or set of resources is the causal variable and student achievement is the outcome. In a few recent, more nuanced versions, resource effects depend on intervening…
Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis
ERIC Educational Resources Information Center
Camilleri, Liberato; Cefai, Carmel
2013-01-01
Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However these models rely heavily on linearity and normality assumptions and they do not accommodate categorical dependent…
International migration beyond gravity: A statistical model for use in population projections
Cohen, Joel E.; Roig, Marta; Reuman, Daniel C.; GoGwilt, Cai
2008-01-01
International migration will play an increasing role in the demographic future of most nations if fertility continues to decline globally. We developed an algorithm to project future numbers of international migrants from any country or region to any other. The proposed generalized linear model (GLM) used geographic and demographic independent variables only (the population and area of origins and destinations of migrants, the distance between origin and destination, the calendar year, and indicator variables to quantify nonrandom characteristics of individual countries). The dependent variable, yearly numbers of migrants, was quantified by 43653 reports from 11 countries of migration from 228 origins to 195 destinations during 1960–2004. The final GLM based on all data was selected by the Bayesian information criterion. The number of migrants per year from origin to destination was proportional to (population of origin)^0.86 × (area of origin)^−0.21 × (population of destination)^0.36 × (distance)^−0.97, multiplied by functions of year and country-specific indicator variables. The number of emigrants from an origin depended on both its population and its population density. For a variable initial year and a fixed terminal year 2004, the parameter estimates appeared stable. Multiple R², the fraction of variation in log numbers of migrants accounted for by the starting model, improved gradually with recentness of the data: R² = 0.57 for data from 1960 to 2004, R² = 0.59 for 1985–2004, R² = 0.61 for 1995–2004, and R² = 0.64 for 2000–2004. The migration estimates generated by the model may be embedded in deterministic or stochastic population projections. PMID:18824693
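Using the exponents reported above, the fitted power law can be evaluated up to its proportionality constant (the year and country-specific indicator terms of the GLM are omitted here):

```python
def relative_migrant_flow(pop_o, area_o, pop_d, dist):
    """Yearly migrant flow from origin to destination, up to a
    proportionality constant, using the exponents from the fitted GLM:
    flow ∝ pop_o^0.86 * area_o^-0.21 * pop_d^0.36 * dist^-0.97
    (year and country-specific terms omitted)."""
    return pop_o**0.86 * area_o**-0.21 * pop_d**0.36 * dist**-0.97
```

For instance, doubling the origin population multiplies the predicted flow by 2^0.86 ≈ 1.8, while doubling the distance roughly halves it (2^−0.97).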
Bayesian effect estimation accounting for adjustment uncertainty.
Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca
2012-09-01
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome, while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC, with ω > 1, estimates the exposure effect with smaller bias than traditional BMA, and with improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
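The role of the dependence parameter ω can be made concrete: converting the prior odds into a conditional prior inclusion probability shows how ω = 1 recovers traditional BMA and large ω forces exposure-model predictors into the outcome model. A trivial sketch:

```python
def prior_inclusion_prob(omega):
    """Prior probability that a predictor enters the outcome model,
    given that it is already in the exposure model, when the prior
    odds of inclusion are omega (the BAC dependence parameter)."""
    return omega / (1.0 + omega)
```

With ω = 1 the conditional inclusion probability is 0.5 (no dependence, i.e. BMA); as ω → ∞ it approaches 1, so any predictor of the exposure is almost surely retained as a potential confounder.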
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
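A common screening heuristic consistent with the goal described above (though not necessarily the authors' exact method) ranks inputs by the squared Spearman rank correlation between the Monte Carlo input draws and the model outcome:

```python
import numpy as np

def rank_influential_inputs(samples, output):
    """Rank model inputs by influence on the outcome, scored by the
    squared Spearman rank correlation between each sampled input and
    the Monte Carlo output (a screening heuristic, not a full
    variance decomposition).

    samples : (n, k) array of Monte Carlo draws for k inputs
    output  : (n,) array of corresponding model outcomes
    """
    def ranks(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))  # rank transform
        return r
    r_out = ranks(output)
    scores = np.array([
        np.corrcoef(ranks(samples[:, j]), r_out)[0, 1] ** 2
        for j in range(samples.shape[1])])
    # indices sorted from most to least influential, plus the scores
    return np.argsort(scores)[::-1], scores
```

Inputs with negligible scores can then be fixed at nominal values, reserving distribution-building effort for the influential set.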
Park, Junghyun; Stump, Brian W; Hayward, Chris; Arrowsmith, Stephen J; Che, Il-Young; Drob, Douglas P
2016-07-01
This work quantifies the physical characteristics of infrasound signal and noise, assesses their temporal variations, and determines the degree to which these effects can be predicted by time-varying atmospheric models to estimate array and network performance. An automated detector that accounts for both correlated and uncorrelated noise is applied to infrasound data from three seismo-acoustic arrays in South Korea (BRDAR, CHNAR, and KSGAR), cooperatively operated by Korea Institute of Geoscience and Mineral Resources (KIGAM) and Southern Methodist University (SMU). Arrays located on an island and near the coast have higher noise power, consistent with both higher wind speeds and seasonally variable ocean wave contributions. On the basis of the adaptive F-detector quantification of time-variable environmental effects, the time-dependent scaling variable is shown to be dependent on both weather conditions and local site effects. Significant seasonal variations in infrasound detections including daily time of occurrence, detection numbers, and phase velocity/azimuth estimates are documented. These time-dependent effects are strongly correlated with atmospheric winds and temperatures and are predicted by available atmospheric specifications. This suggests that commonly available atmospheric specifications can be used to predict both station and network detection performance, and an appropriate forward model improves location capabilities as a function of time.
NASA Astrophysics Data System (ADS)
Ummenhofer, Caroline C.; Kulüke, Marco; Tierney, Jessica E.
2018-04-01
East African hydroclimate exhibits considerable variability across a range of timescales, with implications for its population that depends on the region's two rainy seasons. Recent work demonstrated that current state-of-the-art climate models consistently underestimate the long rains in boreal spring over the Horn of Africa while overestimating the short rains in autumn. This inability to represent the seasonal cycle makes it problematic for climate models to project changes in East African precipitation. Here we consider whether this bias also has implications for understanding interannual and decadal variability in the East African long and short rains. Using a consistent framework with an unforced multi-century global coupled climate model simulation, the role of Indo-Pacific variability for East African rainfall is compared across timescales and related to observations. The dominant driver of East African rainfall anomalies critically depends on the timescale under consideration: Interannual variations in East African hydroclimate coincide with significant sea surface temperature (SST) anomalies across the Indo-Pacific, including those associated with the El Niño-Southern Oscillation (ENSO) in the eastern Pacific, and are linked to changes in the Walker circulation, regional winds and vertical velocities over East Africa. Prolonged drought/pluvial periods in contrast exhibit anomalous SST predominantly in the Indian Ocean and Indo-Pacific warm pool (IPWP) region, while eastern Pacific anomalies are insignificant. We assessed dominant frequencies in Indo-Pacific SST and found the eastern equatorial Pacific dominated by higher-frequency variability in the ENSO band, while the tropical Indian Ocean and IPWP exhibit lower-frequency variability beyond 10 years. This is consistent with the different contribution to regional precipitation anomalies for the eastern Pacific versus Indian Ocean and IPWP on interannual and decadal timescales, respectively. 
In the model, the dominant low-frequency signal seen in the observations in the Indo-Pacific is not well-represented as it instead exhibits overly strong variability on subdecadal timescales. The overly strong ENSO-teleconnection likely contributes to the overestimated role of the short rains in the seasonal cycle in the model compared to observations.
Muller, Benjamin J.; Cade, Brian S.; Schwarzkopf, Lin
2018-01-01
Many different factors influence animal activity. Often, the value of an environmental variable may significantly influence the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful when activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile count models can be used to determine the relative importance of different variables in determining activity across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime in which environmental variables affect activity.
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination
Sinkkonen, Aki
2006-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596
Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory
NASA Astrophysics Data System (ADS)
Rahimi, A.; Zhang, L.
2012-12-01
Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, which includes two steps, (a) using a nonparametric statistics approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) defining the constraints based on the univariate analysis and the dependence structure; (iii) deriving and validating the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff.
The results of the entropy-based joint distribution indicate: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff. The study also shows that the entropy-based joint distribution is an appropriate approach for capturing dependence structure that cannot be captured by conventional bivariate joint distributions. [Figure: joint rainfall-runoff entropy-based PDF, with corresponding marginal PDFs and histogram, for the W12 watershed. Table: K-S test results and RMSE for univariate distributions derived from the maximum-entropy-based joint probability distribution.]
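Step (i) of the procedure above, fitting a parametric marginal and checking it with a K-S goodness-of-fit test, can be sketched as follows. The rainfall sample is synthetic and the gamma parameters are illustrative, not the Riesel watershed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical storm rainfall depths (mm); the paper finds rainfall
# variables follow a gamma distribution
rainfall = stats.gamma.rvs(a=2.0, scale=15.0, size=500, random_state=rng)

# Fit a gamma marginal (location fixed at 0, as for nonnegative depths)
a, loc, scale = stats.gamma.fit(rainfall, floc=0)

# K-S goodness-of-fit test against the fitted marginal
ks = stats.kstest(rainfall, "gamma", args=(a, loc, scale))
print(a, scale, ks.pvalue)
```

A large K-S p-value (no evidence against the fitted marginal) is the univariate check that precedes deriving the joint entropy-based distribution from moment constraints.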
Hayes, Rashelle B.; Geller, Alan C.; Crawford, Sybil L.; Jolicoeur, Denise; Churchill, Linda C.; Okuyemi, Kola; David, Sean P.; Adams, Michael; Waugh, Jonathan; Allen, Sharon S.; Leone, Frank T.; Fauver, Randy; Leung, Katherine; Liu, Qin; Ockene, Judith K.
2015-01-01
Objective Physicians play a critical role in addressing tobacco dependence, yet report limited training. Tobacco dependence treatment curricula for medical students could improve performance in this area. This study identified student and medical school tobacco treatment curricula characteristics associated with intentions and use of the 5As for tobacco treatment among 3rd year U.S. medical students. Methods Third year medical students (N=1065, 49.3% male) from 10 U.S. medical schools completed a survey in 2009-2010 assessing student characteristics, including demographics, tobacco treatment knowledge, and self-efficacy. Tobacco curricula characteristics assessed included amount and type of classroom instruction, frequency of tobacco treatment observation, instruction, and perception of preceptors as role models. Results Greater tobacco treatment knowledge, self-efficacy, and curriculum-specific variables were associated with 5A intentions, while younger age, tobacco treatment self-efficacy, intentions, and each curriculum-specific variable was associated with greater 5A behaviors. When controlling for important student variables, greater frequency of receiving 5A instruction (OR = 1.07; 95%CI 1.01-1.12) and perception of preceptors as excellent role models in tobacco treatment (OR = 1.35; 95%CI 1.04-1.75) were significant curriculum predictors of 5A intentions. Greater 5A instruction (B = .06 (.03); p< .05) and observation of tobacco treatment (B= .35 (.02); p< .001) were significant curriculum predictors of greater 5A behaviors. Conclusions Greater exposure to tobacco treatment teaching during medical school is associated with both greater intentions to use and practice tobacco 5As. Clerkship preceptors, or those physicians who provide training to medical students, may be particularly influential when they personally model and instruct students in tobacco dependence treatment. PMID:25572623
Impact of ionic current variability on human ventricular cellular electrophysiology.
Romero, Lucía; Pueyo, Esther; Fink, Martin; Rodríguez, Blanca
2009-10-01
Abnormalities in repolarization and its rate dependence are known to be related to increased proarrhythmic risk. A number of repolarization-related electrophysiological properties are commonly used as preclinical biomarkers of arrhythmic risk. However, the variability and complexity of repolarization mechanisms make the use of cellular biomarkers to predict arrhythmic risk preclinically challenging. Our goal is to investigate the role of ionic current properties and their variability in modulating cellular biomarkers of arrhythmic risk to improve risk stratification and identification in humans. A systematic investigation into the sensitivity of the main preclinical biomarkers of arrhythmic risk to changes in ionic current conductances and kinetics was performed using computer simulations. Four stimulation protocols were applied to the ten Tusscher and Panfilov human ventricular model to quantify the impact of +/-15 and +/-30% variations in key model parameters on action potential (AP) properties, Ca(2+) and Na(+) dynamics, and their rate dependence. Simulations show that, in humans, AP duration is moderately sensitive to changes in all repolarization current conductances and in L-type Ca(2+) current (I(CaL)) and slow component of the delayed rectifier current (I(Ks)) inactivation kinetics. AP triangulation, however, is strongly dependent only on inward rectifier K(+) current (I(K1)) and delayed rectifier current (I(Kr)) conductances. Furthermore, AP rate dependence (i.e., AP duration rate adaptation and restitution properties) and intracellular Ca(2+) and Na(+) levels are highly sensitive to both I(CaL) and Na(+)/K(+) pump current (I(NaK)) properties. This study provides quantitative insights into the sensitivity of preclinical biomarkers of arrhythmic risk to variations in ionic current properties in humans. The results show the importance of sensitivity analysis as a powerful method for the in-depth validation of mathematical models in cardiac electrophysiology.
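A one-at-a-time parameter scan of the kind used in the study (varying conductances by +/-15% and +/-30% and recording biomarker changes) can be illustrated on a deliberately toy surrogate. The apd function below is a caricature invented for illustration, not the ten Tusscher-Panfilov model; only the scanning pattern reflects the study's method.

```python
# One-at-a-time sensitivity scan: perturb each parameter of a toy
# "AP duration" surrogate by +/-15% and +/-30% and report the change
def apd(g_k, g_ca):
    # Toy surrogate (illustrative only): duration shortens with K+
    # conductance and lengthens with Ca2+ conductance
    return 300.0 * (1 + 0.5 * g_ca) / (1 + g_k)

base = {"g_k": 1.0, "g_ca": 1.0}
ref = apd(**base)
for name in base:
    for frac in (-0.30, -0.15, 0.15, 0.30):
        p = dict(base)
        p[name] = base[name] * (1 + frac)
        change = 100 * (apd(**p) - ref) / ref
        print(f"{name} {frac:+.0%}: APD change {change:+.1f}%")
```

Tabulating the percentage biomarker change per parameter, as in the paper, makes it easy to see which currents a biomarker is sensitive to and which leave it nearly unchanged.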
Pricing Models Using Real Data
ERIC Educational Resources Information Center
Obremski, Tom
2008-01-01
A practical hands-on classroom exercise is described and illustrated using the price of an item as dependent variable throughout. The exercise is well-tested and affords the instructor a variety of approaches and levels.
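A minimal version of such an exercise, with price as the dependent variable throughout, might look like the following ordinary least-squares fit. The used-car data are invented for illustration, not the article's dataset.

```python
import numpy as np

# Hypothetical classroom data: used-car price (dependent variable)
# regressed on age and mileage
age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)       # years
miles = np.array([12, 25, 30, 48, 55, 70, 80, 95], dtype=float)  # thousands
price = np.array([24, 21, 19, 16, 14, 11, 9, 6], dtype=float)    # $1000s

# Ordinary least squares via the normal equations (lstsq)
X = np.column_stack([np.ones_like(age), age, miles])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((price - pred) ** 2) / np.sum((price - price.mean()) ** 2)
print(beta, r2)
```

The instructor can then vary the exercise's level by adding predictors, discussing the collinearity of age and mileage, or comparing models by fit.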
NASA Technical Reports Server (NTRS)
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
Sohl, Terry L.
2014-01-01
Species distribution models often use climate data to assess contemporary and/or future ranges for animal or plant species. Land use and land cover (LULC) data are important predictor variables for determining species range, yet are rarely used when modeling future distributions. In this study, maximum entropy modeling was used to construct species distribution maps for 50 North American bird species to determine relative contributions of climate and LULC for contemporary (2001) and future (2075) time periods. Species presence data were used as a dependent variable, while climate, LULC, and topographic data were used as predictor variables. Results varied by species, but in general, measures of model fit for 2001 indicated significantly poorer fit when either climate or LULC data were excluded from model simulations. Climate covariates provided a higher contribution to 2001 model results than did LULC variables, although both categories of variables strongly contributed. The area deemed to be “suitable” for 2001 species presence was strongly affected by the choice of model covariates, with significantly larger ranges predicted when LULC was excluded as a covariate. Changes in species ranges for 2075 indicate much larger overall range changes due to projected climate change than due to projected LULC change. However, the choice of study area impacted results for both current and projected model applications, with truncation of actual species ranges resulting in lower model fit scores and increased difficulty in interpreting covariate impacts on species range. Results indicate species-specific response to climate and LULC variables; however, both climate and LULC variables clearly are important for modeling both contemporary and potential future species ranges. PMID:25372571
The Need for Speed in Rodent Locomotion Analyses
Batka, Richard J.; Brown, Todd J.; Mcmillan, Kathryn P.; Meadows, Rena M.; Jones, Kathryn J.; Haulcomb, Melissa M.
2016-01-01
Locomotion analysis is now widely used across many animal species to understand the motor defects in disease, functional recovery following neural injury, and the effectiveness of various treatments. More recently, rodent locomotion analysis has become an increasingly popular method in a diverse range of research. Speed is an inseparable aspect of locomotion that is still not fully understood, and its effects are often not properly incorporated while analyzing data. In this hybrid manuscript, we accomplish three things: (1) review the interaction between speed and locomotion variables in rodent studies, (2) comprehensively analyze the relationship between speed and 162 locomotion variables in a group of 16 wild-type mice using the CatWalk gait analysis system, and (3) develop and test a statistical method in which locomotion variables are analyzed and reported in the context of speed. Notable results include the following: (1) over 90% of variables, reported by CatWalk, were dependent on speed with an average R2 value of 0.624, (2) most variables were related to speed in a nonlinear manner, (3) current methods of controlling for speed are insufficient, and (4) the linear mixed model is an appropriate and effective statistical method for locomotion analyses that is inclusive of speed-dependent relationships. Given the pervasive dependency of locomotion variables on speed, we maintain that valid conclusions from locomotion analyses cannot be made unless they are analyzed and reported within the context of speed. PMID:24890845
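The linear mixed model advocated above, a locomotion variable modeled against speed with animal as a grouping factor, can be sketched with statsmodels. The data are simulated rather than CatWalk output, and the logarithmic speed dependence is an assumed nonlinear form for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical CatWalk-style data: stride length depends nonlinearly
# on speed, with a random intercept per animal (16 mice, 10 runs each)
animals = np.repeat(np.arange(16), 10)
speed = rng.uniform(5, 40, animals.size)  # cm/s
animal_effect = rng.normal(0, 0.5, 16)[animals]
stride = (2.0 + 1.5 * np.log(speed) + animal_effect
          + rng.normal(0, 0.2, animals.size))

df = pd.DataFrame({"animal": animals, "speed": speed, "stride": stride})
df["log_speed"] = np.log(df["speed"])

# Linear mixed model: speed as a fixed effect, animal as grouping factor
model = smf.mixedlm("stride ~ log_speed", df, groups=df["animal"]).fit()
print(model.params["log_speed"])
```

Reporting the speed coefficient alongside any treatment effect, rather than ignoring speed, is the paper's central recommendation for valid locomotion comparisons.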
Sun, Jennifer K.; Qin, Haijing; Aiello, Lloyd Paul; Melia, Michele; Beck, Roy W.; Andreoli, Christopher M.; Edwards, Paul A.; Glassman, Adam R.; Pavlica, Michael R.
2012-01-01
Objective To compare visual acuity (VA) scores after autorefraction versus research protocol manual refraction in eyes of patients with diabetes and a wide range of VA. Methods Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) VA Test© letter score (EVA) was measured after autorefraction (AR-EVA) and after Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol manual refraction (MR-EVA). Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA utilizing an identical manual refraction (MR-EVAsupl) was performed to determine test-retest variability. Results In 878 eyes of 456 study participants, median MR-EVA was 74 (Snellen equivalent approximately 20/32). Spherical equivalent was often similar for manual and autorefraction (median difference: 0.00, 5th and 95th percentiles −1.75 to +1.13 Diopters). However, on average, MR-EVA results were slightly better than AR-EVA results across the entire VA range. Furthermore, variability between AR-EVA and MR-EVA was substantially greater than the test-retest variability of MR-EVA (P<0.001). Variability of differences was highly dependent on autorefractor model. Conclusions Across a wide range of VA at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models comparing most favorably to manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA. PMID:22159173
NASA Astrophysics Data System (ADS)
Xiong, Ying; Wiita, Paul J.; Bao, Gang
2000-12-01
The possibility that some of the observed X-ray and optical variability in active galactic nuclei and galactic black hole candidates is produced in accretion disks through the development of a self-organized critical state is reconsidered. New simulations, including more complete calculations of relativistic effects, do show that this model can produce light curves and power spectra for the variability which agree with the range observed in optical and X-ray studies of AGN and X-ray binaries. However, the universality of complete self-organized criticality has not quite been achieved. This is mainly because the character of the variations depends quite substantially on the extent of the unstable disk region. If it extends close to the innermost stable orbit, a physical scale is introduced and the scale-free character of self-organized criticality is vitiated. A significant dependence of the power spectral density slope on the type of diffusion within the disk and a weaker dependence on the amount of differential rotation are noted. When general-relativistic effects are incorporated in the models, additional substantial differences are produced if the disk is viewed from directions far from the accretion disk axis.
Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.
MacMartin, Douglas G; Tziperman, Eli
2014-09-08
Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
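A transfer function between two causally related time series can be estimated from their cross- and auto-spectra. The sketch below uses a synthetic first-order-lag system rather than ENSO data; the time constant and noise level are arbitrary choices for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Hypothetical input-output pair: y is x passed through a discrete
# first-order lag (dy/dt = (x - y)/tau), plus a little noise
fs = 1.0
n = 4096
x = rng.standard_normal(n)
tau = 10.0
alpha = 1.0 / tau
y = signal.lfilter([alpha], [1, alpha - 1], x) + 0.01 * rng.standard_normal(n)

# Transfer function estimate H(f) = Pxy(f) / Pxx(f) from the
# cross-spectrum and input auto-spectrum, as used to diagnose
# frequency-dependent input-output relationships
f, Pxy = signal.csd(x, y, fs=fs, nperseg=512)
_, Pxx = signal.welch(x, fs=fs, nperseg=512)
H = Pxy / Pxx
print(abs(H[1]), abs(H[-1]))  # low- vs. high-frequency gain
```

The gain rolling off with frequency (near unity at low frequency, small at high frequency) is exactly the kind of signature one would fit to a differential-equation description of a subsystem process.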
Modelling Solar and Stellar Brightness Variabilities
NASA Astrophysics Data System (ADS)
Yeo, K. L.; Shapiro, A. I.; Krivova, N. A.; Solanki, S. K.
2016-04-01
Total and spectral solar irradiance, TSI and SSI, have been measured from space since 1978. This is accompanied by the development of models aimed at replicating the observed variability by relating it to solar surface magnetism. Despite significant progress, there remains persisting controversy over the secular change and the wavelength-dependence of the variation with impact on our understanding of the Sun's influence on the Earth's climate. We highlight the recent progress in TSI and SSI modelling with SATIRE. Brightness variations have also been observed for Sun-like stars. Their analysis can profit from knowledge of the solar case and provide additional constraints for solar modelling. We discuss the recent effort to extend SATIRE to Sun-like stars.
Partial Least Squares Regression Models for the Analysis of Kinase Signaling.
Bourgeois, Danielle L; Kreeger, Pamela K
2017-01-01
Partial least squares regression (PLSR) is a data-driven modeling approach that can be used to analyze multivariate relationships between kinase networks and cellular decisions or patient outcomes. In PLSR, a linear model relating an X matrix of independent (predictor) variables to a Y matrix of dependent (response) variables is generated by extracting the factors with the strongest covariation. While the identified relationship is correlative, PLSR models can be used to generate quantitative predictions for new conditions or perturbations to the network, allowing for mechanisms to be identified. This chapter will provide a brief explanation of PLSR and provide an instructive example to demonstrate the use of PLSR to analyze kinase signaling.
A crystallographic model for the tensile and fatigue response for Rene N4 at 982 C
NASA Technical Reports Server (NTRS)
Sheh, M. Y.; Stouffer, D. C.
1990-01-01
An anisotropic constitutive model based on crystallographic slip theory was formulated for nickel-base single-crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the existence of back stress in single crystals. The results showed that the back stress effect of reverse inelastic flow on the unloading stress is orientation-dependent, and a back stress state variable in the inelastic flow equation is necessary for predicting inelastic behavior. Model correlations and predictions of experimental data are presented for the single crystal superalloy Rene N4 at 982 C.
Vecchia, Aldo V.; Crawford, Charles G.
2006-01-01
A time-series model was developed to simulate daily pesticide concentrations for streams in the coterminous United States. The model was based on readily available information on pesticide use, climatic variability, and watershed characteristics and was used to simulate concentrations for four herbicides [atrazine, ethyldipropylthiocarbamate (EPTC), metolachlor, and trifluralin] and three insecticides (carbofuran, ethoprop, and fonofos) that represent a range of physical and chemical properties, application methods, national application amounts, and areas of use in the United States. The time-series model approximates the probability distributions, seasonal variability, and serial correlation characteristics in daily pesticide concentration data from a national network of monitoring stations. The probability distribution of concentrations for a particular pesticide and station was estimated using the Watershed Regressions for Pesticides (WARP) model. The WARP model, which was developed in previous studies to estimate the probability distribution, was based on selected nationally available watershed-characteristics data, such as pesticide use and soil characteristics. Normality transformations were used to ensure that the annual percentiles for the simulated concentrations agree closely with the percentiles estimated from the WARP model. Seasonal variability in the transformed concentrations was maintained by relating the transformed concentration to precipitation and temperature data from the United States Historical Climatology Network. The monthly precipitation and temperature values were estimated for the centroids of each watershed. Highly significant relations existed between the transformed concentrations, concurrent monthly precipitation, and concurrent and lagged monthly temperature. 
The relations were consistent among the different pesticides and indicated the transformed concentrations generally increased as precipitation increased, but the rate of increase depended on a temperature-dependent growing-season effect. Residual variability of the transformed concentrations, after removal of the effects of precipitation and temperature, was partitioned into a signal (systematic variability that is related from one day to the next) and noise (random variability that is not related from one day to the next). Variograms were used to evaluate measurement error, seasonal variability, and serial correlation of the historical data. The variogram analysis indicated substantial noise resulted, at least in part, from measurement errors (the differences between the actual concentrations and the laboratory concentrations). The variogram analysis also indicated the presence of a strongly correlated signal, with an exponentially decaying serial correlation function and a correlation time scale (the time required for the correlation to decay to e^-1 = 0.37) that ranged from about 18 to 66 days, depending on the pesticide type. Simulated daily pesticide concentrations from the time-series model for stations located in the northeastern quadrant of the United States, where most of the monitoring stations are located, generally were in good agreement with the data. The model neither consistently overestimated nor underestimated concentrations for streams located in this quadrant, and the magnitude and timing of high or low concentrations generally coincided reasonably well with the data. However, further data collection and model development may be necessary to determine whether the model should be used for areas for which few historical data are available.
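The exponentially decaying serial correlation described above can be illustrated with a toy daily AR(1) signal whose coefficient is chosen so that the autocorrelation decays to e^-1 at a chosen timescale; the 30-day value below is illustrative, within the paper's 18-66 day range.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy daily "signal" with exponentially decaying serial correlation:
# AR(1) with phi = exp(-1/T) gives corr(lag k) = exp(-k/T), which
# decays to e^-1 (about 0.37) at lag k = T
T = 30.0
phi = np.exp(-1.0 / T)
n = 20000
z = np.empty(n)
z[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1 - phi**2)  # unit marginal variance
for k in range(1, n):
    z[k] = phi * z[k - 1] + eps[k]

# Empirical lag-30 autocorrelation should be near e^-1
lag = 30
r = np.corrcoef(z[:-lag], z[lag:])[0, 1]
print(r)
```

Superimposing independent "measurement noise" on z would reproduce the nugget-plus-exponential variogram shape the paper describes.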
Constrained Stochastic Extended Redundancy Analysis.
DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco
2015-06-01
We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA).
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
A land surface hydrology parameterization for use in atmospheric GCMs is presented. The parameterization incorporates subgrid-scale variability in topography, soils, soil moisture and precipitation. The framework of the model is the statistical distribution of a topography-soils index, which controls the local water balance fluxes and is taken to represent the large land area. Spatially variable water balance fluxes are integrated with respect to the large area's topography-soils index distribution, with interval responses weighted by the probability of occurrence of the interval. Grid-square-averaged land surface fluxes result. The model functions independently as a macroscale water balance model. Runoff ratio and evapotranspiration efficiency parameterizations are derived and are shown to depend on the spatial variability of the above-mentioned properties and processes, as well as the dynamics of land surface-atmosphere interactions.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time.
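The Euclidean-distance step underlying distance-based models can be sketched as follows; the village-level covariates are invented stand-ins for the elevation and vegetation-index variables mentioned above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(7)

# Hypothetical village-level covariates: elevation (m) and a
# vegetation index; continuous predictors are first standardized so
# the Euclidean distance weights them comparably
elev = rng.uniform(0, 1500, 6)
ndvi = rng.uniform(0.1, 0.9, 6)
Z = np.column_stack([elev, ndvi])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

# Euclidean distance matrix among observations; distance-based models
# build regression coordinates from this matrix rather than from the
# raw covariates
D = squareform(pdist(Z, metric="euclidean"))
print(D.shape, D[0, 0])
```

For mixtures of continuous and categorical explanatory variables, the Euclidean metric would be replaced by a distance appropriate to mixed data; the matrix-based workflow is otherwise the same.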
NASA Astrophysics Data System (ADS)
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values were used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 were analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with one another, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002, ozone concentrations were weakly but predominantly influenced by one of the meteorological variables. For the year 2000, however, the model indicated that ozone concentrations were not predominantly influenced by the meteorological variables, pointing to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
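Varimax-rotated principal component loadings as a variable selection device can be sketched on synthetic data. The small varimax routine below is a standard textbook implementation, and the factor structure (several correlated temperature variables plus a second humidity/radiation factor) is invented for illustration, not the study's Isfahan data.

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(L, n_iter=100, tol=1e-6):
    # Standard varimax rotation of a loading matrix L (p x k)
    p, k = L.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / p))
        R = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return L @ R

rng = np.random.default_rng(8)

# Hypothetical daily data: three temperature variables share one
# factor; humidity (negatively) and sun radiation share another
n = 365
t_factor = rng.standard_normal(n)
h_factor = rng.standard_normal(n)
X = np.column_stack([
    t_factor + 0.1 * rng.standard_normal(n),   # dry temperature
    t_factor + 0.1 * rng.standard_normal(n),   # max temperature
    t_factor + 0.1 * rng.standard_normal(n),   # min temperature
    -h_factor + 0.1 * rng.standard_normal(n),  # relative humidity
    h_factor + 0.1 * rng.standard_normal(n),   # sun radiation time
])
X = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA(n_components=2).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)

# Pick the highest-loading variable on each rotated component as the
# predictor subset for the regression model
subset = np.abs(rotated).argmax(axis=0)
print(subset)
```

Selecting one high-loading variable per rotated component yields a small, nearly uncorrelated predictor subset for the subsequent multiple regression.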
Environmental Management Model for Road Maintenance Operation Involving Community Participation
NASA Astrophysics Data System (ADS)
Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.
2017-07-01
Public expectations in Central Java for the fulfillment of demand, especially for road infrastructure, are very high, as reflected in the number of complaints and expectations the community submits via Twitter, Short Message Service (SMS), e-mail and public reports in various media. The Highways Department of Central Java Province therefore requires a model of environmental management for routine road maintenance that involves the community, so that roads remain in representative condition and serve road users safely and comfortably. This study used a survey method with SEM and SWOT analysis, with the latent independent variables (X): community participation in road regulation, development, construction and supervision (PSM); community behavior in road utilization (PMJ); provincial road service (PJP); safety on provincial roads (KJP); and the integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of provincial roads integrated with an environmental management system and involving community participation (MML). The results showed that routine road maintenance in Central Java Province has not yet implemented environmental management involving the community. An environmental management model was therefore developed, with the results: H1: community participation (PSM) has a positive influence on the environmental management model (MML); H2: community behavior in road utilization (PMJ) has a positive effect on MML; H3: provincial road service (PJP) has a positive effect on MML; H4: safety on provincial roads (KJP) has a positive effect on MML; H5: the integrated management system (SMT) has a positive influence on MML.
From the analysis, a model was formulated describing the influence of the independent variables PSM, PMJ, PJP, KJP and SMT on the dependent variable MML; pairing the reported coefficients with the variables in hypothesis order gives: MML = 0.13 PSM + 0.07 PMJ + 0.09 PJP + 0.19 KJP + 0.48 SMT + e
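Assuming the coefficients pair with the variables in hypothesis order (the printed equation is garbled, so this pairing is an inference from the H1-H5 listing), the fitted structural equation can be evaluated directly:

```python
def predict_mml(psm, pmj, pjp, kjp, smt):
    """Evaluate the fitted structural equation. The coefficient-to-variable
    pairing follows hypothesis order (PSM, PMJ, PJP, KJP, SMT); this is an
    assumption, since the source rendering of the equation is garbled."""
    return 0.13 * psm + 0.07 * pmj + 0.09 * pjp + 0.19 * kjp + 0.48 * smt

# SMT carries by far the largest weight (0.48), so under this reading the
# integrated management system dominates the environmental management model.
```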
Hydrological Dynamics of Central America: Time-of-Emergence of the Global Warming Signal
NASA Astrophysics Data System (ADS)
Imbach, P. A.; Georgiou, S.; Calderer, L.; Coto, A.; Nakaegawa, T.; Chou, S. C.; Lyra, A. A.; Hidalgo, H. G.; Ciais, P.
2016-12-01
Central America is among the world's most vulnerable regions to climate variability and change. Country economies are highly dependent on the agricultural sector, and over 40 million people's rural livelihoods directly depend on the use of natural resources. Future climate scenarios show a drier outlook (higher temperatures and lower precipitation) over a region where rural livelihoods are already compromised by water availability and climate variability. Previous efforts to validate modelling of the regional hydrology have been based on high-resolution (1 km2) equilibrium models (Imbach et al., 2010) or on dynamic models (Variable Infiltration Capacity) with coarse climate forcing (0.5°) (Hidalgo et al., 2013; Maurer et al., 2009). We present here: (i) a validation of the hydrological outputs from high-resolution simulations (10 km2) of a dynamic vegetation model (ORCHIDEE), using 7 different sets of model input forcing data, against monthly runoff observations from 182 catchments across Central America; (ii) the first assessments of the region's hydrological variability using the historical simulations; and (iii) an estimation of the time of emergence of the climate change signal (under the SRES emission scenarios) on the water balance. We found model performance to be comparable with that from studies in other world regions (Yang et al. 2016) when forced with high resolution precipitation data (monthly values at 5 km2, Funk et al. (2015)) and the Climate Research Unit (CRU 3.2, Harris et al. (2014)) dataset of meteorological parameters. Validation results showed a Pearson correlation coefficient ≈ 0.6, a general underestimation of runoff of ≈ 60%, and variability close to observed values (ratio of standard deviations ≈ 0.7). Maps of historical runoff are presented to show areas where high runoff variability follows high mean annual runoff, with opposite trends over the Caribbean.
Future scenarios show large areas where maximum water availability will permanently fall more than one standard deviation below the historical values by mid-century. Additionally, our results highlight the limited time horizon left to develop adaptation strategies to cope with future reductions in water availability.
McBride, Orla; Adamson, Gary; Bunting, Brendan P; McCann, Siobhan
2009-01-01
Research has demonstrated that diagnostic orphans (i.e. individuals who experience only one to two criteria of DSM-IV alcohol dependence) can encounter significant health problems. Using the SF-12v2, this study examined the general health functioning of alcohol users, and in particular, diagnostic orphans. Current drinkers (n = 26,913) in the National Epidemiologic Survey on Alcohol and Related Conditions were categorized into five diagnosis groups: no alcohol use disorder (no-AUD), one-criterion orphans, two-criterion orphans, alcohol abuse and alcohol dependence. Latent variable modelling was used to assess the associations between the physical and mental health factors of the SF-12v2 and the diagnosis groups and a variety of background variables. In terms of mental health, one-criterion orphans had significantly better health than two-criterion orphans and the dependence group, but poorer health than the no-AUD group. No significant differences were evident between the one-criterion orphan group and the alcohol abuse group. One-criterion orphans had significantly poorer physical health when compared to the no-AUD group. One- and two-criterion orphans did not differ in relation to physical health. Consistent with previous research, diagnostic orphans in the current study appear to have experienced clinically relevant symptoms of alcohol dependence. The current findings suggest that diagnostic orphans may form part of a severity spectrum of alcohol use disorders.
NASA Astrophysics Data System (ADS)
Van Uytven, E.; Willems, P.
2018-03-01
Climate change impact assessment on meteorological variables involves large uncertainties as a result of incomplete knowledge of future greenhouse gas concentrations and of climate model physics, in addition to the inherent internal variability of the climate system. Given that the alteration in greenhouse gas concentrations is the driver of the change, one expects the impacts to be highly dependent on the considered greenhouse gas scenario (GHS). In this study, we denote this behavior as GHS sensitivity. Due to the climate model related uncertainties, this sensitivity is, at local scale, not always as strong as expected. This paper aims to study the GHS sensitivity and its contributing role to climate scenarios for a case study in Belgium. An ensemble of 160 CMIP5 climate model runs is considered, and climate change signals are studied for precipitation accumulation, daily precipitation intensities and wet day frequencies. This was done for the different seasons of the year and the scenario periods 2011-2040, 2031-2060, 2051-2081 and 2071-2100. By means of variance decomposition, the total variance in the climate change signals was separated into the contributions of the differences in GHSs and of the other model-related uncertainty sources. These contributions were found to depend on the variable and season. Following the time of emergence concept, the GHS uncertainty contribution is found to depend on the time horizon and increases over time. For the most distant time horizon (2071-2100), the climate model uncertainty accounts for the largest uncertainty contribution. The GHS differences explain up to 18% of the total variance in the climate change signals. The results further point to the importance of the climate model ensemble design, specifically the ensemble size and the combination of climate models, upon which climate scenarios are based. The numerical noise, introduced at scales smaller than the skillful scale, e.g. at local scale, was not considered in this study.
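The variance decomposition described above, separating the spread of climate change signals into a between-GHS part and a residual model-related part, can be sketched as a one-way decomposition; the study's actual design is more elaborate, so this only illustrates the idea:

```python
import numpy as np

def ghs_variance_share(signals, ghs_labels):
    """Fraction of the total variance in climate-change signals explained
    by differences between greenhouse-gas scenarios: between-group sum of
    squares over total sum of squares. A sketch of the decomposition idea
    only; the study's exact ANOVA design is not reproduced here."""
    signals = np.asarray(signals, dtype=float)
    ghs_labels = np.asarray(ghs_labels)
    grand = signals.mean()
    between = 0.0
    for g in np.unique(ghs_labels):
        grp = signals[ghs_labels == g]
        between += len(grp) * (grp.mean() - grand) ** 2   # between-GHS SS
    total = np.sum((signals - grand) ** 2)                # total SS
    return between / total
```

A share near 1 means the GHS choice dominates the signal (strong GHS sensitivity); a share near 0 means the spread comes from the other model-related sources, as the abstract reports for the nearer time horizons.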
Software development: Stratosphere modeling
NASA Technical Reports Server (NTRS)
Chen, H. C.
1977-01-01
A more comprehensive model of stratospheric chemistry and transport was developed to aid predictions of changes in stratospheric ozone content as a consequence of natural and anthropogenic processes. This new and more advanced stratospheric model is time dependent, and the dependent variables are zonal means of the relevant meteorological quantities, which are functions of latitude and height. The model was constructed using the best available mathematical approach and implemented in American National Standard FORTRAN on a large IBM S/360. It will serve both as a scientific tool and as an assessment device for evaluating other models. The interactions of dynamics, photochemistry and radiation in the stratosphere are governed by a set of fundamental dynamical equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unger, N.; Harper, K.; Zheng, Y.
2013-10-22
We describe the implementation of a biochemical model of isoprene emission that depends on the electron requirement for isoprene synthesis into the Farquhar/Ball-Berry leaf model of photosynthesis and stomatal conductance that is embedded within a global chemistry-climate simulation framework. The isoprene production is calculated as a function of electron transport-limited photosynthesis, intercellular carbon dioxide concentration, and canopy temperature. The vegetation biophysics module computes the photosynthetic uptake of carbon dioxide coupled with the transpiration of water vapor and the isoprene emission rate at the 30 min physical integration time step of the global chemistry-climate model. In the model, the rate of carbon assimilation provides the dominant control on isoprene emission variability over canopy temperature. A control simulation representative of the present day climatic state that uses plant functional types (PFTs), prescribed phenology and generic PFT-specific isoprene emission potentials (fraction of electrons available for isoprene synthesis) reproduces 50% of the variability across different ecosystems and seasons in a global database of measured campaign-average fluxes. Compared to time-varying isoprene flux measurements at select sites, the model authentically captures the observed variability in the 30 min average diurnal cycle (R2 = 64-96%) and simulates the flux magnitude to within a factor of 2. The control run yields a global isoprene source strength of 451 TgC yr-1 that increases by 30% in the artificial absence of plant water stress and by 55% for potential natural vegetation.
NASA Technical Reports Server (NTRS)
Unger, N.; Harper, K.; Zheng, Y.; Kiang, N. Y.; Aleinov, I.; Arneth, A.; Schurgers, G.; Amelynck, C.; Goldstein, A.; Guenther, A.;
2013-01-01
We describe the implementation of a biochemical model of isoprene emission that depends on the electron requirement for isoprene synthesis into the Farquhar/Ball-Berry leaf model of photosynthesis and stomatal conductance that is embedded within a global chemistry-climate simulation framework. The isoprene production is calculated as a function of electron transport-limited photosynthesis, intercellular and atmospheric carbon dioxide concentration, and canopy temperature. The vegetation biophysics module computes the photosynthetic uptake of carbon dioxide coupled with the transpiration of water vapor and the isoprene emission rate at the 30 min physical integration time step of the global chemistry-climate model. In the model, the rate of carbon assimilation provides the dominant control on isoprene emission variability over canopy temperature. A control simulation representative of the present-day climatic state that uses 8 plant functional types (PFTs), prescribed phenology and generic PFT-specific isoprene emission potentials (fraction of electrons available for isoprene synthesis) reproduces 50% of the variability across different ecosystems and seasons in a global database of 28 measured campaign-average fluxes. Compared to time-varying isoprene flux measurements at 9 select sites, the model authentically captures the observed variability in the 30 min average diurnal cycle (R2 = 64-96%) and simulates the flux magnitude to within a factor of 2. The control run yields a global isoprene source strength of 451 TgC yr-1 that increases by 30% in the artificial absence of plant water stress and by 55% for potential natural vegetation.
NASA Astrophysics Data System (ADS)
Unger, N.; Harper, K.; Zheng, Y.; Kiang, N. Y.; Aleinov, I.; Arneth, A.; Schurgers, G.; Amelynck, C.; Goldstein, A.; Guenther, A.; Heinesch, B.; Hewitt, C. N.; Karl, T.; Laffineur, Q.; Langford, B.; McKinney, K. A.; Misztal, P.; Potosnak, M.; Rinne, J.; Pressley, S.; Schoon, N.; Serça, D.
2013-10-01
We describe the implementation of a biochemical model of isoprene emission that depends on the electron requirement for isoprene synthesis into the Farquhar-Ball-Berry leaf model of photosynthesis and stomatal conductance that is embedded within a global chemistry-climate simulation framework. The isoprene production is calculated as a function of electron transport-limited photosynthesis, intercellular and atmospheric carbon dioxide concentration, and canopy temperature. The vegetation biophysics module computes the photosynthetic uptake of carbon dioxide coupled with the transpiration of water vapor and the isoprene emission rate at the 30 min physical integration time step of the global chemistry-climate model. In the model, the rate of carbon assimilation provides the dominant control on isoprene emission variability over canopy temperature. A control simulation representative of the present-day climatic state that uses 8 plant functional types (PFTs), prescribed phenology and generic PFT-specific isoprene emission potentials (fraction of electrons available for isoprene synthesis) reproduces 50% of the variability across different ecosystems and seasons in a global database of 28 measured campaign-average fluxes. Compared to time-varying isoprene flux measurements at 9 select sites, the model authentically captures the observed variability in the 30 min average diurnal cycle (R2 = 64-96%) and simulates the flux magnitude to within a factor of 2. The control run yields a global isoprene source strength of 451 TgC yr-1 that increases by 30% in the artificial absence of plant water stress and by 55% for potential natural vegetation.
NASA Astrophysics Data System (ADS)
Unger, N.; Harper, K.; Zheng, Y.; Kiang, N. Y.; Aleinov, I.; Arneth, A.; Schurgers, G.; Amelynck, C.; Goldstein, A.; Guenther, A.; Heinesch, B.; Hewitt, C. N.; Karl, T.; Laffineur, Q.; Langford, B.; McKinney, K. A.; Misztal, P.; Potosnak, M.; Rinne, J.; Pressley, S.; Schoon, N.; Serça, D.
2013-07-01
We describe the implementation of a biochemical model of isoprene emission that depends on the electron requirement for isoprene synthesis into the Farquhar/Ball-Berry leaf model of photosynthesis and stomatal conductance that is embedded within a global chemistry-climate simulation framework. The isoprene production is calculated as a function of electron transport-limited photosynthesis, intercellular carbon dioxide concentration, and canopy temperature. The vegetation biophysics module computes the photosynthetic uptake of carbon dioxide coupled with the transpiration of water vapor and the isoprene emission rate at the 30 min physical integration time step of the global chemistry-climate model. In the model, the rate of carbon assimilation provides the dominant control on isoprene emission variability over canopy temperature. A control simulation representative of the present day climatic state that uses 8 plant functional types (PFTs), prescribed phenology and generic PFT-specific isoprene emission potentials (fraction of electrons available for isoprene synthesis) reproduces 50% of the variability across different ecosystems and seasons in a global database of 28 measured campaign-average fluxes. Compared to time-varying isoprene flux measurements at 9 select sites, the model authentically captures the observed variability in the 30 min average diurnal cycle (R2= 64-96%) and simulates the flux magnitude to within a factor of 2. The control run yields a global isoprene source strength of 451 Tg C yr-1 that increases by 30% in the artificial absence of plant water stress and by 55% for potential natural vegetation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogt, R.; Randrup, J.
The event-by-event fission model FREYA has been improved, in particular to address deficiencies in the calculation of photon observables. In this paper, we discuss the improvements that have been made and introduce several new variables, some detector-dependent, that affect the photon observables. We show the sensitivity of FREYA to these variables. Finally, we compare the results to the available photon data from spontaneous and thermal neutron-induced fission.
ERIC Educational Resources Information Center
Bartik, Timothy J.
The labor market spillover effects of welfare reform were estimated by using models that pool time-series and cross-section data from the Current Population Survey on the state-year cell means of wages, employment, and other labor market outcomes for various demographic groups. The labor market outcomes in question are dependent variables that are…
Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne
2016-11-03
Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. 
We present results from the Framingham Heart Study in which lipid measurements and myocardial infarction data events were collected over a period of 26 years.
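Cross-sectional pooling and pooled logistic regression both rearrange repeated measurements into person-period records; the abstract's key point is that the times at which the time-dependent covariates were measured must enter the model explicitly. A hypothetical sketch of that data expansion for a single subject (the field names and interval logic are illustrative assumptions, not the authors' code):

```python
def person_period_rows(subject_id, measure_times, covariate_values,
                       event_time, event):
    """Expand one subject's repeated covariate measurements into
    person-period records for pooled logistic regression or
    cross-sectional pooling. Each record carries the covariate measured
    at the interval's start, the measurement time itself (the adjustment
    the abstract emphasizes), and whether the event occurred before the
    next measurement. Hypothetical layout for illustration only."""
    rows = []
    for k, (t, x) in enumerate(zip(measure_times, covariate_values)):
        if t >= event_time:
            break  # no records after the event/censoring time
        t_next = (measure_times[k + 1]
                  if k + 1 < len(measure_times) else float("inf"))
        interval_event = int(event == 1 and event_time <= t_next)
        rows.append({"id": subject_id, "time": t, "x": x,
                     "event": interval_event})
    return rows
```

Fitting a logistic model to these rows with "time" included as a covariate corresponds to the time-adjusted CSP/PLR variants; dropping "time" gives the unadjusted variants the simulations found to be biased.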
Modelling of subsonic COIL with an arbitrary magnetic modulation
NASA Astrophysics Data System (ADS)
Beránek, Jaroslav; Rohlena, Karel
2007-05-01
The concept of a 1D subsonic COIL model with a mixing length was generalized to include the influence of a variable magnetic field on the stimulated emission cross-section. Equations describing the chemical kinetics were solved taking into account the gas temperature together with a simplified mixing model of oxygen and iodine molecules. With an external time-variable magnetic field, the model is no longer stationary. A transformation to the system moving with the mixture reduces the partial differential equations to ordinary equations in time, with initial conditions given by the stationary flow at the moment the magnetic field is switched on, combined with the boundary conditions at the injector. The advantage of this procedure is the possibility of considering an arbitrary temporal dependence of the imposed magnetic field and of calculating directly the response of the laser output. The method was applied to model the experimental data measured with the subsonic version of the COIL device in the Institute of Physics, Prague, where the applied magnetic field had a saw-tooth dependence. We found that various quantities characterizing the laser performance, such as the power density distribution over the active zone cross-section, may have a fairly complicated structure produced by the combined effects of the delayed reaction to the magnetic switching and the flow velocity. This is necessarily translated into a time-dependent spatial inhomogeneity of the output beam intensity profile.
A unified dislocation density-dependent physical-based constitutive model for cold metal forming
NASA Astrophysics Data System (ADS)
Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.
2017-10-01
Dislocation-density-dependent, physics-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physics-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and of a complete cold forging cycle of a case-hardenable MnCr steel family.
NASA Astrophysics Data System (ADS)
Field, Richard J.; Gallas, Jason A. C.; Schuldberg, David
2017-08-01
Recent work has introduced social dynamic models of people's stress-related processes, some including amelioration of stress symptoms by support from others. The effects of support may be "direct", depending only on the level of support, or "buffering", depending on the product of the level of support and level of stress. We focus here on the nonlinear buffering term and use a model involving three variables (and 12 control parameters), including stress as perceived by the individual, physical and psychological symptoms, and currently active social support. This model is quantified by a set of three nonlinear differential equations governing its stationary-state stability, temporal evolution (sometimes oscillatory), and how each variable affects the others. Chaos may appear with periodic forcing of an environmental stress parameter. Here we explore this model carefully as the strength and amplitude of this forcing, and an important psychological parameter relating to self-kindling in the stress response, are varied. Three significant observations are made: 1. there exist many complex but orderly regions of periodicity and chaos; 2. there are nested regions with an increasing number of peaks per cycle that may cascade to chaos; and 3. there are areas where more than one state, e.g., a period-2 oscillation and chaos, coexist for the same parameters; which one is reached depends on initial conditions.
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle; Reichle, Rolf; Gruber, Alexander; Bechtold, Michel; Quets, Jan; Vrugt, Jasper; Wigneron, Jean-Pierre
2018-01-01
The SMOS and SMAP missions have collected a wealth of global L-band brightness temperature (Tb) observations. The retrieval of surface soil moisture estimates, and the estimation of other geophysical variables, such as root-zone soil moisture and temperature, via data assimilation into land surface models largely depends on accurate radiative transfer modeling (RTM). This presentation will focus on various configuration aspects of the RTM (i) for the inversion of SMOS Tb to surface soil moisture, and (ii) for the forward modeling as part of a SMOS Tb data assimilation system to estimate a consistent set of geophysical land surface variables, using the GEOS-5 Catchment Land Surface Model.
Lany, Nina K; Zarnetske, Phoebe L; Schliep, Erin M; Schaeffer, Robert N; Orians, Colin M; Orwig, David A; Preisser, Evan L
2018-05-01
A species' distribution and abundance are determined by abiotic conditions and biotic interactions with other species in the community. Most species distribution models correlate the occurrence of a single species with environmental variables only, and leave out biotic interactions. To test the importance of biotic interactions on occurrence and abundance, we compared a multivariate spatiotemporal model of the joint abundance of two invasive insects that share a host plant, hemlock woolly adelgid (HWA; Adelges tsugae) and elongate hemlock scale (EHS; Fiorinia externa), to independent models that do not account for dependence among co-occurring species. The joint model revealed that HWA responded more strongly to abiotic conditions than EHS. Additionally, HWA appeared to predispose stands to subsequent increase of EHS, but HWA abundance was not strongly dependent on EHS abundance. This study demonstrates how incorporating spatial and temporal dependence into a species distribution model can reveal the dependence of a species' abundance on other species in the community. Accounting for dependence among co-occurring species with a joint distribution model can also improve estimation of the abiotic niche for species affected by interspecific interactions. © 2018 by the Ecological Society of America.
Rahmati-Najarkolaei, Fatemeh; Pakpour, Amir H; Saffari, Mohsen; Hosseini, Mahboobeh Sadat; Hajizadeh, Fereshteh; Chen, Hui; Yekaninejad, Mir Saeed
2017-04-01
A prediabetic condition can lead to the development of type 2 diabetes, especially in individuals who do not adhere to a healthy lifestyle. The aim of the present study was to investigate the socio-cognitive factors, using the Theory of Planned Behavior (TPB), that may be associated with the choice of lifestyle in prediabetic patients. A prospective study with one-month follow up was designed to collect data from 350 individuals with prediabetic conditions. A questionnaire was used to collect the information, including demographic variables, exercise behavior, food consumption, as well as the constructs of the TPB (attitude, subjective norms, perceived behavioral control, and behavioral intention) regarding physical activity and dietary choice. The correlations between TPB variables and the dependent variables (dietary choice, physical activity) were assessed using Spearman correlation and multiple regression models. In total, 303 people participated. The mean age of the participants was 53.0 (SD 11.5) years and 42% were males. Significant correlations were found between all TPB constructs and both dependent variables (healthy eating and exercise behaviors) both at baseline and after one month (P < 0.01). The predictive validity of the TPB over time was supported for both dependent variables, with past and future behaviors significantly correlated with the constructs. Nearly 87% of the variance in exercise behavior and 72% of the variance in healthy eating behavior were explainable by TPB constructs. The TPB may be a useful model to predict behaviors of physical activity and dietary choice among prediabetic people. Therefore, it may be used to monitor lifestyle modification to prevent the development of diabetes among people with prediabetic conditions.
Sources of signal-dependent noise during isometric force production.
Jones, Kelvin E; Hamilton, Antonia F; Wolpert, Daniel M
2002-09-01
It has been proposed that the invariant kinematics observed during goal-directed movements result from reducing the consequences of signal-dependent noise (SDN) on motor output. The purpose of this study was to investigate the presence of SDN during isometric force production and determine how central and peripheral components contribute to this feature of motor control. Peripheral and central components were distinguished experimentally by comparing voluntary contractions to those elicited by electrical stimulation of the extensor pollicis longus muscle. To determine other factors of motor-unit physiology that may contribute to SDN, a model was constructed and its output compared with the empirical data. SDN was evident in voluntary isometric contractions as a linear scaling of force variability (SD) with respect to the mean force level. However, during electrically stimulated contractions to the same force levels, the variability remained constant over the same range of mean forces. When the subjects were asked to combine voluntary with stimulation-induced contractions, the linear scaling relationship between the SD and mean force returned. The modeling results highlight that much of the basic physiological organization of the motor-unit pool, such as range of twitch amplitudes and range of recruitment thresholds, biases force output to exhibit linearly scaled SDN. This is in contrast to the square root scaling of variability with mean force present in any individual motor-unit of the pool. Orderly recruitment by twitch amplitude was a necessary condition for producing linearly scaled SDN. Surprisingly, the scaling of SDN was independent of the variability of motoneuron firing and therefore by inference, independent of presynaptic noise in the motor command. 
We conclude that the linear scaling of SDN during voluntary isometric contractions is a natural by-product of the organization of the motor-unit pool that does not depend on signal-dependent noise in the motor command. Synaptic noise in the motor command and common drive, which give rise to the variability and synchronization of motoneuron spiking, determine the magnitude of the force variability at a given level of mean force output.
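The study's core empirical signature, force SD scaling linearly with mean force (a constant coefficient of variation), can be illustrated with a toy simulation; the Gaussian noise model and the 5% coefficient of variation here are assumptions for illustration, not the authors' motor-unit-pool model:

```python
import numpy as np

def simulate_isometric_force(mean_force, cv=0.05, n=20000, seed=0):
    """Simulate force samples with signal-dependent noise: the noise SD
    scales linearly with the mean force (constant coefficient of
    variation). A toy sketch of the empirical finding, not the paper's
    motor-unit-pool simulation; cv and the Gaussian form are assumed."""
    rng = np.random.default_rng(seed)
    return mean_force * (1.0 + cv * rng.standard_normal(n))

low = simulate_isometric_force(5.0)    # low mean force
high = simulate_isometric_force(20.0)  # fourfold higher mean force
# The SD of the high-force samples is roughly fourfold that of the
# low-force samples, i.e. SD scales linearly with the mean.
```

By contrast, the electrically stimulated contractions in the study showed constant variability across mean force, which is why the authors attribute the linear scaling to the organization of voluntary recruitment rather than to the periphery alone.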
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects, so the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
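For the balanced random-intercept case, the variance-reduction idea behind such a coefficient can be illustrated with a method-of-moments estimate of the share of variance carried by the random effects. This is a sketch of the concept only, not the paper's explicit formula for its coefficient:

```python
import numpy as np

def random_effects_variance_share(y, groups):
    """Estimate the share of variance attributable to a random intercept
    via one-way ANOVA method of moments (balanced design assumed).
    Illustrates the variance-reduction idea behind a random effects
    coefficient of determination; not the paper's exact formula."""
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    m = len(labels)
    n_per = len(y) // m                      # balanced: equal group sizes
    grand = y.mean()
    ss_between = sum(n_per * (y[groups == g].mean() - grand) ** 2
                     for g in labels)
    ss_within = sum(((y[groups == g] - y[groups == g].mean()) ** 2).sum()
                    for g in labels)
    ms_between = ss_between / (m - 1)
    ms_within = ss_within / (m * (n_per - 1))
    var_u = max((ms_between - ms_within) / n_per, 0.0)  # random-effect var
    return var_u / (var_u + ms_within)
```

A share near 0 says the random intercepts reduce little variance (plain linear regression suffices); a share near 1 says the groups behave like free fixed effects, matching the interpretation given in the abstract.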
Analysis of model development strategies: predicting ventral hernia recurrence.
Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K
2016-11-01
There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). 
Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
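Harrell's C-statistic used for validation above has a compact definition: among all usable pairs of subjects, it is the fraction in which the subject with the higher predicted risk is the one observed to fail first. A minimal sketch for right-censored data, run on toy inputs rather than the study's cohorts:

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance: among usable pairs, the fraction where the
    subject with the higher predicted risk fails earlier. Risk ties count
    as 0.5. event=1 means the failure (e.g. recurrence) was observed."""
    time, event, risk = map(np.asarray, (time, event, risk))
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable when subject i is observed to fail before j's time
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

# toy check: predicted risk perfectly ordered with failure time -> C = 1
t = [2, 4, 6, 8]
e = [1, 1, 1, 0]   # last subject censored
r = [4, 3, 2, 1]
print(harrell_c(t, e, r))  # 1.0
```

By this yardstick the internal-validation values near 0.77 indicate reasonable discrimination, while the external values near 0.56 are barely better than the chance level of 0.5.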
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele
2016-06-20
Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.
Lee, Kyu Ha; Tadesse, Mahlet G; Baccarelli, Andrea A; Schwartz, Joel; Coull, Brent A
2017-03-01
The analysis of multiple outcomes is becoming increasingly common in modern biomedical studies. It is well-known that joint statistical models for multiple outcomes are more flexible and more powerful than fitting a separate model for each outcome; they yield more powerful tests of exposure or treatment effects by taking into account the dependence among outcomes and pooling evidence across outcomes. It is, however, unlikely that all outcomes are related to the same subset of covariates. Therefore, there is interest in identifying exposures or treatments associated with particular outcomes, which we term outcome-specific variable selection. In this work, we propose a variable selection approach for multivariate normal responses that incorporates not only information on the mean model, but also information on the variance-covariance structure of the outcomes. The approach effectively leverages evidence from all correlated outcomes to estimate the effect of a particular covariate on a given outcome. To implement this strategy, we develop a Bayesian method that builds a multivariate prior for the variable selection indicators based on the variance-covariance of the outcomes. We show via simulation that the proposed variable selection strategy can boost power to detect subtle effects without increasing the probability of false discoveries. We apply the approach to the Normative Aging Study (NAS) epigenetic data and identify a subset of five genes in the asthma pathway for which gene-specific DNA methylations are associated with exposures to either black carbon, a marker of traffic pollution, or sulfate, a marker of particles generated by power plants. © 2016, The International Biometric Society.
Retransformation bias in a stem profile model
Raymond L. Czaplewski; David Bruce
1990-01-01
An unbiased profile model, fit to diameter divided by diameter at breast height, overestimated volume of 5.3-m log sections by 0.5 to 3.5%. Another unbiased profile model, fit to squared diameter divided by squared diameter at breast height, underestimated bole diameters by 0.2 to 2.1%. These biases are caused by retransformation of the predicted dependent variable;...
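Retransformation bias of this kind arises because a nonlinear transform of an unbiased prediction is not an unbiased prediction of the transform: E[g(Y)] differs from g(E[Y]) when g is nonlinear. A minimal numeric sketch with g(y) = y^2 and invented numbers, not the profile models of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
# unbiased predictions of a diameter: correct mean 20, prediction sd 2
d = 20.0 + rng.normal(0.0, 2.0, 100_000)

# squaring the unbiased prediction of d is a biased prediction of d**2
naive = d.mean() ** 2      # g(E[d])
truth = (d ** 2).mean()    # E[g(d)] = E[d]**2 + Var(d)
print(round(truth - naive, 1))  # close to Var(d) = 4
```

The gap equals the prediction variance, which is why a model unbiased in relative diameter can still misestimate squared diameter, and hence volume, after retransformation.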
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
ERIC Educational Resources Information Center
Park, Jungkyu; Yu, Hsiu-Ting
2016-01-01
The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze data with a nested structure. The nonparametric version of an MLCM assumes a discrete latent variable at the higher level of the nesting structure to account for the dependency among observations nested within a higher-level unit. In…
NASA Astrophysics Data System (ADS)
Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc
2013-08-01
Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). Using the dataset of hourly black carbon concentrations to recalibrate the annual model results in many of the original explanatory variables losing their statistical significance, and in certain variables taking the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models for consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
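The dummy-variable strategy amounts to augmenting a static regression with one indicator per hour. A minimal sketch on synthetic data (the diurnal signal, the traffic covariate, and all coefficients are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
hour = rng.integers(0, 24, n)
traffic = rng.uniform(0.0, 1.0, n)
# synthetic black-carbon signal: a diurnal cycle plus a traffic term
y = 2.0 + np.sin(2 * np.pi * hour / 24) + 3.0 * traffic + rng.normal(0.0, 0.3, n)

# design matrix: intercept, traffic, 23 hour dummies (hour 0 as reference)
dummies = (hour[:, None] == np.arange(1, 24)[None, :]).astype(float)
X = np.column_stack([np.ones(n), traffic, dummies])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(beta[1], 1))  # traffic coefficient, close to the true 3
```

The hour dummies absorb the diurnal cycle, so the covariate of interest keeps a stable interpretation across hours; per-hour models instead refit every coefficient separately.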
Are the binary typology models of alcoholism valid in polydrug abusers?
Pombo, Samuel; da Costa, Nuno F; Figueira, Maria L
2015-01-01
To evaluate the dichotomy of the type I/II and type A/B alcoholism typologies in opiate-dependent patients with a comorbid alcohol dependence problem (ODP-AP). The validity assessment drew on information regarding the history of alcohol use (internal validity), cognitive-behavioral variables regarding substance use (external validity), and indicators of treatment during a 6-month follow-up (predictive validity). ODP-AP subjects classified as type II/B presented an earlier and much more severe drinking problem and a worse clinical prognosis on opiate treatment variables as compared with ODP-AP subjects classified as type I/A. Furthermore, type II/B patients endorse more general positive beliefs and expectancies related to the effect of alcohol and tend to drink heavily across several intra- and interpersonal situations as compared with type I/A patients. These findings confirm two different forms of alcohol dependence, recognized as a low-severity/vulnerability subgroup and a high-severity/vulnerability subgroup, in an opiate-dependent population with a lifetime diagnosis of alcohol dependence.
Effects of metal- and fiber-reinforced composite root canal posts on flexural properties.
Kim, Su-Hyeon; Oh, Tack-Oon; Kim, Ju-Young; Park, Chun-Woong; Baek, Seung-Ho; Park, Eun-Seok
2016-01-01
The aim of this study was to observe the effects of different test conditions on the flexural properties of root canal posts. Metal- and fiber-reinforced composite root canal posts of various diameters were measured to determine flexural properties using a three-point bending test under different conditions. In this study, the span length/post diameter ratio of the root canal posts varied from 3.0 to 10.0. Multiple regression models for maximum load as a dependent variable were statistically significant. The models for flexural properties as dependent variables were statistically significant, but linear regression models could not be fitted to the data sets. At a low span length/post diameter ratio, the flexural properties were distorted by the occurrence of shear stress in short samples. It was impossible to obtain a high span length/post diameter ratio with root canal posts. The addition of parameters or coefficients is necessary to appropriately represent the flexural properties of root canal posts.
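For a cylindrical post in three-point bending, the standard beam formulas give flexural strength sigma = 8FL/(pi d^3) and flexural modulus E = m L^3/(48 I) with I = pi d^4/64, where m is the slope of the load-deflection curve. A small sketch with illustrative numbers (not the study's measurements) shows how strongly the span length L enters:

```python
import math

def flexural_properties(force_n, slope_n_per_mm, span_mm, diameter_mm):
    """Standard three-point bending formulas for a cylindrical specimen."""
    i = math.pi * diameter_mm**4 / 64  # second moment of area, mm^4
    stress = 8 * force_n * span_mm / (math.pi * diameter_mm**3)  # MPa
    modulus = slope_n_per_mm * span_mm**3 / (48 * i)             # MPa
    return stress, modulus

# span/diameter ratios of 10 and 3 for the same hypothetical 1.5 mm post
for span in (15.0, 4.5):
    s, e = flexural_properties(100.0, 400.0, span, 1.5)
    print(f"L/d = {span / 1.5:.0f}: stress = {s:.0f} MPa, E = {e:.0f} MPa")
```

Because stress scales with L and modulus with L^3, the same load-deflection data yield very different apparent properties at short spans, which is where the shear-stress distortion noted above appears.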
Functional Freedom: A Psychological Model of Freedom in Decision-Making
Lau, Stephan; Hiemisch, Anette
2017-01-01
The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one’s own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for the psychological benefits of functional decision freedom, and explicate the model’s implications for current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, one that permits an additional psychological foothold in research on freedom and is compatible with a deterministic worldview. PMID:28678165
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimension-specific residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model’s parameter estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
Automated combinatorial method for fast and robust prediction of lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
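For reference, one commonly cited form of the Slack equation is given below; conventions for the prefactor and for defining theta_a and gamma vary across the literature, which is precisely the ambiguity the combinatorial approach addresses. Here A is a gamma-dependent constant, \(\bar{M}\) the average atomic mass, \(\delta^3\) the volume per atom, and n the number of atoms in the primitive cell:

```latex
\kappa_l = \frac{A\,\bar{M}\,\theta_a^{3}\,\delta\,n^{1/3}}{\gamma^{2}\,T}
```

The cubic dependence on the acoustic Debye temperature and the inverse-square dependence on the Grüneisen parameter explain why the predicted conductivity is so sensitive to which definitions of these two quantities are used.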
Hayes, Andrew F; Matthes, Jörg
2009-08-01
Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
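The pick-a-point approach can be sketched directly: fit y = b0 + b1 x + b2 m + b3 x m, then the conditional effect of x at a chosen moderator value m0 is b1 + b3 m0, with standard error sqrt(v11 + m0^2 v33 + 2 m0 v13) from the coefficient covariance matrix. A minimal numpy sketch on simulated data (this is a hand-rolled illustration, not the SPSS/SAS macros the article introduces):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.2 + 0.4 * x + 0.1 * m + 0.5 * x * m + rng.normal(size=n)

# OLS fit of y ~ 1 + x + m + x*m
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
cov = np.linalg.inv(X.T @ X) * (resid @ resid) / (n - X.shape[1])

def simple_slope(m0):
    """Pick-a-point: conditional effect of x on y at moderator value m0."""
    theta = beta[1] + beta[3] * m0
    se = np.sqrt(cov[1, 1] + m0**2 * cov[3, 3] + 2 * m0 * cov[1, 3])
    return theta, se

for m0 in (-1.0, 0.0, 1.0):
    th, se = simple_slope(m0)
    print(f"m = {m0:+.1f}: effect = {th:.2f} (SE {se:.2f})")
```

The Johnson-Neyman technique inverts this calculation: instead of picking points, it solves for the moderator values at which theta/SE crosses the critical t value, yielding regions of significance.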
Modelling alpha-diversities of coastal lagoon fish assemblages from the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Riera, R.; Tuset, V. M.; Betancur-R, R.; Lombarte, A.; Marcos, C.; Pérez-Ruzafa, A.
2018-07-01
Coastal lagoons are marine ecosystems spread worldwide with high ecological value; however, they are increasingly deteriorating as a result of anthropogenic activity. Their conservation requires a better understanding of the biodiversity factors that may help identify priority areas. The present study focuses on 37 Mediterranean coastal lagoons, using predictive modelling approaches based on Generalized Linear Model (GLM) analysis to investigate which variables (geomorphological, environmental, trophic or biogeographic) may predict variations in alpha-diversity. The indices considered include taxonomic diversity, average taxonomic distinctness, and phylogenetic and functional diversity. Two GLM models were built for each index, depending on the variables available for the lagoons: model 1 used all lagoons, and model 2 only 23. All alpha-diversity indices showed variability between lagoons associated with the exogenous factors considered. The biogeographic region strongly conditioned most of the models, being the first variable introduced into the models. Salinity and chlorophyll a concentration played a secondary role in models 1 and 2, respectively. In general, the highest values of alpha-diversity were found in the northwestern Mediterranean (Balearic Sea, Alborán Sea and Gulf of Lion); hence these areas might be considered "hotspots" at the Mediterranean scale and should have a special status for their protection.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
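The classical normalization that the functional approach is compared against fits in a few lines: regress the pollutant on a conservative grain-size proxy and take the observed-to-predicted ratio as the enrichment factor. A sketch on synthetic data (the element choices and all numbers are illustrative, not from the paper's sediment samples):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
al = rng.uniform(2.0, 10.0, n)                 # conservative element (e.g. Al, %)
pb = 1.5 + 3.0 * al + rng.normal(0.0, 0.3, n)  # baseline Pb tracks grain size via Al
pb[:5] += 30.0                                 # five artificially polluted samples

# classical normalization: regress Pb on Al, compare observed to predicted
b1, b0 = np.polyfit(al, pb, 1)
ef = pb / (b0 + b1 * al)                       # enrichment factor per sample

print(ef[:5].min(), ef[5:].max())  # polluted EFs clearly exceed the rest
```

Functional regression replaces the single scalar regressor `al` with the whole grain-size curve, so the regression coefficient becomes a function of grain size rather than one constant slope.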
The Modelling of Axially Translating Flexible Beams
NASA Astrophysics Data System (ADS)
Theodore, R. J.; Arakeri, J. H.; Ghosal, A.
1996-04-01
The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, together with the conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic responses of flexural modal variables become unstable during retraction of the flexible beam, while the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion for a typical flexible beam.
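A commonly used form of the Euler-Bernoulli equation with convective terms for a beam translating axially at speed v(t), with transverse deflection w(x, t), is given below. Signs and the presence of the \(\dot{v}\) term vary with the kinematic assumptions, so this is a generic textbook form rather than necessarily the exact equation of the paper:

```latex
\rho A\left(
  \frac{\partial^{2} w}{\partial t^{2}}
  + 2v\,\frac{\partial^{2} w}{\partial x\,\partial t}
  + v^{2}\,\frac{\partial^{2} w}{\partial x^{2}}
  + \dot{v}\,\frac{\partial w}{\partial x}
\right)
+ EI\,\frac{\partial^{4} w}{\partial x^{4}} = 0
```

The mixed derivative (Coriolis) term is what defeats ordinary separation of variables, motivating the group-velocity-based non-dimensionalization described in the abstract.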
NASA Astrophysics Data System (ADS)
Lee, H.
2016-12-01
Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yust, B.L.
The relationship between fuels used by households in a rural region of Leyte Province, the Philippines, and the variables that can affect the type and amount of fuel used was examined. Data were drawn from interviews conducted in a previous study with 150 female heads of households from 10 villages near Baybay, Leyte. Within a family-ecosystem framework, a multiple regression model was developed to identify predictors of fuel use in the households. Inputs to the system included the following independent variables representing aspects of household environments: (1) natural--geographic location of the village, (2) technical--cook stove and equipment ownership, (3) economic--distance to fuel sources and number of hectares of land owned, and (4) cultural--cooking fuel preference. Two regression equations were developed. The first used as the dependent variable the number of units of each of four specific fuels used in the household in one week: wood, coconut fronds, coconut shells, and coconut husks with shells. The second used as the dependent variable an aggregate measure, barrel oil equivalent (boe), of the quantity of all fuels used in the household in one week. The households in this study were primarily dependent on biomass fuels gathered by family members; a limited quantity of commercial fuels was used.
The Mathematics of Psychotherapy: A Nonlinear Model of Change Dynamics.
Schiepek, Gunter; Aas, Benjamin; Viol, Kathrin
2016-07-01
Psychotherapy is a dynamic process produced by a complex system of interacting variables. Even though there are qualitative models of such systems, the link between structure and function, between network and network dynamics, is still missing. The aim of this study is to realize these links. The proposed model is composed of five state variables (P: problem severity, S: success and therapeutic progress, M: motivation to change, E: emotions, I: insight and new perspectives) interconnected by 16 functions. The shape of each function is modified by four parameters (a: capability to form a trustful working alliance, c: mentalization and emotion regulation, r: behavioral resources and skills, m: self-efficacy and reward expectation). Psychologically, the parameters play the role of competencies or traits, which translate into the concept of control parameters in synergetics. The qualitative model was transferred into five coupled, deterministic, nonlinear difference equations generating the dynamics of each variable as a function of the other variables. The mathematical model is able to reproduce important features of psychotherapy processes. Examples of parameter-dependent bifurcation diagrams are given. Beyond the illustrated similarities between simulated and empirical dynamics, the model has to be further developed, systematically tested in simulated experiments, and compared to empirical data.
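The structure of such a model, five coupled nonlinear difference equations driven by control parameters, can be illustrated with a toy system. The coupling terms and the single parameter a below are invented stand-ins for illustration, not the 16 functions of the published model:

```python
import numpy as np

def simulate(a=0.5, steps=200):
    """Toy coupled map for (P, S, M, E, I); a = working-alliance parameter.
    The couplings are illustrative stand-ins, not the published functions."""
    P, S, M, E, I = 0.9, 0.1, 0.5, 0.0, 0.1
    traj = []
    for _ in range(steps):
        P = P + 0.05 * np.tanh(E) - 0.10 * a * I * P  # problems eased by insight
        S = S + 0.10 * a * (I + M) - 0.05 * S         # progress from insight/motivation
        M = M + 0.05 * np.tanh(P - S) - 0.05 * M      # motivation tracks the P-S gap
        E = 0.9 * E + 0.10 * np.tanh(M) - 0.05 * P    # emotions respond to M and P
        I = I + 0.05 * a * M - 0.05 * I               # insight builds with motivation
        traj.append((P, S, M, E, I))
    return np.array(traj)

traj = simulate(a=0.8)
print(traj.shape)  # (200, 5)
```

Sweeping the parameter a and recording the long-run values of one state variable is exactly how the parameter-dependent bifurcation diagrams mentioned in the abstract are produced.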
A respiratory alert model for the Shenandoah Valley, Virginia, USA
NASA Astrophysics Data System (ADS)
Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.
2013-01-01
Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.
Spatiotemporal correlation structure of the Earth's surface temperature
NASA Astrophysics Data System (ADS)
Fredriksen, Hege-Beate; Rypdal, Kristoffer; Rypdal, Martin
2015-04-01
We investigate the spatiotemporal temperature variability for several gridded instrumental and climate model data sets. The temporal variability is analysed by estimating the power spectral density and studying the differences between local and global temperatures, land and sea, and among local temperature records at different locations. The spatiotemporal correlation structure is analysed through cross-spectra that allow us to compute frequency-dependent spatial autocorrelation functions (ACFs). Our results are then compared to theoretical spectra and frequency-dependent spatial ACFs derived from a fractional stochastic-diffusive energy balance model (FEBM). From the FEBM we expect both local and global temperatures to have a long-range persistent temporal behaviour, and the spectral exponent (β) is expected to increase by a factor of two when going from local to global scales. Our comparison of the average local spectrum and the global spectrum shows good agreement with this model, although the FEBM has so far only been studied for a pure land planet and a pure ocean planet, respectively, with no seasonal forcing. Hence it cannot capture the substantial variability among the local spectra, in particular between the spectra for land and sea, and for equatorial and non-equatorial temperatures. Both models and observation data show that land temperatures in general have a low persistence, while sea surface temperatures show a higher, and also more variable degree of persistence. Near the equator the spectra deviate from the power-law shape expected from the FEBM. Instead we observe large variability at time scales of a few years due to ENSO, and a flat spectrum at longer time scales, making the spectrum more reminiscent of that of a red noise process. From the frequency-dependent spatial ACFs we observe that the spatial correlation length increases with increasing time scale, which is also consistent with the FEBM. 
One consequence of this is that longer-lasting structures must also be wider in space. The spatial correlation length is also observed to be longer for land than for sea. The climate model simulations studied are mainly CMIP5 control runs of length 500-1000 yr. On time scales up to several centuries we do not observe that the difference between the local and global spectral exponents vanishes. This also follows from the FEBM and shows that the dynamics is spatiotemporal (not just temporal) even on these time scales.
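The spectral exponent beta discussed above is conventionally estimated from a log-log fit to the power spectral density, S(f) ~ f^(-beta). A minimal sketch, checked against two processes with known exponents (white noise, beta near 0; random walk, beta near 2):

```python
import numpy as np

def spectral_exponent(x):
    """Estimate beta in S(f) ~ f**(-beta) via a log-log periodogram fit."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x))
    keep = f > 0  # drop the zero-frequency bin before taking logs
    slope, _ = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)
    return -slope

rng = np.random.default_rng(3)
wn = rng.normal(size=4096)  # white noise: beta near 0
rw = np.cumsum(wn)          # random walk: beta near 2
print(round(spectral_exponent(wn), 2), round(spectral_exponent(rw), 2))
```

A doubling of beta from local to global temperatures, as the FEBM predicts, would show up in such estimates as the global slope being roughly twice the average local slope.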
NASA Astrophysics Data System (ADS)
Ramantoko, Gadang; Irawan, Herry
2017-10-01
This research examines the factors influencing the Information Sharing Model in Supporting Implementation of e-Procurement Services: the case of Bandung City in its early maturity stage. The early information sharing maturity stage was determined using the e-Government Maturity Stage Conceptual Framework from Estevez. The Bandung City e-Procurement information sharing system was categorized at stage 1 in Estevez's model, where the concern is mainly on assessing the benefit and risk of implementing the system. The authors used the DeLone & McLean (D&M) Information System Success model to study the benefit and risk of implementing the system in Bandung City. The model was then empirically tested using survey data collected from the 40 listed supplier firms available. D&M's model, adjusted by Klischewski's description, introduced Information Quality, System Quality, and Service Quality as independent variables; Usability and User Satisfaction as intermediate dependent variables; and Perceived Net Benefit as the final dependent variable. The findings suggest that all of the predictors in D&M's model significantly influenced the perceived net benefit of implementing the e-Procurement system in the early maturity stage. The theoretical contribution of this research is that D&M's model may be useful for modeling the success of complex information technology such as that used in e-Procurement services. This research could also have implications for policy makers (LPSE) and system providers (LKPP) following the introduction of the service. However, the small number of respondents is a limitation of the study; the model needs to be further tested with a larger number of respondents drawn from the population of firms in the extended boundary/municipality area around Bandung.
Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution
ERIC Educational Resources Information Center
Verkuilen, Jay; Smithson, Michael
2012-01-01
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
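Since the abstract's point is that doubly bounded responses are better served by the beta distribution than by normal-theory models, here is a minimal, hedged sketch of fitting a beta distribution by maximum likelihood; the parameter values, sample size, and data are invented for illustration and are not from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated doubly bounded responses on (0, 1), e.g. percent time on task
y = rng.beta(a=2.0, b=5.0, size=5000)

# Maximum-likelihood fit with the support fixed to the unit interval
a_hat, b_hat, loc, scale = stats.beta.fit(y, floc=0, fscale=1)

# Mean of the fitted beta distribution is a / (a + b)
mean_hat = a_hat / (a_hat + b_hat)
```

Fixing `floc`/`fscale` keeps the support at (0, 1), so only the two shape parameters are estimated.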
ERIC Educational Resources Information Center
Stakhovych, Stanislav; Bijmolt, Tammo H. A.; Wedel, Michel
2012-01-01
In this article, we present a Bayesian spatial factor analysis model. We extend previous work on confirmatory factor analysis by including geographically distributed latent variables and accounting for heterogeneity and spatial autocorrelation. The simulation study shows excellent recovery of the model parameters and demonstrates the consequences…
ERIC Educational Resources Information Center
Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.
2011-01-01
Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…
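As a small illustration of one of the model-dependent ICC formulas the abstract refers to, the sketch below computes the one-way (ICC(1)) estimate from ANOVA mean squares on simulated ratings; the variance components are invented for the example, and other designs require different formulas.

```python
import numpy as np

def icc_oneway(data):
    """ICC(1) from an n-targets x k-measurements matrix, via the one-way
    ANOVA mean squares: (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    row_means = data.mean(axis=1)
    msb = k * ((row_means - data.mean()) ** 2).sum() / (n - 1)   # between-target
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-target
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
target = rng.normal(0.0, 2.0, size=(200, 1))            # shared target effect
ratings = target + rng.normal(0.0, 1.0, size=(200, 4))  # 4 noisy measurements
icc = icc_oneway(ratings)   # population value: 4 / (4 + 1) = 0.8
```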
The Technology Adoption Process Model and Self-Efficacy of Distance Education Students
ERIC Educational Resources Information Center
Olson, Joel D.; Appunn, Frank D.
2017-01-01
The technology adoption process model (TAPM) is applied to a new synchronous conference technology with 27 asynchronous courses involving 520 participants and 17 instructors. The TAPM resulted from a qualitative study reviewing webcam conference technology adoption. The TAPM is now tested using self-efficacy as the dependent variable. The…
NASA Astrophysics Data System (ADS)
Oliver, Eric C. J.
2014-01-01
Intraseasonal variability of the tropical Indo-Pacific ocean is strongly related to the Madden-Julian Oscillation (MJO). Shallow seas in this region, such as the Gulf of Thailand, act as amplifiers of the direct ocean response to surface wind forcing by efficient setup of sea level. Intraseasonal ocean variability in the Gulf of Thailand region is examined using statistical analysis of local tide gauge observations and surface winds. The tide gauges detect variability on intraseasonal time scales that is related to the MJO through its effect on local wind. The relationship between the MJO and the surface wind is strongly seasonal, being most vigorous during the monsoon, and direction-dependent. The observations are then supplemented with simulations of sea level and circulation from a fully nonlinear barotropic numerical ocean model (Princeton Ocean Model). The numerical model reproduces well the intraseasonal sea level variability in the Gulf of Thailand and its seasonal modulations. The model is then used to map the wind-driven response of sea level and circulation in the entire Gulf of Thailand. Finally, the predictability of the setup and setdown signal is discussed by relating it to the potentially predictable MJO index.
The Dependence of Cloud-SST Feedback on Circulation Regime and Timescale
NASA Astrophysics Data System (ADS)
Middlemas, E.; Clement, A. C.; Medeiros, B.
2017-12-01
Studies suggest cloud radiative feedback amplifies internal variability of Pacific sea surface temperature (SST) on interannual-and-longer timescales, though only a few modeling studies have tested the quantitative importance of this feedback (Bellomo et al. 2014b, Brown et al. 2016, Radel et al. 2016, Burgman et al. 2017). We prescribe clouds from a previous control run in the radiation module in the Community Atmospheric Model (CAM5-slab), a method called "cloud-locking". By comparing this run to a control run, in which cloud radiative forcing can feed back on the climate system, we isolate the effect of cloud radiative forcing on SST variability. Cloud-locking prevents clouds from radiatively interacting with atmospheric circulation, water vapor, and SST, while maintaining a similar mean state to the control. On all timescales, cloud radiative forcing's influence on SST variance is modulated by the circulation regime. Cloud radiative forcing amplifies SST variance in subsiding regimes and dampens SST variance in convecting regimes. In this particular model, a tug of war between latent heat flux and cloud radiative forcing determines the variance of SST, and the winner depends on the timescale. On decadal-and-longer timescales, cloud radiative forcing plays a relatively larger role than on interannual-and-shorter timescales, while latent heat flux plays a smaller role. On longer timescales, the absence of cloud radiative feedback changes SST variance in a zonally asymmetric pattern in the Pacific Ocean that resembles an IPO-like pattern. We also present an analysis of cloud feedback's role in Pacific SST variability among preindustrial control CMIP5 models to test the model robustness of our results. Our results suggest that circulation plays a crucial role in cloud-SST feedbacks across the globe and that cloud radiative feedbacks cannot be ignored when studying SST variability on decadal-and-longer timescales.
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) efficient selection of candidate well and drawdown constraint locations; and 3) optimization against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
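The response-matrix formulation behind codes like MODFLOW-GWM reduces, in the linear case, to a small linear program: maximize total pumping subject to drawdown limits expressed through a matrix of unit responses. The sketch below shows that structure with an invented 2-well, 3-constraint response matrix; the numbers are hypothetical and unrelated to the Wisconsin model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear response matrix: drawdown (m) at 3 constraint
# locations per unit pumping rate at 2 candidate wells.
R = np.array([[0.8, 0.2],
              [0.3, 0.6],
              [0.1, 0.1]])
max_drawdown = np.array([2.0, 2.0, 1.0])   # drawdown limits (m)
max_rate = 5.0                             # per-well capacity

# Maximize q1 + q2  ->  minimize -(q1 + q2) subject to R q <= limits
res = linprog(c=[-1.0, -1.0], A_ub=R, b_ub=max_drawdown,
              bounds=[(0.0, max_rate)] * 2, method="highs")
q_opt = res.x
```

For these toy numbers the optimum sits at the intersection of the first two drawdown constraints, with total pumping 30/7.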
Generalized semiparametric varying-coefficient models for longitudinal data
NASA Astrophysics Data System (ADS)
Qi, Li
In this dissertation, we investigate generalized semiparametric varying-coefficient models for longitudinal data that can flexibly model three types of covariate effects: time-constant effects, time-varying effects, and covariate-varying effects, i.e., covariate effects that depend on other, possibly time-dependent, exposure variables. First, we consider the model that assumes the time-varying effects are unspecified functions of time while the covariate-varying effects are parametric functions of an exposure variable specified up to a finite number of unknown parameters. The estimation procedures are developed using multivariate local linear smoothing and generalized weighted least squares estimation techniques. The asymptotic properties of the proposed estimators are established. The simulation studies show that the proposed methods have satisfactory finite sample performance. The proposed methods are applied to the ACTG 244 clinical trial of HIV-infected patients to examine the effects of switching antiretroviral treatment before versus after development of the 215 mutation. Our analysis shows a benefit of switching treatment before the 215 mutation develops. The proposed methods are also applied to the STEP study with MITT cases, showing that they have broad applications in medical research.
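The local linear smoothing step mentioned above can be sketched in a few lines for the simplest case, a single coefficient that varies with time. The data-generating function, bandwidth, and sample size below are invented for illustration; the dissertation's actual estimator handles multiple covariates and covariate-varying effects.

```python
import numpy as np

def local_linear_coef(t_grid, t, x, y, h):
    """Local linear estimate of a time-varying coefficient beta(t) in the
    model y_i = beta(t_i) * x_i + noise, using a Gaussian kernel."""
    est = []
    for t0 in t_grid:
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)      # kernel weights
        # local linear expansion: beta(t) ~ b0 + b1 * (t - t0)
        X = np.column_stack([x, x * (t - t0)])
        Xw = X * w[:, None]
        b0, b1 = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        est.append(b0)                              # beta(t0) estimate
    return np.array(est)

rng = np.random.default_rng(2)
t = rng.uniform(0, 1, 1000)                  # observation times
x = rng.normal(1.0, 0.5, 1000)               # covariate
y = np.sin(2 * np.pi * t) * x + rng.normal(0, 0.1, 1000)
beta_hat = local_linear_coef(np.array([0.25, 0.5, 0.75]), t, x, y, h=0.05)
```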
Geomatic methods at the service of water resources modelling
NASA Astrophysics Data System (ADS)
Molina, José-Luis; Rodríguez-Gonzálvez, Pablo; Molina, Mª Carmen; González-Aguilera, Diego; Espejo, Fernando
2014-02-01
Acquisition, management and/or use of spatial information are crucial for the quality of water resources studies. In this sense, several geomatic methods arise at the service of water modelling, aimed at the generation of cartographic products, especially 3D models and orthophotos. They may also perform as tools for problem solving and decision making. However, choosing the right geomatic method is still a challenge in this field, mostly because of the complexity of the different applications and variables involved in water resources management. This study aims to provide a guide to best practices in this context by undertaking a deep review of geomatic methods and assessing their suitability for the following study types: Surface Hydrology, Groundwater Hydrology, Hydraulics, Agronomy, Morphodynamics and Geotechnical Processes. This assessment is driven by several decision variables grouped in two categories, classified as geometric or radiometric depending on their nature. As a result, the reader is guided to the best choice or choices of method, depending on the type of water resources modelling study at hand.
Gupta, C K; Mishra, G; Mehta, S C; Prasad, J
1993-01-01
Lung volumes, capacities, diffusion and alveolar volumes along with physical characteristics (age, height and weight) were recorded for 186 healthy school children (96 boys and 90 girls) of the 10-17 years age group. The objective was to study the relative importance of physical characteristics as regressor variables in regression models to estimate lung functions. We observed that height is best correlated with all the lung functions. Inclusion of all physical characteristics in the models offers little gain compared to models with height as the only regressor variable. We also find that exponential models were not only statistically valid but also fared better than the linear ones. We conclude that lung functions covary with height and other physical characteristics but do not depend upon them. The rate of increase in the functions depends upon initial lung functions. Further, we propose models and provide ready reckoners to give estimates of lung functions with 95 per cent confidence limits based on heights from 125 to 170 cm for the age group of 10 to 17 years.
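An exponential model of the kind the abstract prefers can be fitted as a linear regression on the log scale. The sketch below uses simulated heights and lung-function values with invented coefficients, not the study's data; the 95% band is a rough residual-based version of a ready reckoner.

```python
import numpy as np

rng = np.random.default_rng(3)
height = rng.uniform(125, 170, 300)                      # cm
# hypothetical lung-function values growing exponentially with height
fvc = 0.2 * np.exp(0.018 * height) * rng.lognormal(0.0, 0.05, 300)

# exponential model FVC = a * exp(b * height), fitted on the log scale
b_hat, loga_hat = np.polyfit(height, np.log(fvc), 1)
a_hat = np.exp(loga_hat)

# rough 95% limits on the log scale, in the spirit of a ready reckoner
resid = np.log(fvc) - (loga_hat + b_hat * height)
halfwidth = 1.96 * resid.std(ddof=2)
```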
Drivers of Variability in Public-Supply Water Use Across the Contiguous United States
NASA Astrophysics Data System (ADS)
Worland, Scott C.; Steinschneider, Scott; Hornberger, George M.
2018-03-01
This study explores the relationship between municipal water use and an array of climate, economic, behavioral, and policy variables across the contiguous U.S. The relationship is explored using Bayesian hierarchical regression models for over 2,500 counties, 18 covariates, and three higher-level grouping variables. Additionally, a second analysis is included for 83 cities where water price and water conservation policy information is available. A hierarchical model using the nine climate regions (defined by the National Oceanic and Atmospheric Administration) as the higher-level groups results in the best out-of-sample performance, as estimated by the Widely Applicable Information Criterion (WAIC), compared to counties grouped by urban continuum classification or primary economic activity. The regression coefficients indicate that the controls on water use are not uniform across the nation: e.g., counties in the Northeast and Northwest climate regions are more sensitive to social variables, whereas counties in the Southwest and East North Central climate regions are more sensitive to environmental variables. For the national city-level model, it appears that arid cities with a high cost of living and relatively low water bills sell more water per customer, but as with the county-level model, the effect of each variable depends heavily on where a city is located.
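A hedged sketch of how the out-of-sample criterion above, WAIC, is typically computed from pointwise posterior log-likelihoods; the matrix shape and values below are a toy degenerate check, not the study's model.

```python
import numpy as np

def waic(loglik):
    """WAIC from an (S posterior draws x N observations) matrix of
    pointwise log-likelihood values."""
    # log pointwise predictive density
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
    # effective number of parameters: pointwise variance over draws
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)

# Degenerate check: with no posterior uncertainty, p_waic = 0 and
# WAIC reduces to -2 * (total log-likelihood)
ll = np.tile([[-1.0, -2.0, -0.5]], (4, 1))
waic_val = waic(ll)   # -2 * (-3.5) = 7.0
```

In practice the `exp`/`log` step is computed with a log-sum-exp trick for numerical stability; it is omitted here for brevity.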
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data of eighteen residential buildings. The resulting statistical model related the dependent variable (i.e. amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
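The adjusted R² statistic reported above penalizes the ordinary R² for the number of predictors, which matters with only eighteen buildings. A minimal sketch, with simulated data and invented coefficients standing in for the study's design and production variables:

```python
import numpy as np

def adjusted_r2(y, X):
    """Adjusted R-squared of an OLS fit of y on X (intercept added here)."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(4)
X = rng.normal(size=(18, 3))   # e.g. 18 buildings, 3 design/production variables
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0.0, 0.5, 18)
r2_adj = adjusted_r2(y, X)
```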
Impact of tidal density variability on orbital and reentry predictions
NASA Astrophysics Data System (ADS)
Leonard, J. M.; Forbes, J. M.; Born, G. H.
2012-12-01
Since the first satellites entered Earth orbit in the late 1950s and early 1960s, the influences of solar and geomagnetic variability on the satellite drag environment have been studied and parameterized in empirical density models with increasing sophistication. However, only within the past 5 years has the realization emerged that "troposphere weather" contributes significantly to the "space weather" of the thermosphere, especially during solar minimum conditions. Much of the attendant variability is attributable to upward-propagating solar tides excited by latent heating due to deep tropical convection, and solar radiation absorption primarily by water vapor and ozone in the stratosphere and mesosphere, respectively. We know that this tidal spectrum significantly modifies the orbital (>200 km) and reentry (60-150 km) drag environments, and that these tidal components induce longitude variability not yet emulated in empirical density models. Yet, current requirements for improvements in orbital prediction make clear that further refinements to density models are needed. In this paper, the operational consequences of longitude-dependent tides are quantitatively assessed through a series of orbital and reentry predictions. We find that in-track prediction differences incurred by tidal effects are typically of order 200 ± 100 m for satellites in 400-km circular orbits and 15 ± 10 km for satellites in 200-km circular orbits for a 24-hour prediction. For an initial 200-km circular orbit, surface impact differences of order 15° ± 15° latitude are incurred. For operational problems with similar accuracy needs, a density model that includes a climatological representation of longitude-dependent tides should significantly reduce errors due to this source.
Huntsman, Brock M.; Petty, J. Todd
2014-01-01
Spatial population models predict strong density-dependence and relatively stable population dynamics near the core of a species' distribution, with increasing variance and importance of density-independent processes operating towards the population periphery. Using a 10-year data set and an information-theoretic approach, we tested a series of candidate models considering density-dependent and density-independent controls on brook trout population dynamics across a core-periphery distribution gradient within a central Appalachian watershed. We sampled seven sub-populations with study sites ranging in drainage area from 1.3 to 60 km2 and long-term average densities ranging from 0.335 to 0.006 trout/m. Modeled response variables included per capita population growth rate of young-of-the-year, adult, and total brook trout. We also quantified a stock-recruitment relationship for the headwater population and coefficients of variability in mean trout density for all sub-populations over time. Density-dependent regulation was prevalent throughout the study area regardless of stream size. However, density-independent temperature models carried substantial weight and likely reflect the effect of year-to-year variability in water temperature on trout dispersal between cold tributaries and warm main stems. Estimated adult carrying capacities decreased exponentially with increasing stream size, from 0.24 trout/m in headwaters to 0.005 trout/m in the main stem. Finally, temporal variance in brook trout population size was lowest in the high-density headwater population, tended to peak in mid-sized streams, and declined slightly in the largest streams with the lowest densities. Our results provide support for the hypothesis that local density-dependent processes have a strong control on brook trout dynamics across the entire distribution gradient.
However, the mechanisms of regulation likely shift from competition for limited food and space in headwater streams to competition for thermal refugia in larger main stems. It also is likely that source-sink dynamics and dispersal from small headwater habitats may partially influence brook trout population dynamics in the main stem. PMID:24618602
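Density-dependent per capita growth of the kind tested above is often modeled with a Ricker form, under which a linear regression of the per capita growth rate on density recovers the intrinsic rate and carrying capacity. The sketch below simulates and refits such a model; the parameter values (including the 0.24 trout/m headwater capacity reused as a toy value) are illustrative, not the study's estimates.

```python
import numpy as np

# Ricker-type density dependence: per capita growth rate
# log(N[t+1] / N[t]) = r * (1 - N[t] / K) + noise, so regressing the
# per capita rate on density recovers r and the carrying capacity K.
rng = np.random.default_rng(5)
r_true, K_true = 0.8, 0.24          # toy values; K in trout per metre
N = np.empty(60)
N[0] = 0.05
for t in range(59):
    N[t + 1] = N[t] * np.exp(r_true * (1 - N[t] / K_true)
                             + rng.normal(0.0, 0.05))

growth = np.log(N[1:] / N[:-1])
slope, intercept = np.polyfit(N[:-1], growth, 1)
r_hat = intercept                    # growth rate at zero density
K_hat = -intercept / slope           # density where growth is zero
```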
Del Pozo Rubio, Raúl; Escribano Sotos, Francisco; Moya Martínez, Pablo
2011-12-01
To analyze the relationship between sociodemographic and health variables (including informal care) and the healthcare service delivery assigned in the individualized care plan. An observational cross-sectional study was conducted in a representative sample of the dependent population in Cuenca (Spain) in February, 2009. Information was obtained on people with level II and III dependency. Four different logistic regression models were used to identify the factors associated with the care service delivery assigned in the individualized care plan. Independent variables consisted of age, gender, marital status, annual income, place of residence, health conditions, medical treatment, and perception of informal care. A total of 83.7% of the sample was assigned economic benefits and 15.3% were assigned services. Eighty percent of the sample received informal care in addition to dependency benefits. People who received informal care were 3.239 times more likely to be assigned economic benefits than persons not receiving informal care. For the period analyzed (the first phase of the implementation of the Dependency Act), the variables associated with receiving economic benefits (versus services) were being married, having a high annual income, the place of residence (rural areas versus urban area), and receiving hygiene-dietary treatment and informal care. Copyright © 2011 SESPAS. Published by Elsevier España. All rights reserved.
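Odds ratios like the one reported above come from exponentiating a logistic regression coefficient. A minimal sketch with a single binary predictor and a true odds ratio of 3.2; the data, sample size, and baseline odds are simulated for illustration and are not the study's.

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Logistic regression fit by Newton-Raphson; returns [intercept, slopes]."""
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-A @ beta))
        W = p * (1 - p)                                  # IRLS weights
        beta = beta + np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (y - p))
    return beta

rng = np.random.default_rng(6)
care = rng.integers(0, 2, 3000).astype(float)     # binary predictor
eta = -0.5 + np.log(3.2) * care                   # true log-odds
benefit = (rng.random(3000) < 1 / (1 + np.exp(-eta))).astype(float)

beta = logit_fit(care[:, None], benefit)
odds_ratio = np.exp(beta[1])                      # close to 3.2 by construction
```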
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fan; Parker, Jack C.; Luo, Wensui
2008-01-01
Many geochemical reactions that control aqueous metal concentrations are directly affected by solution pH. However, changes in solution pH are strongly buffered by various aqueous phase and solid phase precipitation/dissolution and adsorption/desorption reactions. The ability to predict acid-base behavior of the soil-solution system is thus critical to predict metal transport under variable pH conditions. This study was undertaken to develop a practical generic geochemical modeling approach to predict aqueous and solid phase concentrations of metals and anions during conditions of acid or base additions. The method of Spalding and Spalding was utilized to model soil buffer capacity and pH-dependent cation exchange capacity by treating aquifer solids as a polyprotic acid. To simulate the dynamic and pH-dependent anion exchange capacity, the aquifer solids were simultaneously treated as a polyprotic base controlled by mineral precipitation/dissolution reactions. An equilibrium reaction model that describes aqueous complexation, precipitation, sorption and soil buffering with pH-dependent ion exchange was developed using HydroGeoChem v5.0 (HGC5). Comparison of model results with experimental titration data of pH, Al, Ca, Mg, Sr, Mn, Ni, Co, and SO₄²⁻ for contaminated sediments indicated close agreement, suggesting that the model could potentially be used to predict the acid-base behavior of the sediment-solution system under variable pH conditions.
NASA Astrophysics Data System (ADS)
Hararuk, Oleksandra; Zwart, Jacob A.; Jones, Stuart E.; Prairie, Yves; Solomon, Christopher T.
2018-03-01
Formal integration of models and data to test hypotheses about the processes controlling carbon dynamics in lakes is rare, despite the importance of lakes in the carbon cycle. We built a suite of models (n = 102) representing different hypotheses about lake carbon processing, fit these models to data from a north-temperate lake using data assimilation, and identified which processes were essential for adequately describing the observations. The hypotheses that we tested concerned organic matter lability and its variability through time, temperature dependence of biological decay, photooxidation, microbial dynamics, and vertical transport of water via hypolimnetic entrainment and inflowing density currents. The data included epilimnetic and hypolimnetic CO2 and dissolved organic carbon, hydrologic fluxes, carbon loads, gross primary production, temperature, and light conditions at high frequency for one calibration and one validation year. The best models explained 76-81% and 64-67% of the variability in observed epilimnetic CO2 and dissolved organic carbon content in the validation data. Accurately describing C dynamics required accounting for hypolimnetic entrainment and inflowing density currents, in addition to accounting for biological transformations. In contrast, neither photooxidation nor variable organic matter lability improved model performance. The temperature dependence of biological decay (Q10) was estimated at 1.45, significantly lower than the commonly assumed Q10 of 2. By confronting multiple models of lake C dynamics with observations, we identified processes essential for describing C dynamics in a temperate lake at daily to annual scales, while also providing a methodological roadmap for using data assimilation to further improve understanding of lake C cycling.
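The Q10 temperature dependence estimated above has a simple closed form, sketched below; the reference rate and temperature are invented for illustration, while the Q10 value of 1.45 is the abstract's estimate.

```python
import numpy as np

def q10_rate(k_ref, temp, t_ref=20.0, q10=1.45):
    """Decay rate under a Q10 temperature dependence:
    k(T) = k_ref * Q10 ** ((T - t_ref) / 10)."""
    return k_ref * q10 ** ((np.asarray(temp, dtype=float) - t_ref) / 10.0)

# a 10 degree warming multiplies the rate by exactly Q10
k20 = q10_rate(0.05, 20.0)
k30 = q10_rate(0.05, 30.0)
```

With Q10 = 1.45 instead of the commonly assumed 2, a 10-degree warming raises decay rates by 45% rather than 100%.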
NASA Astrophysics Data System (ADS)
Wable, Pawan S.; Jha, Madan K.
2018-02-01
The effects of rainfall and the El Niño Southern Oscillation (ENSO) on groundwater in a semi-arid basin of India were analyzed using Archimedean copulas considering 17 years of data for monsoon rainfall, post-monsoon groundwater level (PMGL) and ENSO Index. The evaluated dependence among these hydro-climatic variables revealed that PMGL-Rainfall and PMGL-ENSO Index pairs have significant dependence. Hence, these pairs were used for modeling dependence by employing four types of Archimedean copulas: Ali-Mikhail-Haq, Clayton, Gumbel-Hougaard, and Frank. For the copula modeling, the results of probability distributions fitting to these hydro-climatic variables indicated that the PMGL and rainfall time series are best represented by Weibull and lognormal distributions, respectively, while the non-parametric kernel-based normal distribution is the most suitable for the ENSO Index. Further, the PMGL-Rainfall pair is best modeled by the Clayton copula, and the PMGL-ENSO Index pair is best modeled by the Frank copula. The Clayton copula-based conditional probability of PMGL being less than or equal to its average value at a given mean rainfall is above 70% for 33% of the study area. In contrast, the spatial variation of the Frank copula-based probability of PMGL being less than or equal to its average value is 35-40% in 23% of the study area during El Niño phase, while it is below 15% in 35% of the area during the La Niña phase. This copula-based methodology can be applied under data-scarce conditions for exploring the impacts of rainfall and ENSO on groundwater at basin scales.
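The Clayton copula and the conditional probabilities derived from it have closed forms, sketched below; the dependence parameter theta = 2 and the median-given-median example are illustrative choices, not values fitted in the study.

```python
def clayton_cdf(u, v, theta):
    """Clayton copula: C(u, v) = (u**-theta + v**-theta - 1) ** (-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_cond(u, v, theta):
    """Conditional probability P(U <= u | V = v), i.e. dC/dv for Clayton."""
    return v ** (-theta - 1.0) * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 1.0)

# e.g. chance the groundwater-level variable sits at or below its median
# given median rainfall, for a moderately dependent pair (theta = 2)
p_cond = clayton_cond(0.5, 0.5, theta=2.0)
```

The Clayton family's lower-tail dependence makes it a natural candidate for the "low groundwater given low rainfall" probabilities the abstract maps.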
TIME-DEPENDENT TURBULENT HEATING OF OPEN FLUX TUBES IN THE CHROMOSPHERE, CORONA, AND SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woolsey, L. N.; Cranmer, S. R., E-mail: lwoolsey@cfa.harvard.edu
We investigate several key questions of plasma heating in open-field regions of the corona that connect to the solar wind. We present results for a model of Alfvén-wave-driven turbulence for three typical open magnetic field structures: a polar coronal hole, an open flux tube neighboring an equatorial streamer, and an open flux tube near a strong-field active region. We compare time-steady, one-dimensional turbulent heating models against fully time-dependent three-dimensional reduced-magnetohydrodynamic modeling of BRAID. We find that the time-steady results agree well with time-averaged results from BRAID. The time dependence allows us to investigate the variability of the magnetic fluctuations andmore » of the heating in the corona. The high-frequency tail of the power spectrum of fluctuations forms a power law whose exponent varies with height, and we discuss the possible physical explanation for this behavior. The variability in the heating rate is bursty and nanoflare-like in nature, and we analyze the amount of energy lost via dissipative heating in transient events throughout the simulation. The average energy in these events is 10{sup 21.91} erg, within the “picoflare” range, and many events reach classical “nanoflare” energies. We also estimated the multithermal distribution of temperatures that would result from the heating-rate variability, and found good agreement with observed widths of coronal differential emission measure distributions. The results of the modeling presented in this paper provide compelling evidence that turbulent heating in the solar atmosphere by Alfvén waves accelerates the solar wind in open flux tubes.« less
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When estimating a regression model for fuzzy inputs that have been derived from different distributions, the model is termed a 'switching regression model'. Here li indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. Some methods suggest the class numbers of independent variables heuristically; alternatively, a suggested validity criterion for fuzzy clustering is used here to define the optimal class number of independent variables. For the case in which independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.
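A switching regression with soft class memberships can be illustrated with a plain two-class mixture of linear regressions fitted by EM; this is a generic sketch of the model family, not the adaptive-network or fuzzy-clustering algorithm of the study, and the two lines, noise level, and sample size are invented.

```python
import numpy as np

def em_switching_regression(x, y, n_iter=100):
    """EM for a two-class switching regression (mixture of linear models):
    y = a_k + b_k * x + Gaussian noise, class k unobserved."""
    n = len(y)
    A = np.column_stack([np.ones(n), x])
    # initialize responsibilities by splitting on the median response
    resp = np.column_stack([(y < np.median(y)).astype(float),
                            (y >= np.median(y)).astype(float)])
    coef = np.zeros((2, 2))
    sigma = np.ones(2)
    for _ in range(n_iter):
        # M-step: weighted least squares and noise scale per class
        for k in range(2):
            w = resp[:, k] + 1e-12
            Aw = A * w[:, None]
            coef[k] = np.linalg.solve(Aw.T @ A, Aw.T @ y)
            r = y - A @ coef[k]
            sigma[k] = np.sqrt((w * r ** 2).sum() / w.sum()) + 1e-9
        pi = resp.mean(axis=0)
        # E-step: posterior class membership probabilities
        dens = np.empty((n, 2))
        for k in range(2):
            r = y - A @ coef[k]
            dens[:, k] = pi[k] / sigma[k] * np.exp(-0.5 * (r / sigma[k]) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
    return coef, pi

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 400)
z = rng.random(400) < 0.5                      # hidden class labels
y = np.where(z, 1 + 2 * x, 5 - x) + rng.normal(0, 0.2, 400)
coef, pi = em_switching_regression(x, y)
```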
Shahyad, Shima; Pakdaman, Shahla; Shokri, Omid; Saadat, Seyed Hassan
2018-01-12
The aim of the present study was to examine the causal relationships between psychological and social factors as independent variables and body image dissatisfaction plus symptoms of eating disorders as dependent variables, through the mediation of social comparison and thin-ideal internalization. To conduct the study, 477 high-school students from Tehran were recruited by cluster sampling. Next, they filled out the Rosenberg Self-esteem Scale (RSES), Physical Appearance Comparison Scale (PACS), Self-Concept Clarity Scale (SCCS), Appearance Perfectionism Scale (APS), Eating Disorder Inventory (EDI), Multidimensional Body Self Relations Questionnaire (MBSRQ) and Sociocultural Attitudes towards Appearance Questionnaire (SATAQ-4). The collected data were analyzed using structural equation modeling. Findings showed that the assumed model fitted the data well after modification, and all the path coefficients between latent variables (except for the path between self-esteem and thin-ideal internalization) were statistically significant (p<0.05). In this model, 75% of the variance in body dissatisfaction scores was explained by psychological variables, socio-cultural variables, social comparison and internalization of the thin ideal. The results of the present study provide empirical support for the proposed causal model. The combination of psychological, social and cultural variables could efficiently predict body image dissatisfaction of young girls in Iran.
Modelling rainfall amounts using mixed-gamma model for Kuantan district
NASA Astrophysics Data System (ADS)
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
Efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae of the mean and variance for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the distribution of the observed sums of rainfall amounts is not significantly different at the 5% significance level from that of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
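The moment formulae for a mixed-gamma variable (zero with probability 1 - p, gamma otherwise) follow from conditioning, and for independent stations the means and variances of the sum simply add. A simulation check with invented station parameters, not the Kuantan data:

```python
import numpy as np

def mixed_gamma_moments(p, k, theta):
    """Mean and variance of a mixed-gamma variable: 0 with probability 1 - p,
    Gamma(shape=k, scale=theta) with probability p."""
    mean = p * k * theta
    var = p * k * theta ** 2 + p * (1 - p) * (k * theta) ** 2
    return mean, var

def simulate_mixed_gamma(p, k, theta, size, rng):
    wet = rng.random(size) < p                    # nonzero-rainfall indicator
    return np.where(wet, rng.gamma(k, theta, size), 0.0)

rng = np.random.default_rng(8)
stations = [(0.6, 2.0, 10.0), (0.8, 1.5, 20.0)]  # toy (p, shape, scale) pairs
total = sum(simulate_mixed_gamma(p, k, t, 200_000, rng)
            for p, k, t in stations)

# For independent variables, the means and variances of the sum add
mean_th = sum(mixed_gamma_moments(p, k, t)[0] for p, k, t in stations)
var_th = sum(mixed_gamma_moments(p, k, t)[1] for p, k, t in stations)
```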
Comparison modeling for alpine vegetation distribution in an arid area.
Zhou, Jihua; Lai, Liming; Guan, Tianyu; Cai, Wetao; Gao, Nannan; Zhang, Xiaolong; Yang, Dawen; Cong, Zhentao; Zheng, Yuanrun
2016-07-01
Mapping and modeling vegetation distribution are fundamental topics in vegetation ecology. With the rise of powerful new statistical techniques and GIS tools, the development of predictive vegetation distribution models has increased rapidly. However, modeling alpine vegetation with high accuracy in arid areas is still a challenge because of the complexity and heterogeneity of the environment. Here, we used a set of 70 variables from ASTER GDEM, WorldClim, and Landsat-8 OLI (land surface albedo and spectral vegetation indices) data with decision tree (DT), maximum likelihood classification (MLC), and random forest (RF) models to discriminate the eight vegetation groups and 19 vegetation formations in the upper reaches of the Heihe River Basin in the Qilian Mountains, northwest China. The combination of variables clearly discriminated vegetation groups but failed to discriminate vegetation formations. Different variable combinations performed differently in each type of model, but the most consistently important parameter in alpine vegetation modeling was elevation. The best RF model was more accurate for vegetation modeling compared with the DT and MLC models for this alpine region, with an overall accuracy of 75 % and a kappa coefficient of 0.64 verified against field point data and an overall accuracy of 65 % and a kappa of 0.52 verified against vegetation map data. The accuracy of regional vegetation modeling differed depending on the variable combinations and models, resulting in different classifications for specific vegetation groups.
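The overall accuracy and kappa coefficient used for verification above come from a confusion matrix of mapped versus reference classes. A minimal sketch with an invented 3-class matrix, not the study's vegetation data:

```python
import numpy as np

def kappa_from_confusion(cm):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: mapped)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Toy 3-class confusion matrix for a vegetation-group map vs field points
cm = np.array([[50, 5, 5],
               [10, 40, 10],
               [5, 5, 70]])
acc = np.trace(cm) / cm.sum()   # overall accuracy
kap = kappa_from_confusion(cm)
```

Kappa discounts the agreement expected by chance, which is why it runs lower than overall accuracy (here roughly 0.70 versus 0.80, echoing the 0.64 versus 75% pattern reported above).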