Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not previously been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
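To make the contrast concrete, here is a minimal Python sketch (not the paper's MATLAB code; the data, coefficients, and noise level are invented) that fits one synthetic regression two ways: ordinary least squares on the log-transformed dependent variable, and weighted least squares on the untransformed variable with a constant coefficient of variation, i.e. weights proportional to 1/y².

```python
# Minimal sketch contrasting the two treatments of a dependent variable that
# spans orders of magnitude (synthetic data, illustrative values only):
# (a) log transformation + ordinary least squares, and
# (b) error-based weighting with a constant coefficient of variation
#     (standard deviation proportional to the observed value).
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(0.0, 2.0, (n, 3))])
beta_true = np.array([-1.0, 1.5, 0.8, 2.0])
y = np.exp(X @ beta_true + rng.normal(0.0, 0.4, n))   # spans ~4 orders of magnitude

# (a) log transformation: assumes log-normal errors with constant variance.
beta_log, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

# (b) error-based weighting on the untransformed variable: weights 1/sigma_i^2
#     with sigma_i proportional to y_i (constant coefficient of variation).
sigma = 0.4 * y
w = 1.0 / sigma**2
Xw = X * np.sqrt(w)[:, None]
yw = y * np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

print("log-transform fit:        ", beta_log)
print("weighted fit (linear in y):", beta_wls)
```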
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
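A small sketch of the two processing orders compared in the abstract, using synthetic skewed data and a Blom-type inverse normal transformation; the covariate, outcome, and effect sizes are illustrative assumptions, not the study's data.

```python
# Compare: (A) residualise first, then INT (re-introduces correlation) vs
#          (B) INT first, then residualise (residuals uncorrelated by construction).
import numpy as np
from scipy.stats import rankdata, norm

def rank_based_int(x, c=3.0 / 8.0):
    """Rank-based inverse normal (Blom-type) transformation."""
    r = rankdata(x)
    return norm.ppf((r - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(1)
n = 2000
covariate = rng.normal(size=n)
y = 0.5 * covariate + rng.exponential(scale=1.0, size=n)     # skewed outcome

# Order A: regress out the covariate, then apply INT to the residuals.
resid_a = y - np.polyval(np.polyfit(covariate, y, 1), covariate)
int_after = rank_based_int(resid_a)

# Order B: apply INT to the raw outcome, then regress out the covariate.
int_y = rank_based_int(y)
resid_b = int_y - np.polyval(np.polyfit(covariate, int_y, 1), covariate)

print("Order A corr(covariate, INT(residuals)):   ", np.corrcoef(covariate, int_after)[0, 1])
print("Order B corr(covariate, residuals of INT(y)):", np.corrcoef(covariate, resid_b)[0, 1])
```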
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient was mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments that characterize the relationship between the correlation coefficient and the auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
ERIC Educational Resources Information Center
Kong, Nan
2007-01-01
In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi
2018-07-01
Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
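The following is a generic SPSA sketch, not the authors' implementation: it tunes a small vector of design variables (standing in for controller poles) against a noisy stand-in cost function, with illustrative gain schedules.

```python
# Simultaneous perturbation stochastic approximation (SPSA): each iteration
# estimates the full gradient from only two noisy cost evaluations, regardless
# of the number of design variables. Gain schedules a_k, c_k are assumptions.
import numpy as np

def spsa(cost, theta0, iters=200, a=0.05, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha
        ck = c / k**gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)      # Bernoulli +/-1 perturbation
        g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck) * (1.0 / delta)
        theta = theta - ak * g_hat                              # simultaneous update of all variables
    return theta

# Stand-in cost: distance of the design variables from a target point, plus noise.
# In the paper the cost would come from the measured closed-loop response.
target = np.array([2.0, -1.0, 0.5])
noise = np.random.default_rng(99)
J = lambda th: float(np.sum((th - target)**2) + 0.01 * noise.normal())
print(spsa(J, theta0=np.zeros(3)))
```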
The Variability of Gender-Based Communication in Japanese Magazine Advertising.
ERIC Educational Resources Information Center
Maynard, Michael L.
1995-01-01
Analyzes Japanese magazine advertising text from an intracultural perspective based on gender. Uses content analysis to examine advertising text of eight gender-specific magazines. Reveals significant difference in the variability of message perception depending on target gender. Suggests the importance of recognizing intracultural variability,…
An Optimization-Based Approach to Injector Element Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)
2000-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, (for the swirl element) or impingement half-angle, alpha, (for the impinging element) at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, estimates the zero elements of the dependent variable; the second part, a truncated negative binomial model, estimates the non-zero (positive integer) elements. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE). The model is applied to the number of neonatorum tetanus cases in Indonesia, a count dataset that contains zero values in some observations and varying positive values in others. This study also aims to obtain the parameter estimators and test statistics of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of infant health care coverage and neonatal visits.
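As a sketch of the hurdle structure described above, the following Python code fits an uncensored hurdle negative binomial model by maximum likelihood on synthetic counts; the right-censoring studied in the paper would further modify the truncated-count likelihood and is not reproduced here.

```python
# Two-part hurdle NB log-likelihood: a logistic hurdle for P(y > 0) and a
# zero-truncated negative binomial for the positive counts. Synthetic data,
# illustrative covariates; fitted by scipy's general-purpose optimizer.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom
from scipy.special import expit

def hurdle_nb_negloglik(params, X, y):
    k = X.shape[1]
    g, b, log_r = params[:k], params[k:2 * k], params[-1]
    r = np.exp(log_r)                        # NB dispersion parameter
    p_pos = expit(X @ g)                     # hurdle part: P(y > 0)
    mu = np.exp(X @ b)                       # mean of the untruncated NB
    q = r / (r + mu)                         # scipy's NB "success probability"
    ll_zero = np.log1p(-p_pos[y == 0]).sum()
    yp, qp, pp = y[y > 0], q[y > 0], p_pos[y > 0]
    # zero-truncated NB: divide out P(Y = 0) = q**r
    ll_pos = (np.log(pp) + nbinom.logpmf(yp, r, qp) - np.log1p(-qp**r)).sum()
    return -(ll_zero + ll_pos)

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(2.0, n) * rng.binomial(1, 0.6, n)     # synthetic zero-inflated counts
x0 = np.zeros(2 * X.shape[1] + 1)
fit = minimize(hurdle_nb_negloglik, x0, args=(X, y), method="BFGS")
print("estimated parameters:", np.round(fit.x, 3))
```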
Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach
ERIC Educational Resources Information Center
Selig, James P.; Preacher, Kristopher J.; Little, Todd D.
2012-01-01
We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
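A toy sketch of the desirability-function step: invented quadratic response surfaces for three of the dependent variables are mapped to [0, 1] desirabilities and combined into a weighted composite that can be searched over the design space. The surfaces, limits, and weights are illustrative assumptions, not the empirical responses of the paper.

```python
# Desirability-function composite of several response surfaces in two coded
# design variables x1, x2 (e.g. pressure drop and combustor length in [-1, 1]).
import numpy as np

def desirability_smaller_is_better(y, y_best, y_worst):
    """Map a response to [0, 1]; 1 at/below y_best, 0 at/above y_worst."""
    return np.clip((y_worst - y) / (y_worst - y_best), 0.0, 1.0)

def desirability_larger_is_better(y, y_worst, y_best):
    return np.clip((y - y_worst) / (y_best - y_worst), 0.0, 1.0)

# Invented response surfaces (stand-ins for the empirical correlations).
ere    = lambda x: 0.95 + 0.02 * x[0] - 0.01 * x[0]**2 + 0.015 * x[1]   # maximise
q_wall = lambda x: 20.0 + 5.0 * x[0] + 3.0 * x[1] + 2.0 * x[0] * x[1]   # minimise
w_rel  = lambda x: 1.0 + 0.3 * x[1] + 0.1 * x[1]**2                     # minimise

def composite(x, weights=(1.0, 1.0, 1.0)):
    d = np.array([
        desirability_larger_is_better(ere(x), 0.90, 0.99),
        desirability_smaller_is_better(q_wall(x), 15.0, 35.0),
        desirability_smaller_is_better(w_rel(x), 0.8, 1.6),
    ])
    w = np.asarray(weights)
    return np.prod(d ** (w / w.sum()))        # weighted geometric mean

# Coarse grid search for the most desirable design point.
grid = np.linspace(-1.0, 1.0, 41)
best = max(((composite((a, b)), (a, b)) for a in grid for b in grid), key=lambda t: t[0])
print("best composite desirability %.3f at x =" % best[0], best[1])
```

Assigning unequal weights in `composite` mimics the trade studies described above, where some dependent variables are emphasized relative to others.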
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation is based on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
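One simple way to expose time-varying dependence between two flood variables is a moving-window Kendall's tau converted to a Gaussian-copula correlation; the sketch below uses synthetic peak/volume data and is far simpler than the NSCOBE framework itself.

```python
# Track non-stationary dependence between two flood variables with a moving
# window: Kendall's tau in each window, converted to the implied Gaussian-copula
# correlation rho = sin(pi * tau / 2). All data are synthetic stand-ins.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n = 54                                      # mimic a 54-year record
rho_true = np.linspace(0.2, 0.8, n)         # dependence strengthening over time
z1 = rng.normal(size=n)
z2 = rho_true * z1 + np.sqrt(1 - rho_true**2) * rng.normal(size=n)
peak, volume = np.exp(z1), np.exp(0.8 * z2)    # lognormal "flood peak" and "volume"

window = 20
for start in range(0, n - window + 1, 10):
    sl = slice(start, start + window)
    tau, _ = kendalltau(peak[sl], volume[sl])
    rho_hat = np.sin(np.pi * tau / 2.0)     # Gaussian-copula parameter implied by tau
    print(f"years {start}-{start + window - 1}: tau = {tau:.2f}, rho = {rho_hat:.2f}")
```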
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study focuses on indicators of predictive power for the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
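For reference, the quantity being modified, the traditional regression correlation coefficient corr(Y, E(Y|X)), can be computed for a Poisson GLM as in the sketch below (statsmodels, synthetic and deliberately collinear predictors). The paper's modified coefficient is not reproduced here.

```python
# Traditional regression correlation coefficient for a Poisson regression:
# the correlation between Y and the fitted E(Y|X).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)         # deliberately collinear predictor
X = sm.add_constant(np.column_stack([x1, x2]))
y = rng.poisson(np.exp(0.3 + 0.5 * x1 + 0.2 * x2))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues                         # E(Y|X) under the fitted model
r_reg = np.corrcoef(y, mu_hat)[0, 1]              # regression correlation coefficient
print("regression correlation coefficient:", round(r_reg, 3))
```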
NASA Astrophysics Data System (ADS)
Guo, A.; Wang, Y.
2017-12-01
Investigating variability in dependence structures of hydrological processes is of critical importance for developing an understanding of mechanisms of hydrological cycles in changing environments. In focusing on this topic, the present work involves the following: (1) identifying and eliminating serial correlation and conditional heteroscedasticity in monthly streamflow (Q), precipitation (P) and potential evapotranspiration (PE) series using the ARMA-GARCH model (ARMA: autoregressive moving average; GARCH: generalized autoregressive conditional heteroscedasticity); (2) describing dependence structures of hydrological processes using the partial copula coupled with the ARMA-GARCH model and identifying their variability via a copula-based likelihood-ratio test method; and (3) determining the conditional probability of annual Q under different climate scenarios based on the above results. This framework enables us to depict hydrological variables in the presence of conditional heteroscedasticity and to examine dependence structures of hydrological processes while excluding the influence of covariates by using the partial copula-based ARMA-GARCH model. Eight major catchments across the Loess Plateau (LP) are used as study regions. Results indicate that (1) the occurrence of change points in the dependence structures of Q and P (PE) varies across the LP; (2) change points of the P-PE dependence structures in all regions almost fully correspond to the initiation of global warming, i.e., the early 1980s; and (3) conditional probabilities of annual Q under various P and PE scenarios are estimated from the 3-dimensional joint distribution of (Q, P, PE) based on the above change points. These findings shed light on mechanisms of the hydrological cycle and can guide water supply planning and management, particularly in changing environments.
A continuum state variable theory to model the size-dependent surface energy of nanostructures.
Jamshidian, Mostafa; Thamburaja, Prakash; Rabczuk, Timon
2015-10-14
We propose a continuum-based state variable theory to quantify the excess surface free energy density throughout a nanostructure. The size-dependent effect exhibited by nanoplates and spherical nanoparticles, i.e., the reduction of surface energy with decreasing nanostructure size, is well captured by our continuum state variable theory. Our constitutive theory is also able to predict the decreasing energetic difference between the surface and interior (bulk) portions of a nanostructure with decreasing nanostructure size.
Optimization of a GO2/GH2 Impinging Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
2001-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods
NASA Astrophysics Data System (ADS)
Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad
2018-04-01
Morphometrics is the quantitative analysis of the shape and size of specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and related questions. The aim of this study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample used secondary data that included seven independent variables and two dependent variables. Statistical models used in the analysis were the analysis of variance (ANOVA) and the multivariate analysis of variance (MANOVA). The results showed significant differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.
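A minimal illustration of the group comparisons described above: a one-way ANOVA per (synthetic) morphometric variable with scipy; a MANOVA would test all variables jointly. Variable names and values are invented.

```python
# One-way ANOVA per morphometric variable for two wolf groups (synthetic data).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
rocky  = {"skull_length": rng.normal(250, 8, 30), "palatal_width": rng.normal(80, 4, 30)}
arctic = {"skull_length": rng.normal(243, 8, 30), "palatal_width": rng.normal(78, 4, 30)}

for var in rocky:
    stat, p = f_oneway(rocky[var], arctic[var])
    print(f"{var}: F = {stat:.2f}, p = {p:.4f}")
```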
Analysis of the semi-permanent house in Merauke city in terms of aesthetic value in architecture
NASA Astrophysics Data System (ADS)
Topan, Anton; Octavia, Sari; Soleman, Henry
2018-05-01
Semi-permanent houses, also called “Rumah Kancingan”, are the houses that commonly exist in Merauke city. They are called semi-permanent because the main structure is wood even though the walls are brick. This research analyzes the semi-permanent house in terms of aesthetic value. It is a qualitative study with data collected through a questionnaire, direct field observation, and a literature review. The questionnaire data were processed using SPSS to estimate the influence of the independent variables on the dependent variable. Color, ornament, door-window shape, and roof shape (the independent variables) give a 97.1% influence on the aesthetics of the semi-permanent house, and the SPSS coefficient output shows that these independent variables have p-values < 0.05, meaning they have a significant effect on the aesthetic variable. The semi-permanent and wooden-structure variables give a 98.6% effect on aesthetics, and the SPSS coefficient results again show p-values < 0.05, meaning these independent variables also have a significant effect on the aesthetic variable.
Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items
ERIC Educational Resources Information Center
Lu, Irene R. R.; Thomas, D. Roland
2008-01-01
This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
NASA Astrophysics Data System (ADS)
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model is a representation of the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds; when the dependent variable has ordered levels, the logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The research produces a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated out of experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
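The core quantity, mutual information between spike occurrence and an input variable, can be estimated from binned data as in the sketch below; the tuning-function optimization itself is not reproduced, and the sigmoidal tuning curve is an assumption used only to generate synthetic data.

```python
# Histogram-based estimate of the mutual information (in bits) between a
# binary spike train and a binned continuous input variable.
import numpy as np

def mutual_information_spike(spike, x, n_bins=20):
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    joint = np.zeros((2, n_bins))
    for s, i in zip(spike, idx):
        joint[int(s), i] += 1
    joint /= joint.sum()
    px = joint.sum(axis=0, keepdims=True)      # marginal of the input variable
    ps = joint.sum(axis=1, keepdims=True)      # marginal of spike occurrence
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ px)[nz])).sum())

rng = np.random.default_rng(6)
eye_velocity = rng.normal(0, 10, 5000)
rate = 1.0 / (1.0 + np.exp(-(eye_velocity - 5) / 3))       # assumed sigmoidal tuning
spikes = rng.binomial(1, 0.2 * rate)
print("MI(spike; eye velocity) =", round(mutual_information_spike(spikes, eye_velocity), 3), "bits")
```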
Can Dynamic Visualizations with Variable Control Enhance the Acquisition of Intuitive Knowledge?
ERIC Educational Resources Information Center
Wichmann, Astrid; Timpe, Sebastian
2015-01-01
An important feature of inquiry learning is to take part in science practices including exploring variables and testing hypotheses. Computer-based dynamic visualizations have the potential to open up various exploration possibilities depending on the level of learner control. It is assumed that variable control, e.g., by changing parameters of a…
Analysis of Factors Influencing Energy Consumption at an Air Force Base.
1995-12-01
...include them in energy consumption projections. [Table 2-3: Selected Independent Variables (Morill, 1985); dependent variable: energy conservation.] ...the most appropriate method for forecasting energy consumption (Weck, 1981; Tinsley, 1981; and Morill, 1985). This section will present a brief...
A hazard rate analysis of fertility using duration data from Malaysia.
Chang, C
1988-01-01
Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity, HETi; the time-invariant variables, Yi (race, cohort, education, age at marriage); and time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic X2 test for the equality of constant hazard rates across birth orders, allowing time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed effects heterogeneity and the Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, the socioeconomic variables such as cohort, child mortality, income, and race had significant effects, after controlling for the length of the preceding birth. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
NASA Astrophysics Data System (ADS)
Gireesha, B. J.; Kumar, P. B. Sampath; Mahanthesh, B.; Shehzad, S. A.; Abbasi, F. M.
2018-05-01
The nonlinear convective flow of kerosene-Alumina nanoliquid subjected to an exponential space-dependent heat source and temperature-dependent viscosity is investigated here. This study focuses on augmentation of the heat transport rate in a liquid propellant rocket engine. The kerosene-Alumina nanoliquid is considered as the regenerative coolant. Aspects of radiation and viscous dissipation are also covered. The relevant nonlinear system is solved numerically via a Runge-Kutta-based shooting scheme. Diverse flow fields are computed and examined for distinct governing variables. We found that the nanoliquid's temperature increased due to the space-dependent heat source and radiation aspects. The heat transfer rate is higher in the case of variable viscosity than in the case of constant viscosity.
Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory
NASA Astrophysics Data System (ADS)
Rahimi, A.; Zhang, L.
2012-12-01
Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that convenient bivariate distributions are often unable to model the rainfall-runoff variables because they either have constraints on the range of the dependence or a fixed form for the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of marginal distributions, which includes two steps, (a) using the nonparametric statistics approach to detect modes and the underlying probability density, and (b) fitting the appropriate parametric probability density functions; (ii) defining the constraints based on the univariate analysis and the dependence structure; (iii) deriving and validating the entropy-based joint distribution. To validate the method, rainfall-runoff data are collected from the small agricultural experimental watersheds located in the semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results of the entropy-based joint distribution indicate: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach to capture dependence structure that cannot be captured by the convenient bivariate joint distributions. [Figure: joint rainfall-runoff entropy-based PDF, with corresponding marginal PDFs and histograms, for the W12 watershed. Table: K-S test results and RMSE for the univariate distributions derived from the maximum-entropy-based joint probability distribution.]
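Step (i) of the procedure, fitting and checking a parametric marginal, looks roughly like the following scipy sketch on synthetic rainfall; the mixed-gamma runoff marginal and the entropy-based joint distribution are not reproduced here.

```python
# Fit a gamma marginal to a rainfall-like variable and check it with a K-S test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rainfall = rng.gamma(shape=1.8, scale=12.0, size=300)        # synthetic event rainfall (mm)

shape, loc, scale = stats.gamma.fit(rainfall, floc=0.0)       # fit gamma marginal
ks_stat, p_value = stats.kstest(rainfall, "gamma", args=(shape, loc, scale))
print(f"gamma fit: shape = {shape:.2f}, scale = {scale:.2f}; K-S p-value = {p_value:.3f}")
```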
Ross, Michael G; Jessie, Marquis; Amaya, Kevin; Matushewski, Brad; Durosier, L Daniel; Frasch, Martin G; Richardson, Bryan S
2013-04-01
Recent guidelines classify variable decelerations without detail as to degree of depth. We hypothesized that variable deceleration severity is highly correlated with fetal base deficit accumulation. Seven near-term fetal sheep underwent a series of graded umbilical cord occlusions resulting in mild (30 bpm decrease), moderate (60 bpm decrease), or severe (decrease of 90 bpm to baseline <70 bpm) variable decelerations at 2.5 minute intervals. Mild, moderate, and severe variable decelerations increased fetal base deficit (0.21 ± 0.03, 0.27 ± 0.03, and 0.54 ± 0.09 mEq/L per minute) in direct proportion to severity. During recovery, fetal base deficit cleared at 0.12 mEq/L per minute. In this model, ovine fetuses can tolerate repetitive mild and moderate variable decelerations with minimal change in base deficit and lactate. In contrast, repetitive severe variable decelerations may result in significant base deficit increases, dependent on frequency. Modified guideline differentiation of mild/moderate vs severe variable decelerations may aid in the interpretation of fetal heart rate tracings and optimization of clinical management paradigms. Copyright © 2013 Mosby, Inc. All rights reserved.
[Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].
Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin
2018-04-01
Due to global climate change and frequent human activities in recent years, the purely stochastic component of a hydrological sequence is mixed with one or several variation ingredients, including jump, trend, period, and dependency. It is urgently needed to clarify which indices should be used to quantify the degree of their variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had a periodic component, only the Bartels statistic could detect the mutation of the sequence. When the sequence had a dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For the four variations, both the Hurst variability and the Bartels variability increased with increasing variation range. Thus, they could be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin had experienced warming in recent years, with the temperature variability in the upper and lower reaches much higher. This case study showed the practicability of the proposed method.
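A minimal rescaled-range estimate of the Hurst coefficient, one of the two indices used above (the Bartels rank test is not implemented here); the synthetic persistent series is built by smoothing white noise.

```python
# Rescaled-range (R/S) estimate of the Hurst coefficient: for a purely random
# series H should be near 0.5; persistent (dependent) series push H above 0.5.
import numpy as np

def hurst_rs(x, min_chunk=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()
            s = seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)   # log(R/S) ~ H * log(size)
    return slope

rng = np.random.default_rng(8)
white = rng.normal(size=2048)
persistent = np.convolve(rng.normal(size=2300), np.ones(20) / 20, mode="valid")[:2048]
print("H (white noise):     ", round(hurst_rs(white), 2))
print("H (dependent series):", round(hurst_rs(persistent), 2))
```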
Quantitative variability of renewable energy resources in Norway
NASA Astrophysics Data System (ADS)
Christakos, Konstantinos; Varlas, George; Cheliotis, Ioannis; Aalstad, Kristoffer; Papadopoulos, Anastasios; Katsafados, Petros; Steeneveld, Gert-Jan
2017-04-01
Based on European Union (EU) targets for 2030, the share of renewable energy (RE) consumption should be increased to 27%. RE resources such as hydropower, wind, wave power, and solar power depend strongly on the chaotic behavior of weather conditions and climate. Due to this dependency, the prediction of the spatiotemporal variability of RE resources is a more crucial factor than for other energy resources (e.g., carbon-based energy). The fluctuation of the RE resources can affect the development of RE technologies, the energy grid, supply, and prices. This study investigates the variability of the potential RE resources in Norway. More specifically, hydropower, wind, wave, and solar power are quantitatively analyzed and correlated with respect to various spatial and temporal scales. In order to analyze the diversities and their interrelationships, reanalysis and observational data of wind, precipitation, wave, and solar radiation are used for a quantitative assessment. The results indicate a high variability of marine RE resources in the North Sea and the Norwegian Sea.
Systems Engineering-Based Tool for Identifying Critical Research Systems
ERIC Educational Resources Information Center
Abbott, Rodman P.; Stracener, Jerrell
2016-01-01
This study investigates the relationship between the designated research project system independent variables of Labor, Travel, Equipment, and Contract total annual costs and the dependent variables of both the associated matching research project total annual academic publication output and thesis/dissertation number output. The Mahalanobis…
Making Student Online Teams Work
ERIC Educational Resources Information Center
Olsen, Joel; Kalinski, Ray
2017-01-01
Online professors typically assign teams based on time zones, performance, or alphabet, but are these the best ways to position student virtual teams for success? Personality and task complexity could provide additional direction. Personality and task complexity were used as independent variables related to the dependent variable of team…
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates as well as introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
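A quick simulation in the spirit of the fragmentation models discussed above: repeatedly split a conserved quantity at uniform random points and compare the first-digit frequencies of the pieces with Benford's law. The uniform binary splitting rule is one simple choice, not the paper's general setting.

```python
# Fragment a conserved quantity by repeated uniform binary splits and compare
# the first-digit distribution of the piece lengths with Benford's law.
import numpy as np

rng = np.random.default_rng(9)
pieces = [1.0]
for _ in range(12):                         # 2**12 pieces after 12 rounds of splitting
    new = []
    for p in pieces:
        u = rng.uniform()
        new.extend((p * u, p * (1.0 - u)))
    pieces = new

first_digits = [int(f"{p:.12e}"[0]) for p in pieces]          # leading digit of each length
observed = np.bincount(first_digits, minlength=10)[1:] / len(pieces)
benford = np.log10(1 + 1 / np.arange(1, 10))
for d in range(1, 10):
    print(f"digit {d}: observed {observed[d - 1]:.3f}   Benford {benford[d - 1]:.3f}")
```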
Data Mining in Institutional Economics Tasks
NASA Astrophysics Data System (ADS)
Kirilyuk, Igor; Kuznetsova, Anna; Senko, Oleg
2018-02-01
The paper discusses problems associated with the use of data mining tools to study discrepancies between countries with different types of institutional matrices in terms of a variety of potential explanatory variables: climate, economic, or infrastructure indicators. An approach is presented that is based on the search for statistically valid regularities describing the dependence of the institutional type on a single variable or a pair of variables. Examples of regularities are given.
The anisotropic Hooke's law for cancellous bone and wood.
Yang, G; Kabel, J; van Rietbergen, B; Odgaard, A; Huiskes, R; Cowin, S C
A method of data analysis for a set of elastic constant measurements is applied to data bases for wood and cancellous bone. For these materials the identification of the type of elastic symmetry is complicated by the variable composition of the material. The data analysis method permits the identification of the type of elastic symmetry to be accomplished independent of the examination of the variable composition. This method of analysis may be applied to any set of elastic constant measurements, but is illustrated here by application to hardwoods and softwoods, and to an extraordinary data base of cancellous bone elastic constants. The solid volume fraction or bulk density is the compositional variable for the elastic constants of these natural materials. The final results are the solid volume fraction dependent orthotropic Hooke's law for cancellous bone and a bulk density dependent one for hardwoods and softwoods.
A Two-Step Approach to Analyze Satisfaction Data
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.
2011-01-01
In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…
NASA Astrophysics Data System (ADS)
Shang, De-Yi; Zhong, Liang-Cai
2017-01-01
Our novel models for fluids' variable physical properties are improved and reported systematically in this work to enhance the theoretical and practical value of studies of convective heat and mass transfer. The work consists of three models, namely (1) a temperature parameter model, (2) a polynomial model, and (3) a weighted-sum model, for treating the temperature-dependent physical properties of gases, the temperature-dependent physical properties of liquids, and the concentration- and temperature-dependent physical properties of vapour-gas mixtures, respectively. Two related components are proposed and involved in each model for the fluid's variable physical properties: basic physical property equations and theoretical similarity equations for the physical property factors. The former, as the foundation of the latter, is based on typical experimental data and physical analysis. The latter is built up by similarity analysis and mathematical derivation based on the former basic physical property equations. These models enable smooth simulation and treatment of a fluid's variable physical properties and thereby assure the theoretical and practical value of studies on convective heat and mass transfer. In particular, there has so far been a lack of studies on heat and mass transfer in film condensation convection of vapour-gas mixtures, and incorrect heat transfer results have appeared in widespread studies on the related research topics, owing to the lack of proper consideration of the concentration- and temperature-dependent physical properties of the vapour-gas mixture. For resolving such difficult issues, the present novel physical property models have particular advantages.
Evidence for a Time-Invariant Phase Variable in Human Ankle Control
Gregg, Robert D.; Rouse, Elliott J.; Hargrove, Levi J.; Sensinger, Jonathon W.
2014-01-01
Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control. PMID:24558485
The intermediate endpoint effect in logistic and probit regression
MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM
2010-01-01
Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on difference in coefficient methods can lead to distorted conclusions regarding the intermediate effect. PMID:17942466
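A sketch of the recommended product-of-coefficients estimate with a binary outcome: OLS for the a-path, probit for the b-path, and a Sobel-type standard error, on synthetic data (statsmodels). It illustrates the approach the abstract recommends, not the authors' simulation code.

```python
# Product-of-coefficients intermediate endpoint (mediated) effect with a
# binary ultimate endpoint, using OLS (a-path) and probit (b-path).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 2000
x = rng.binomial(1, 0.5, n).astype(float)            # e.g. treatment assignment
m = 0.5 * x + rng.normal(size=n)                     # intermediate (surrogate) endpoint
y_latent = 0.7 * m + 0.1 * x + rng.normal(size=n)
y = (y_latent > 0).astype(float)                     # binary ultimate endpoint

# a-path: intermediate endpoint regressed on the independent variable.
fit_a = sm.OLS(m, sm.add_constant(x)).fit()
a, se_a = fit_a.params[1], fit_a.bse[1]

# b-path: binary outcome regressed on intermediate + independent variable (probit).
Xb = sm.add_constant(np.column_stack([m, x]))
fit_b = sm.Probit(y, Xb).fit(disp=0)
b, se_b = fit_b.params[1], fit_b.bse[1]

ab = a * b                                           # product-of-coefficients effect
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)     # first-order (Sobel) standard error
print(f"mediated effect a*b = {ab:.3f} +/- {1.96 * se_ab:.3f}")
```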
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
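The scaling exponent β in σ(R) ~ ⟨size⟩^(−β) can be estimated by binning units by size and fitting a log-log line, as in this synthetic-data sketch (the data-generating values are invented, not the paper's empirical results).

```python
# Estimate the scaling exponent beta in sigma(R) ~ size^(-beta) by binning
# units by average size and fitting a line in log-log space.
import numpy as np

rng = np.random.default_rng(11)
n_units = 5000
size = 10 ** rng.uniform(2, 8, n_units)                  # e.g. export volumes
beta_true = 0.15
R = rng.laplace(0.0, size ** (-beta_true), n_units)      # Laplace growth rates, sigma ~ size^-beta

edges = np.logspace(2, 8, 13)
centers, sigmas = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (size >= lo) & (size < hi)
    if mask.sum() > 30:
        centers.append(np.sqrt(lo * hi))                 # geometric bin centre
        sigmas.append(R[mask].std(ddof=1))
slope, _ = np.polyfit(np.log10(centers), np.log10(sigmas), 1)
print("estimated beta =", round(-slope, 3))
```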
Using ERA-Interim reanalysis for creating datasets of energy-relevant climate variables
NASA Astrophysics Data System (ADS)
Jones, Philip D.; Harpham, Colin; Troccoli, Alberto; Gschwind, Benoit; Ranchin, Thierry; Wald, Lucien; Goodess, Clare M.; Dorling, Stephen
2017-07-01
The construction of a bias-adjusted dataset of climate variables at the near surface using ERA-Interim reanalysis is presented. A number of different, variable-dependent, bias-adjustment approaches have been proposed. Here we modify the parameters of different distributions (depending on the variable), adjusting ERA-Interim based on gridded station or direct station observations. The variables are air temperature, dewpoint temperature, precipitation (daily only), solar radiation, wind speed, and relative humidity. These are available on either 3 or 6 h timescales over the period 1979-2016. The resulting bias-adjusted dataset is available through the Climate Data Store (CDS) of the Copernicus Climate Change Service (C3S) and can be accessed at present from ftp://ecem.climate.copernicus.eu. The benefit of performing bias adjustment is demonstrated by comparing initial and bias-adjusted ERA-Interim data against gridded observational fields.
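A minimal empirical quantile-mapping sketch of distribution-based bias adjustment; the production dataset described above adjusts the parameters of variable-dependent parametric distributions, which is not reproduced here, and the "reanalysis" and "station" series below are synthetic stand-ins.

```python
# Empirical quantile mapping: adjust model values so their quantiles match
# the observed climatology.
import numpy as np

def quantile_map(model, obs, model_new):
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model, q)
    obs_q = np.quantile(obs, q)
    pos = np.interp(model_new, model_q, q)    # position within the model climatology...
    return np.interp(pos, q, obs_q)           # ...mapped onto the observed climatology

rng = np.random.default_rng(12)
obs = rng.gamma(2.0, 3.0, 5000)                   # "station" daily values
model = 1.3 * rng.gamma(2.0, 3.0, 5000) + 1.0     # biased "reanalysis" values
adjusted = quantile_map(model, obs, model)
print("raw bias:     ", round(model.mean() - obs.mean(), 2))
print("adjusted bias:", round(adjusted.mean() - obs.mean(), 2))
```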
Scheidegger, Stephan; Fuchs, Hans U; Zaugg, Kathrin; Bodis, Stephan; Füchslin, Rudolf M
2013-01-01
In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of the survival. The model describes intuitively the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose rate dependencies are described correctly. The model covers the time gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
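A toy forward integration of the hit-chain idea: populations N_k of cells with k hits are moved down the chain at a dose-rate-dependent hit rate and back up at a repair rate. Constant rates replace the paper's state-variable-dependent repair probability, and all numbers are invented.

```python
# Explicit Euler integration of a simple hit-chain population model.
import numpy as np

n_hits = 6                     # track populations with 0..5 hits
hit_rate = 0.8                 # hits per unit time (proportional to dose rate)
repair_rate = 0.5              # repair events per unit time (constant here)
dt, t_end = 0.001, 4.0

N = np.zeros(n_hits)
N[0] = 1.0                     # all cells initially undamaged
for _ in range(int(t_end / dt)):
    dN = np.zeros_like(N)
    dN[:-1] -= hit_rate * N[:-1]           # k -> k+1 by collecting a hit
    dN[1:]  += hit_rate * N[:-1]
    dN[1:]  -= repair_rate * N[1:]         # k -> k-1 by repair
    dN[:-1] += repair_rate * N[1:]
    N += dt * dN

print("population fractions by hit number:", np.round(N, 3))
print("undamaged (0-hit) fraction:", round(N[0], 3))
```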
Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods has been developed. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-interim reanalysis for the period 1997 - 2008. This allows to investigate the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two test validation datasets, one European-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related for instance to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependency of variability at different locations is relevant for the user.
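One of the diagnostics described above, a decorrelation length from inter-station correlations versus distance, can be computed as in the following sketch; station coordinates, data, and the exponential-decay model are synthetic assumptions.

```python
# Fit a decorrelation length to inter-station correlations as a function of
# station separation, using synthetic spatially correlated daily data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(13)
n_sta, n_days, true_L = 40, 1000, 300.0          # true decorrelation length (km)
xy = rng.uniform(0, 1000, (n_sta, 2))            # station coordinates (km)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = np.exp(-dist / true_L)                     # exponential spatial covariance
data = rng.multivariate_normal(np.zeros(n_sta), cov, size=n_days)

corr = np.corrcoef(data.T)
iu = np.triu_indices(n_sta, k=1)                 # unique station pairs
popt, _ = curve_fit(lambda d, L: np.exp(-d / L), dist[iu], corr[iu], p0=[200.0])
print("estimated decorrelation length: %.0f km" % popt[0])
```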
NASA Astrophysics Data System (ADS)
Nacif el Alaoui, Reda
Mechanical structure-property relations have been quantified for AISI 4140 steel under different strain rates and temperatures. The structure-property relations were used to calibrate a microstructure-based internal state variable plasticity-damage model for monotonic tension, compression, and torsion plasticity, as well as damage evolution. Strong stress-state and temperature dependences were observed for the AISI 4140 steel. Tension tests on three different notched Bridgman specimens were undertaken to study the damage-triaxiality dependence for model validation purposes. Fracture surface analysis was performed using Scanning Electron Microscopy (SEM) to quantify the void nucleation and void sizes in the different specimens. The stress-strain behavior exhibited a fairly large stress-state dependence (tension, compression, and torsion), a moderate temperature dependence, and a relatively small strain rate dependence.
Talker-specificity and adaptation in quantifier interpretation
Yildirim, Ilker; Degen, Judith; Tanenhaus, Michael K.; Jaeger, T. Florian
2015-01-01
Linguistic meaning has long been recognized to be highly context-dependent. Quantifiers like many and some provide a particularly clear example of context-dependence. For example, the interpretation of quantifiers requires listeners to determine the relevant domain and scale. We focus on another type of context-dependence that quantifiers share with other lexical items: talker variability. Different talkers might use quantifiers with different interpretations in mind. We used a web-based crowdsourcing paradigm to study participants’ expectations about the use of many and some based on recent exposure. We first established that the mapping of some and many onto quantities (candies in a bowl) is variable both within and between participants. We then examined whether and how listeners’ expectations about quantifier use adapt with exposure to talkers who use quantifiers in different ways. The results demonstrate that listeners can adapt to talker-specific biases in both how often and with what intended meaning many and some are used. PMID:26858511
Recurrence measure of conditional dependence and applications.
Ramos, Antônio M T; Builes-Jaramillo, Alejandro; Poveda, Germán; Goswami, Bedartha; Macau, Elbert E N; Kurths, Jürgen; Marwan, Norbert
2017-05-01
Identifying causal relations from observational data sets has posed great challenges in data-driven causality inference studies. One of the successful approaches to detect direct coupling in the information theory framework is transfer entropy. However, the core of entropy-based tools lies in the probability estimation of the underlying variables. Here we propose a data-driven approach for causality inference that incorporates recurrence plot features into the framework of information theory. We define it as the recurrence measure of conditional dependence (RMCD), and we present some applications. The RMCD quantifies the causal dependence between two processes based on joint recurrence patterns between the past of the possible driver and the present of the potentially driven process, excluding the contribution of the contemporaneous past of the driven variable. Finally, it can unveil the time scale of the influence of the sea-surface temperature of the Pacific Ocean on the precipitation in the Amazonia during recent major droughts.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of probability-based aircraft design over the traditional point-design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
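A minimal Monte Carlo sketch of the probabilistic criterion described above, the probability that the objective falls below a target under assumed noise-variable distributions; the objective function, the lognormal/normal noise models, and all constants are placeholders, not the HSCT study:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x, noise):
    """Placeholder objective as a function of a design variable x and two
    noise variables; the functional form is purely illustrative."""
    fuel_price, demand = noise
    return 1.0 + 0.3 * fuel_price + 0.5 / demand - 0.1 * x

def prob_meeting_target(x, target, n=100_000):
    """Customer-satisfaction criterion: P(objective < target)."""
    # Assumed noise distributions: lognormal fuel price, normal demand factor.
    fuel = rng.lognormal(mean=0.0, sigma=0.25, size=n)
    demand = rng.normal(loc=1.0, scale=0.15, size=n)
    vals = objective(x, (fuel, demand))
    return np.mean(vals < target)

print(prob_meeting_target(x=0.8, target=1.5))
```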
Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong
2015-09-01
This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.
Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia
2018-04-01
Highway safety laws aim to influence driver behaviors so as to reduce the frequency and severity of crashes, and their outcomes. A given highway safety law can have different effects on crashes of different severities. Understanding such effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for the interdependency in crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts that are aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation in the crash data. The state highway safety law related factors are used as the explanatory variables and socio-demographic and traffic factors are used as the control variables. Three models, a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model, are developed and compared. The results show that among the investigated models, the MVDT models with lagged observed dependent variables have the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy. The MVDT model with lagged observed variables can better handle the stochasticity and dependency in the temporal evolution of the crash counts, and the estimated values from the model are closer to the observed values. The results show that more lives could be saved if law enforcement agencies can make a sustained effort to educate the public about the importance of motorcyclists wearing helmets. Motor vehicle crash-related deaths, injuries, and property damages could be reduced if states enact laws for stricter text messaging rules, higher speeding fines, older licensing age, and stronger graduated licensing provisions. Injury and property-damage-only (PDO) crashes would be significantly reduced with stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
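A sketch of the building block behind the models discussed above: the log-likelihood of a univariate Tobit model left-censored at zero, with a lagged dependent variable entering simply as an extra regressor; this is not the multivariate dynamic formulation of the paper, and all variable names are placeholders:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def tobit_negloglik(params, y, X):
    """Negative log-likelihood of a Tobit model left-censored at zero.
    params = [beta..., log_sigma].  A lagged dependent variable can be
    included simply as one more column of X, as in the dynamic variant."""
    beta, sigma = params[:-1], np.exp(params[-1])
    mu = X @ beta
    censored = y <= 0
    ll = np.where(censored,
                  norm.logcdf(-mu / sigma),                    # P(latent y* <= 0)
                  norm.logpdf((y - mu) / sigma) - np.log(sigma))
    return -ll.sum()

def fit_tobit(y, X):
    """Maximum-likelihood fit by quasi-Newton optimization."""
    x0 = np.zeros(X.shape[1] + 1)
    res = minimize(tobit_negloglik, x0, args=(y, X), method="BFGS")
    return res.x

# usage: X holds an intercept, law and traffic covariates, and y lagged one period
```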
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). 
The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
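A minimal sketch of the locally weighted logistic regression idea described above: observations are weighted by a tricubic kernel of their distance to the prediction site and a standard logistic regression is fitted with those weights; the bandwidth, the use of scikit-learn, and the predictor names are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tricube(d, bandwidth):
    """Tricubic spatial weight: (1 - (d/h)^3)^3 for d < h, else 0."""
    u = np.clip(d / bandwidth, 0.0, 1.0)
    return (1.0 - u ** 3) ** 3

def lwlr_predict(X, y, coords, x_new, coord_new, bandwidth=2.0):
    """Predict erosion probability (0/1 outcome y) at a new site with
    locally weighted logistic regression.  X holds predictors (e.g., bank
    slope, channel width) and coords the measurement locations; names and
    the bandwidth are illustrative."""
    d = np.linalg.norm(coords - coord_new, axis=1)
    w = tricube(d, bandwidth)
    model = LogisticRegression()
    model.fit(X, y, sample_weight=w)            # local fit around coord_new
    return model.predict_proba(x_new.reshape(1, -1))[0, 1]
```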
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, A.; Song, J.; Hwang, H.
In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaOx-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.
NASA Astrophysics Data System (ADS)
Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.
2017-11-01
Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.
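A one-dimensional sketch of the ingredient highlighted above, detachment-limited incision dz/dt = U - K(x) A^m S^n with a spatially variable erodibility band and a steady base-level fall at the outlet; the CHILD model itself is far richer, and every parameter value here is an illustrative assumption:

```python
import numpy as np

def evolve_profile(n_nodes=200, dx=500.0, dt=100.0, n_steps=20_000,
                   uplift=1e-4, m=0.5, n=1.0, baselevel_drop=2e-4):
    """Detachment-limited river profile, dz/dt = U - K(x) A^m S^n, with a
    resistant unit (lower K) in the middle reach and a steady base-level
    fall at the outlet (node 0).  All values are placeholders."""
    x = np.arange(n_nodes) * dx
    z = 1e-3 * x                                  # initial gentle profile
    A = (x[::-1] + dx) * 1e3                      # distance-based drainage-area proxy
    K = np.full(n_nodes, 2e-5)
    K[80:120] = 5e-6                              # resistant lithology band
    for _ in range(n_steps):
        S = np.maximum(np.diff(z) / dx, 0.0)      # downstream slope at nodes 1..N-1
        erosion = K[1:] * A[1:] ** m * S ** n
        z[1:] += dt * (uplift - erosion)
        z[0] -= dt * baselevel_drop               # transient base-level signal
    return x, z, K
```

The transient wave of incision sweeps upstream more slowly through the low-K band, which is the effect the model runs above are designed to quantify.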
Effectiveness of Blog Response Strategies to Minimize Crisis Effects
ERIC Educational Resources Information Center
Tomsic, Louis P.
2010-01-01
This study examined the effects of four post-crisis responses on five different variables using a blog tool. The four post-crisis responses are information only, compensation, apology, and sympathy. The five dependent variables are reputation, anger (negative emotion), negative word-of-mouth, account acceptance and state of the publics based on…
On Direction of Dependence in Latent Variable Contexts
ERIC Educational Resources Information Center
von Eye, Alexander; Wiedermann, Wolfgang
2014-01-01
Approaches to determining direction of dependence in nonexperimental data are based on the relation between higher-than second-order moments on one side and correlation and regression models on the other. These approaches have experienced rapid development and are being applied in contexts such as research on partner violence, attention deficit…
Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.
2013-01-01
Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607
Anisotropic constitutive modeling for nickel-base single crystal superalloys. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sheh, Michael Y.
1988-01-01
An anisotropic constitutive model was developed based on crystallographic slip theory for nickel-base single crystal superalloys. The constitutive equations developed utilize drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments were conducted to evaluate the existence of back stress in the single crystal superalloy Rene N4 at 982 C. The results suggest that: (1) the back stress is orientation dependent; and (2) the back stress state variable is required for the current model to predict material anelastic recovery behavior. The model was evaluated for its predictive capability on single crystal material behavior, including orientation-dependent stress-strain response, tension/compression asymmetry, strain rate sensitivity, anelastic recovery behavior, cyclic hardening and softening, stress relaxation, creep, and associated crystal lattice rotation. Limitations and future development needs are discussed.
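A one-dimensional sketch of a generic two-state-variable viscoplastic flow rule of the kind described above, with the inelastic strain rate driven by the back-stress-reduced effective stress normalised by a drag stress, applied to a stress-relaxation history; the functional forms, evolution law, and constants are generic illustrations, not the calibrated single-crystal model:

```python
import numpy as np

def inelastic_strain_rate(sigma, back_stress, drag_stress, A=1e-3, n=5.0):
    """Power-law flow rule on the effective stress (sigma - back stress)
    normalised by the drag stress; A and n are illustrative constants."""
    eff = sigma - back_stress
    return A * (abs(eff) / drag_stress) ** n * np.sign(eff)

def relax(sigma0=600.0, E=110e3, back0=150.0, drag=350.0, dt=0.1, n_steps=5000):
    """Stress relaxation at fixed total strain: inelastic flow replaces
    elastic strain, so stress decays; the back stress follows a simple
    hardening-minus-recovery law (also illustrative)."""
    sigma, back = sigma0, back0
    for _ in range(n_steps):
        rate = inelastic_strain_rate(sigma, back, drag)
        sigma -= E * rate * dt                     # total strain held constant
        back += (100.0 * rate - 0.01 * back) * dt  # hardening minus static recovery
    return sigma

print(relax())   # relaxed stress after the hold, in MPa-like units
```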
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
Missel, P J
2000-01-01
Four methods are proposed for modeling diffusion in heterogeneous media where diffusion and partition coefficients take on differing values in each subregion. The exercise was conducted to validate finite element modeling (FEM) procedures in anticipation of modeling drug diffusion with regional partitioning into ocular tissue, though the approach can be useful for other organs, or for modeling diffusion in laminate devices. Partitioning creates a discontinuous value in the dependent variable (concentration) at an intertissue boundary that is not easily handled by available general-purpose FEM codes, which allow for only one value at each node. The discontinuity is handled using a transformation on the dependent variable based upon the region-specific partition coefficient. Methods were evaluated by their ability to reproduce a known exact result, for the problem of the infinite composite medium (Crank, J. The Mathematics of Diffusion, 2nd ed. New York: Oxford University Press, 1975, pp. 38-39.). The most physically intuitive method is based upon the concept of chemical potential, which is continuous across an interphase boundary (method III). This method makes the equation of the dependent variable highly nonlinear. This can be linearized easily by a change of variables (method IV). Results are also given for a one-dimensional problem simulating bolus injection into the vitreous, predicting time disposition of drug in vitreous and retina.
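A finite-difference sketch of the change-of-variables idea described above (methods III/IV): solving in u = C / K(x), which is continuous at the inter-tissue boundary, with D(x)K(x) acting as the conductivity and K(x) as the capacity; the geometry, coefficients, and explicit scheme are illustrative assumptions:

```python
import numpy as np

def diffuse_partitioned(n=200, L=1.0, t_end=0.05, dt=1e-6,
                        D=(1.0, 0.2), K=(1.0, 5.0)):
    """1D diffusion across two regions with different diffusion (D) and
    partition (K) coefficients, solved in the transformed variable
    u = C / K, which is continuous at the interface.  Explicit finite
    volumes with a harmonic-mean face conductivity; all numbers are
    illustrative."""
    dx = L / n
    x = (np.arange(n) + 0.5) * dx
    region = (x > L / 2).astype(int)             # 0 = left tissue, 1 = right tissue
    Dc = np.array(D)[region]
    Kc = np.array(K)[region]
    cond = Dc * Kc                               # conductivity for u
    face = 2 * cond[:-1] * cond[1:] / (cond[:-1] + cond[1:])   # harmonic mean
    u = np.where(region == 0, 1.0 / K[0], 0.0)   # left region initially at C = 1
    for _ in range(int(t_end / dt)):
        flux = -face * np.diff(u) / dx           # interior face fluxes (no-flux ends)
        div = np.zeros_like(u)
        div[:-1] -= flux                         # flux leaving cell i to the right
        div[1:] += flux                          # flux entering cell i+1 from the left
        u += dt * div / (Kc * dx)
    return x, Kc * u                             # back-transform to concentration C
```

At the midpoint interface the returned concentration jumps by the ratio K[1]/K[0] while u stays continuous, which is exactly the discontinuity the transformation is designed to handle.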
Anxiety, affect, self-esteem, and stress: mediation and moderation effects on depression.
Nima, Ali Al; Rosenberg, Patricia; Archer, Trevor; Garcia, Danilo
2013-01-01
Mediation analysis investigates whether a variable (i.e., mediator) changes in regard to an independent variable, in turn, affecting a dependent variable. Moderation analysis, on the other hand, investigates whether the statistical interaction between independent variables predict a dependent variable. Although this difference between these two types of analysis is explicit in current literature, there is still confusion with regard to the mediating and moderating effects of different variables on depression. The purpose of this study was to assess the mediating and moderating effects of anxiety, stress, positive affect, and negative affect on depression. Two hundred and two university students (males = 93, females = 113) completed questionnaires assessing anxiety, stress, self-esteem, positive and negative affect, and depression. Mediation and moderation analyses were conducted using techniques based on standard multiple regression and hierarchical regression analyses. The results indicated that (i) anxiety partially mediated the effects of both stress and self-esteem upon depression, (ii) that stress partially mediated the effects of anxiety and positive affect upon depression, (iii) that stress completely mediated the effects of self-esteem on depression, and (iv) that there was a significant interaction between stress and negative affect, and between positive affect and negative affect upon depression. The study highlights different research questions that can be investigated depending on whether researchers decide to use the same variables as mediators and/or moderators.
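A regression-based sketch of the two analyses contrasted above, a simple mediation model (indirect effect a·b from two OLS fits) and a moderation model (centred interaction term); this is the generic textbook approach with placeholder variable names, not the exact hierarchical analyses of the study:

```python
import numpy as np
import statsmodels.api as sm

def simple_mediation(x, m, y):
    """Indirect effect of x on y through mediator m, estimated as a*b from
    two OLS regressions (m ~ x and y ~ x + m).  Returns (a*b, direct c')."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    c_prime, b = fit.params[1], fit.params[2]
    return a * b, c_prime

def moderation(x, z, y):
    """Moderation: test the x*z interaction after centring the predictors."""
    xc, zc = x - x.mean(), z - z.mean()
    X = sm.add_constant(np.column_stack([xc, zc, xc * zc]))
    fit = sm.OLS(y, X).fit()
    return fit.params[3], fit.pvalues[3]      # interaction slope and p-value

# usage: ab, c = simple_mediation(stress, anxiety, depression)
```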
A robust variable sampling time BLDC motor control design based upon μ-synthesis.
Hung, Chung-Wen; Yen, Jia-Yush
2013-01-01
The variable sampling rate system is encountered in many applications. When the speed information is derived from the position marks along the trajectory, one would have a speed dependent sampling rate system. The conventional fixed or multisampling rate system theory may not work in these cases because the system dynamics include the uncertainties which resulted from the variable sampling rate. This paper derived a convenient expression for the speed dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties to the system. The design then uses the popular μ-synthesis process to achieve a robust performance controller design. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
Increasing Self-Regulation and Classroom Participation of a Child Who Is Deafblind.
Nelson, Catherine; Hyte, Holly A; Greenfield, Robin
2016-01-01
Self-regulation has been identified as essential to school success. However, for a variety of reasons, its development may be compromised in children and youth who are deafblind. A single-case multiple-baseline study of a child who was deafblind examined the effects of three groups of evidence-based interventions on variables thought to be associated with self-regulation. The dependent variables were (a) frequency and duration of behaviors thought to indicate dysregulation, (b) active participation in school activities, and (c) time from onset of behaviors indicating dysregulation until achievement of a calm, regulated state. The interventions, which included provision of meaningful, enjoyable, and interactive activities, anticipatory strategies, and calming strategies, significantly influenced the dependent variables and are described in detail.
ERIC Educational Resources Information Center
Howard, Donna E.; Wang, Min Qi; Yah, Fang
2008-01-01
The present study, based upon the national 2005 Youth Risk Behavior Survey of U.S. high school students, provides the most current and representative data on physical dating violence among adolescent males (N = 6,528). The dependent variable was physical dating violence. The independent variables included four dimensions: violence, suicide,…
A Reduced Form Model for Ozone Based on Two Decades of ...
A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the continental United States provided the basis for the RFM developed in this study. Predictors included the principal component scores (PCS) of emissions and meteorological variables, while the predictand was the monthly mean of daily maximum 8-hour CMAQ ozone for the ozone season at each model grid. The PCS form an orthogonal basis for RFM inputs. A few PCS incorporate most of the variability of emissions and meteorology, thereby reducing the dimensionality of the source-receptor problem. Stochastic kriging was used to estimate the model. The RFM was used to separate the effects of emissions and meteorology on ozone concentrations by running the RFM with emissions held constant (ozone dependent on meteorology) or with meteorology held constant (ozone dependent on emissions). Years with ozone-conducive meteorology were identified, as were the meteorological variables that best explain meteorology-dependent ozone. Meteorology accounted for 19% to 55% of ozone variability in the eastern US, and 39% to 92% in the western US. Temporal trends estimated for original CMAQ ozone data and emission-dependent ozone were mostly negative, but the confidence intervals for emission-dependent ozone are much
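A sketch of the reduced-form construction described above: project the emission and meteorology descriptors onto leading principal components and fit a Gaussian-process (kriging-style) emulator from the scores to seasonal-mean ozone at one grid cell; the use of scikit-learn, the kernel choice, and the number of components are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_rfm(inputs, ozone, n_components=5):
    """inputs: (n_years, n_features) emissions + meteorology descriptors;
    ozone: (n_years,) seasonal-mean daily-max 8-hour ozone at one grid cell.
    Returns the PCA and a GP emulator (a kriging-style surrogate)."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(inputs)            # orthogonal predictors (PCS)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(scores, ozone)
    return pca, gp

def predict_rfm(pca, gp, new_inputs):
    return gp.predict(pca.transform(new_inputs))

# e.g. hold the meteorology columns at their climatology to isolate the
# emission-dependent part of the ozone signal, as described above.
```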
Motivation and Exercise Dependence: A Study Based on Self-Determination Theory
ERIC Educational Resources Information Center
Gonzalez-Cutre, David; Sicilia, Alvaro
2012-01-01
The objective of this study was to use self-determination theory to analyze the relationships of several motivational variables with exercise dependence. The study involved 531 exercisers, ranging in age from 16 to 60 years old, who responded to different questionnaires assessing perception of motivational climate, satisfaction of basic…
Ermer, James C; Adeyi, Ben A; Pucci, Michael L
2010-12-01
Methylphenidate- and amfetamine-based stimulants are first-line pharmacotherapies for attention-deficit hyperactivity disorder, a common neurobehavioural disorder in children and adults. A number of long-acting stimulant formulations have been developed with the aim of providing once-daily dosing, employing various means to extend duration of action, including a transdermal delivery system, an osmotic-release oral system, capsules with a mixture of immediate- and delayed-release beads, and prodrug technology. Coefficients of variance of pharmacokinetic measures can estimate the levels of pharmacokinetic variability based on the measurable variance between different individuals receiving the same dose of stimulant (interindividual variability) and within the same individual over multiple administrations (intraindividual variability). Differences in formulation clearly impact pharmacokinetic profiles. Many medications exhibit wide interindividual variability in clinical response. Stimulants with low levels of inter- and intraindividual variability may be better suited to provide consistent levels of medication to patients. The pharmacokinetic profile of stimulants using pH-dependent bead technology can vary depending on food consumption or concomitant administration of medications that alter gastric pH. While delivery of methylphenidate with the transdermal delivery system would be unaffected by gastrointestinal factors, intersubject variability is nonetheless substantial. Unlike the beaded formulations and, to some extent (when considering total exposure) the osmotic-release formulation, systemic exposure to amfetamine with the prodrug stimulant lisdexamfetamine dimesylate appears largely unaffected by such factors, likely owing to its dependence on systemic enzymatic cleavage of the precursor molecule, which occurs primarily in the blood involving red blood cells. The high capacity but as yet unidentified enzymatic system for conversion of lisdexamfetamine dimesylate may contribute to its consistent pharmacokinetic profile. The reasons underlying observed differential responses to stimulants are likely to be multifactorial, including pharmacodynamic factors. While the use of stimulants with low inter- and intrapatient pharmacokinetic variability does not obviate the need to titrate stimulant doses, stimulants with low intraindividual variation in pharmacokinetic parameters may reduce the likelihood of patients falling into subtherapeutic drug concentrations or reaching drug concentrations at which the risk of adverse events increases. As such, clinicians are urged both to adjust stimulant doses based on therapeutic response and the risk for adverse events and to monitor patients for potential causes of pharmacokinetic variability.
Della Libera, Chiara; Calletti, Riccardo; Eštočinová, Jana; Chelazzi, Leonardo; Santandrea, Elisa
2017-04-01
Recent evidence indicates that the attentional priority of objects and locations is altered by the controlled delivery of reward, reflecting reward-based attentional learning. Here, we take an approach hinging on intersubject variability to probe the neurobiological bases of the reward-driven plasticity of spatial priority maps. Specifically, we ask whether an individual's susceptibility to the reward-based treatment can be accounted for by specific predictors, notably personality traits that are linked to reward processing (along with more general personality traits), but also gender. Using a visual search protocol, we show that when different target locations are associated with unequal reward probability, different priorities are acquired by the more rewarded relative to the less rewarded locations. However, while males exhibit the expected pattern of results, with greater priority for locations associated with higher reward, females show an opposite trend. Critically, both the extent and the direction of reward-based adjustments are further predicted by personality traits indexing reward sensitivity, indicating that not only male and female brains are differentially sensitive to reward, but also that specific personality traits further contribute to shaping their learning-dependent attentional plasticity. These results contribute to a better understanding of the neurobiology underlying reward-dependent attentional learning and cross-subject variability in this domain.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for the case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
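A bivariate sketch of the copula construction described above, using a Clayton copula fitted via Kendall's tau; the trivariate constructions and Pearson Type III marginals of the study are replaced here by empirical marginals for brevity, so everything below is an illustrative simplification:

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_cdf(u, v, theta):
    """Bivariate Clayton copula C(u, v)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def joint_exceedance_prob(x, y, x_design, y_design):
    """P(X > x_design and Y > y_design), with empirical marginals and a
    Clayton copula whose theta comes from Kendall's tau: theta = 2*tau/(1-tau)."""
    tau, _ = kendalltau(x, y)
    theta = 2.0 * tau / (1.0 - tau)
    u = np.mean(x <= x_design)                   # marginal non-exceedance probabilities
    v = np.mean(y <= y_design)
    return 1.0 - u - v + clayton_cdf(u, v, theta)

# annual-maximum data sampled once per year -> joint return period in years:
# T_joint = 1.0 / joint_exceedance_prob(wave_max, wind_max, h_design, w_design)
```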
Szekér, Szabolcs; Vathy-Fogarassy, Ágnes
2018-01-01
Logistic regression based propensity score matching is a widely used method in case-control studies to select the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based study in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls the attention of analysts to an important point, which must be taken into account when drawing conclusions.
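A minimal sketch of the matching procedure discussed above: a logistic regression estimates the propensity score and each case is paired with its nearest-score control; the scikit-learn calls are standard, but the one-to-one matching with replacement, the absence of a caliper, and all variable names are simplifying assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_controls(X_case, X_pool):
    """Select one control per case by nearest propensity score.
    X_case, X_pool: covariate matrices for cases and candidate controls."""
    X = np.vstack([X_case, X_pool])
    treated = np.r_[np.ones(len(X_case)), np.zeros(len(X_pool))]
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps_case, ps_pool = ps[: len(X_case)], ps[len(X_case):]
    nn = NearestNeighbors(n_neighbors=1).fit(ps_pool.reshape(-1, 1))
    _, idx = nn.kneighbors(ps_case.reshape(-1, 1))
    return idx.ravel()          # indices into the control pool (with replacement)

# control_group = pool_outcomes[match_controls(case_covariates, pool_covariates)]
```

Any latent covariate missing from X_case/X_pool is, by construction, ignored by the score, which is exactly the source of the control-group uncertainty quantified in the paper.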
Enhancing Multimedia Imbalanced Concept Detection Using VIMP in Random Forests.
Sadiq, Saad; Yan, Yilin; Shyu, Mei-Ling; Chen, Shu-Ching; Ishwaran, Hemant
2016-07-01
Recent developments in social media and cloud storage lead to an exponential growth in the amount of multimedia data, which increases the complexity of managing, storing, indexing, and retrieving information from such big data. Many current content-based concept detection approaches fall short of successfully bridging the semantic gap. To solve this problem, a multi-stage random forest framework is proposed to generate predictor variables based on multivariate regressions using variable importance (VIMP). By fine-tuning the forests and significantly reducing the predictor variables, the concept detection scores are evaluated when the concept of interest is rare and imbalanced, i.e., having little collaboration with other high level concepts. In classical multivariate statistics, estimating the value of one coordinate from the other coordinates standardizes the covariates and depends upon the variance of the correlations instead of the mean. Thus, conditional dependence on the data being normally distributed is eliminated. Experimental results demonstrate that the proposed framework outperforms the compared approaches in terms of the Mean Average Precision (MAP) values.
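A sketch of the variable-importance-driven pruning idea described above, approximating VIMP with scikit-learn's permutation importance (an assumption, not the measure used in the paper) and refitting the forest on the retained predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def prune_by_vimp(X, y, keep_fraction=0.2, random_state=0):
    """Rank features by permutation importance and keep the top fraction,
    then refit the forest on the reduced predictor set; class weighting is
    used as a simple concession to imbalanced concepts."""
    rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                random_state=random_state).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=10,
                                 random_state=random_state).importances_mean
    keep = np.argsort(imp)[::-1][: max(1, int(keep_fraction * X.shape[1]))]
    rf_small = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                      random_state=random_state).fit(X[:, keep], y)
    return rf_small, keep
```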
Adam, Mary B.
2014-01-01
We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544
A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism
NASA Astrophysics Data System (ADS)
Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo
2015-03-01
In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
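A toy permutation-diffusion sketch driven by a logistic map, included only to make the architecture discussed above concrete; the plaintext-feedback term is a simplified stand-in for the paper's dynamic state-variable selection, and the code is neither the proposed scheme nor cryptographically secure:

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    """Chaotic keystream from the logistic map x -> mu * x * (1 - x)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, key=0.3456789):
    """Toy permutation-diffusion cipher on a flattened 8-bit image.
    The diffusion output feeds back through the previous cipher pixel, so
    the effective keystream depends on the plaintext (a crude analogue of
    dynamic state-variable selection).  For illustration only."""
    flat = img.astype(np.uint8).ravel()
    n = flat.size
    perm = np.argsort(logistic_stream(key, n))           # permutation stage
    shuffled = flat[perm]
    stream = (logistic_stream(key * 0.5, n) * 256).astype(np.uint8)
    cipher = np.empty_like(shuffled)
    prev = 0
    for i in range(n):                                   # diffusion stage
        cipher[i] = shuffled[i] ^ stream[i] ^ prev
        prev = cipher[i]
    return cipher.reshape(img.shape), perm
```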
Polymorph-dependent titanium dioxide nanoparticle dissolution in acidic and alkali digestions
Multiple polymorphs (anatase, brookite and rutile) of titanium dioxide nanoparticles (TiO2-NPs) with variable structures were quantified in environmental matrices via microwave-based hydrofluoric (HF) and nitric (HNO3) mixed acid digestion and muffle furnace (MF)-based potassium ...
Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.
2011-01-01
Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
Falkenberg, A; Nyfjäll, M; Hellgren, C; Vingård, E
2012-01-01
The aim of this longitudinal study is to investigate how different aspects of social support at work and in leisure time are associated with self-rated health (SRH) and sickness absence. The 541 participants in the study were representative of a working population in the public sector in Sweden, the majority of whom were women. Most of the variables were created from data from a questionnaire administered in March-April 2005. There were four independent variables and two dependent variables; the dependent variables were based on data from November 2006. A logistic regression model was used for the analysis of associations. A separate model was fitted for each of the explanatory variables for each outcome, which gave five models per independent variable. The study has given a greater awareness of the importance of employees receiving social support, regardless of the type of support or who provides it. Social support has a strong association with SRH in a longitudinal perspective, while no association was found between social support and sickness absence.
Modeling time-dependent corrosion fatigue crack propagation in 7000 series aluminum alloys
NASA Technical Reports Server (NTRS)
Mason, Mark E.; Gangloff, Richard P.
1994-01-01
Stress corrosion cracking and corrosion fatigue experiments were conducted with the susceptible S-L orientation of AA7075-T651, immersed in acidified and inhibited NaCl solution, to provide a basis for incorporating environmental effects into fatigue crack propagation life prediction codes such as NASA FLAGRO. This environment enhances da/dN by five- to ten-fold compared to fatigue in moist air. Time-based crack growth rates from quasi-static load experiments are an order of magnitude too small for accurate linear superposition prediction of da/dN for loading frequencies above 0.001 Hz. Alternate methods of establishing da/dt, based on rising-load or ripple-load-enhanced crack tip strain rate, do not increase da/dt and do not improve linear superposition. Corrosion fatigue is characterized by two regimes of frequency dependence; da/dN is proportional to f^(-1) below 0.001 Hz and to f^(0) to f^(-0.1) for higher frequencies. da/dN increases mildly both with increasing hold-time at K_max and with increasing rise-time for a range of loading waveforms. The mild time-dependence is due to cycle-time-dependent corrosion fatigue growth. This behavior is identical for S-L and L-T crack orientations. The frequency response of environmental fatigue in several 7000 series alloys is variable and depends on undefined compositional or microstructural variables. Speculative explanations are based on the effect of Mg on occluded crack chemistry and embrittling hydrogen uptake, or on variable hydrogen diffusion in the crack tip process zone. Cracking in the 7075/NaCl system is adequately described for life prediction by linear superposition for prolonged load-cycle periods, and by a time-dependent upper bound relationship between da/dN and delta K for moderate loading times.
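The linear-superposition construction referred to above, in sketch form: the environmental da/dN is the inert-environment cyclic contribution plus the sustained-load growth rate da/dt integrated over one load cycle (here approximated by the full cycle period 1/f); the numerical values are placeholders, not the AA7075 data:

```python
def dadn_superposition(dadn_inert, dadt, frequency):
    """Linear superposition estimate of corrosion-fatigue crack growth:
    da/dN(total) = da/dN(inert) + da/dt * (time at high K per cycle).
    Here the whole cycle period 1/f is taken as the time at high K;
    rates are in m/cycle and m/s and are placeholders."""
    return dadn_inert + dadt / frequency

# Example: da/dN_inert = 1e-8 m/cycle, da/dt = 1e-10 m/s
for f in (10.0, 0.1, 0.001):
    print(f"f = {f:6.3f} Hz -> da/dN = {dadn_superposition(1e-8, 1e-10, f):.2e} m/cycle")
```

With a small measured da/dt, the superposed term only becomes comparable to the cyclic term at very low frequencies, which is consistent with the underprediction above 0.001 Hz noted in the abstract.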
Annual variability of PAH concentrations in the Potomac River watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maher, I.L.; Foster, G.D.
1995-12-31
Dynamics of organic contaminant transport in a large river system is influenced by annual variability in organic contaminant concentrations. Surface runoff and groundwater input control the flow of river waters. They are also the two major inputs of contaminants to river waters. The annual variability of contaminant concentrations in rivers may or may not represent similar trends to the flow changes of river waters. The purpose of the research is to define the annual variability in concentrations of polycyclic aromatic hydrocarbons (PAH) in riverine environment. To accomplish this, from March 1992 to March 1995 samples of Potomac River water were collected monthly or bimonthly downstream of the Chesapeake Bay fall line (Chain Bridge) during base flow and main storm flow hydrologic conditions. Concentrations of selected PAHs were measured in the dissolved phase and the particulate phase via GC/MS. The study of the annual variability of PAH concentrations will be performed through comparisons of PAH concentrations seasonally, annually, and through study of PAH concentration river discharge dependency and rainfall dependency. For selected PAHs monthly and annual loadings will be estimated based on their measured concentrations and average daily river discharge.
State-variable theories for nonelastic deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.Y.
The various concepts of mechanical equation of state for nonelastic deformation in crystalline solids, originally proposed for plastic deformation, have recently been extended to describe additional phenomena such as anelastic and microplastic deformation, including the Bauschinger effect. It has been demonstrated that it is possible to predict, based on current state variables in a unified way, the mechanical response of a material under an arbitrary loading. Thus, if the evolution laws of the state variables are known, one can describe the behavior of a material for a thermal-mechanical path of interest, for example, during constant load (or stress) creep without relying on specialized theories. Some of the existing theories of mechanical equation of state for nonelastic deformation are reviewed. The establishment of useful forms of mechanical equation of state has to depend on extensive experimentation, in the same way as was involved in the development of, for example, the ideal gas law. Recent experimental efforts are also reviewed. It has been possible to develop state-variable deformation models based on experimental findings and apply them to creep, cyclic deformation, and other time-dependent deformation. Attempts are being made to correlate the material parameters of the state-variable models with the microstructure of a material. 24 figures.
Knowlden, Adam; Sharma, Manoj
2016-02-01
The purpose of this study was to evaluate the efficacy of the Enabling Mothers to Prevent Pediatric Obesity through Web-Based Education and Reciprocal Determinism (EMPOWER) intervention at 1-year, postintervention follow-up. A mixed between-within subjects design was used to evaluate the trial. Independent variables included a two-level, group assignment: EMPOWER (experimental intervention) based on social cognitive theory (SCT) as well as a knowledge-based intervention Healthy Lifestyles (active control intervention). Dependent variables were evaluated across four levels of time: baseline (Week 0), posttest (Week 4), 1-month follow-up (Week 8), and 1-year follow-up (Week 60). Dependent variables included five maternal-facilitated SCT constructs (environment, emotional coping, expectations, self-control, and self-efficacy) as well as four child behaviors (minutes of child physical activity, cups of fruits and vegetables consumed, 8-ounce glasses of sugar-sweetened beverages consumed, and minutes of screen time). Null hypotheses implied no significant group-by-time interactions for the dependent variables under investigation. A significant group-by-time interaction for child fruit and vegetable consumption was found in the experimental group (p = .012) relative to the control group. At 1 year, results suggested an overall increase of 1.847 cups of fruits and vegetables (95% confidence interval = 1.207-2.498) in the experimental group (p < .001). Analysis suggested changes in the maternal-facilitated home environment accounted for 13.3% of the variance in the change in child fruit and vegetable consumption. Improvements in child physical activity, sugar-free beverage intake, and screen time first detected at 1-month follow-up in both groups were no longer significant at 1-year follow-up. An online family-and-home-based intervention was efficacious for improving child fruit and vegetable consumption. Follow-up booster sessions may assist in maintaining treatment effects. © 2015 Society for Public Health Education.
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
ERIC Educational Resources Information Center
Al-Adwani, Amel M.; Al-Fadley, Anaam
2017-01-01
The current study is a quantitative investigation that examined the mean differences in students' attitudes towards reading based upon several demographic variables (such as gender, grade level, and social media device usage). The researchers used the Students' Reading Attitude Survey (SRAS) as the dependent variable; the sample consisted of 812…
Determining Directional Dependency in Causal Associations
Pornprasertmanit, Sunthud; Little, Todd D.
2014-01-01
Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including encouraging the use of the signs of skewness and excess kurtosis of both variables, discouraging the use of D'Agostino's K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) any unobserved explanatory variable correlates with the observed variables at .2 or less, (c) a curvilinear relation between the two variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282
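As a minimal illustration of the moment-based reasoning described above, the sketch below compares the asymmetry (skewness and excess kurtosis) of the residuals from the two competing regressions on synthetic data. The data, the summary statistic, and the decision rule are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch (not the authors' procedure): compare higher-moment
# asymmetry of the residuals from the two competing regressions to
# suggest a likely direction of dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)        # skewed "cause"
y = 0.6 * x + rng.normal(scale=0.5, size=500)   # linear effect plus normal error

def direction_hint(x, y):
    # Residuals of y ~ x and of x ~ y (simple linear regressions).
    res_y_on_x = y - np.polyval(np.polyfit(x, y, 1), x)
    res_x_on_y = x - np.polyval(np.polyfit(y, x, 1), y)
    # Under x -> y with normal errors, res_y_on_x should look closer to
    # symmetric and mesokurtic than res_x_on_y does.
    stat = lambda r: abs(stats.skew(r)) + abs(stats.kurtosis(r))  # excess kurtosis
    return "x -> y" if stat(res_y_on_x) < stat(res_x_on_y) else "y -> x"

print(direction_hint(x, y))  # expected: "x -> y" for this synthetic example
```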
Collocation mismatch uncertainties in satellite aerosol retrieval validation
NASA Astrophysics Data System (ADS)
Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit
2018-02-01
Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we systematically study the effect of the sampling parameters on the validation of the AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and on the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpret this as indicating that only about half of the variability in the satellite data is due to the natural variability in the AOD, with the rest being noise due to retrieval errors. However, for larger distances (~0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the retrieval errors to the total uncertainty estimates including the CMU in the validation. We find that accounting for CMU increases the fraction of consistent observations.
Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G
2014-09-01
To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of the fish-based extrudates, response surface methodology was adopted in this study. The experiments were optimized using a five-level, four-factor central composite design. Analysis of variance was carried out to study the main and interaction effects of the factors, and regression analysis was carried out to explain the variability. A second-order model in the coded variables was fitted for each response. The response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted model was confirmed to be adequate for both dependent variables. Organoleptically, the highest score was obtained with the combination of a temperature of 110 °C, screw speed of 480 rpm, moisture of 18 %, and fish flour of 20 %.
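A second-order response-surface fit of the kind described above can be sketched as an ordinary least-squares regression on linear, squared, and interaction terms of the coded variables. The data and variable roles below are placeholders, not the study's measurements.

```python
# Illustrative sketch: fit a second-order response-surface model in coded
# variables (x1..x4 standing in for temperature, screw speed, moisture and
# fish flour) to a response such as expansion ratio. Data are placeholders.
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, linear, squared, and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)] + [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 4))          # coded levels of a CCD would go here
y = 1.5 + 0.4 * X[:, 0] - 0.3 * X[:, 2] ** 2 + rng.normal(0, 0.05, 30)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # second-order model coefficients
print(beta.round(3))
```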
Lung Disease, Indigestion, and Two-Way Tables
ERIC Educational Resources Information Center
Watson, Jane; Callingham, Rosemary
2016-01-01
This paper considers the responses of 115 school students to two problems based on information provided in two-way tables. In each case the question asks if one of the variables involved depends on the other. Contextual knowledge might suggest a dependent relationship in both but in one problem the data show independence while in the other the…
Anxiety, Affect, Self-Esteem, and Stress: Mediation and Moderation Effects on Depression
Nima, Ali Al; Rosenberg, Patricia; Archer, Trevor; Garcia, Danilo
2013-01-01
Background Mediation analysis investigates whether a variable (i.e., a mediator) changes in response to an independent variable and, in turn, affects a dependent variable. Moderation analysis, on the other hand, investigates whether the statistical interaction between independent variables predicts a dependent variable. Although this difference between the two types of analysis is explicit in the current literature, there is still confusion with regard to the mediating and moderating effects of different variables on depression. The purpose of this study was to assess the mediating and moderating effects of anxiety, stress, positive affect, and negative affect on depression. Methods Two hundred and two university students (males = 93, females = 113) completed questionnaires assessing anxiety, stress, self-esteem, positive and negative affect, and depression. Mediation and moderation analyses were conducted using techniques based on standard multiple regression and hierarchical regression analyses. Main Findings The results indicated that (i) anxiety partially mediated the effects of both stress and self-esteem upon depression, (ii) stress partially mediated the effects of anxiety and positive affect upon depression, (iii) stress completely mediated the effects of self-esteem on depression, and (iv) there was a significant interaction between stress and negative affect, and between positive affect and negative affect, upon depression. Conclusion The study highlights different research questions that can be investigated depending on whether researchers decide to use the same variables as mediators and/or moderators. PMID:24039896
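The regression logic summarized above can be sketched as follows: mediation is suggested when the predictor's coefficient shrinks once the mediator is added, and moderation is tested through an interaction term. The data and effect sizes below are synthetic illustrations, not the study's results.

```python
# Sketch of the regression logic described above (illustrative data only):
# partial mediation is suggested when the predictor's coefficient shrinks
# after adding the mediator; moderation is tested via an interaction term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
stress = rng.normal(size=n)
anxiety = 0.5 * stress + rng.normal(scale=0.8, size=n)          # mediator
depression = 0.3 * stress + 0.4 * anxiety + rng.normal(size=n)  # outcome

def ols(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

total = ols(depression, stress)                                  # c path
mediated = ols(depression, np.column_stack([stress, anxiety]))   # c' and b paths
print(total.params[1], mediated.params[1])  # c vs. c': a drop suggests mediation

negative_affect = rng.normal(size=n)
interaction = stress * negative_affect
moderation = ols(depression, np.column_stack([stress, negative_affect, interaction]))
print(moderation.pvalues[3])  # significance of the stress x negative-affect term
```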
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are discussed using several medical examples. We present two clustering examples with ordinal variables, which are more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
NASA Astrophysics Data System (ADS)
Ten Veldhuis, M. C.; Smith, J. A.; Zhou, Z.
2017-12-01
Impacts of rainfall variability on runoff response are highly scale-dependent. Sensitivity analyses based on hydrological model simulations have shown that impacts are likely to depend on combinations of storm type, basin versus storm scale, and temporal versus spatial rainfall variability. So far, few of these conclusions have been confirmed on observational grounds, since high-quality datasets of spatially variable rainfall and runoff over prolonged periods are rare. Here we investigate relationships between rainfall variability and runoff response based on 30 years of radar-rainfall datasets and flow measurements for 16 hydrological basins ranging from 7 to 111 km2. Basins vary not only in scale, but also in their degree of urbanisation. We investigated temporal and spatial variability characteristics of rainfall fields across a range of spatial and temporal scales to identify the main drivers of variability in runoff response. We identified 3 ranges of basin size with different temporal versus spatial rainfall variability characteristics. Total rainfall volume proved to be the dominant agent determining runoff response at all basin scales, independent of their degree of urbanisation. Peak rainfall intensity and storm core volume are of secondary importance. This applies to all runoff parameters, including runoff volume, runoff peak, volume-to-peak and lag time. Position and movement of the storm with respect to the basin have a negligible influence on runoff response, with the exception of lag times in some of the larger basins. This highlights the importance of accuracy in rainfall estimation: getting the position right but the volume wrong will inevitably lead to large errors in runoff prediction. Our study helps to identify conditions where rainfall variability matters for correct estimation of the rainfall volume as well as the associated runoff response.
Short and Long-Term Outcomes After Surgical Procedures Lasting for More Than Six Hours.
Cornellà, Natalia; Sancho, Joan; Sitges-Serra, Antonio
2017-08-23
Long-term all-cause mortality and dependency after complex surgical procedures have not been assessed in the framework of value-based medicine. The aim of this study was to investigate the postoperative and long-term outcomes after surgical procedures lasting for more than six hours. This was a retrospective cohort study of patients undergoing a first elective complex surgical procedure between 2004 and 2013. Heart and transplant surgery was excluded. Mortality and dependency on the healthcare system were selected as outcome variables. Gender, age, ASA, creatinine, albumin kinetics, complications, benign vs malignant underlying condition, number of drugs at discharge, and admission to and length of stay in the ICU were recorded as predictive variables. Some 620 adult patients were included in the study. Postoperative, <1 year and <5 years cumulative mortality was 6.8%, 17.6% and 45%, respectively. Of the patients discharged from hospital after surgery, 76% remained dependent on the healthcare system. In multivariate analyses for postoperative, <1 year and <5 years mortality, postoperative albumin concentration, ASA score and an ICU stay >7 days were the most significant independent predictive variables. Prolonged surgery carries significant short- and long-term mortality and disability. These data may contribute to more informed decisions concerning major surgery in the framework of value-based medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stetzel, KD; Aldrich, LL; Trimboli, MS
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
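A generic sketch of the filtering idea is given below: an extended Kalman filter alternates prediction with a state model and correction with a measurement model. The one-state toy "cell model" stands in for the paper's physics-based reduced-order model and is purely an assumption for illustration.

```python
# Generic extended Kalman filter sketch. The one-state toy "cell model" below
# is a placeholder standing in for the paper's reduced-order electrochemical model.
import numpy as np

def f(x, u, dt):            # state propagation (e.g., state of charge under current u)
    return x - u * dt / 3600.0

def h(x):                   # measurement model (toy open-circuit-voltage curve)
    return 3.0 + 1.2 * x

def H_jac(x):               # dh/dx for the toy measurement model
    return np.array([[1.2]])

def ekf_step(x, P, u, z, dt, Q=1e-7, R=1e-3):
    # Predict (df/dx = 1 for this toy state model)
    x_pred = f(x, u, dt)
    P_pred = P + Q
    # Update with the voltage measurement z
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    innovation = z - h(x_pred)
    x_new = x_pred + float(K[0, 0]) * innovation
    P_new = (np.eye(1) - K @ H) @ P_pred
    return x_new, P_new

x, P = 0.5, np.array([[1e-2]])
rng = np.random.default_rng(0)
for k in range(10):
    z = h(0.8) + rng.normal(0, 0.03)    # noisy voltage "measurement"
    x, P = ekf_step(x, P, u=1.0, z=z, dt=1.0)
print(x)                                # estimate drifts toward the true state 0.8
```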
Wall, Martin; Casswell, Sally
2017-05-01
The aim was to identify a typology of drinkers in New Zealand based on alcohol consumption, beverage choice, and public versus private drinking locations and investigate the relationship between drinker types, harms experienced, and policy-related variables. Model-based cluster analysis of male and female drinkers including volumes of alcohol consumed in the form of beer, wine, spirits, and ready-to-drinks (RTDs) in off- and on-premise settings. Cluster membership was then related to harm measures: alcohol dependence, self-rated health; and to 3 policy-relevant variables: liking for alcohol adverts, price paid for alcohol, and time of purchase. Males and females were analyzed separately. Men fell into 4 and women into 14 clearly discriminated clusters. The male clusters consumed a relatively high proportion of alcohol in the form of beer. Women had a number of small extreme clusters and some consumed mainly spirits-based RTDs, while others drank mainly wine. Those in the higher consuming clusters were more likely to have signs of alcohol dependency, to report lower satisfaction with their health, to like alcohol ads, and to have purchased late at night. Consumption patterns are sufficiently distinctive to identify typologies of male and female alcohol consumers. Women drinkers are more heterogeneous than men. The clusters relate differently to policy-related variables. Copyright © 2017 by the Research Society on Alcoholism.
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
Adjoint-Based Methodology for Time-Dependent Optimization
NASA Technical Reports Server (NTRS)
Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.
2008-01-01
This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
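The complex-variable (complex-step) differentiation mentioned above can be illustrated in its simplest generic form: perturb the input along the imaginary axis and read the derivative from the imaginary part, which avoids subtractive cancellation. The test function below is an arbitrary stand-in, not the flow residual or cost functional of the paper.

```python
# Complex-step derivative in its simplest generic form (illustrative function only).
import numpy as np

def complex_step_derivative(func, x, h=1e-30):
    # No subtraction of nearly equal numbers, so h can be tiny without
    # loss of precision, unlike finite differences.
    return func(x + 1j * h).imag / h

f = lambda x: np.exp(x) * np.sin(x)                 # stand-in for a residual/functional
df_exact = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))

x0 = 0.7
print(complex_step_derivative(f, x0), df_exact(x0))  # agree to machine precision
```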
ERIC Educational Resources Information Center
Ayundawati, Dyah; Setyosari, Punaji; Susilo, Herawati; Sihkabuden
2016-01-01
This study aims to determine the influence of problem-based learning strategies and achievement motivation on learning achievement. The method used in this research is quantitative. Two instruments were used in this study: one to measure the moderator variable (achievement motivation) and one to measure the dependent variable (the…
Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long
2008-01-01
Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...
Single-diffractive production of dijets within the kt-factorization approach
NASA Astrophysics Data System (ADS)
Łuszczak, Marta; Maciuła, Rafał; Szczurek, Antoni; Babiarz, Izabela
2017-09-01
We discuss single-diffractive production of dijets. The cross section is calculated within the resolved Pomeron picture, for the first time in the kt-factorization approach, neglecting the transverse momentum of the Pomeron. We use Kimber-Martin-Ryskin unintegrated parton (gluon, quark, antiquark) distributions in both the proton and in the Pomeron or subleading Reggeon. The unintegrated parton distributions are calculated based on conventional mmht2014nlo parton distribution functions in the proton and H1 Collaboration diffractive parton distribution functions used previously in the analysis of the diffractive structure function and dijets at HERA. For comparison, we present results of calculations performed within the collinear-factorization approach. Our results are similar to those obtained in the next-to-leading-order approach. The calculation is (must be) supplemented by the so-called gap survival factor, which may, in general, depend on kinematical variables. We try to describe the existing data from the Tevatron and make detailed predictions for possible LHC measurements. Several differential distributions are calculated. The E¯T, η¯ and x¯p distributions are compared with the Tevatron data. Reasonable agreement is obtained for the first two distributions. The last one requires introducing a gap survival factor which depends on kinematical variables. We discuss how the phenomenological dependence on one kinematical variable may influence the dependence on other variables such as E¯T and η¯. Several distributions for the LHC are shown.
ERIC Educational Resources Information Center
Nimon, Kim; Henson, Robin K.
2015-01-01
The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…
Variables selection methods in near-infrared spectroscopy.
Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao
2010-05-14
Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on the variable selection methods in NIR spectroscopy. Selection methods include some classical approaches, such as the manual approach (knowledge-based selection) and "univariate" and "sequential" selection methods; sophisticated methods such as the successive projections algorithm (SPA) and uninformative variable elimination (UVE); elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANN) and genetic algorithms (GAs); and interval-based algorithms such as interval partial least squares (iPLS), windows PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights and Bayesian approaches is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given. Copyright 2010 Elsevier B.V. All rights reserved.
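In the spirit of the interval-based methods listed above, the sketch below scores contiguous wavelength windows by the cross-validated error of a PLS model fitted on each window alone. The synthetic spectra and window settings are assumptions for illustration, not a tuned chemometric workflow.

```python
# Sketch in the spirit of interval PLS (iPLS): score contiguous wavelength
# windows by the cross-validated error of a PLS model fitted on that window.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 80, 200
X = rng.normal(size=(n_samples, n_wavelengths))                      # synthetic "spectra"
y = X[:, 60:70].sum(axis=1) + rng.normal(scale=0.5, size=n_samples)  # informative band

n_intervals = 10
width = n_wavelengths // n_intervals
scores = []
for i in range(n_intervals):
    Xi = X[:, i * width:(i + 1) * width]
    pls = PLSRegression(n_components=3)
    # negative MSE: values closer to zero indicate a better window
    s = cross_val_score(pls, Xi, y, cv=5, scoring="neg_mean_squared_error").mean()
    scores.append(s)

best = int(np.argmax(scores))
print(f"best interval: wavelengths {best * width}-{(best + 1) * width - 1}")
```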
Modeling of laser transmission contour welding process using FEA and DoE
NASA Astrophysics Data System (ADS)
Acherjee, Bappa; Kuar, Arunanshu S.; Mitra, Souren; Misra, Dipten
2012-07-01
In this research, a systematic investigation on laser transmission contour welding process is carried out using finite element analysis (FEA) and design of experiments (DoE) techniques. First of all, a three-dimensional thermal model is developed to simulate the laser transmission contour welding process with a moving heat source. The commercial finite element code ANSYS® multi-physics is used to obtain the numerical results by implementing a volumetric Gaussian heat source, and combined convection-radiation boundary conditions. Design of experiments together with regression analysis is then employed to plan the experiments and to develop mathematical models based on simulation results. Four key process parameters, namely power, welding speed, beam diameter, and carbon black content in absorbing polymer, are considered as independent variables, while maximum temperature at weld interface, weld width, and weld depths in transparent and absorbing polymers are considered as dependent variables. Sensitivity analysis is performed to determine how different values of an independent variable affect a particular dependent variable.
Evren, Cuneyt; Evren, Bilge; Bozkurt, Muge; Ciftci-Demirci, Arzu
2015-11-01
The aim of this study was to determine the effects of lifetime tobacco, alcohol, and substance use on psychological and behavioral variables among 10th grade students in Istanbul/Turkey. This study employed a cross-sectional online self-report survey conducted in 45 schools from the 15 districts in Istanbul. The questionnaire featured a section about use of substances, including tobacco, alcohol, and drugs. The depression, anxiety, anger, assertiveness, sensation seeking and impulsiveness subscales of the Psychological Screening Test for Adolescents (PSTA) were used. The analyses were conducted based on 4957 subjects. Logistic regression analyses were conducted with each of the psychological and behavioral scales as the dependent variable, with gender, tobacco, alcohol, and drug use as the independent variables. All four independent variables predicted the dependent variables. Lifetime tobacco and drug use had significant effects on all the subscale scores, whereas lifetime alcohol use had significant effects on all the subscale scores other than lack of assertiveness, and male gender was a significant covariate for all the subscale scores. Drug use showed the highest effect on the dependent variables. An interaction was found between the effects of tobacco and alcohol on anxiety, whereas interactions were found between the effects of tobacco and drugs on lack of assertiveness and impulsiveness. The findings suggested that male students with lifetime tobacco, alcohol or drug use have a particularly high risk of psychological and behavioral problems. The unique effects of substance clusters on these problems may be useful in developing secondary preventive practices for substance use and abuse problems in Istanbul.
A general method to determine the stability of compressible flows
NASA Technical Reports Server (NTRS)
Guenther, R. A.; Chang, I. D.
1982-01-01
Several problems were studied using two completely different approaches. The initial method was to use standard linearized perturbation theory, finding the values of the individual small-disturbance quantities from the equations of motion. These were serially eliminated from the equations of motion to derive a single equation that governs the stability of the fluid dynamic system. These equations could not be reduced unless the steady-state variables depend only on one coordinate. The stability equation based on one dependent variable was found and was examined to determine the stability of a compressible swirling jet. The second method applied a Lagrangian approach to the problem. Since the equations developed were based on different assumptions, the conditions of stability were compared only for the Rayleigh problem of a swirling flow; both approaches reduce to the Rayleigh criterion. The Lagrangian technique allows the viscous shear terms to be included, which is not possible with the first method. The same problem was then examined to see what effect shear has on stability.
Automated combinatorial method for fast and robust prediction of lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
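One commonly quoted empirical form of the Slack expression can be sketched as below. The prefactor and the choice between the full and the acoustic Debye temperature vary between conventions, which is precisely the ambiguity discussed above, so the constant, the units, and the silicon-like inputs used here are assumptions for illustration only and do not reproduce the authors' combinatorial implementation.

```python
# One commonly quoted form of the Slack expression (assumption: the exact
# prefactor and the convention for the Debye temperature differ between sources).
def slack_kappa(M_avg_amu, theta_K, delta_angstrom, gamma, n_atoms, T_K):
    """Rough lattice thermal conductivity estimate in W/(m K).

    M_avg_amu      -- average atomic mass (amu)
    theta_K        -- Debye temperature (K); some variants use the acoustic value
    delta_angstrom -- cube root of the volume per atom (angstrom)
    gamma          -- Grueneisen parameter
    n_atoms        -- atoms per primitive cell
    T_K            -- temperature (K)
    """
    # Often-used gamma-dependent prefactor; the exact value varies by convention.
    A = 2.43e-6 / (1.0 - 0.514 / gamma + 0.228 / gamma**2)
    return (A * M_avg_amu * delta_angstrom * theta_K**3
            / (gamma**2 * n_atoms**(2.0 / 3.0) * T_K))

# Rough illustration with approximate silicon-like inputs:
print(slack_kappa(M_avg_amu=28.1, theta_K=640.0, delta_angstrom=2.7,
                  gamma=1.0, n_atoms=2, T_K=300.0))
```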
The effect of workstation and task variables on forces applied during simulated meat cutting.
McGorry, Raymond W; Dempsey, Patrick G; O'Brien, Niall V
2004-12-01
The purpose of the study was to investigate factors related to force and postural exposure during a simulated meat cutting task. The hypothesis was that workstation, tool and task variables would affect the dependent kinetic variables of gripping force, cutting moment and the dependent kinematic variables of elbow elevation and wrist angular displacement in the flexion/extension and radial/ulnar deviation planes. To evaluate this hypothesis a 3 x 3 x 2 x 2 x 2 (surface orientation by surface height by blade angle by cut complexity by work pace) within-subject factorial design was conducted with 12 participants. The results indicated that the variables can act and interact to modify the kinematics and kinetics of a cutting task. Participants used greater grip force and cutting moment when working at a pace based on productivity. The interactions of the work surface height and orientation indicated that the use of an adjustable workstation could minimize wrist deviation from neutral and improve shoulder posture during cutting operations. Angling the knife blade also interacted with workstation variables to improve wrist and upper extremity posture, but this benefit must be weighed against the potential for small increases in force exposure.
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decisionmaking performance of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used to rate variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts such as decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variance of decision efficiency, crew coordination, and command reversal was in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal component, varimax factor analysis supported the model structure suggested by the regression analyses.
Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak
2016-01-01
The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....
Glosser, D.; Kutchko, B.; Benge, G.; ...
2016-03-21
Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the "atomization energy" imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is, by a significant margin, the highest-energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
Gaudette, Alexandra I; Thorarinsdottir, Agnes E; Harris, T David
2017-11-30
An Fe(II) complex that features a pH-dependent spin state population, by virtue of a variable ligand protonation state, is described. This behavior leads to a highly pH-dependent 19F NMR chemical shift with a sensitivity of 13.9(5) ppm per pH unit at 37 °C, thereby demonstrating the potential utility of the complex as a 19F chemical shift-based pH sensor.
NASA Astrophysics Data System (ADS)
Natali, Marco; Passeri, Daniele; Reggente, Melania; Tamburri, Emanuela; Terranova, Maria Letizia; Rossi, Marco
2016-06-01
Characterization of mechanical properties at the nanometer scale at variable temperature is one of the main challenges in the development of polymer-based nanocomposites for application in high temperature environments. Contact resonance atomic force microscopy (CR-AFM) is a powerful technique to characterize viscoelastic properties of materials at the nanoscale. In this work, we demonstrate the capability of CR-AFM to characterize viscoelastic properties (i.e., storage and loss moduli, as well as loss tangent) of polymer-based nanocomposites at variable temperature. CR-AFM is first illustrated on two polymeric reference samples, i.e., low-density polyethylene (LDPE) and polycarbonate (PC). Then, temperature-dependent viscoelastic properties (in terms of loss tangent) of a nanocomposite sample constituted by an epoxy resin reinforced with single-wall carbon nanotubes (SWCNTs) are investigated.
NASA Astrophysics Data System (ADS)
Wable, Pawan S.; Jha, Madan K.
2018-02-01
The effects of rainfall and the El Niño Southern Oscillation (ENSO) on groundwater in a semi-arid basin of India were analyzed using Archimedean copulas considering 17 years of data for monsoon rainfall, post-monsoon groundwater level (PMGL) and ENSO Index. The evaluated dependence among these hydro-climatic variables revealed that PMGL-Rainfall and PMGL-ENSO Index pairs have significant dependence. Hence, these pairs were used for modeling dependence by employing four types of Archimedean copulas: Ali-Mikhail-Haq, Clayton, Gumbel-Hougaard, and Frank. For the copula modeling, the results of probability distributions fitting to these hydro-climatic variables indicated that the PMGL and rainfall time series are best represented by Weibull and lognormal distributions, respectively, while the non-parametric kernel-based normal distribution is the most suitable for the ENSO Index. Further, the PMGL-Rainfall pair is best modeled by the Clayton copula, and the PMGL-ENSO Index pair is best modeled by the Frank copula. The Clayton copula-based conditional probability of PMGL being less than or equal to its average value at a given mean rainfall is above 70% for 33% of the study area. In contrast, the spatial variation of the Frank copula-based probability of PMGL being less than or equal to its average value is 35-40% in 23% of the study area during El Niño phase, while it is below 15% in 35% of the area during the La Niña phase. This copula-based methodology can be applied under data-scarce conditions for exploring the impacts of rainfall and ENSO on groundwater at basin scales.
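The kind of conditional statement reported above can be illustrated with the Clayton copula's closed-form conditional distribution. The dependence parameter and the uniform-scale inputs below are illustrative assumptions, not the fitted values from the study.

```python
# Sketch of a copula-based conditional probability using the Clayton copula,
# C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta), whose conditional CDF
# given U = u has the closed form implemented below. Values are illustrative.
def clayton_conditional(v, u, theta):
    """P(V <= v | U = u) for a Clayton copula with parameter theta > 0."""
    return (u ** (-(theta + 1.0))
            * (u ** (-theta) + v ** (-theta) - 1.0) ** (-(theta + 1.0) / theta))

# Probability that the groundwater variable is at or below its median (v = 0.5)
# given rainfall at its median (u = 0.5), for a moderately dependent copula:
theta = 2.0
print(clayton_conditional(v=0.5, u=0.5, theta=theta))
```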
NASA Astrophysics Data System (ADS)
Dieppois, B.; Pohl, B.; Eden, J.; Crétat, J.; Rouault, M.; Keenlyside, N.; New, M. G.
2017-12-01
The water management community has hitherto neglected or underestimated many of the uncertainties in climate impact scenarios, in particular uncertainties associated with decadal climate variability. Uncertainty in the state-of-the-art global climate models (GCMs) is time-scale-dependent, e.g. stronger at decadal than at interannual timescales, in response to the different parameterizations and to internal climate variability. In addition, non-stationarity in statistical downscaling is widely recognized as a key problem, in which the time-scale dependency of predictors plays an important role. As with global climate modelling, therefore, the selection of downscaling methods must proceed with caution to avoid unintended consequences of over-correcting the noise in GCMs (e.g. interpreting internal climate variability as a model bias). GCM outputs from the Coupled Model Intercomparison Project 5 (CMIP5) have therefore first been selected based on their ability to reproduce southern African summer rainfall variability and its teleconnections with Pacific sea-surface temperature across the dominant timescales. In observations, southern African summer rainfall has recently been shown to exhibit significant periodicities at the interannual (2-8 years), quasi-decadal (8-13 years) and inter-decadal (15-28 years) timescales, which can be interpreted as the signature of ENSO, the IPO, and the PDO over the region. Most of the CMIP5 GCMs underestimate southern African summer rainfall variability and its teleconnections with Pacific SSTs at these three timescales. In addition, according to a more in-depth analysis of historical and pi-control runs, this bias might result from internal climate variability in some of the CMIP5 GCMs, suggesting potential for bias-corrected prediction based on empirical statistical downscaling. A multi-timescale regression-based downscaling procedure, which determines the predictors across the different timescales, has thus been used to simulate southern African summer rainfall. This multi-timescale procedure shows much better skill in simulating decadal timescales of variability compared to commonly used statistical downscaling approaches.
Climate variability has a stabilizing effect on the coexistence of prairie grasses
Adler, Peter B.; HilleRisLambers, Janneke; Kyriakidis, Phaedon C.; Guan, Qingfeng; Levine, Jonathan M.
2006-01-01
How expected increases in climate variability will affect species diversity depends on the role of such variability in regulating the coexistence of competing species. Despite theory linking temporal environmental fluctuations with the maintenance of diversity, the importance of climate variability for stabilizing coexistence remains unknown because of a lack of appropriate long-term observations. Here, we analyze three decades of demographic data from a Kansas prairie to demonstrate that interannual climate variability promotes the coexistence of three common grass species. Specifically, we show that (i) the dynamics of the three species satisfy all requirements of “storage effect” theory based on recruitment variability with overlapping generations, (ii) climate variables are correlated with interannual variation in species performance, and (iii) temporal variability increases low-density growth rates, buffering these species against competitive exclusion. Given that environmental fluctuations are ubiquitous in natural systems, our results suggest that coexistence based on the storage effect may be underappreciated and could provide an important alternative to recent neutral theories of diversity. Field evidence for positive effects of variability on coexistence also emphasizes the need to consider changes in both climate means and variances when forecasting the effects of global change on species diversity. PMID:16908862
Ell, Shawn W; Cosley, Brandon; McCoy, Shannon K
2011-02-01
The way in which we respond to everyday stressors can have a profound impact on cognitive functioning. Maladaptive stress responses in particular are generally associated with impaired cognitive performance. We argue, however, that the cognitive system mediating task performance is also a critical determinant of the stress-cognition relationship. Consistent with this prediction, we observed that stress reactivity consistent with a maladaptive, threat response differentially predicted performance on two categorization tasks. Increased threat reactivity predicted enhanced performance on an information-integration task (i.e., learning is thought to depend upon a procedural-based memory system), and a (nonsignificant) trend for impaired performance on a rule-based task (i.e., learning is thought to depend upon a hypothesis-testing system). These data suggest that it is critical to consider both variability in the stress response and variability in the cognitive system mediating task performance in order to fully understand the stress-cognition relationship.
Granger Causality Testing with Intensive Longitudinal Data.
Molenaar, Peter C M
2018-06-01
The availability of intensive longitudinal data obtained by means of ambulatory assessment opens up new prospects for prevention research in that it allows the derivation of subject-specific dynamic networks of interacting variables by means of vector autoregressive (VAR) modeling. The dynamic networks thus obtained can be subjected to Granger causality testing in order to identify causal relations among the observed time-dependent variables. VARs have two equivalent representations: standard and structural. Results obtained with Granger causality testing depend upon which representation is chosen, yet no criteria exist on which this important choice can be based. A new equivalent representation is introduced called hybrid VARs with which the best representation can be chosen in a data-driven way. Partial directed coherence, a frequency-domain statistic for Granger causality testing, is shown to perform optimally when based on hybrid VARs. An application to real data is provided.
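A sketch of conventional Granger testing on a bivariate series is shown below using the standard VAR representation; the hybrid representation and the partial directed coherence statistic introduced above are not reproduced here. The simulated series and lag choice are illustrative assumptions.

```python
# Sketch of standard-form Granger causality testing on a bivariate series.
# Only the conventional VAR representation is illustrated here.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 300
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # x drives y with lag 1

data = np.column_stack([y, x])   # tests whether the second column Granger-causes the first
results = grangercausalitytests(data, maxlag=2)
print(results[1][0]["ssr_ftest"])  # F statistic, p-value and degrees of freedom at lag 1
```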
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions, with closer attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission; high spatial resolution (57 m) raster layers were then generated for these variables through spatial simulation or by using other raster data as proxies. On this basis, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual quality. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery in the visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed deliberately, and their influence mechanisms are reported herein.
Bühne, David; Alles, Torsten; Hetzel, Christian; Froböse, Ingo
2018-04-01
The aim of the study was to determine the ability of Functional Capacity Evaluation (FCE) to predict sustained return to work (RTW). A multicentric prospective cohort study was conducted in cooperation with 4 outpatient rehabilitation clinics. The sample consisted of 198 patients. Sustained RTW was defined as employment at 3-month follow-up combined with a low level of sick leave (dependent variable 1) or, respectively, with a moderate or better rating of current work ability with respect to the physical demands at work (dependent variable 2). Based on questionnaires and FCE information, logistic regression models were calculated to predict sustained RTW. The FCE information at discharge predicted sustained RTW after adjusting for assessors (odds ratio [OR] = 17.2 [95% CI: 6.2-57.8] and OR = 12.8 [95% CI: 5.1-32.1], respectively) as well as after adjusting for additional RTW predictors (OR = 14.6 [95% CI: 4.8-44.9] and OR = 10.1 [95% CI: 3.5-29.4], respectively). Concerning dependent variable 1 and the FCE information at admission, there was a gain of information over a model based on patient self-reports (OR = 2.6 [95% CI: 1.1-6.0]). The study supports the predictive validity of crude and adjusted FCE information. The gain of information over patient self-reports is unclear. © Georg Thieme Verlag KG Stuttgart · New York.
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
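The variance-to-mean power relationship can be illustrated by a simple log-log least-squares fit across segments of a synthetic spike train, as sketched below. This is not the paper's likelihood-based estimator; the gamma interspike-interval model and parameter values are assumptions for illustration.

```python
# Sketch: recover the scale factor and exponent of a variance-to-mean power
# relation, Var = phi * Mean**beta, by log-log least squares over segments of
# a synthetic spike train with gamma-distributed interspike intervals (ISIs).
import numpy as np

rng = np.random.default_rng(5)
phi_true, beta_true = 0.8, 1.5
means, variances = [], []
for _ in range(50):                       # 50 segments with different firing rates
    mu = rng.uniform(0.02, 0.5)           # mean ISI in seconds
    var = phi_true * mu ** beta_true      # target variance from the power law
    shape, scale = mu**2 / var, var / mu  # gamma ISIs with this mean and variance
    isi = rng.gamma(shape, scale, size=200)
    means.append(isi.mean())
    variances.append(isi.var())

beta_hat, log_phi_hat = np.polyfit(np.log(means), np.log(variances), 1)
print(beta_hat, np.exp(log_phi_hat))      # should be near 1.5 and 0.8
```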
A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region
Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun
2015-01-01
Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in supercooled liquid region with different deformation variables including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690
NASA Astrophysics Data System (ADS)
Balzarolo, M.; Vescovo, L.; Hammerle, A.; Gianelle, D.; Papale, D.; Tomelleri, E.; Wohlfahrt, G.
2015-05-01
In this paper we explore the skill of hyperspectral reflectance measurements and vegetation indices (VIs) derived from these in estimating carbon dioxide (CO2) fluxes of grasslands. Hyperspectral reflectance data, CO2 fluxes and biophysical parameters were measured at three grassland sites located in European mountain regions using standardized protocols. The relationships between CO2 fluxes, ecophysiological variables, traditional VIs and VIs derived using all two-band combinations of wavelengths available from the whole hyperspectral data space were analysed. We found that VIs derived from hyperspectral data generally explained a large fraction of the variability in the investigated dependent variables but differed in their ability to estimate midday and daily average CO2 fluxes and various derived ecophysiological parameters. Relationships between VIs and CO2 fluxes and ecophysiological parameters were site-specific, likely due to differences in soils, vegetation parameters and environmental conditions. Chlorophyll and water-content-related VIs explained the largest fraction of variability in most of the dependent variables. Band selection based on a combination of a genetic algorithm with random forests (GA-rF) confirmed that it is difficult to select a universal band region suitable across the investigated ecosystems. Our findings have major implications for upscaling terrestrial CO2 fluxes to larger regions and for remote- and proximal-sensing sampling and analysis strategies and call for more cross-site synthesis studies linking ground-based spectral reflectance with ecosystem-scale CO2 fluxes.
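The exhaustive two-band screening described above can be sketched by forming the normalized difference of every band pair and ranking pairs by squared correlation with the target flux. The synthetic reflectance data and band indices below are assumptions, not the measured grassland spectra.

```python
# Sketch of an exhaustive two-band search: form the normalized difference
# (b_i - b_j) / (b_i + b_j) for every band pair and rank pairs by squared
# correlation with the target variable. Data are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_bands = 60, 40
reflectance = rng.uniform(0.05, 0.6, size=(n_obs, n_bands))
flux = 2.0 * (reflectance[:, 30] - reflectance[:, 10]) + rng.normal(0, 0.1, n_obs)

best_pair, best_r2 = None, -1.0
for i, j in itertools.combinations(range(n_bands), 2):
    vi = (reflectance[:, i] - reflectance[:, j]) / (reflectance[:, i] + reflectance[:, j])
    r2 = np.corrcoef(vi, flux)[0, 1] ** 2
    if r2 > best_r2:
        best_pair, best_r2 = (i, j), r2

print(f"best band pair {best_pair}, r^2 = {best_r2:.2f}")
```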
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore Sudetic Monocline have assumed a constant, averaged content of the useful elements. The research presented in this article aims to verify the value of production from copper and silver ore, for the same economic background, using variable cash flows that result from the local variability of the useful elements. Furthermore, the economic model of the ore deposit is examined for significant differences between the model value estimated using a linear correlation between useful element content and mine face height, and an approach in which the correlation of model parameters is based on the copula best matching an information capacity criterion. The use of a copula allows the simulation to account for multivariable dependencies simultaneously, thereby reflecting the dependency structure better than a linear correlation does. The calculation results of the economic model used for deposit value estimation indicate that the copper-silver correlation estimated with a copula generates higher variation in the possible project value than modelling the correlation with a linear correlation coefficient. The average deposit value remains unchanged.
Surfing wave climate variability
NASA Astrophysics Data System (ADS)
Espejo, Antonio; Losada, Iñigo J.; Méndez, Fernando J.
2014-10-01
International surfing destinations are highly dependent on specific combinations of wind-wave formation, thermal conditions and local bathymetry. Surf quality depends on a vast number of geophysical variables, and analyses of surf quality require the consideration of the seasonal, interannual and long-term variability of surf conditions on a global scale. A multivariable standardized index based on expert judgment is proposed for this purpose. This index makes it possible to analyze surf conditions objectively over a global domain. A summary of global surf resources based on a new index integrating existing wave, wind, tides and sea surface temperature databases is presented. According to general atmospheric circulation and swell propagation patterns, results show that west-facing low to middle-latitude coasts are more suitable for surfing, especially those in the Southern Hemisphere. Month-to-month analysis reveals strong seasonal variations in the occurrence of surfable events, enhancing the frequency of such events in the North Atlantic and the North Pacific. Interannual variability was investigated by comparing occurrence values with global and regional modes of low-frequency climate variability such as El Niño and the North Atlantic Oscillation, revealing their strong influence at both the global and the regional scale. Results of the long-term trends demonstrate an increase in the probability of surfable events on west-facing coasts around the world in recent years. The resulting maps provide useful information for surfers, the surf tourism industry and surf-related coastal planners and stakeholders.
Context effects on second-language learning of tonal contrasts.
Chang, Charles B; Bowles, Anita R
2015-12-01
Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but dependent on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
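The Schur-product localization described above can be sketched for a single state variable on a one-dimensional grid using the commonly used Gaspari-Cohn taper. The multivariate localization strategies proposed in the paper are not reproduced here, and the ensemble size and length-scale values are illustrative assumptions.

```python
# Sketch of distance-dependent localization by a Schur (element-wise) product:
# the ensemble sample covariance is tapered with the compactly supported
# Gaspari-Cohn fifth-order function. Single state variable, 1-D grid.
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn correlation taper; identically zero beyond distance 2c."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    taper[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
                 - 5.0 / 3.0 * r[m1]**2 + 1.0)
    taper[m2] = (r[m2]**5 / 12.0 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
                 + 5.0 / 3.0 * r[m2]**2 - 5.0 * r[m2] + 4.0 - 2.0 / (3.0 * r[m2]))
    return taper

n_grid, n_ens = 40, 10
rng = np.random.default_rng(7)
ensemble = rng.normal(size=(n_ens, n_grid))              # stand-in background ensemble
sample_cov = np.cov(ensemble, rowvar=False)              # noisy for small ensembles

dist = np.abs(np.arange(n_grid)[:, None] - np.arange(n_grid)[None, :])
localized_cov = sample_cov * gaspari_cohn(dist, c=5.0)   # Schur product
print(localized_cov.shape)
```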
Zhang, Fan; Luo, Wensui; Parker, Jack C; Spalding, Brian P; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua
2008-11-01
Many geochemical reactions that control aqueous metal concentrations are directly affected by solution pH. However, changes in solution pH are strongly buffered by various aqueous phase and solid phase precipitation/dissolution and adsorption/desorption reactions. The ability to predict acid-base behavior of the soil-solution system is thus critical to predicting metal transport under variable pH conditions. This study was undertaken to develop a practical, generic geochemical modeling approach to predict aqueous and solid phase concentrations of metals and anions during conditions of acid or base additions. The method of Spalding and Spalding was utilized to model soil buffer capacity and pH-dependent cation exchange capacity by treating aquifer solids as a polyprotic acid. To simulate the dynamic and pH-dependent anion exchange capacity, the aquifer solids were simultaneously treated as a polyprotic base controlled by mineral precipitation/dissolution reactions. An equilibrium reaction model that describes aqueous complexation, precipitation, sorption and soil buffering with pH-dependent ion exchange was developed using HydroGeoChem v5.0 (HGC5). Comparison of model results with experimental titration data of pH, Al, Ca, Mg, Sr, Mn, Ni, Co, and SO4(2-) for contaminated sediments indicated close agreement, suggesting that the model could potentially be used to predict the acid-base behavior of the sediment-solution system under variable pH conditions.
Correlation-based regularization and gradient operators for (joint) inversion on unstructured meshes
NASA Astrophysics Data System (ADS)
Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan
2017-04-01
When working with unstructured meshes for geophysical inversions, special attention should be paid to the design of the operators that are used for regularizing the inverse problem and coupling of different property models in joint inversions. Regularization constraints for inversions on unstructured meshes are often defined in a rather ad-hoc manner and usually only involve the cell to which the operator is applied and its direct neighbours. Similarly, most structural coupling operators for joint inversion, such as the popular cross-gradients operator, are only defined in the direct neighbourhood of a cell. As a result, the regularization and coupling length scales and strength of these operators depend on the discretization as well as cell sizes and shape. Especially for unstructured meshes, where the cell sizes vary throughout the model domain, the dependency of the operator on the discretization may lead to artefacts. Designing operators that are based on a spatial correlation model allows one to define correlation length scales over which an operator acts (called the footprint), reducing the dependency on the discretization and the effects of variable cell sizes. Moreover, correlation-based operators can accommodate expected anisotropy by using different length scales in horizontal and vertical directions. Correlation-based regularization operators, also known as stochastic regularization operators, have already been successfully applied to inversions on regular grids. Here, we formulate stochastic operators for unstructured meshes and apply them in 2D surface and 3D cross-well electrical resistivity tomography data inversion examples of layered media. Especially for the synthetic cross-well example, improved inversion results are achieved when stochastic regularization is used instead of a classical smoothness constraint. For the case of cross-gradients operators for joint inversion, the correlation model is used to define the footprint of the operator and weigh the contributions of the property values that are used to calculate the cross-gradients. In a first series of synthetic-data tests, we examined the mesh dependency of the cross-gradients operators. Compared to operators that are only defined in the direct neighbourhood of a cell, the dependency on the cell size of the cross-gradients calculation is markedly reduced when using operators with larger footprints. A second test with synthetic models focussed on the effect of small-scale variabilities of the parameter value on the cross-gradients calculation. Small-scale variabilities that are superimposed on a global trend of the property value can potentially degrade the cross-gradients calculation and destabilize joint inversion. We observe that the cross-gradients from operators with footprints larger than the length scale of the variabilities are less affected compared to operators with a small footprint. In joint inversions on unstructured meshes, we thus expect the correlation-based coupling operators to ensure robust coupling on a physically meaningful scale.
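For readers unfamiliar with the cross-gradients function mentioned above, the short Python sketch below evaluates the classical cross-gradients measure (the cross product of the gradients of two property models) on a regular 2-D grid. It does not implement the footprint-based operators for unstructured meshes proposed by the authors, and the layered models are invented for illustration.

    import numpy as np

    def cross_gradients_2d(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradients function t = dm1/dz * dm2/dx - dm1/dx * dm2/dz on a regular
        2-D (x, z) grid. t tends to zero where the two models share the same structure."""
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dz * dm2_dx - dm1_dx * dm2_dz

    # Two layered models with the same interface geometry -> cross-gradients ~ 0.
    z = np.linspace(0.0, 10.0, 50)[:, None]
    resistivity = np.where(z < 5.0, 100.0, 10.0) * np.ones((50, 60))
    velocity = np.where(z < 5.0, 1500.0, 3000.0) * np.ones((50, 60))
    t = cross_gradients_2d(np.log(resistivity), velocity)
    print(float(np.abs(t).max()))   # ~0: the two models are structurally consistent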
Effect of suction-dependent soil deformability on landslide susceptibility maps
NASA Astrophysics Data System (ADS)
Lizarraga, Jose J.; Buscarnera, Giuseppe; Frattini, Paolo; Crosta, Giovanni B.
2016-04-01
This contribution presents a physically-based, spatially-distributed model for shallow landslides promoted by rainfall infiltration. The model features a set of Factor of Safety values aimed at capturing different failure mechanisms, namely frictional slips with limited mobility and flowslide events associated with the liquefaction of the considered soils. Indices of failure associated with these two modes of instability have been derived from unsaturated soil stability principles. In particular, the propensity to wetting-induced collapse of unsaturated soils is quantified through the introduction of a rigid-plastic model with suction-dependent yielding and strength properties. The model is combined with an analytical approach (TRIGRS) to track the spatio-temporal evolution of soil suction in slopes subjected to transient infiltration. The model has been tested by reproducing the triggering of shallow landslides in pyroclastic deposits in Sarno (1998, Campania Region, Southern Italy). It is shown that suction-dependent mechanical properties, such as soil deformability, have important effects on the predicted landslide susceptibility scenarios, resulting in computed unstable zones that may encompass a wide range of slope inclinations, saturation levels, and depths. Such preliminary results suggest that the proposed methodology offers an alternative mechanistic interpretation of the variability in behavior of rainfall-induced landslides. Unlike standard methods, the explanation of this variability is based on suction-dependent soil behavior characteristics.
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 C.
Real-time plasma control in a dual-frequency, confined plasma etcher
NASA Astrophysics Data System (ADS)
Milosavljević, V.; Ellingboe, A. R.; Gaman, C.; Ringwood, J. V.
2008-04-01
The physics issues of developing model-based control of plasma etching are presented. A novel methodology for incorporating real-time model-based control of plasma processing systems is developed. The methodology is developed for control of two dependent variables (ion flux and chemical densities) by two independent controls (27 MHz power and O2 flow). A phenomenological physics model of the nonlinear coupling between the independent controls and the dependent variables of the plasma is presented. By using a design of experiment, the functional dependencies of the response surface are determined. In conjunction with the physical model, the dependencies are used to deconvolve the sensor signals onto the control inputs, allowing compensation of the interaction between control paths. The compensated sensor signals and compensated set-points are then used as inputs to proportional-integral-derivative controllers to adjust radio frequency power and oxygen flow to yield the desired ion flux and chemical density. To illustrate the methodology, model-based real-time control is realized in a commercial semiconductor dielectric etch chamber. The dual-radio-frequency symmetric diode operates with typical commercial fluorocarbon feed-gas mixtures (Ar/O2/C4F8). Key parameters for dielectric etching are known to include ion flux to the surface and surface flux of oxygen-containing species. Control is demonstrated using diagnostics of electrode-surface ion current, and chemical densities of O, O2, and CO measured by optical emission spectrometry and/or mass spectrometry. Using our model-based real-time control, the set-point tracking accuracy to changes in chemical species density and ion flux is enhanced.
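As a toy illustration of the decoupling-plus-PID idea described above (derivative action omitted for brevity), the Python sketch below inverts a hypothetical 2 x 2 steady-state gain matrix to compensate the cross-coupling between two actuators and two outputs, then closes two independent PI loops. The gain matrix, controller gains, and first-order plant response are assumptions for illustration, not the chamber model identified in the paper.

    import numpy as np

    # Hypothetical steady-state gain matrix of the coupled plant:
    # rows = (ion flux, O density), columns = (27 MHz power, O2 flow).
    G = np.array([[1.0, 0.3],
                  [0.4, 1.0]])
    D = np.linalg.inv(G)            # static decoupler: compensates the cross-coupling

    def pi_controller(kp, ki, dt):
        integral = 0.0
        def step(error):
            nonlocal integral
            integral += error * dt
            return kp * error + ki * integral
        return step

    dt = 0.1
    controllers = [pi_controller(0.8, 0.5, dt), pi_controller(0.8, 0.5, dt)]
    setpoint = np.array([1.0, 0.5])            # desired (ion flux, O density), arbitrary units
    y = np.zeros(2)                            # measured outputs
    u = np.zeros(2)                            # actuator commands (power, O2 flow)

    for _ in range(200):                       # crude first-order plant response for illustration
        error = setpoint - y
        v = np.array([c(e) for c, e in zip(controllers, error)])
        u = D @ v                              # map decoupled control actions onto physical actuators
        y += dt * (G @ u - y)                  # toy plant dynamics: y' = G u - y
    print(np.round(y, 3))                      # approaches the setpoint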
ERIC Educational Resources Information Center
Yang, Ya-Ting C.; Newby, Timothy; Bill, Robert
2008-01-01
This experimental study investigated the effectiveness of structured Web-Based Bulletin Board (WBB) discussions in improving the critical thinking (CT) skills of learners involved in veterinary distance learning, as well as their attitudes toward learning via WBBs. The two dependent variables were learners' CT skills and their attitudes toward…
Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.
2014-01-01
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.
Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J
2017-12-01
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
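As a minimal illustration of the information measures underlying the approach above, the Python sketch below estimates the mutual information between the output of a discrete function of three binary variables and its inputs. The XOR example and the plug-in entropy estimator are illustrative choices, not the authors' full classification scheme.

    import numpy as np

    def entropy(values):
        """Shannon entropy (bits) of a sequence of discrete symbols."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def mutual_information(x, y):
        """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete sequences."""
        joint = [f"{a}|{b}" for a, b in zip(x, y)]
        return entropy(x) + entropy(y) - entropy(joint)

    # A discrete function of three binary variables: z = XOR(x1, x2), with x3 irrelevant.
    rng = np.random.default_rng(1)
    x1, x2, x3 = (rng.integers(0, 2, 5000) for _ in range(3))
    z = x1 ^ x2

    print(round(mutual_information(z, x1), 3))   # ~0: no pairwise dependence on x1 alone
    print(round(mutual_information(z, [f"{a}{b}" for a, b in zip(x1, x2)]), 3))  # ~1 bit jointly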
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vechernin, Vladimir
2016-01-22
The transverse momentum dependence of the yields of particles produced from the clusters of dense cold nuclear matter in nuclei is calculated in an approach based on perturbative QCD calculations of the corresponding quark diagrams near the thresholds. It is shown that the transverse momentum dependence of the pion and proton spectra at different values of the Feynman variable x in the cumulative region, x > 1, can be described by a single parameter, the constituent quark mass, taken to be equal to 300 MeV. It is found that the cumulative protons are formed predominantly via a coherent coalescence of three fast cluster quarks, whereas the production of cumulative pions is dominated by the hadronization of one fast cluster quark. This makes it possible to explain the experimentally observed slower increase of the mean transverse momentum of cumulative protons with the increase of the cumulative variable x, compared to pions.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Allmendinger, M.; Gnann, S.; Heisserer, T.; Bárdossy, A.
2017-12-01
The basic problem of geostatistics is to estimate the primary variable (e.g. groundwater quality, nitrate) at an unsampled location based on point measurements at locations in the vicinity. Typically, models are used that describe the spatial dependence based on the geometry of the observation network. This presentation demonstrates methods that take the following properties additionally into account: the statistical distribution of the measurements, a different degree of dependence in different quantiles, censored measurements, the composition of categorical additional information in the neighbourhood (exhaustive secondary information), and the spatial dependence of a dependent secondary variable, possibly measured with a different observation network (non-exhaustive secondary data). Two modelling approaches are demonstrated individually and combined: The non-stationarity in the marginal distribution is accounted for by locally mixed distribution functions that depend on the composition of the categorical variable in the neighbourhood of each interpolation location. This methodology is currently being implemented for operational use at the environmental state agency of Baden-Württemberg. An alternative to co-Kriging in copula space with an arbitrary number of secondary parameters is presented: The method performs better than traditional techniques if the primary variable is undersampled and does not produce erroneous negative estimates. Moreover, the quality of the uncertainty estimates is much improved. The worth of the secondary information is thoroughly evaluated. The improved geostatistical hydrogeological models are analyzed using measurements of a large observation network (~2500 measurement locations) in the state of Baden-Württemberg (~36,000 km2). Typical groundwater quality parameters such as nitrate, chloride, barium, atrazine, and desethylatrazine are assessed, cross-validated, and compared with traditional geostatistical methods. The secondary information of land use is available on a 30m x 30m raster. We show that the presented methods are not only better estimators (e.g. in the sense of an average quadratic error), but exhibit a much more realistic structure of the uncertainty and hence are improvements compared to existing methods.
Pacemaker Dependency after Cardiac Surgery: A Systematic Review of Current Evidence.
Steyers, Curtis M; Khera, Rohan; Bhave, Prashant
2015-01-01
Severe postoperative conduction disturbances requiring permanent pacemaker implantation frequently occur following cardiac surgery. Little is known about the long-term pacing requirements and risk factors for pacemaker dependency in this population. We performed a systematic review of the literature addressing rates and predictors of pacemaker dependency in patients requiring permanent pacemaker implantation after cardiac surgery. Using a comprehensive search of the Medline, Web of Science and EMBASE databases, studies were selected for review based on predetermined inclusion and exclusion criteria. A total of 8 studies addressing the endpoint of pacemaker-dependency were identified, while 3 studies were found that addressed the recovery of atrioventricular (AV) conduction endpoint. There were 10 unique studies with a total of 780 patients. Mean follow-up ranged from 6-72 months. Pacemaker dependency rates ranged from 32%-91% and recovery of AV conduction ranged from 16%-42%. There was significant heterogeneity with respect to the definition of pacemaker dependency. Several patient and procedure-specific variables were found to be independently associated with pacemaker dependency, but these were not consistent between studies. Pacemaker dependency following cardiac surgery occurs with variable frequency. While individual studies have identified various perioperative risk factors for pacemaker dependency and non-resolution of AV conduction disease, results have been inconsistent. Well-conducted studies using a uniform definition of pacemaker dependency might identify patients who will benefit most from early permanent pacemaker implantation after cardiac surgery.
Path Finding on High-Dimensional Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Díaz Leines, Grisell; Ensing, Bernd
2012-07-01
We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.
Comparison of correlated correlations.
Cohen, A
1989-12-01
We consider a problem where k highly correlated variables are available, each being a candidate for predicting a dependent variable. Only one of the k variables can be chosen as a predictor, and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24-hour proteinuria from the protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.
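A minimal sketch of the bootstrap idea described above, using simulated data rather than the proteinuria example: resample cases with replacement, recompute the two dependent correlations, and inspect the percentile interval of their difference. All variable names, sample sizes, and simulated correlations are illustrative assumptions.

    import numpy as np

    def bootstrap_corr_difference(y, x1, x2, n_boot=5000, seed=0):
        """Bootstrap the difference cor(y, x1) - cor(y, x2) when x1 and x2
        are correlated candidate predictors of the same dependent variable y."""
        rng = np.random.default_rng(seed)
        n = len(y)
        diffs = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)                    # resample cases with replacement
            r1 = np.corrcoef(y[idx], x1[idx])[0, 1]
            r2 = np.corrcoef(y[idx], x2[idx])[0, 1]
            diffs[b] = r1 - r2
        lo, hi = np.percentile(diffs, [2.5, 97.5])         # 95% percentile interval
        return diffs.mean(), (lo, hi)

    # Simulated example: two highly correlated candidate predictors of y.
    rng = np.random.default_rng(42)
    latent = rng.standard_normal(120)
    x1 = latent + 0.3 * rng.standard_normal(120)
    x2 = latent + 0.6 * rng.standard_normal(120)
    y = latent + 0.5 * rng.standard_normal(120)
    mean_diff, ci = bootstrap_corr_difference(y, x1, x2)
    print(round(mean_diff, 3), tuple(round(c, 3) for c in ci))  # an interval excluding 0 favours x1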
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation were modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation, 95 %, and 67 % confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
Rand, Miya K; Shimansky, Y P; Hossain, Abul B M I; Stelmach, George E
2010-11-01
Based on an assumption of movement control optimality in reach-to-grasp movements, we have recently developed a mathematical model of transport-aperture coordination (TAC), according to which the hand-target distance is a function of hand velocity and acceleration, aperture magnitude, and aperture velocity and acceleration (Rand et al. in Exp Brain Res 188:263-274, 2008). Reach-to-grasp movements were performed by young adults under four different reaching speeds and two different transport distances. The residual error magnitude of fitting the above model to data across different trials and subjects was minimal for the aperture-closure phase, but relatively much greater for the aperture-opening phase, indicating considerable difference in TAC variability between those phases. This study's goal is to identify the main reasons for that difference and obtain insights into the control strategy of reach-to-grasp movements. TAC variability within the aperture-opening phase of a single trial was found minimal, indicating that TAC variability between trials was not due to execution noise, but rather a result of inter-trial and inter-subject variability of motor plan. At the same time, the dependence of the extent of trial-to-trial variability of TAC in that phase on the speed of hand transport was sharply inconsistent with the concept of speed-accuracy trade-off: the lower the speed, the larger the variability. Conversely, the dependence of the extent of TAC variability in the aperture-closure phase on hand transport speed was consistent with that concept. Taking into account recent evidence that the cost of neural information processing is substantial for movement planning, the dependence of TAC variability in the aperture-opening phase on task performance conditions suggests that it is not the movement time that the CNS saves in that phase, but the cost of neuro-computational resources and metabolic energy required for TAC regulation in that phase. Thus, the CNS performs a trade-off between that cost and TAC regulation accuracy. It is further discussed that such trade-off is possible because, due to a special control law that governs optimal switching from aperture opening to aperture closure, the inter-trial variability of the end of aperture opening does not affect the high accuracy of TAC regulation in the subsequent aperture-closure phase.
Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie
2018-04-01
A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in the study of inconsistency. We then proposed a new concept of hydrological genes, originating from biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, including jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were jointly considered and the inheritance, variability and evolution principles were fully described. Our study would contribute to revealing the inheritance, variability and evolution principles in the probability distribution of hydrological elements.
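Of the moment families mentioned above, L-moments are the most standardized; the Python sketch below computes the first four sample L-moments via unbiased probability weighted moments for a synthetic annual-peak series. It illustrates only this one building block, not the authors' "hydrological gene" construction, and the Gumbel-distributed series is invented for the example.

    import numpy as np

    def sample_l_moments(x):
        """First four sample L-moments via unbiased probability weighted moments (PWMs)."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        j = np.arange(n)                                    # ranks 0..n-1 of the sorted sample
        b0 = x.mean()
        b1 = np.sum(j * x) / (n * (n - 1))
        b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
        b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
        l1 = b0
        l2 = 2 * b1 - b0
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3 / l2, l4 / l2                     # mean, L-scale, L-skewness, L-kurtosis

    rng = np.random.default_rng(7)
    annual_peaks = rng.gumbel(loc=100.0, scale=25.0, size=60)   # synthetic flood peak series
    print([round(v, 3) for v in sample_l_moments(annual_peaks)])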
Natural variability of marine ecosystems inferred from a coupled climate to ecosystem simulation
NASA Astrophysics Data System (ADS)
Le Mézo, Priscilla; Lefort, Stelly; Séférian, Roland; Aumont, Olivier; Maury, Olivier; Murtugudde, Raghu; Bopp, Laurent
2016-01-01
This modeling study analyzes the simulated natural variability of pelagic ecosystems in the North Atlantic and North Pacific. Our model system includes a global Earth System Model (IPSL-CM5A-LR), the biogeochemical model PISCES and the ecosystem model APECOSM that simulates upper trophic level organisms using a size-based approach and three interactive pelagic communities (epipelagic, migratory and mesopelagic). Analyzing an idealized (e.g., no anthropogenic forcing) 300-yr long pre-industrial simulation, we find that low and high frequency variability is dominant for the large and small organisms, respectively. Our model shows that the size-range exhibiting the largest variability at a given frequency, defined as the resonant range, also depends on the community. At a given frequency, the resonant range of the epipelagic community includes larger organisms than that of the migratory community and similarly, the latter includes larger organisms than the resonant range of the mesopelagic community. This study shows that the simulated temporal variability of marine pelagic organisms' abundance is not only influenced by natural climate fluctuations but also by the structure of the pelagic community. As a consequence, the size- and community-dependent response of marine ecosystems to climate variability could impact the sustainability of fisheries in a warming world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glosser, D.; Kutchko, B.; Benge, G.
Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the “atomization energy” imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is, by a significant margin, the highest-energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.
Developing a theoretical framework for complex community-based interventions.
Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana
2014-01-01
Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms by which the independent variables lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.
Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?
Delhey, Kaspar; Peters, Anne
2008-01-01
Background: Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits are more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales, precluding the use of coefficients of variation. Methodology/Principal Findings: Based on physiological models of avian colour vision, we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance: We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. Increased variability of known sexually-selected/quality-indicating colours in the studied species provides support to the predictions borne from sexual selection theory, but the lack of increased overall variability in males or dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
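For concreteness, the Python sketch below simulates right-censored time-to-event data with a nonlinear predictor effect and scores two risk scores with Harrell's concordance index, the metric referred to above. The data-generating model and the two scores are assumptions for illustration; no specific ML library or the particular Cox implementation used in the paper is reproduced here.

    import numpy as np

    def concordance_index(time, event, risk):
        """Harrell's C-index: fraction of comparable pairs in which the subject
        with higher predicted risk fails earlier. event = 1 if failure observed."""
        concordant, comparable = 0.0, 0
        n = len(time)
        for i in range(n):
            if event[i] != 1:
                continue
            for j in range(n):
                if time[j] > time[i]:                 # pair (i, j) is comparable
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    # Simulated survival data whose hazard depends nonlinearly on x.
    rng = np.random.default_rng(3)
    n = 400
    x = rng.uniform(-2, 2, n)
    true_risk = x ** 2                                # nonlinear effect on the log-hazard
    t_event = rng.exponential(1.0 / np.exp(true_risk))
    t_cens = rng.exponential(2.0, n)
    time = np.minimum(t_event, t_cens)
    event = (t_event <= t_cens).astype(int)

    print(round(concordance_index(time, event, x), 3))         # linear score: near 0.5
    print(round(concordance_index(time, event, x ** 2), 3))    # correctly specified score: higher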
NASA Astrophysics Data System (ADS)
Anwar, Faizan; Bárdossy, András; Seidel, Jochen
2017-04-01
Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given the relationship between them is monotonic). They remove the need to transform the data before use or to calculate functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step, the normal copula is used; subsequently, the necessity of using non-normal/non-symmetric dependence is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to its mixed distribution, precipitation has to be treated differently. This is done by assigning a discrete probability to the zeros and treating the rest as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also utilized to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula and checking if it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application for datasets with different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
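A simplified Python sketch of the normal-copula infilling and plausibility check described above, assuming NumPy and SciPy are available. The synthetic lognormal "discharge" data, the station layout, and the empirical back-transformation are illustrative assumptions, not the authors' implementation or their published GitHub code.

    import numpy as np
    from scipy import stats

    def to_normal_scores(x):
        """Rank-based transform of a sample to standard normal scores."""
        ranks = stats.rankdata(x)
        return stats.norm.ppf(ranks / (len(x) + 1.0))

    # Synthetic discharge at a target station and two neighbours (correlated, skewed marginals).
    rng = np.random.default_rng(11)
    z = rng.multivariate_normal(np.zeros(3), [[1, .8, .7], [.8, 1, .6], [.7, .6, 1]], size=500)
    q = np.exp(z)                                   # lognormal "discharge"; columns: target, n1, n2

    u = np.column_stack([to_normal_scores(q[:, k]) for k in range(3)])
    R = np.corrcoef(u, rowvar=False)                # correlation in normal-score (copula) space

    # Conditional distribution of the target score given the neighbours' scores on one day.
    day = 123
    r12, R22 = R[0, 1:], R[1:, 1:]
    w = np.linalg.solve(R22, r12)
    mu_c = w @ u[day, 1:]
    sd_c = np.sqrt(1.0 - r12 @ w)
    lo, hi = stats.norm.ppf([0.05, 0.95], loc=mu_c, scale=sd_c)

    # Back-transform via the empirical quantiles of the target station.
    def from_normal_score(s, sample):
        return np.quantile(sample, stats.norm.cdf(s))

    estimate = from_normal_score(mu_c, q[:, 0])
    band = (from_normal_score(lo, q[:, 0]), from_normal_score(hi, q[:, 0]))
    flagged = not (band[0] <= q[day, 0] <= band[1])   # flag the observation if outside the 5-95% band
    print(round(estimate, 2), tuple(round(b, 2) for b in band), flagged)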
Operant Variability and Voluntary Action
ERIC Educational Resources Information Center
Neuringer, Allen; Jensen, Greg
2010-01-01
A behavior-based theory identified 2 characteristics of voluntary acts. The first, extensively explored in operant-conditioning experiments, is that voluntary responses produce the reinforcers that control them. This bidirectional relationship--in which reinforcer depends on response and response on reinforcer--demonstrates the functional nature…
A hierarchy of generalized Jaulent-Miodek equations and their explicit solutions
NASA Astrophysics Data System (ADS)
Geng, Xianguo; Guan, Liang; Xue, Bo
A hierarchy of generalized Jaulent-Miodek (JM) equations related to a new spectral problem with energy-dependent potentials is proposed. With the help of the Lax matrix and elliptic variables, the generalized JM hierarchy is decomposed into two systems of solvable ordinary differential equations. Explicit theta function representations of the meromorphic function and the Baker-Akhiezer function are constructed, and the solutions of the hierarchy are obtained based on the theory of algebraic curves.
Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice
ERIC Educational Resources Information Center
Rapp, Kelly E.
2012-01-01
The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…
Huff, Mark J.; Bodner, Glen E.
2014-01-01
Whether encoding variability facilitates memory is shown to depend on whether item-specific and relational processing are both performed across study blocks, and whether study items are weakly versus strongly related. Variable-processing groups studied a word list once using an item-specific task and once using a relational task. Variable-task groups’ two different study tasks recruited the same type of processing each block. Repeated-task groups performed the same study task each block. Recall and recognition were greatest in the variable-processing group, but only with weakly related lists. A variable-processing benefit was also found when task-based processing and list-type processing were complementary (e.g., item-specific processing of a related list) rather than redundant (e.g., relational processing of a related list). That performing both item-specific and relational processing across trials, or within a trial, yields encoding-variability benefits may help reconcile decades of contradictory findings in this area. PMID:25018583
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
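As a small, hedged illustration of the class of models discussed above, the Python sketch below simulates a leaky integrate-and-fire neuron driven by Poisson excitatory and inhibitory inputs and reports the coefficient of variation (CV) of its interspike intervals. It uses ordinary Poisson inputs and made-up parameters, so it reproduces neither the fractional-Gaussian-noise-driven inputs nor the long-range-dependence analysis of the study.

    import numpy as np

    def lif_cv(rate_exc=5500.0, rate_inh=4500.0, w_exc=0.5, w_inh=-0.5,
               tau_m=20e-3, v_thresh=15.0, v_reset=0.0, dt=1e-4, t_sim=20.0, seed=0):
        """Simulate a leaky integrate-and-fire neuron with Poisson inputs and
        return the CV of its interspike intervals plus the spike count."""
        rng = np.random.default_rng(seed)
        steps = int(t_sim / dt)
        v = v_reset
        spike_times = []
        for k in range(steps):
            n_e = rng.poisson(rate_exc * dt)              # excitatory input events this step
            n_i = rng.poisson(rate_inh * dt)              # inhibitory input events this step
            v += -v * dt / tau_m + w_exc * n_e + w_inh * n_i
            if v >= v_thresh:
                spike_times.append(k * dt)
                v = v_reset
        isi = np.diff(spike_times)
        return isi.std() / isi.mean(), len(spike_times)

    cv, n_spikes = lif_cv()
    # CV well above zero indicates irregular, fluctuation-driven firing rather than a clock-like output.
    print(round(cv, 2), n_spikes)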
Valence-Dependent Belief Updating: Computational Validation
Kuzmanovic, Bojana; Rigoux, Lionel
2017-01-01
People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
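A toy Python sketch of the reinforcement-learning-style account described above: belief updates are scaled by a larger learning rate when the presented base rate is better (lower) than the prior self-risk estimate than when it is worse. The learning rates, priors, and base rates are invented for illustration and do not correspond to the fitted models or data of the study.

    import numpy as np

    def simulate_updates(base_rates, initial_beliefs, alpha_good=0.6, alpha_bad=0.3):
        """Asymmetric (valence-dependent) belief updating for adverse events:
        news is 'good' when the presented base rate is lower than the prior self-risk."""
        beliefs = np.array(initial_beliefs, dtype=float)
        for trial, base in enumerate(base_rates):
            error = base - beliefs[trial]                   # estimation error for this event
            alpha = alpha_good if error < 0 else alpha_bad  # lower-than-expected risk = good news
            beliefs[trial] += alpha * error                 # scaled update toward the evidence
        return beliefs

    rng = np.random.default_rng(5)
    priors = rng.uniform(10, 60, 80)                        # prior self-risk estimates (%) for 80 events
    base_rates = rng.uniform(5, 70, 80)                     # presented base rates (%)
    posteriors = simulate_updates(base_rates, priors)

    good = base_rates < priors
    updates = np.abs(posteriors - priors)
    print(round(updates[good].mean(), 2), round(updates[~good].mean(), 2))  # larger mean update for good news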
Various Approaches for Targeting Quasar Candidates
NASA Astrophysics Data System (ADS)
Zhang, Y.; Zhao, Y.
2015-09-01
With the establishment and development of space-based and ground-based observational facilities, improving the scientific output of these high-cost facilities remains a pressing issue for astronomers. The discovery of new and rare quasars attracts much attention. A variety of methods for selecting quasar candidates have been developed. Among them, some are based on color cuts, some draw on multiwavelength data, some rely on quasar variability, some are based on data mining, and some depend on ensemble methods.
A Multivariate Statistical Model Describing the Compound Nature of Soil Moisture Drought
NASA Astrophysics Data System (ADS)
Manning, Colin; Widmann, Martin; Bevacqua, Emanuele; Maraun, Douglas; Van Loon, Anne; Vrac, Mathieu
2017-04-01
Soil moisture in Europe acts to partition incoming energy into sensible and latent heat fluxes, thereby exerting a large influence on temperature variability. Soil moisture is predominantly controlled by precipitation and evapotranspiration. When these meteorological variables are accumulated over different timescales, their joint multivariate distribution and dependence structure can be used to provide information on soil moisture. We therefore consider soil moisture drought as a compound event of meteorological drought (deficits of precipitation) and heat waves, or more specifically, periods of high Potential Evapotranspiration (PET). We present here a statistical model of soil moisture based on Pair Copula Constructions (PCC) that can describe the dependence amongst soil moisture and its contributing meteorological variables. The model is designed in such a way that it can account for concurrences of meteorological drought and heat waves and describe the dependence between these conditions at a local level. The model is composed of four variables: daily soil moisture (h); a short-term and a long-term accumulated precipitation variable (Y1 and Y2) that account for the propagation of meteorological drought to soil moisture drought; and accumulated PET (Y3), calculated using the Penman-Monteith equation, which can represent the effect of a heat wave on soil conditions. Copulas are multivariate distribution functions that allow one to model the dependence structure of given variables separately from their marginal behaviour. PCCs then allow, in theory, for the formulation of a multivariate distribution of any dimension, where the multivariate distribution is decomposed into a product of marginal probability density functions and two-dimensional copulas, of which some are conditional. We apply PCC here in such a way that allows us to provide estimates of h and their uncertainty through conditioning on the Y variables, in the form h = h|y1, y2, y3 (Eq. 1). Applying the model to various Fluxnet sites across Europe, we find the model has good skill and captures periods of low soil moisture particularly well. We illustrate the relevance of the dependence structure of these Y variables to soil moisture and show how it may be generalised to offer information on soil moisture on a widespread scale where few observations of soil moisture exist. We then present results from a validation study of a selection of EURO-CORDEX climate models, where we demonstrate the skill of these models in representing these dependencies and so offer insight into the skill seen in the representation of soil moisture in these models.
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
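As a minimal illustration of the first step described above, the Python sketch below estimates pairwise mutual information between variables after equal-frequency discretization and contrasts it with linear correlation for a nonlinear dependence. The Dirichlet-process clustering and group elastic-net stages are not reproduced, and the binning scheme is an assumption for the example.

    import numpy as np

    def discretize(x, bins=10):
        """Equal-frequency binning of a continuous variable into integer codes."""
        edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(x, edges)

    def mutual_info(x, y, bins=10):
        """Plug-in estimate of I(X;Y) in nats from a 2-D histogram of bin codes."""
        xd, yd = discretize(x, bins), discretize(y, bins)
        joint = np.histogram2d(xd, yd, bins=bins)[0]
        p_xy = joint / joint.sum()
        p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
        nz = p_xy > 0
        return float((p_xy[nz] * np.log(p_xy[nz] / np.outer(p_x, p_y)[nz])).sum())

    # Three variables: x2 is a nonlinear (quadratic) function of x1, x3 is independent.
    rng = np.random.default_rng(2)
    x1 = rng.standard_normal(3000)
    x2 = x1 ** 2 + 0.1 * rng.standard_normal(3000)
    x3 = rng.standard_normal(3000)

    print(round(abs(np.corrcoef(x1, x2)[0, 1]), 3))   # linear correlation misses the dependence (~0)
    print(round(mutual_info(x1, x2), 3))              # mutual information detects it (> 0)
    print(round(mutual_info(x1, x3), 3))              # ~0 for truly independent variables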
Information Leakage Analysis by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Zanioli, Matteo; Cortesi, Agostino
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will not be leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variables' value analysis, based on polyhedra. The resulting analysis is strictly more accurate than state-of-the-art abstract-interpretation-based analyses for information leakage detection. Its modular construction allows one to deal with the trade-off between efficiency and accuracy by tuning the granularity of the abstraction and the complexity of the abstract operators.
Re-construction of action awareness depends on an internal model of action-outcome timing.
Stenner, Max-Philipp; Bauer, Markus; Machts, Judith; Heinze, Hans-Jochen; Haggard, Patrick; Dolan, Raymond J
2014-04-01
The subjective time of an instrumental action is shifted towards its outcome. This temporal binding effect is partially retrospective, i.e., occurs upon outcome perception. Retrospective binding is thought to reflect post-hoc inference on agency based on sensory evidence of the action - outcome association. However, many previous binding paradigms cannot exclude the possibility that retrospective binding results from bottom-up interference of sensory outcome processing with action awareness and is functionally unrelated to the processing of the action - outcome association. Here, we keep bottom-up interference constant and use a contextual manipulation instead. We demonstrate a shift of subjective action time by its outcome in a context of variable outcome timing. Crucially, this shift is absent when there is no such variability. Thus, retrospective action binding reflects a context-dependent, model-based phenomenon. Such top-down re-construction of action awareness seems to bias agency attribution when outcome predictability is low. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Entropy information of heart rate variability and its power spectrum during day and night
NASA Astrophysics Data System (ADS)
Jin, Li; Jun, Wang
2013-07-01
Physiologic systems generate complex fluctuations in their output signals that reflect the underlying dynamics. We employed the base-scale entropy method and power spectral analysis to study 24-hour heart rate variability (HRV) signals. The results show that profound circadian-, age- and pathology-dependent changes are accompanied by changes in base-scale entropy and power spectral distribution. Moreover, the base-scale entropy changes reflect the corresponding changes in autonomic nerve outflow. With the suppression of vagal tone and the dominance of sympathetic tone in congestive heart failure (CHF) subjects, there is more variability in the data fluctuation mode, so CHF subjects show the higher base-scale entropy. With the decrease of sympathetic tone and respiratory sinus arrhythmia (RSA) becoming more pronounced with slower breathing during sleep, the base-scale entropy drops in CHF subjects. The HRV series of the two healthy groups have the same diurnal/nocturnal trend as the CHF series. The fluctuation dynamics of the data in the three groups can be described as an "HF effect".
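As a minimal sketch of the power-spectral side of such an HRV analysis (the base-scale entropy step is not reproduced), the snippet below estimates the spectrum of a synthetic, evenly resampled RR-interval series with Welch's method and integrates the conventional LF and HF bands; all values are hypothetical.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical RR-interval series (seconds), already resampled to 4 Hz
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
rr = (0.8
      + 0.03 * np.sin(2 * np.pi * 0.10 * t)     # low-frequency oscillation
      + 0.02 * np.sin(2 * np.pi * 0.25 * t)     # respiratory (high-frequency) oscillation
      + 0.005 * np.random.default_rng(0).normal(size=t.size))

f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=256)

# Integrate power in the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f <= 0.40)
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print(f"LF power {lf:.2e}, HF power {hf:.2e}, LF/HF {lf / hf:.2f}")
```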
Puig, Laura; Bragulat, M Rosa; Castellá, Gemma; Cabañes, F Javier
2017-01-01
The genus Malassezia includes lipophilic yeasts, which are part of the skin microbiota of various mammals and birds. Unlike the rest of Malassezia species, M. pachydermatis is described as non-lipid-dependent, as it is able to grow on Sabouraud glucose agar (SGA) without lipid supplementation. In this study we have examined the phenotypic variability within M. pachydermatis and confirmed its lipid-dependent nature using a synthetic agar medium. We used a selection of representative non-lipid-dependent strains from different animal species and three atypical lipid-dependent strains of this species, which were not able to grow after multiple passages on SGA. More than 400 lipid-dependent Malassezia isolates from animals were studied in order to detect the three lipid-dependent strains of M. pachydermatis. The identity of the atypical strains was confirmed by DNA sequencing. On the other hand, we have modified the Tween diffusion test, which is widely used in the characterization of these yeasts, by using a synthetic agar-based medium instead of SGA. This modification has proved to be useful for differentiation of M. pachydermatis strains, providing reproducible results and a straightforward interpretation. The finding of these peculiar lipid-dependent strains exemplifies the large variability within the species M. pachydermatis, which involves rare atypical strains with particular growth requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fan; Parker, Jack C.; Luo, Wensui
2008-01-01
Many geochemical reactions that control aqueous metal concentrations are directly affected by solution pH. However, changes in solution pH are strongly buffered by various aqueous phase and solid phase precipitation/dissolution and adsorption/desorption reactions. The ability to predict acid-base behavior of the soil-solution system is thus critical to predict metal transport under variable pH conditions. This study was undertaken to develop a practical generic geochemical modeling approach to predict aqueous and solid phase concentrations of metals and anions during conditions of acid or base additions. The method of Spalding and Spalding was utilized to model soil buffer capacity and pH-dependent cation exchange capacity by treating aquifer solids as a polyprotic acid. To simulate the dynamic and pH-dependent anion exchange capacity, the aquifer solids were simultaneously treated as a polyprotic base controlled by mineral precipitation/dissolution reactions. An equilibrium reaction model that describes aqueous complexation, precipitation, sorption and soil buffering with pH-dependent ion exchange was developed using HydroGeoChem v5.0 (HGC5). Comparison of model results with experimental titration data of pH, Al, Ca, Mg, Sr, Mn, Ni, Co, and SO₄²⁻ for contaminated sediments indicated close agreement, suggesting that the model could potentially be used to predict the acid-base behavior of the sediment-solution system under variable pH conditions.
Gate sequence for continuous variable one-way quantum computation
Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi
2013-01-01
Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to the absence of cluster states larger than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of the two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.
Pacemaker Dependency after Cardiac Surgery: A Systematic Review of Current Evidence
2015-01-01
Background Severe postoperative conduction disturbances requiring permanent pacemaker implantation frequently occur following cardiac surgery. Little is known about the long-term pacing requirements and risk factors for pacemaker dependency in this population. Methods We performed a systematic review of the literature addressing rates and predictors of pacemaker dependency in patients requiring permanent pacemaker implantation after cardiac surgery. Using a comprehensive search of the Medline, Web of Science and EMBASE databases, studies were selected for review based on predetermined inclusion and exclusion criteria. Results A total of 8 studies addressing the endpoint of pacemaker-dependency were identified, while 3 studies were found that addressed the recovery of atrioventricular (AV) conduction endpoint. There were 10 unique studies with a total of 780 patients. Mean follow-up ranged from 6–72 months. Pacemaker dependency rates ranged from 32%-91% and recovery of AV conduction ranged from 16%-42%. There was significant heterogeneity with respect to the definition of pacemaker dependency. Several patient and procedure-specific variables were found to be independently associated with pacemaker dependency, but these were not consistent between studies. Conclusions Pacemaker dependency following cardiac surgery occurs with variable frequency. While individual studies have identified various perioperative risk factors for pacemaker dependency and non-resolution of AV conduction disease, results have been inconsistent. Well-conducted studies using a uniform definition of pacemaker dependency might identify patients who will benefit most from early permanent pacemaker implantation after cardiac surgery. PMID:26470027
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically suited to handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging properties. Based on the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are addressed.
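For orientation, the inactivity time of a lifetime variable X at inspection time t, and its mean, are conventionally defined as follows (this is the standard reliability definition, not the paper's specific averaged version):

```latex
\mathrm{IT}(t) = \bigl( t - X \mid X \le t \bigr), \qquad
m(t) = \mathbb{E}\!\left[\, t - X \mid X \le t \,\right]
     = \frac{\int_{0}^{t} F(x)\,dx}{F(t)},
```

where F denotes the distribution function of X.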
Solvency supervision based on a total balance sheet approach
NASA Astrophysics Data System (ADS)
Pitselis, Georgios
2009-11-01
In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which describes a systematic relation between a dependent variable (such as own funds) and independent variables (e.g., financial characteristics such as assets, provisions, etc.). The two methods are implemented with two data sets.
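A minimal sketch of the mixed-effects part of such an analysis is given below, using statsmodels with a random intercept per company as the "internal industry equation"; the variable names (own_funds, assets, provisions, company) and all numbers are hypothetical stand-ins for the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel of insurers: own funds explained by balance-sheet variables,
# with a random intercept per company.
rng = np.random.default_rng(1)
n_comp, n_years = 30, 8
df = pd.DataFrame({
    "company": np.repeat(np.arange(n_comp), n_years),
    "assets": rng.lognormal(4, 0.5, n_comp * n_years),
    "provisions": rng.lognormal(3, 0.5, n_comp * n_years),
})
firm_effect = rng.normal(0, 5, n_comp)[df["company"]]
df["own_funds"] = (2 + 0.3 * df["assets"] + 0.1 * df["provisions"]
                   + firm_effect + rng.normal(0, 2, len(df)))

model = smf.mixedlm("own_funds ~ assets + provisions", df, groups=df["company"])
print(model.fit().summary())
```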
Personality Subtypes of Suicidal Adults
Westen, Drew; Bradley, Rebekah
2009-01-01
Research into personality factors related to suicidality suggests substantial variability among suicide attempters. A potentially useful approach that accounts for this complexity is personality subtyping. As part of a large sample looking at personality pathology, this study used Q-factor analysis to identify subtypes of 311 adult suicide attempters using SWAP-II personality profiles. Identified subtypes included Internalizing, Emotionally Dysregulated, Dependent, Hostile-Isolated, Psychopathic, and Anxious-Somatizing. Subtypes differed in hypothesized ways on criterion variables that address their construct validity, including adaptive functioning, Axis I and II comorbidity, and etiology-related variables (e.g., history of abuse). Furthermore, dimensional ratings of the subtypes predicted adaptive functioning above DSM-based diagnoses and symptoms. PMID:19752649
Analysis of the labor productivity of enterprises via quantile regression
NASA Astrophysics Data System (ADS)
Türkan, Semra
2017-07-01
In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. The labor productivity of the enterprises is taken as the dependent variable, and the variable about assets is taken as the independent variable. The distribution of labor productivity across enterprises is right-skewed. When the dependent distribution is skewed, linear regression cannot capture important aspects of the relationship between the dependent variable and its predictors, because it models only the conditional mean. Hence, quantile regression, which allows modeling any quantile of the dependent distribution, including the median, appears to be useful. It examines whether relationships between dependent and independent variables differ for low, medium, and high percentiles. As a result of analyzing the data, the effect of total assets is relatively constant over the entire distribution, except in the upper tail, where it has a moderately stronger effect.
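A minimal sketch of such a quantile-regression analysis is shown below, assuming statsmodels; the simulated right-skewed productivity data and variable names are hypothetical stand-ins for the Top 500 data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical right-skewed labor-productivity data
rng = np.random.default_rng(2)
n = 500
assets = rng.lognormal(5, 1, n)
labor_productivity = 0.5 * assets + rng.lognormal(3, 0.8, n)
df = pd.DataFrame({"labor_productivity": labor_productivity, "assets": assets})

# Fit several conditional quantiles instead of the conditional mean
mod = smf.quantreg("labor_productivity ~ assets", df)
for q in (0.10, 0.50, 0.90):
    res = mod.fit(q=q)
    print(f"q={q:.2f}: slope on assets = {res.params['assets']:.3f}")
```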
NGA-West 2 Equations for predicting PGA, PGV, and 5%-Damped PSA for shallow crustal earthquakes
Boore, David M.; Stewart, Jon P.; Seyhan, Emel; Atkinson, Gail M.
2013-01-01
We provide ground-motion prediction equations for computing medians and standard deviations of average horizontal component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. We derived equations for the primary M- and distance-dependence of the IMs after fixing the VS30-based nonlinear site term from a parallel NGA-West 2 study. We then evaluated additional effects using mixed effects residuals analysis, which revealed no trends with source depth over the M range of interest, indistinct Class 1 and 2 event IMs, and basin depth effects that increase and decrease long-period IMs for depths larger and smaller, respectively, than means from regional VS30-depth relations. Our aleatory variability model captures decreasing between-event variability with M, as well as within-event variability that increases or decreases with M depending on period, increases with distance, and decreases for soft sites.
Systems effects on family planning innovativeness.
Lee, S B
1983-12-01
Data from Korea were used to explore the importance of community level variables in explaining family planning adoption at the individual level. An open system concept was applied, assuming that individual family planning behavior is influenced by both environmental and individual factors. The environmental factors were measured at the village level and designated as community characteristics. The dimension of communication network variables was introduced. Each individual was characterized in terms of the degree of her involvement in family planning communication with others in her village. It was assumed that the nature of the communication network linking individuals with each other affects family planning adoption at the individual level. Specific objectives were to determine 1) the relative importance of the specific independent variables in explaining family planning adoption and 2) the relative importance of the community level variables in comparison with the individual level variables in explaining family planning adoption at the individual level. The data were originally gathered in a 1973 research project on Korea's mothers' clubs. 1047 respondents were interviewed, comprising all married women in 25 sample villages having mothers' clubs. The dependent variable was family planning adoption behavior, defined as current use of any of the modern methods of family planning. The independent variables were defined at 3 levels: individual, community, and a level intermediate between them involving communication links between individuals. More of the individual level independent variables were significantly correlated with the dependent variable than the community level variables. Among those variables with statistically significant correlations, the correlation coefficients were consistently higher for the individual level than for the community level variables. More of the variance in the dependent variable was explained by individual level than by community level variables. Community level variables accounted for only about 2.5% of the total variance in the dependent variable, in marked contrast to individual level variables, which accounted for as much as 19% of the total variance. When both individual and community level variables were entered into a multiple correlation analysis, a multiple correlation coefficient of .4714 was obtained; together they explained about 20% of the total variance. The 2 communication network variables--connectedness and integrativeness--were correlated with the dependent variable at much higher levels than most of the individual or community level variables. Connectedness accounted for the greatest amount of the total variance. The communication network variables as a group explained as much of the total variance in the dependent variable as the individual level variables and far more than the community level variables.
A unified dislocation density-dependent physical-based constitutive model for cold metal forming
NASA Astrophysics Data System (ADS)
Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.
2017-10-01
Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
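For orientation, dislocation-density-based models of this kind typically build on a storage-recovery evolution law of Kocks-Mecking type together with a Taylor-type flow stress; the paper's exact formulation may differ, so the following is only a generic, commonly cited form:

```latex
\frac{d\rho}{d\varepsilon^{p}} = k_{1}\sqrt{\rho} - k_{2}\rho,
\qquad
\sigma = \sigma_{0} + \alpha M \mu b \sqrt{\rho},
```

where k₁ governs dislocation storage, k₂ dynamic recovery, M is the Taylor factor, μ the shear modulus, b the Burgers vector, and α a constant of order 0.3.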
Schwartz, Stephen E; Harshvardhan; Benkovitz, Carmen M
2002-02-19
The Twomey effect of enhanced cloud droplet concentration, optical depth, and albedo caused by anthropogenic aerosols is thought to contribute substantially to radiative forcing of climate change over the industrial period. However, present model-based estimates of this indirect forcing are highly uncertain. Satellite-based measurements would provide global or near-global coverage of this effect, but previous efforts to identify and quantify enhancement of cloud albedo caused by anthropogenic aerosols in satellite observations have been limited, largely because of strong dependence of albedo on cloud liquid water path (LWP), which is inherently highly variable. Here we examine satellite-derived cloud radiative properties over two 1-week episodes for which a chemical transport and transformation model indicates substantial influx of sulfate aerosol from industrial regions of Europe or North America to remote areas of the North Atlantic. Despite absence of discernible dependence of optical depth or albedo on modeled sulfate loading, examination of the dependence of these quantities on LWP readily permits detection and quantification of increases correlated with sulfate loading, which are otherwise masked by variability of LWP, demonstrating brightening of clouds because of the Twomey effect on a synoptic scale. Median cloud-top spherical albedo was enhanced over these episodes, relative to the unperturbed base case for the same LWP distribution, by 0.02 to 0.15.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible in a practical data assimilation application leads to sampling variability in the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is localization of the covariance estimates. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by discretizing a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in assimilation experiments with simulated observations in a bivariate Lorenz 95 model.
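A minimal sketch of the Schur-product localization step for a single state variable is given below; the Gaussian-shaped taper is a simple stand-in for the Gaspari-Cohn function commonly used in practice, and the multivariate constructions proposed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens, L = 40, 10, 5.0   # grid points, ensemble size, localization length

# Small ensemble -> noisy sample covariance with spurious long-range correlations
ensemble = rng.normal(size=(n_ens, n_state))
anomalies = ensemble - ensemble.mean(axis=0)
P_sample = anomalies.T @ anomalies / (n_ens - 1)

# Distance-dependent localization matrix on a periodic domain, applied element-wise
i = np.arange(n_state)
d = np.abs(i[:, None] - i[None, :])
d = np.minimum(d, n_state - d)                 # periodic distance
C_loc = np.exp(-0.5 * (d / L) ** 2)            # Gaussian-shaped taper (stand-in)
P_localized = P_sample * C_loc                 # Schur (element-wise) product

# Far-apart covariances are damped toward zero by the taper
print(abs(P_sample[0, 20]), abs(P_localized[0, 20]))
```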
NASA Astrophysics Data System (ADS)
Xie, Bin; Zhang, Jingjing; Chen, Wei; Hao, Junjie; Cheng, Yanhua; Hu, Run; Wu, Dan; Wang, Kai; Luo, Xiaobing
2017-10-01
Human comfort has become one of the most important criteria in modern lighting architecture. Here, we propose a tuning strategy to enhance the non-image-forming photobiological effect on the human circadian rhythm based on quantum-dots-converted white light-emitting diodes (QDs-WLEDs). We introduce the limiting variability of the circadian action factor (CAF), defined as the ratio of circadian efficiency to luminous efficiency of radiation. The CAF is discussed in depth and found to depend on the constraint imposed by the color rendering index (CRI) and on the correlated color temperature. The maximum CAF variability of QDs-WLEDs is found to depend on the QDs' peak wavelength and full width at half maximum. With the optimized parameters, the packaging materials were synthesized and WLEDs were packaged. Experimental results show that at CRI > 90, the maximum CAF variability can be tuned by a factor of 3.83 (from 0.251 at 2700 K to 0.961 at 6500 K), which implies that our approach could reduce the number of tunable channels and achieve wider CAF variability.
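As defined above, the CAF is the ratio of circadian to luminous efficiency of the radiation; in spectral terms it is commonly written as follows (notation assumed: S(λ) is the source spectrum, c(λ) a circadian sensitivity function, V(λ) the photopic luminosity function):

```latex
\mathrm{CAF} \;=\; \frac{\int c(\lambda)\, S(\lambda)\, d\lambda}
                        {\int V(\lambda)\, S(\lambda)\, d\lambda} .
```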
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
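As a toy illustration of the kind of variability study described, the sketch below propagates random inputs through a closed-form cantilever tip-deflection formula standing in for the NESSUS finite-element model; Poisson's ratio does not enter this simplified formula, and all parameter values are hypothetical rather than SSME blade data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# Hypothetical random inputs (means and spreads are illustrative only)
E = rng.normal(200e9, 10e9, n)        # modulus of elasticity [Pa]
t = rng.normal(4e-3, 0.1e-3, n)       # blade thickness [m]
rho = rng.normal(8200, 150, n)        # density [kg/m^3]
L, w, a = 0.05, 0.02, 3000.0          # length [m], width [m], tip acceleration [m/s^2]

# Cantilever under an inertial tip-equivalent load: delta = P L^3 / (3 E I)
I = w * t ** 3 / 12.0
P = rho * L * w * t * a
tip = P * L ** 3 / (3.0 * E * I)

# Crude sensitivity measure: correlation of each input with the tip displacement
for name, x in [("E", E), ("thickness", t), ("density", rho)]:
    print(f"corr(tip, {name}) = {np.corrcoef(tip, x)[0, 1]:+.2f}")
print(f"tip displacement mean {tip.mean():.2e} m, CV {tip.std() / tip.mean():.1%}")
```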
Development of a new linearly variable edge filter (LVEF)-based compact slit-less mini-spectrometer
NASA Astrophysics Data System (ADS)
Mahmoud, Khaled; Park, Seongchong; Lee, Dong-Hoon
2018-02-01
This paper presents the development of a compact charge-coupled device (CCD) spectrometer. We describe the design, concept and characterization of a VNIR linear variable edge filter (LVEF)-based mini-spectrometer. The new instrument has been realized for operation in the 300 nm to 850 nm wavelength range. The instrument consists of a linear variable edge filter in front of a CCD array. Small size, light weight and low cost are achieved by using linearly variable filters, with no moving parts needed for wavelength selection, in contrast to many commercial spectrometers on the market. This overview discusses the characteristics of the main components and the main concept, together with the principal advantages and limitations reported. Experimental characteristics of the LVEFs are described. The mathematical approach used to obtain the position-dependent slit function of the presented prototype spectrometer, and its numerical de-convolution solution for spectrum reconstruction, is described. The performance of our prototype instrument is demonstrated by measuring the spectrum of a reference light source.
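A minimal sketch of the spectrum-reconstruction idea is given below: if each pixel sees the true spectrum through its own position-dependent slit function, the measurement is a matrix-vector product that can be inverted by regularized least squares. The Gaussian slit model and Tikhonov regularization are assumptions for illustration, not the paper's exact numerical method.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200                                  # detector pixels / wavelength samples
wl = np.linspace(300, 850, n)            # wavelength axis [nm]

# Position-dependent slit function: Gaussian whose width grows across the array (assumed)
width = 3.0 + 7.0 * (wl - wl[0]) / (wl[-1] - wl[0])
A = np.exp(-0.5 * ((wl[None, :] - wl[:, None]) / width[:, None]) ** 2)
A /= A.sum(axis=1, keepdims=True)        # each row is one pixel's slit function

# Synthetic "true" spectrum and the smeared, noisy measurement
true = np.exp(-0.5 * ((wl - 550) / 8) ** 2) + 0.6 * np.exp(-0.5 * ((wl - 700) / 15) ** 2)
meas = A @ true + 0.002 * rng.normal(size=n)

# Tikhonov-regularized inversion: minimize ||A s - meas||^2 + lam ||s||^2
lam = 1e-3
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ meas)
print(f"reconstruction RMS error: {np.sqrt(np.mean((s_hat - true) ** 2)):.3e}")
```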
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
Applying causal mediation analysis to personality disorder research.
Walters, Glenn D
2018-01-01
This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
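A minimal sketch of the recommended nonparametric bootstrap for the indirect effect a·b in a simple three-variable mediation model is shown below; the data and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)                       # independent variable
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # dependent variable

def indirect_effect(x, m, y):
    """a*b estimate: a from M ~ X, b from Y ~ M + X."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```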
Climate-driven vital rates do not always mean climate-driven population.
Tavecchia, Giacomo; Tenan, Simone; Pradel, Roger; Igual, José-Manuel; Genovart, Meritxell; Oro, Daniel
2016-12-01
Current climatic changes have increased the need to forecast population responses to climate variability. A common approach to address this question is through models that project current population state using the functional relationship between demographic rates and climatic variables. We argue that this approach can lead to erroneous conclusions when interpopulation dispersal is not considered. We found that immigration can release the population from climate-driven trajectories even when local vital rates are climate dependent. We illustrated this using individual-based data on a trans-equatorial migratory seabird, the Scopoli's shearwater Calonectris diomedea, in which the variation of vital rates has been associated with large-scale climatic indices. We compared the population annual growth rate λi, estimated using local climate-driven parameters, with ρi, a population growth rate directly estimated from individual information that accounts for immigration. While λi varied as a function of climatic variables, reflecting the climate-dependent parameters, ρi did not, indicating that dispersal decouples the relationship between population growth and climate variables from that between climatic variables and vital rates. Our results suggest caution when assessing demographic effects of climatic variability, especially in open populations of very mobile organisms such as fish, marine mammals, bats, or birds. When a population model cannot be validated or is not detailed enough, ignoring immigration might lead to misleading climate-driven projections. © 2016 John Wiley & Sons Ltd.
Samuel A. Cushman; Kevin McGarigal
2004-01-01
Multi-scale investigations of species/environment relationships are an important tool in ecological research. The scale at which independent and dependent variables are measured, and how they are coded for analysis, can strongly influence the relationships that are discovered. However, little is known about how the coding of the dependent variable set influences...
Schnoll, Robert A; George, Tony P; Hawk, Larry; Cinciripini, Paul; Wileyto, Paul; Tyndale, Rachel F
2014-06-01
Variability in the rate of nicotine metabolism, measured by the nicotine metabolite ratio (NMR), is associated with smoking behavior. However, data linking the NMR with nicotine dependence measured by the Fagerström test for nicotine dependence (FTND) are mixed. Few past studies have examined alternative measures of nicotine dependence and how this relationship may vary by sex and race. Using data from smokers undergoing eligibility evaluation for a smoking cessation clinical trial (n = 833), this study examined variability in the relationship between NMR and nicotine dependence across sex and race and using three measures of nicotine dependence: FTND, time-to-first-cigarette (TTFC), and the heaviness of smoking index (HSI). Controlling for sex and race, nicotine metabolism was associated with nicotine dependence only when using the HSI (p < 0.05). Male normal metabolizers of nicotine were more likely to have high nicotine dependence based on the FTND and HSI (p < 0.05), but NMR was not related to measures of nicotine dependence in women. For African Americans, the NMR was associated with nicotine dependence only for the TTFC (p < 0.05), but NMR was not associated with nicotine dependence among Caucasians. Post hoc analyses indicated that the NMR was associated with cigarettes per day, overall, and among men and Caucasians (p < 0.05). While there was some variation in the relationship between nicotine metabolism and nicotine dependence across measures and sex and race, the results indicate that this relationship may be more attributable to the association between NMR and cigarettes per day.
Sapkota, Lok Mani; Shrestha, Rajendra Prasad; Jourdain, Damien; Shivakoti, Ganesh P
2015-01-01
The attributes of social ecological systems affect the management of commons. Strengthening and enhancing social capital and the enforcement of rules and sanctions aid in the collective action of communities in forest fire management. Using a set of variables drawn from previous studies on the management of commons, we conducted a study across 20 community forest user groups in Central Siwalik, Nepal, by dividing the groups into two categories based on the type and level of their forest fire management response. Our study shows that the collective action in forest fire management is consistent with the collective actions in other community development activities. However, the effectiveness of collective action is primarily dependent on the complex interaction of various variables. We found that strong social capital, strong enforcement of rules and sanctions, and users' participation in crafting the rules were the major variables that strengthen collective action in forest fire management. Conversely, users' dependency on a daily wage and a lack of transparency were the variables that weaken collective action. In fire-prone forests such as the Siwalik, our results indicate that strengthening social capital and forming and enforcing forest fire management rules are important variables that encourage people to engage in collective action in fire management.
Motivation and exercise dependence: a study based on self-determination theory.
González-Cutre, David; Sicilia, Alvaro
2012-06-01
The objective of this study was to use self-determination theory to analyze the relationships of several motivational variables with exercise dependence. The study involved 531 exercisers, ranging in age from 16 to 60 years, who responded to different questionnaires assessing perception of motivational climate, satisfaction of basic psychological needs, motivation types, and exercise dependence. The results of a multiple mediation analysis revealed that an ego-involving climate and perceived competence positively predicted exercise dependence both directly and in a manner mediated through introjected and external regulation. Gender and age did not moderate the analyzed relationships. These results allow us to better understand the motivational process explaining exercise dependence, demonstrating the negative influence of the ego-involving climate in the context of exercise.
Predator Persistence through Variability of Resource Productivity in Tritrophic Systems.
Soudijn, Floor H; de Roos, André M
2017-12-01
The trophic structure of species communities depends on the energy transfer between trophic levels. Primary productivity varies strongly through time, challenging the persistence of species at higher trophic levels. Yet resource variability has mostly been studied in systems with only one or two trophic levels. We test the effect of variability in resource productivity in a tritrophic model system including a resource, a size-structured consumer, and a size-specific predator. The model complies with fundamental principles of mass conservation and the body-size dependence of individual-level energetics and predator-prey interactions. Surprisingly, we find that resource variability may promote predator persistence. The positive effect of variability on the predator arises through periods with starvation mortality of juvenile prey, which reduces the intraspecific competition in the prey population. With increasing variability in productivity and starvation mortality in the juvenile prey, the prey availability increases in the size range preferred by the predator. The positive effect of prey mortality on the trophic transfer efficiency depends on the biologically realistic consideration of body size-dependent and food-dependent functions for growth and reproduction in our model. Our findings show that variability may promote the trophic transfer efficiency, indicating that environmental variability may sustain species at higher trophic levels in natural ecosystems.
Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM
NASA Astrophysics Data System (ADS)
Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak
2015-04-01
Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
Omari, Taher I.; Savilampi, Johanna; Kokkinn, Karmen; Schar, Mistyka; Lamvik, Kristin; Doeltgen, Sebastian; Cock, Charles
2016-01-01
Purpose. We evaluated the intra- and interrater agreement and test-retest reliability of analyst derivation of swallow function variables based on repeated high resolution manometry with impedance measurements. Methods. Five subjects swallowed 10 × 10 mL saline on two occasions one week apart producing a database of 100 swallows. Swallows were repeat-analysed by six observers using software. Swallow variables were indicative of contractility, intrabolus pressure, and flow timing. Results. The average intraclass correlation coefficients (ICC) for intra- and interrater comparisons of all variable means showed substantial to excellent agreement (intrarater ICC 0.85–1.00; mean interrater ICC 0.77–1.00). Test-retest results were less reliable. ICC for test-retest comparisons ranged from slight to excellent depending on the class of variable. Contractility variables differed most in terms of test-retest reliability. Amongst contractility variables, UES basal pressure showed excellent test-retest agreement (mean ICC 0.94), measures of UES postrelaxation contractile pressure showed moderate to substantial test-retest agreement (mean Interrater ICC 0.47–0.67), and test-retest agreement of pharyngeal contractile pressure ranged from slight to substantial (mean Interrater ICC 0.15–0.61). Conclusions. Test-retest reliability of HRIM measures depends on the class of variable. Measures of bolus distension pressure and flow timing appear to be more test-retest reliable than measures of contractility. PMID:27190520
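A minimal sketch of an intraclass-correlation computation of this kind is shown below, assuming the pingouin package; the simulated "pressure" ratings and rater labels are hypothetical stand-ins for the derived swallow variables.

```python
import numpy as np
import pandas as pd
import pingouin as pg   # assumed dependency for the ICC computation

rng = np.random.default_rng(7)
n_swallows, n_raters = 20, 6

# Hypothetical UES basal pressure values: swallow-specific truth plus rater noise
truth = rng.normal(10, 3, n_swallows)
rows = []
for r in range(n_raters):
    for s in range(n_swallows):
        rows.append({"swallow": s, "rater": f"R{r}",
                     "pressure": truth[s] + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Two-way ICCs (consistency and absolute agreement) across raters
icc = pg.intraclass_corr(data=df, targets="swallow", raters="rater", ratings="pressure")
print(icc[["Type", "ICC", "CI95%"]])
```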
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Rose, W. C.
1973-01-01
The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
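For reference, the mass-averaged (Favre) decomposition underlying such formulations is the standard one below (notation assumed):

```latex
\tilde{f} = \frac{\overline{\rho f}}{\overline{\rho}}, \qquad
f = \tilde{f} + f'', \qquad \overline{\rho f''} = 0,
```

where the overbar denotes conventional (Reynolds) averaging and f'' is the mass-weighted fluctuation.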
Valsangkar, Nakul P; Bush, Devon M; Michaelson, James S; Ferrone, Cristina R; Wargo, Jennifer A; Lillemoe, Keith D; Fernández-del Castillo, Carlos; Warshaw, Andrew L; Thayer, Sarah P
2013-02-01
We evaluated the prognostic accuracy of LN variables (N0/N1), numbers of positive lymph nodes (PLN), and lymph node ratio (LNR) in the context of the total number of examined lymph nodes (ELN). Patients from SEER and a single institution (MGH) were reviewed and survival analyses performed in subgroups based on numbers of ELN to calculate excess risk of death (hazard ratio, HR). In SEER and MGH, higher numbers of ELN improved the overall survival for N0 patients. The prognostic significance (N0/N1) and PLN were too variable as the importance of a single PLN depended on the total number of LN dissected. LNR consistently correlated with survival once a certain number of lymph nodes were dissected (≥13 in SEER and ≥17 in the MGH dataset). Better survival for N0 patients with increasing ELN likely represents improved staging. PLN have some predictive value but the ELN strongly influence their impact on survival, suggesting the need for a ratio-based classification. LNR strongly correlates with outcome provided that a certain number of lymph nodes is evaluated, suggesting that the prognostic accuracy of any LN variable depends on the total number of ELN.
Leak, Tashara M; Swenson, Alison; Vickers, Zata; Mann, Traci; Mykerezi, Elton; Redden, Joseph P; Rendahl, Aaron; Reicks, Marla
2015-01-01
To test the effectiveness of behavioral economics strategies for increasing vegetable intake, variety, and liking among children residing in homes receiving food assistance. A randomized controlled trial with data collected at baseline, once weekly for 6 weeks, and at study conclusion. Family homes. Families with a child (9-12 years) will be recruited through community organizations and randomly assigned to an intervention (n = 36) or control (n = 10) group. The intervention group will incorporate a new behavioral economics strategy during home dinner meal occasions each week for 6 weeks. Strategies are simple and low-cost. The primary dependent variable will be the child's dinner meal vegetable consumption based on weekly reports by caregivers. Fixed independent variables will include the strategy and week of strategy implementation. Secondary dependent variables will include vegetable liking and variety of vegetables consumed based on data collected at baseline and study conclusion. Mean vegetable intake for each strategy across families will be compared using a mixed-model analysis of variance with a random effect for child. Additionally, overall mean changes in vegetable consumption, variety, and liking will be compared between intervention and control groups. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
Dependence of Halo Bias and Kinematics on Assembly Variables
NASA Astrophysics Data System (ADS)
Xu, Xiaoju; Zheng, Zheng
2018-06-01
Using dark matter haloes identified in a large N-body simulation, we study halo assembly bias, with halo formation time, peak maximum circular velocity, concentration, and spin as the assembly variables. Instead of grouping haloes at fixed mass into different percentiles of each assembly variable, we present the joint dependence of halo bias on the values of halo mass and each assembly variable. In the plane of halo mass and one assembly variable, the joint dependence can be largely described as halo bias increasing outward from a global minimum. We find it unlikely that any combination of halo variables can absorb all assembly bias effects. We then present the joint dependence of halo bias on two assembly variables at fixed halo mass. The gradient of halo bias does not necessarily follow the correlation direction of the two assembly variables, and it varies with halo mass. Therefore, in general, for two correlated assembly variables one cannot be used as a proxy for the other in predicting the halo assembly bias trend. Finally, halo assembly is found to affect the kinematics of haloes. Low-mass haloes formed earlier can have much higher pairwise velocity dispersion than that of massive haloes. In general, halo assembly leads to a correlation between halo bias and halo pairwise velocity distribution, with more strongly clustered haloes having higher pairwise velocity and velocity dispersion. However, the correlation is not tight, and the kinematics of haloes at fixed halo bias still depends on halo mass and assembly variables.
Byun, Kyeongho; Hyodo, Kazuki; Suwabe, Kazuya; Kujach, Sylwester; Kato, Morimasa; Soya, Hideaki
2014-01-01
[Purpose] Functional near-infrared spectroscopy (fNIRS) provides functional imaging of cortical activations by measuring regional oxy- and deoxy-hemoglobin (Hb) changes in the forehead during a cognitive task. There are, however, potential problems regarding NIRS signal contamination by non-cortical hemodynamic (NCH) variables such as skin blood flow, middle cerebral artery blood flow, and heart rate (HR), which are further complicated during acute exercise. It is thus necessary to determine the appropriate post-exercise timing that allows for valid NIRS assessment during a task without any increase in NCH variables. Here, we monitored post-exercise changes in NCH parameters at different intensities of exercise. [Methods] Fourteen healthy young participants cycled at 30, 50 and 70% of their peak oxygen uptake (VO2peak) for 10 min per intensity, each on different days. Changes in skin blood flow velocity (SBFv), middle cerebral artery mean blood velocity (MCA Vmean) and HR were monitored before, during, and after the exercise. [Results] Post-exercise levels of both SBFv and HR, in contrast to MCA Vmean, remained high compared to basal levels, and the times taken for both parameters to return to baseline were delayed (2-8 min after exercise), depending upon exercise intensity. [Conclusion] These results indicate that the delayed clearance of NCH variables of up to 8 min into the post-exercise phase may contaminate NIRS measurements, and could be a limitation of NIRS-based neuroimaging studies. PMID:25671198
ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.
van Breukelen, Gerard J P
2013-11-01
The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
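A minimal sketch of the two analyses on simulated pretest-posttest data for two preexisting groups that differ at pretest is shown below (statsmodels assumed; all numbers are hypothetical). With a true regression slope below 1 and no treatment effect, the two estimated group effects diverge, which is Lord's paradox in miniature.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 200
group = np.repeat([0, 1], n)                     # two preexisting groups
pre = rng.normal(50 + 5 * group, 10)             # groups already differ at pretest
post = 10 + 0.8 * pre + rng.normal(0, 5, 2 * n)  # no true treatment effect
df = pd.DataFrame({"group": group, "pre": pre, "post": post, "change": post - pre})

ancova = smf.ols("post ~ pre + group", df).fit()   # ANCOVA: pretest as covariate
change = smf.ols("change ~ group", df).fit()       # CHANGE: gain score as outcome

print(f"ANCOVA group effect: {ancova.params['group']:+.2f}")   # near zero
print(f"CHANGE group effect: {change.params['group']:+.2f}")   # biased away from zero
```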
Draheim, Christopher; Hicks, Kenny L; Engle, Randall W
2016-01-01
It is generally agreed upon that the mechanisms underlying task switching heavily depend on working memory, yet numerous studies have failed to show a strong relationship between working memory capacity (WMC) and task-switching ability. We argue that this relationship does indeed exist but that the dependent variable used to measure task switching is problematic. To support our claim, we reanalyzed data from two studies with a new scoring procedure that combines reaction time (RT) and accuracy into a single score. The reanalysis revealed a strong relationship between task switching and WMC that was not present when RT-based switch costs were used as the dependent variable. We discuss the theoretical implications of this finding along with the potential uses and limitations of the scoring procedure we used. More broadly, we emphasize the importance of using measures that incorporate speed and accuracy in other areas of research, particularly in comparisons of subjects differing in cognitive and developmental levels. © The Author(s) 2015.
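The specific scoring procedure used in the reanalysis is not reproduced here; as one widely used example of a combined speed-accuracy measure, the sketch below computes a rate-correct score (correct responses per second of total response time) for hypothetical switch-trial data.

```python
import numpy as np

def rate_correct_score(rts_s, correct):
    """Correct responses per second of total time spent responding (RCS)."""
    rts_s = np.asarray(rts_s, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return correct.sum() / rts_s.sum()

# Hypothetical switch-trial data for one participant
rng = np.random.default_rng(9)
rts = rng.lognormal(mean=-0.2, sigma=0.3, size=60)   # response times in seconds
acc = rng.random(60) < 0.92                          # roughly 92% accurate

print(f"switch-trial RCS = {rate_correct_score(rts, acc):.2f} correct responses / s")
```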
Valcke, Mathieu; Haddad, Sami
2015-01-01
The objective of this study was to compare the magnitude of interindividual variability in internal dose for inhalation exposure to single versus multiple chemicals. Physiologically based pharmacokinetic models for adults (AD), neonates (NEO), toddlers (TODD), and pregnant women (PW) were used to simulate inhalation exposure to "low" (RfC-like) or "high" (AEGL-like) air concentrations of benzene (Bz) or dichloromethane (DCM), along with various levels of toluene alone or toluene with ethylbenzene and xylene. Monte Carlo simulations were performed and distributions of relevant internal dose metrics of either Bz or DCM were computed. Area under the blood concentration of parent compound versus time curve (AUC)-based variability in AD, TODD, and PW rose for Bz when concomitant "low" exposure to mixtures of increasing complexities occurred (coefficient of variation (CV) = 16-24%, vs. 12-15% for Bz alone), but remained unchanged considering DCM. Conversely, AUC-based CV in NEO fell (15 to 5% for Bz; 12 to 6% for DCM). Comparable trends were observed considering production of metabolites (AMET), except for NEO's CYP2E1-mediated metabolites of Bz, where an increased CV was observed (20 to 71%). For "high" exposure scenarios, Cmax-based variability of Bz and DCM remained unchanged in AD and PW, but decreased in NEO (CV= 11-16% to 2-6%) and TODD (CV= 12-13% to 7-9%). Conversely, AMET-based variability for both substrates rose in every subpopulation. This study analyzed for the first time the impact of multiple exposures on interindividual variability in toxicokinetics. Evidence indicates that this impact depends upon chemical concentrations and biochemical properties, as well as the subpopulation and internal dose metrics considered.
Wuehr, M; Schniepp, R; Pradhan, C; Ilmberger, J; Strupp, M; Brandt, T; Jahn, K
2013-01-01
Healthy persons exhibit relatively small temporal and spatial gait variability when walking unimpeded. In contrast, patients with a sensory deficit (e.g., polyneuropathy) show an increased gait variability that depends on speed and is associated with an increased fall risk. The purpose of this study was to investigate the role of vision in gait stabilization by determining the effects of withdrawing visual information (eyes closed) on gait variability at different locomotion speeds. Ten healthy subjects (32.2 ± 7.9 years, 5 women) walked on a treadmill for 5-min periods at their preferred walking speed and at 20, 40, 70, and 80 % of maximal walking speed during the conditions of walking with eyes open (EO) and with eyes closed (EC). The coefficient of variation (CV) and fractal dimension (α) of the fluctuations in stride time, stride length, and base width were computed and analyzed. Withdrawing visual information increased the base width CV for all walking velocities (p < 0.001). The effects of absent visual information on CV and α of stride time and stride length were most pronounced during slow locomotion (p < 0.001) and declined during fast walking speeds. The results indicate that visual feedback control is used to stabilize the medio-lateral (i.e., base width) gait parameters at all speed sections. In contrast, sensory feedback control in the fore-aft direction (i.e., stride time and stride length) depends on speed. Sensory feedback contributes most to fore-aft gait stabilization during slow locomotion, whereas passive biomechanical mechanisms and an automated central pattern generation appear to control fast locomotion.
Paternal alcoholism and offspring ADHD problems: a children of twins design.
Knopik, Valerie S; Jacob, Theodore; Haber, Jon Randolph; Swenson, Lance P; Howell, Donelle N
2009-02-01
A recent Children-of-Female-Twin design suggests that the association between maternal alcohol use disorder and offspring ADHD is due to a combination of genetic and environmental factors, such as prenatal nicotine exposure. We present here a complementary analysis using a Children-of-Male-Twin design examining the association between paternal alcoholism and offspring attention deficit hyperactivity problems (ADHP). Children-of-twins design: offspring were classified into 4 groups of varying genetic and environmental risk based on father and co-twin's alcohol dependence status. Univariate results are suggestive of a genetic association between paternal alcohol dependence and broadly defined offspring ADHP. Specifically, offspring of male twins with a history of DSM-III-R alcohol dependence, as well as offspring of non-alcohol dependent monozygotic twins whose co-twin was alcohol dependent, were significantly more likely to exhibit ADHP than control offspring. However, multivariate models show maternal variables independently predicting increased risk for offspring ADHP and significantly decreased support for a genetic mechanism of parent-to-child transmission. In support of earlier work, maternal variables (i.e., maternal ADHD and prenatal exposure) were strongly associated with child ADHP; however, the role of paternal alcohol dependence influences was not definitive. While genetic transmission may be important, the association between paternal alcohol dependence and child ADHP is more likely to be indirect and a result of several pathways.
Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua
2017-01-15
The advent of high-throughput DNA methylation profiling techniques has enabled accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole-genome methylation studies, yet poses great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by a Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take the local dependency into account for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm CONTACT: Xuan@vt.edu Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
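The Leroux conditional autoregressive (CAR) structure mentioned above smooths effects across neighbouring loci by penalizing differences between them. As a minimal, illustrative numpy sketch (the parameter names and the chain-graph example are assumptions, not taken from the DM-BLD package), the CAR precision matrix can be built as:

    import numpy as np

    def leroux_car_precision(W, rho=0.8, tau=1.0):
        # W   : symmetric 0/1 adjacency matrix of neighbouring loci
        # rho : spatial dependence parameter in [0, 1); rho = 0 gives independence
        # tau : overall precision (inverse variance)
        D = np.diag(W.sum(axis=1))                     # number of neighbours per locus
        return tau * ((1.0 - rho) * np.eye(len(W)) + rho * (D - W))

    # Example: 5 CpG loci on a chain, each locus adjacent to the next one
    W = np.diag(np.ones(4), 1)
    W = W + W.T
    Q = leroux_car_precision(W, rho=0.8)
    cov = np.linalg.inv(Q)    # covariance couples neighbouring loci, i.e. local dependency

Inverting the precision matrix shows how such a prior correlates methylation effects at nearby loci, which is the property the hierarchical model exploits when deciding whether a whole gene region is differentially methylated.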
Model accuracy impact through rescaled observations in hydrological data assimilation studies
USDA-ARS's Scientific Manuscript database
Signal and noise time-series variability of soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly. Optimality of the analysis obtained after observations are assimilated into the model depends on the degree that the differences between the signal variances of model and observa...
Pre-Service Primary Teachers' Attitudes towards Inclusive Education
ERIC Educational Resources Information Center
Varcoe, Linda; Boyle, Christopher
2014-01-01
Research has demonstrated that the success of inclusive education depends to an important degree on teachers' attitudes. Based on this evidence, the present study investigated the impact of a range of teacher variables, in association with training, on primary pre-service teachers' attitudes by examining total inclusion scores, positive…
Bayesian analysis of factors associated with fibromyalgia syndrome subjects
NASA Astrophysics Data System (ADS)
Jayawardana, Veroni; Mondal, Sumona; Russek, Leslie
2015-01-01
Factors contributing to movement-related fear were assessed by Russek et al. (2014) for subjects with fibromyalgia (FM), based on data collected through a national internet survey of community-based individuals. The study focused on the Activities-Specific Balance Confidence scale (ABC), the Primary Care Post-Traumatic Stress Disorder screen (PC-PTSD), the Tampa Scale of Kinesiophobia (TSK), a Joint Hypermobility Syndrome screen (JHS), the Vertigo Symptom Scale (VSS-SF), Obsessive-Compulsive Personality Disorder (OCPD), pain, work status, and physical activity derived from the "Revised Fibromyalgia Impact Questionnaire" (FIQR). The study presented in this paper revisits the same data with a Bayesian analysis in which appropriate priors are introduced for the variables selected in Russek's paper.
A crystallographic model for the tensile and fatigue response for Rene N4 at 982 C
NASA Technical Reports Server (NTRS)
Sheh, M. Y.; Stouffer, D. C.
1990-01-01
An anisotropic constitutive model based on crystallographic slip theory was formulated for nickel-base single-crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the existence of back stress in single crystals. The results showed that the back stress effect of reverse inelastic flow on the unloading stress is orientation-dependent, and a back stress state variable in the inelastic flow equation is necessary for predicting inelastic behavior. Model correlations and predictions of experimental data are presented for the single crystal superalloy Rene N4 at 982 C.
NASA Astrophysics Data System (ADS)
Savina, M.; Lunghi, M.; Archambault, B.; Baulier, L.; Huret, M.; Le Pape, O.
2016-05-01
Simulating fish larval drift helps assess the sensitivity of recruitment variability to early life history. An individual-based model (IBM) coupled to a hydrodynamic model was used to simulate common sole larval supply from spawning areas to coastal and estuarine nursery grounds at the meta-population scale (4 assessed stocks), from the southern North Sea to the Bay of Biscay (Western Europe) on a 26-yr time series, from 1982 to 2007. The IBM allowed each particle released to be transported by currents, to grow depending on temperature, to migrate vertically depending on development stage, to die along pelagic stages or to settle on a nursery, representing the life history from spawning to metamorphosis. The model outputs were analysed to explore interannual patterns in the amounts of settled sole larvae at the population scale; they suggested: (i) a low connectivity between populations at the larval stage, (ii) a moderate influence of interannual variation in the spawning biomass, (iii) dramatic consequences of life history on the abundance of settling larvae and (iv) the effects of climate variability on the interannual variability of the larvae settlement success.
Song, Chuan-xia; Chen, Hong-mei; Dai, Yu; Kang, Min; Hu, Jia; Deng, Yun
2014-11-01
To optimize the enzymatic hydrolysis of icariin to baohuoside I by cellulase using a Plackett-Burman design combined with central composite design (CCD) response surface methodology. The main influencing factors were selected by the Plackett-Burman design, and CCD response surface methodology was then used to optimize the hydrolysis process. Taking substrate concentration, buffer pH, and reaction time as independent variables and the conversion rate of icariin as the dependent variable, a full quadratic response surface was fitted between the independent variables and the dependent variable; the optimum hydrolysis conditions were identified from 3D surface plots and confirmed by verification tests and predictive analysis. The best enzymatic hydrolysis conditions were: substrate concentration 8.23 mg/mL, buffer pH 5.12, and reaction time 35.34 h. The optimum process for the cellulase-catalyzed hydrolysis of icariin to baohuoside I was thus determined by the Plackett-Burman design combined with CCD response surface methodology. The optimized enzymatic hydrolysis process is simple, convenient, accurate, reproducible, and predictable.
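A full quadratic response surface of the kind described here can be fitted by ordinary least squares on linear, squared, and interaction terms. The sketch below uses scikit-learn with made-up design points and conversion rates purely for illustration; the actual design matrix and responses of the study are not reproduced:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Hypothetical design points: substrate concentration (mg/mL), buffer pH, time (h)
    X = np.array([[6, 4.5, 24], [10, 4.5, 24], [6, 5.5, 24], [10, 5.5, 24],
                  [6, 4.5, 48], [10, 4.5, 48], [6, 5.5, 48], [10, 5.5, 48],
                  [8, 5.0, 36], [8, 5.0, 36], [4, 5.0, 36], [12, 5.0, 36],
                  [8, 4.0, 36], [8, 6.0, 36], [8, 5.0, 12], [8, 5.0, 60]], dtype=float)
    y = np.array([61, 70, 65, 72, 68, 76, 71, 78, 82, 81, 60, 74,
                  63, 66, 58, 75], dtype=float)   # made-up conversion rates (%)

    # Second-order model: linear, interaction, and squared terms
    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), y)

    # Predicted conversion at the reported optimum (8.23 mg/mL, pH 5.12, 35.34 h)
    print(model.predict(quad.transform(np.array([[8.23, 5.12, 35.34]]))))

In practice the fitted surface is then examined, for example via 3D surface plots or by locating its stationary point, to find the conditions that maximize the conversion rate.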
Angstman, Nicholas B.; Kiessling, Maren C.; Frank, Hans-Georg; Schmitz, Christoph
2015-01-01
In blast-related mild traumatic brain injury (br-mTBI) little is known about the connections between initial trauma and expression of individual clinical symptoms. Partly due to limitations of current in vitro and in vivo models of br-mTBI, reliable prediction of individual short- and long-term symptoms based on known blast input has not yet been possible. Here we demonstrate a dose-dependent effect of shock wave exposure on C. elegans using shock waves that share physical characteristics with those hypothesized to induce br-mTBI in humans. Increased exposure to shock waves resulted in decreased mean speed of movement while increasing the proportion of worms rendered paralyzed. Recovery of these two behavioral symptoms was observed during increasing post-traumatic waiting periods. Although effects were observed on a population-wide basis, large interindividual variability was present between organisms exposed to the same highly controlled conditions. Reduction of cavitation by exposing worms to shock waves in polyvinyl alcohol resulted in reduced effect, implicating primary blast effects as damaging components in shock wave induced trauma. Growing worms on NGM agar plates led to the same general results in initial shock wave effect in a standard medium, namely dose-dependence and high interindividual variability, as raising worms in liquid cultures. Taken together, these data indicate that reliable prediction of individual clinical symptoms based on known blast input as well as drawing conclusions on blast input from individual clinical symptoms is not feasible in br-mTBI. PMID:25705183
Numerical solution of the two-dimensional time-dependent incompressible Euler equations
NASA Technical Reports Server (NTRS)
Whitfield, David L.; Taylor, Lafayette K.
1994-01-01
A numerical method is presented for solving the artificial compressibility form of the 2D time-dependent incompressible Euler equations. The approach is based on using an approximate Riemann solver for the cell face numerical flux of a finite volume discretization. Characteristic variable boundary conditions are developed and presented for all boundaries and in-flow out-flow situations. The system of algebraic equations is solved using the discretized Newton-relaxation (DNR) implicit method. Numerical results are presented for both steady and unsteady flow.
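For orientation, a standard Chorin-type artificial compressibility form of the 2D incompressible Euler equations (generic notation; the paper's exact formulation and non-dimensionalization may differ) reads

    \frac{\partial p}{\partial \tau} + \beta \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) = 0, \qquad
    \frac{\partial u}{\partial \tau} + \frac{\partial (u^2 + p)}{\partial x} + \frac{\partial (u v)}{\partial y} = 0, \qquad
    \frac{\partial v}{\partial \tau} + \frac{\partial (u v)}{\partial x} + \frac{\partial (v^2 + p)}{\partial y} = 0,

where p is the pressure divided by the constant density, \beta the artificial compressibility parameter, and \tau a pseudo-time. As the pseudo-time iteration converges, \partial p / \partial \tau \to 0 and the divergence-free constraint is recovered; for unsteady flows, physical-time derivatives are added and the pseudo-time sub-iterations are repeated at every physical time step.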
A Damage-Dependent Finite Element Analysis for Fiber-Reinforced Composite Laminates
NASA Technical Reports Server (NTRS)
Coats, Timothy W.; Harris, Charles E.
1998-01-01
A progressive damage methodology has been developed to predict damage growth and residual strength of fiber-reinforced composite structure with through penetrations such as a slit. The methodology consists of a damage-dependent constitutive relationship based on continuum damage mechanics. Damage is modeled using volume averaged strain-like quantities known as internal state variables and is represented in the equilibrium equations as damage induced force vectors instead of the usual degradation and modification of the global stiffness matrix.
NASA Astrophysics Data System (ADS)
Steiger, Nathan J.; Smerdon, Jason E.
2017-10-01
Because of the relatively brief observational record, the climate dynamics that drive multiyear to centennial hydroclimate variability are not adequately characterized and understood. Paleoclimate reconstructions based on data assimilation (DA) optimally fuse paleoclimate proxies with the dynamical constraints of climate models, thus providing a coherent dynamical picture of the past. DA is therefore an important new tool for elucidating the mechanisms of hydroclimate variability over the last several millennia. But DA has so far remained untested for global hydroclimate reconstructions. Here we explore whether or not DA can be used to skillfully reconstruct global hydroclimate variability along with the driving climate dynamics. Through a set of idealized pseudoproxy experiments, we find that an established DA reconstruction approach can in principle be used to reconstruct hydroclimate at both annual and seasonal timescales. We find that the skill of such reconstructions is generally highest near the proxy sites. This set of reconstruction experiments is specifically designed to estimate a realistic upper bound for the skill of this DA approach. Importantly, this experimental framework allows us to see where and for what variables the reconstruction approach may never achieve high skill. In particular for tree rings, we find that hydroclimate reconstructions depend critically on moisture-sensitive trees, while temperature reconstructions depend critically on temperature-sensitive trees. Real-world DA-based reconstructions will therefore likely require a spatial mixture of temperature- and moisture-sensitive trees to reconstruct both temperature and hydroclimate variables. Additionally, we illustrate how DA can be used to elucidate the dynamical mechanisms of drought with two examples: tropical drivers of multiyear droughts in the North American Southwest and in equatorial East Africa. This work thus provides a foundation for future DA-based hydroclimate reconstructions using real-proxy networks while also highlighting the utility of this important tool for hydroclimate research.
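The optimal fusion of proxies and model states in such DA reconstructions is typically a Kalman-type update applied, for example, year by year. For orientation (generic notation; the specific ensemble square-root scheme used in the paper may differ):

    x_a = x_b + K \left( y - H x_b \right), \qquad K = B H^{T} \left( H B H^{T} + R \right)^{-1},

where x_b is the climate-model prior (background) state, y the vector of proxy observations, H the forward operator mapping the state to proxy estimates, B the prior covariance (estimated from the model ensemble), R the proxy error covariance, and x_a the updated (analysis) state from which the reconstructed hydroclimate fields are taken.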
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
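The concordance index used to compare the methods measures the fraction of comparable subject pairs whose predicted risk ordering agrees with the observed event ordering. A minimal O(n^2) Python implementation (independent of the authors' code; ties are given a 0.5 credit, a common convention) is:

    import numpy as np

    def concordance_index(time, score, event):
        # time  : observed follow-up times
        # score : predicted risk (higher score = expected earlier event)
        # event : 1 if the event was observed, 0 if censored
        time, score, event = map(np.asarray, (time, score, event))
        num, den = 0.0, 0.0
        for i in range(len(time)):
            if event[i] != 1:
                continue
            for j in range(len(time)):
                if time[i] < time[j]:          # pair (i, j) is comparable
                    den += 1
                    if score[i] > score[j]:
                        num += 1
                    elif score[i] == score[j]:
                        num += 0.5
        return num / den

    # Toy check: a perfectly ordered risk score gives a concordance index of 1.0
    print(concordance_index([2, 4, 6, 8], [4, 3, 2, 1], [1, 1, 0, 1]))

A value of 0.5 corresponds to random ordering and 1.0 to perfect concordance, which is the scale on which the ML methods and the Cox model were ranked in the comparison above.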
Overcoming multicollinearity in multiple regression using correlation coefficient
NASA Astrophysics Data System (ADS)
Zainodin, H. J.; Yap, S. J.
2013-09-01
Multicollinearity happens when there are high correlations among independent variables. In this case, it is difficult to distinguish the contributions of these independent variables to the dependent variable because they compete to explain much of the same variance. Moreover, multicollinearity violates an assumption of multiple regression: that there is no collinearity among the candidate independent variables. Thus, an alternative approach is introduced for overcoming the multicollinearity problem and ultimately achieving a well-represented model. The approach removes the multicollinearity source variables on the basis of correlation coefficient values taken from the full correlation matrix. Using the full correlation matrix facilitates the implementation of an Excel function for removing the multicollinearity source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of all possible models. Hence, this paper presents, compares, and implements the procedure in detail.
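A minimal pandas sketch of the kind of correlation-based screening described here (the 0.8 threshold and the drop rule are illustrative choices, not the exact values or Excel formulas used in the paper) is:

    import pandas as pd

    def drop_collinear(X: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
        # Drop one variable from each pair whose absolute correlation exceeds the threshold
        corr = X.corr().abs()
        cols = list(X.columns)
        to_drop = set()
        for i, a in enumerate(cols):
            for b in cols[i + 1:]:
                if a in to_drop or b in to_drop:
                    continue
                if corr.loc[a, b] > threshold:
                    # keep the variable that is, on average, less correlated with the rest
                    to_drop.add(a if corr[a].mean() > corr[b].mean() else b)
        return X.drop(columns=sorted(to_drop))

The surviving columns can then be fed into an all-possible-models regression search, which is the stage the paper reports as being much faster once the multicollinearity source variables have been removed.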
Weight-based discrimination: an ubiquitary phenomenon?
Sikorski, C; Spahlholz, J; Hartlev, M; Riedel-Heller, S G
2016-02-01
Despite strong indications of a high prevalence of weight-related stigmatization in individuals with obesity, limited attention has been given to the role of weight discrimination in examining the stigma of obesity. Studies to date rely on a limited set of data sources, and additional studies are needed to confirm the findings of previous work. In particular, data for Europe are lacking, and are needed in light of a recent ruling of the European Court of Justice that addressed weight-based discrimination. The data were derived from a large representative telephone survey in Germany (n=3003). The dependent variable, weight-based discrimination, was assessed with a one-item question. The lifetime prevalence of weight discrimination across different sociodemographic variables was determined. Logistic regression models were used to assess the association of independent and dependent variables. A sub-group analysis was conducted analyzing all participants with a body mass index ⩾25 kg m(-2). The overall prevalence of weight-based discrimination was 7.3%. Large differences, however, were observed regarding weight status. In normal-weight and overweight participants the prevalence was 5.6%, but this number doubled in participants with obesity class I (10.2%), and quadrupled in participants with obesity class II (18.7%) and underweight (19.7%). Among participants with obesity class III, every third participant reported accounts of weight-based discrimination (38%). In regression models, after adjustment, the associations with weight status and female gender (odds ratio: 2.59, P<0.001) remained highly significant. Discrimination seems to be a ubiquitous phenomenon, at least for some groups at special risk, such as heavier individuals and women. Our findings therefore emphasize the need for research and intervention on weight discrimination among adults with obesity, including anti-discrimination legislation.
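The adjusted odds ratios quoted here come from standard logistic regression. The sketch below uses statsmodels on synthetic data (all variable names and coefficients are invented stand-ins; the survey's actual coding is not reproduced):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 3003
    df = pd.DataFrame({
        "obesity_class": rng.integers(0, 4, n),   # 0 = normal/overweight ... 3 = class III
        "female":        rng.integers(0, 2, n),
    })
    logit_p = -2.5 + 0.6 * df["obesity_class"] + 0.9 * df["female"]
    df["discriminated"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(df[["obesity_class", "female"]])
    fit = sm.Logit(df["discriminated"], X).fit(disp=0)
    print(np.exp(fit.params))   # adjusted odds ratios, analogous to the OR = 2.59 reported for women

Exponentiating the fitted coefficients turns them into odds ratios, which is the form in which the associations for weight status and gender are reported in the abstract.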
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. It also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling buildings' energy performance. The European EPBD Directive obliges the implementation of an energy certification procedure, which gives an insight into buildings' energy performance via existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper were developed using data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper; the majority of buildings in the database have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper uses an energy certificate database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings' energy performance and the variables in the database using statistical dependence tests. Building energy performance in the database is expressed as a building energy efficiency rate (from A+ to G), which is based on the specific annual energy needed for heating under referential climatic data [kWh/(m2a)]. The independent variables in the database are the surface areas and volume of the conditioned part of the building, the building shape factor, energy used for heating, CO2 emission, building age, and year of reconstruction. The research results give an insight into the possibilities of the methods used for modelling buildings' energy performance. They further provide an analysis of the dependencies between buildings' energy performance as the dependent variable and the independent variables from the database. The presented results could be used for the development of a new predictive model of building energy performance.
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Abbot, Dorian S.; Fu, Tzung-May; Jacob, Daniel J.; Chance, Kelly; Kurosu, Thomas P.; Guenther, Alex; Wiedinmyer, Christine; Stanton, Jenny C.; Pilling, Michael J.;
2006-01-01
Quantifying isoprene emissions using satellite observations of the formaldehyde (HCHO) columns is subject to errors involving the column retrieval and the assumed relationship between HCHO columns and isoprene emissions, taken here from the GEOS-CHEM chemical transport model. Here we use a 6-year (1996-2001) HCHO column data set from the Global Ozone Monitoring Experiment (GOME) satellite instrument to (1) quantify these errors, (2) evaluate GOME-derived isoprene emissions with in situ flux measurements and a process-based emission inventory (Model of Emissions of Gases and Aerosols from Nature, MEGAN), and (3) investigate the factors driving the seasonal and interannual variability of North American isoprene emissions. The error in the GOME HCHO column retrieval is estimated to be 40%. We use the Master Chemical Mechanism (MCM) to quantify the time-dependent HCHO production from isoprene, alpha- and beta-pinenes, and methylbutenol and show that only emissions of isoprene are detectable by GOME. The time-dependent HCHO yield from isoprene oxidation calculated by MCM is 20-30% larger than in GEOS-CHEM. GOME-derived isoprene fluxes track the observed seasonal variation of in situ measurements at a Michigan forest site with a -30% bias. The seasonal variation of North American isoprene emissions during 2001 inferred from GOME is similar to MEGAN, with GOME emissions typically 25% higher (lower) at the beginning (end) of the growing season. GOME and MEGAN both show a maximum over the southeastern United States, but they differ in the precise location. The observed interannual variability of this maximum is 20-30%, depending on month. The MEGAN isoprene emission dependence on surface air temperature explains 75% of the month-to-month variability in GOME-derived isoprene emissions over the southeastern United States during May-September 1996-2001.
NASA Astrophysics Data System (ADS)
Haddad, Z. S.; Steward, J. L.; Tseng, H.-C.; Vukicevic, T.; Chen, S.-H.; Hristova-Veleva, S.
2015-06-01
Satellite microwave observations of rain, whether from radar or passive radiometers, depend in a very crucial way on the vertical distribution of the condensed water mass and on the types and sizes of the hydrometeors in the volume resolved by the instrument. This crucial dependence is nonlinear, with different types and orders of nonlinearity that are due to differences in the absorption/emission and scattering signatures at the different instrument frequencies. Because it is not monotone as a function of the underlying condensed water mass, the nonlinearity requires great care in its representation in the observation operator, as the inevitable uncertainties in the numerous precipitation variables are not directly convertible into an additive white uncertainty in the forward calculated observations. In particular, when attempting to assimilate such data into a cloud-permitting model, special care needs to be applied to describe and quantify the expected uncertainty in the observations operator in order not to turn the implicit white additive uncertainty on the input values into complicated biases in the calculated radiances. One approach would be to calculate the means and covariances of the nonlinearly calculated radiances given an a priori joint distribution for the input variables. This would be a very resource-intensive proposal if performed in real time. We propose a representation of the observation operator based on performing this moment calculation off line, with a dimensionality reduction step to allow for the effective calculation of the observation operator and the associated covariance in real time during the assimilation. The approach is applicable to other remotely sensed observations that depend nonlinearly on model variables, including wind vector fields. The approach has been successfully applied to the case of tropical cyclones, where the organization of the system helps in identifying the dimensionality-reducing variables.
A liquid lens switching-based motionless variable fiber-optic delay line
NASA Astrophysics Data System (ADS)
Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz
2018-05-01
We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs) resulting in a polarization-independent operation. Depending on intended application, the lengths of the SMFs can be chosen accordingly to achieve the desired VFODL operation dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically-controlled, involves no bulk moving parts, and can be fully-automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.
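The delay contrast obtained by switching between the two fiber paths follows directly from the group delay of a fiber. As a rough worked example (the group index value n_g ≈ 1.468 is a typical figure for standard SMF, not one quoted in the paper):

    \tau = \frac{n_g L}{c}, \qquad \Delta\tau = \frac{n_g (L_2 - L_1)}{c} \approx 4.9\ \text{ns per metre of length difference},

so choosing SMF patch cords whose lengths differ by, say, 10 m would give a switchable delay step on the order of 49 ns, which is how the module's dynamic range is set by the selected fiber lengths.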
Demirjian's method in the estimation of age: A study on human third molars.
Lewis, Amitha J; Boaz, Karen; Nagesh, K R; Srikant, N; Gupta, Neha; Nandita, K P; Manaktala, Nidhi
2015-01-01
The primary aim of the following study is to estimate chronological age based on the stages of third molar development, following the eight-stage (A to H) method of Demirjian et al. (along with the two modifications described by Orhan); the secondary aim is to compare third molar development with sex and age. The sample consisted of 115 orthopantomograms from South Indian subjects with known chronological age and gender. Multiple regression analysis was performed with chronological age as the dependent variable and third molar root development as the independent variable. All statistical analysis was performed using the SPSS 11.0 package (IBM Corporation). No statistically significant differences were found in third molar development between males and females. Depending on the number of wisdom teeth available in an individual, R² varied from 0.21 to 0.48 for males and from 0.16 to 0.38 for females. New equations were derived for estimating chronological age. The chronological age of a South Indian individual between 14 and 22 years may be estimated based on the regression formulae. However, additional studies with a larger study population must be conducted to meet the need for population-based information on third molar development.
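The regression step described here amounts to ordinary least squares with age as the response and the coded developmental stages as predictors. A small statsmodels sketch on synthetic data (the stage coding, coefficients, and sample values are hypothetical, not the study's data) is:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 115
    # Demirjian stages A-H coded 1-8 for two available third molars (hypothetical coding)
    stage_38 = rng.integers(1, 9, n)
    stage_48 = rng.integers(1, 9, n)
    age = 13.0 + 0.9 * stage_38 + 0.3 * stage_48 + rng.normal(0, 1.5, n)

    X = sm.add_constant(np.column_stack([stage_38, stage_48]))
    fit = sm.OLS(age, X).fit()
    print(fit.params, fit.rsquared)   # regression formula and R², analogous to those reported

Separate equations of this form, one per combination of available wisdom teeth and per sex, yield the population-specific formulae the abstract refers to.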
Hörnchen, H; Betz, R; Kotlarek, F; Roebruck, P
1983-01-01
In 1965, Urbach et al. and Rudolph et al. [35, 39] described a loss of heart rate variability in severely ill neonates. In this study we investigated the correlation between instantaneous heart rate patterns and status diagnosis. We used a microprocessor-based cardiorespirography system. Seventy-five newborn infants (51 prematures and 24 term neonates) were studied for about 12 hours each. Twenty-nine patients had a second record after the first investigation. Parameters were: type of frequency and oscillation, long time variability (LTV), short time variability (STV), and the newly introduced P-value (maximal difference between two successive R-peaks in five minutes). We found clear differences between the study groups. With increasing severity of illness, mean values ("group mean values") of long time variability, short time variability and P-value decreased. A fixed heart rate became predominant. The most pronounced loss of heart rate variability was seen in infants with severe intracranial bleeding, thus offering a tentative diagnosis. For statistical analysis, long time variability and the silent oscillation type proved to be the best parameters for this diagnosis. Severely decreased heart rate variation was also seen in infants with acute renal failure - possibly because of brain edema - after application of muscle relaxants, repeated doses of sedatives, and after prolonged anesthesia. Otherwise, heart rate variability was probably dependent on age and gestational age in prematures and newborn infants without intracranial bleeding. Microprocessor-based long-time cardiorespirography can thus be used as a simple screening method for the diagnosis of neonatal intracerebral bleeding. In future studies, transcutaneous measurement of oxygen tension should be included.
NASA Astrophysics Data System (ADS)
Kumari, K.; Oberheide, J.
2017-12-01
Nonmigrating tidal diagnostics of SABER temperature observations in the ionospheric dynamo region reveal a large amount of variability on time-scales of a few days to weeks. In this paper, we discuss the physical reasons for the observed short-term tidal variability using a novel approach based on Information theory and Bayesian statistics. We diagnose short-term tidal variability as a function of season, QBO, ENSO, and solar cycle and other drivers using time dependent probability density functions, Shannon entropy and Kullback-Leibler divergence. The statistical significance of the approach and its predictive capability is exemplified using SABER tidal diagnostics with emphasis on the responses to the QBO and solar cycle. Implications for F-region plasma density will be discussed.
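The information-theoretic quantities named here have simple discrete forms once the tidal diagnostics are binned into probability density functions. A minimal numpy sketch (the binned example distributions are invented for illustration) is:

    import numpy as np

    def shannon_entropy(p):
        # Entropy (in bits) of a discrete probability distribution p
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def kl_divergence(p, q):
        # Kullback-Leibler divergence D(p || q) in bits; assumes q > 0 wherever p > 0
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    # Compare a hypothetical binned tidal-amplitude PDF for one season against a flat reference
    p = np.array([0.10, 0.40, 0.30, 0.20])
    q = np.array([0.25, 0.25, 0.25, 0.25])
    print(shannon_entropy(p), kl_divergence(p, q))

In this setting, a low entropy indicates a sharply peaked (more predictable) amplitude distribution, while a large divergence between distributions conditioned on, say, opposite QBO phases flags that driver as informative for the observed short-term variability.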
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
Reward-Dependent Modulation of Movement Variability
Izawa, Jun; Shadmehr, Reza
2015-01-01
Movement variability is often considered an unwanted byproduct of a noisy nervous system. However, variability can signal a form of implicit exploration, indicating that the nervous system is intentionally varying the motor commands in search of actions that yield the greatest success. Here, we investigated the role of the human basal ganglia in controlling reward-dependent motor variability as measured by trial-to-trial changes in performance during a reaching task. We designed an experiment in which the only performance feedback was success or failure and quantified how reach variability was modulated as a function of the probability of reward. In healthy controls, reach variability increased as the probability of reward decreased. Control of variability depended on the history of past rewards, with the largest trial-to-trial changes occurring immediately after an unrewarded trial. In contrast, in participants with Parkinson's disease, a known example of basal ganglia dysfunction, reward was a poor modulator of variability; that is, the patients showed an impaired ability to increase variability in response to decreases in the probability of reward. This was despite the fact that, after rewarded trials, reach variability in the patients was comparable to healthy controls. In summary, we found that movement variability is partially a form of exploration driven by the recent history of rewards. When the function of the human basal ganglia is compromised, the reward-dependent control of movement variability is impaired, particularly affecting the ability to increase variability after unsuccessful outcomes. PMID:25740529
Variability in large-scale wind power generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiviluoma, Juha; Holttinen, Hannele; Weir, David
2015-10-25
The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include the probability distribution for different ramp durations, seasonal and diurnal variability, and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with high changes in wind output, which were not present in large areas with well-dispersed wind power.
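Ramp statistics of the kind reported here are simple first differences of aggregated generation expressed as a share of installed capacity. A small pandas sketch with synthetic hourly data (the values and the capacity figure are made up) is:

    import pandas as pd

    def hourly_ramps(wind_mw: pd.Series, capacity_mw: float) -> pd.Series:
        # 1-hour ramps as a percentage of nominal capacity
        return 100.0 * wind_mw.diff() / capacity_mw

    idx = pd.date_range("2015-01-01", periods=24, freq="h")
    gen = pd.Series([100, 120, 150, 160, 180, 210, 260, 300, 310, 280, 260, 240,
                     230, 220, 260, 300, 340, 360, 330, 300, 280, 260, 250, 240],
                    index=idx, dtype=float)
    ramps = hourly_ramps(gen, capacity_mw=1000.0)
    print(ramps.abs().max())   # compare against the ~10% vs ~30% ranges cited above

Longer ramp durations are obtained by differencing over several hours (e.g. wind_mw.diff(4) for 4 h ramps), and the probability distributions reported in the paper are histograms of such series over multiple years.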
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of the residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm that reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for this performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations produced by the impact load often depend on the structural design, this paper proposes a new, efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
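For a stable linear system excited by an impact, the integrated quadratic performance index mentioned here can be evaluated without time integration by solving a Lyapunov equation: with \dot{x} = A x and x(0) = x_0, one has J = \int_0^{\infty} x^{T} Q x \, dt = x_0^{T} P x_0, where A^{T} P + P A + Q = 0. A small scipy sketch on a hypothetical system (matrices and initial state are illustrative only) is:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Hypothetical stable state-space model of the residual (free) vibration
    A = np.array([[0.0, 1.0],
                  [-10.0, -0.5]])
    Q = np.eye(2)                       # weighting in the quadratic performance index
    x0 = np.array([0.0, 1.0])           # initial excitation caused by the impact

    # Solve A^T P + P A = -Q, then J = x0^T P x0
    P = solve_continuous_lyapunov(A.T, -Q)
    J = x0 @ P @ x0
    print(J)

Because both P and the impact-induced initial state x0 depend on the design variables, the sensitivity of J needs the adjoint treatment discussed in the paper; the sketch above only illustrates how the index itself is obtained.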
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2017-12-01
Most existing soil carbon cycle models have represented the moisture and temperature dependence of soil respiration using deterministic response functions. However, empirical data suggest abundant variability in both of these dependencies. We here use the recently developed SUPECA (Synthesizing Unit and Equilibrium Chemistry Approximation) theory and a published dynamic energy budget based microbial model to investigate how soil carbon decomposition responds to changes in soil moisture and temperature under the influence of organo-mineral interactions. We found that both the temperature and moisture responses are hysteretic and cannot be represented by deterministic functions. We then evaluate how multi-scale variability in temperature and moisture forcing affects soil carbon decomposition. Our results indicate that when the model is run in scenarios mimicking laboratory incubation experiments, the often-observed temperature and moisture response functions can be well reproduced. However, when such response functions are used for model extrapolation involving more transient variability in temperature and moisture forcing (as found in real ecosystems), the dynamic model that explicitly accounts for hysteresis in the temperature and moisture dependency produces significantly different estimates of soil carbon decomposition, suggesting that there are large biases in models that do not resolve such hysteresis. We call for more studies on organo-mineral interactions to improve the modeling of this hysteresis.
A Protection Motivation Theory application to date rape education.
Singh, Shweta; Orwat, John; Grossman, Susan
2011-12-01
Date rape risk communication is a key component of education-based Date Rape Prevention Programs, which are common across colleges. In such programs, risk assessment in date rape is approached cautiously in order to avoid a tone of "victim blaming." Since assessing any risk requires understanding the social context surrounding the risky situation and the individual's unique relationship with that context, this study examines Protection Motivation Theory as it applies to handling the risk of date rape without victim blaming. The paper links individual personality and social contexts with risk communication. The study sample comprised 367 undergraduate women enrolled in a large Southern public university. The study examines the relationships of dating activity, social competency, and type of information provided with the dependent variables of date rape-related protection behavior (intent), belief, and knowledge. A factorial multiple analysis of covariance found that the dependent variables had a significant relationship with aspects of social competency and dating activity. Exposure to varying information about date rape was not significantly related to the dependent variables of date rape-related protection behavior (intent), belief, and knowledge. The identification of social competency and dating activity status as protective factors in this study makes a significant contribution to practice and research efforts in date rape education.
Widespread Transient Hoogsteen Base-Pairs in Canonical Duplex DNA with Variable Energetics
Alvey, Heidi S.; Gottardo, Federico L.; Nikolova, Evgenia N.; Al-Hashimi, Hashim M.
2015-01-01
Hoogsteen base-pairing involves a 180 degree rotation of the purine base relative to Watson-Crick base-pairing within DNA duplexes, creating alternative DNA conformations that can play roles in recognition, damage induction, and replication. Here, using Nuclear Magnetic Resonance R1ρ relaxation dispersion, we show that transient Hoogsteen base-pairs occur across more diverse sequence and positional contexts than previously anticipated. We observe sequence-specific variations in Hoogsteen base-pair energetic stabilities that are comparable to variations in Watson-Crick base-pair stability, with Hoogsteen base-pairs being more abundant for energetically less favorable Watson-Crick base-pairs. Our results suggest that the variations in Hoogsteen stabilities and rates of formation are dominated by variations in Watson-Crick base pair stability, suggesting a late transition state for the Watson-Crick to Hoogsteen conformational switch. The occurrence of sequence and position-dependent Hoogsteen base-pairs provide a new potential mechanism for achieving sequence-dependent DNA transactions. PMID:25185517
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
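The augmentation trick described here can be sketched with plain least squares: when there are more independent calibration variables (e.g., six loads plus temperature) than measured gage outputs, the extra independent variable is copied into the dependent set so the two counts match. The numpy example below uses synthetic data and a first-order model only (the real method uses higher-order regression terms):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    loads = rng.normal(size=(n, 6))                 # six balance load components
    temp = rng.normal(size=(n, 1))                  # temperature as a 7th independent variable
    X = np.hstack([loads, temp])                    # 7 independent calibration variables

    # Synthetic gage outputs: 6 outputs, i.e. one fewer than the independent variables
    gages = loads @ rng.normal(size=(6, 6)) + 0.05 * temp + 0.01 * rng.normal(size=(n, 6))

    # Augment the dependent set with the temperature column itself, as described above,
    # so the number of dependent variables matches the number of independent variables.
    Y = np.hstack([gages, temp])

    A = np.hstack([np.ones((n, 1)), X])             # first-order terms only, for brevity
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    print(coef.shape)                               # (8, 7): intercept + 7 terms for 7 dependents

Because the added dependent variable regresses perfectly on itself, it does not alter the coefficients obtained for the genuine gage outputs, which is why the augmented fit can still be converted to the usual data reduction matrix.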
Application of effective discharge analysis to environmental flow decision-making
McKay, S. Kyle; Freeman, Mary C.; Covich, A.P.
2016-01-01
Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.
Design approaches to experimental mediation
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
Discrete-time bidirectional associative memory neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liang, J.; Cao, J.; Ho, D. W. C.
2005-02-01
Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some are delay-independent; they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
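The framework pairs a dasymetric population estimate with Monte Carlo draws for the unknown counts. A toy numpy sketch of the idea (the weights, totals, Poisson assumption, and downstream demand model are all hypothetical) is:

    import numpy as np

    rng = np.random.default_rng(3)

    # Dasymetric step: allocate a known county total to grid cells in proportion
    # to an ancillary weight (e.g., the developed land-cover fraction of each cell)
    county_total = 50_000
    weights = np.array([0.05, 0.30, 0.45, 0.15, 0.05])
    expected_counts = county_total * weights / weights.sum()

    # Monte Carlo step: treat each cell count as a random variable centred on the
    # dasymetric estimate and propagate the draws through a toy output model
    n_draws = 10_000
    counts = rng.poisson(expected_counts, size=(n_draws, len(weights)))
    per_capita_demand = 0.35
    demand = counts.sum(axis=1) * per_capita_demand
    print(demand.mean(), np.percentile(demand, [2.5, 97.5]))

The essential point is that the dasymetric model supplies the cell-level distributional parameters that the simulation needs but that no census product reports directly at that spatial resolution.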
Variables in psychology: a critique of quantitative psychology.
Toomela, Aaro
2008-09-01
Mind is hidden from direct observation; it can be studied only by observing behavior. Variables encode information about behaviors. There is no one-to-one correspondence between behaviors and mental events underlying the behaviors, however. In order to understand mind it would be necessary to understand exactly what information is represented in variables. This aim cannot be reached after variables are already encoded. Therefore, statistical data analysis can be very misleading in studies aimed at understanding mind that underlies behavior. In this article different kinds of information that can be represented in variables are described. It is shown how informational ambiguity of variables leads to problems of theoretically meaningful interpretation of the results of statistical data analysis procedures in terms of hidden mental processes. Reasons are provided why presence of dependence between variables does not imply causal relationship between events represented by variables and absence of dependence between variables cannot rule out the causal dependence of events represented by variables. It is concluded that variable-psychology has a very limited range of application for the development of a theory of mind-psychology.
NASA Astrophysics Data System (ADS)
Ho, M. W.; Devineni, N.; Cook, E. R.; Lall, U.
2017-12-01
As populations and associated economic activity in the US evolve, regional demands for water likewise change. For regions dependent on surface water, dams and reservoirs are critical to storing and managing releases of water and regulating the temporal and spatial availability of water in order to meet these demands. Storage capacities typically range from seasonal storage in the east to multi-annual and decadal-scale storage in the drier west. However, most dams in the US were designed with limited knowledge regarding the range, frequency, and persistence of hydroclimatic extremes. Demands for water supplied by these dams have likewise changed. Furthermore, many dams in the US are now reaching or have already exceeded their economic design life. The converging issues of aging dams, improved knowledge of hydroclimatic variability, and evolving demands for dam services result in a pressing need to evaluate existing reservoir capacities with respect to contemporary water demands, long-term hydroclimatic variability, and service reliability into the future. Such an effort is possible given the recent development of two datasets that respectively address hydroclimatic variability in the conterminous United States over the past 555 years and human water demand related water stress over the same region. The first data set is a paleoclimate reconstruction of streamflow variability across the CONUS region based on a tree-ring informed reconstruction of the Palmer Drought Severity Index. This streamflow reconstruction suggested that wet spells with shorter dry spells were a key feature of 20th century streamflow compared with the preceding 450 years. The second data set is an annual cumulative drought index that is a measure of water balance based on water supplied through precipitation and water demands based on evaporative, agricultural, urban, and industrial demands. This index identified urban and regional hotspots that were particularly dependent on water transfers and vulnerable to persistent drought risk. These data sets are used in conjunction with the national inventory of dams to assess the current capacity of dams to meet water demands considering variability in streamflow over the past 555 years. A case study in the North-East US is presented.
Using directed information for influence discovery in interconnected dynamical systems
NASA Astrophysics Data System (ADS)
Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas
2008-08-01
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in various applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is the inference of the underlying structure of the system (which variables are connected to which others) based on the output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that assume a linear structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems as well as true biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
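For orientation, the directed information from a length-N process X to a process Y is commonly defined (Massey's formulation; the DTI metric used in the paper may differ in its exact conditioning or normalization) as

    I(X^{N} \rightarrow Y^{N}) = \sum_{n=1}^{N} I(X^{n}; Y_{n} \mid Y^{n-1}),

where X^{n} = (X_1, \ldots, X_n) denotes the past and present of X. Unlike the ordinary mutual information I(X^{N}; Y^{N}), this quantity is not symmetric in X and Y, and it is this asymmetry that allows directed (putatively causal) influences between variables to be scored from their observed time trajectories.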
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability. The model includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for the wheel track and the undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; and (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each type of small-scale variability was tested for (a) isotropic gas diffusivity and (b) horizontal gas diffusivity reduced by a constant factor, yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under the wheel tracks, with the strongest oxygen depletion occurring for uniformly distributed variability combined with anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity. However, it did not yet consider the spatial dependency of the variability, which could aggravate the effects even further. To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters in a spatially explicit way, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
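For orientation, a generic steady-state form of the two-dimensional anisotropic diffusion problem solved by such models (the notation is generic, not the authors') is

    \frac{\partial}{\partial x}\!\left( D_h \frac{\partial C}{\partial x} \right) + \frac{\partial}{\partial z}\!\left( D_v \frac{\partial C}{\partial z} \right) = S(x, z),

where C is the soil-air oxygen concentration, D_h and D_v are the horizontal and vertical effective gas diffusivities (anisotropy means D_h \neq D_v), and S is the local oxygen consumption rate. Spatial variability enters through the dependence of D_h and D_v on position, which is exactly what the random-cell parametrizations described above vary.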
NASA Astrophysics Data System (ADS)
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-related variables were highly correlated with one another but were all negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the ozone values. In 1999, 2001 and 2002, ozone was weakly influenced, predominantly by one of the meteorological variables. For the year 2000, however, the model did not indicate a predominant influence of the meteorological variables on ozone, which points to variation in sun radiation time. This could be due to other factors that were not explicitly considered in this study.
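A rough sketch of the general workflow described (select predictors with high loadings on leading principal components, then fit a multiple linear regression). It uses unrotated PCA loadings rather than a varimax rotation, and the synthetic weather and ozone data are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic, mutually correlated "meteorological" predictors (illustrative only).
temp = rng.normal(25, 5, n)
X = np.column_stack([
    temp + rng.normal(0, 1, n),             # dry temperature
    temp + rng.normal(0, 1, n),             # max temperature
    temp - 8 + rng.normal(0, 1, n),         # min temperature
    60 - 1.2 * temp + rng.normal(0, 4, n),  # relative humidity
    rng.normal(8, 2, n),                    # sun radiation time
])
ozone = 2.0 * temp - 0.5 * X[:, 3] + rng.normal(0, 5, n)

# 1. PCA on standardized predictors; keep components explaining ~90% variance.
Z = (X - X.mean(0)) / X.std(0)
pca = PCA().fit(Z)
k = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90) + 1
loadings = pca.components_[:k]              # shape (k, n_predictors)

# 2. Select predictors with a high absolute loading on any retained component
#    (a stand-in for "high varimax loadings").
selected = np.where(np.abs(loadings).max(axis=0) > 0.4)[0]

# 3. Ordinary multiple linear regression of ozone on the selected predictors.
mlr = LinearRegression().fit(X[:, selected], ozone)
print("selected predictor columns:", selected)
print("R^2:", round(mlr.score(X[:, selected], ozone), 3))
```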
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
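For the percentile-bootstrap idea in particular, here is a minimal observed-variable sketch (ordinary regressions rather than the latent-variable models compared in the study): the indirect effect a*b is re-estimated in each resample and the interval is read off the percentiles. The data and effect sizes are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)             # a-path
y = 0.39 * m + 0.0 * x + rng.normal(size=n)   # b-path, no direct effect

def indirect(x, m, y):
    # a from m ~ x, b from y ~ m + x, both by ordinary least squares.
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0][1]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):                    # 2,000 resamples, as in the study
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, "
      f"95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```

The interval is declared significant when it excludes zero; the bias-corrected variants adjust the percentile cut-points rather than the resampling itself.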
Azimuthal dependence in the gravity field induced by recent and past cryospheric forcings
NASA Technical Reports Server (NTRS)
Yuen, David A.; Gasperini, Paolo; Sabadini, Roberto; Boschi, Enzo
1987-01-01
Present-day glacial activities and the current variability of the Antarctic ice volume can cause variations in the long-wavelength gravity field as a consequence of transient viscoelastic responses in the mantle. The azimuthal dependence of the secular variations of the gravitational potential are studied and it is found that the nonaxisymmetric contributions are more important for recent glacial retreats than for Pleistocene deglaciation. Changes in land-based ice covering Antarctica can be detected by monitoring satellite orbits and their sensitivity to variations in gravitational harmonic for degree l greater than 3. Resonances in satellite orbits may be useful for detecting these azimuthally-dependent gravity signals.
NASA Astrophysics Data System (ADS)
Lustig-Yaeger, Jacob; Schwieterman, Edward; Meadows, Victoria; Fujii, Yuka; NAI Virtual Planetary Laboratory, ISSI 'The Exo-Cartography Inverse Problem'
2016-10-01
Earth is our only example of a habitable world and is a critical reference point for potentially habitable exoplanets. While disk-averaged views of Earth that mimic exoplanet data can be obtained by interplanetary spacecraft, these datasets are often restricted in wavelength range, and are limited to the Earth phases and viewing geometries that the spacecraft can feasibly access. We can overcome these observational limitations using a sophisticated UV-MIR spectral model of Earth that has been validated against spacecraft observations in wavelength-dependent brightness and phase (Robinson et al., 2011; 2014). This model can be used to understand the information content - and the optimal means for extraction of that information - for multi-wavelength, time-dependent, disk-averaged observations of the Earth. In this work, we explore key telescope parameters and observing strategies that offer the greatest insight into the wavelength-, phase-, and rotationally-dependent variability of Earth as if it were an exoplanet. Using a generalized coronagraph instrument simulator (Robinson et al., 2016), we synthesize multi-band, time-series observations of the Earth that are consistent with large space-based telescope mission concepts, such as the Large UV/Optical/IR (LUVOIR) Surveyor. We present fits to this dataset that leverage the rotationally-induced variability to infer the number of large-scale planetary surface types, as well as their respective longitudinal distributions and broadband albedo spectra. Finally, we discuss the feasibility of using such methods to identify and map terrestrial exoplanets surfaces with the next generation of space-based telescopes.
Isolation of circulating tumor cells from pancreatic cancer by automated filtration.
Brychta, Nora; Drosch, Michael; Driemel, Christiane; Fischer, Johannes C; Neves, Rui P; Esposito, Irene; Knoefel, Wolfram; Möhlendick, Birte; Hille, Claudia; Stresemann, Antje; Krahn, Thomas; Kassack, Matthias U; Stoecklein, Nikolas H; von Ahsen, Oliver
2017-10-17
It is now widely recognized that the isolation of circulating tumor cells based on cell surface markers might be hindered by variability in their protein expression. Especially in pancreatic cancer, isolation based only on EpCAM expression has produced very diverse results. Methods that are independent of surface markers and therefore independent of phenotypical changes in the circulating cells might increase CTC recovery also in pancreatic cancer. We compared an EpCAM-dependent (IsoFlux) and a size-dependent (automated Siemens Healthineers filtration device) isolation method for the enrichment of pancreatic cancer CTCs. The recovery rate of the filtration based approach is dramatically superior to the EpCAM-dependent approach especially for cells with low EpCAM-expression (filtration: 52%, EpCAM-dependent: 1%). As storage and shipment of clinical samples is important for centralized analyses, we also evaluated the use of frozen diagnostic leukapheresis (DLA) as source for isolating CTCs and subsequent genetic analysis such as KRAS mutation detection analysis. Using frozen DLA samples of pancreatic cancer patients we detected CTCs in 42% of the samples by automated filtration.
DBH Prediction Using Allometry Described by Bivariate Copula Distribution
NASA Astrophysics Data System (ADS)
Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.
2017-12-01
Forest biomass mapping based on single-tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) to per-tree aboveground biomass. Because ALS cannot measure DBH directly, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of seeking an explicit mathematical equation that explains the underlying relationship between DBH and the other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables and solves for DBH under the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with benchmark least-squares linear regression and k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and it contributes to the reduction of prediction uncertainty.
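A stripped-down sketch of the identical-cumulative-probability idea, using only one predictor (height) and empirical marginal distributions: each tree's height quantile is mapped to the same quantile of the DBH distribution. This ignores the multivariate copula fitting in the paper, and the simulated allometry is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
height = rng.lognormal(mean=3.0, sigma=0.3, size=n)            # m (simulated)
dbh = 1.2 * height ** 1.1 * rng.lognormal(0, 0.1, size=n)      # cm (simulated)

train, test = np.arange(0, 300), np.arange(300, n)

def ecdf(sample, x):
    """Empirical cumulative probability of x within `sample`."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

# Predict DBH so that F_dbh(DBH_pred) = F_height(height), i.e. matching quantiles.
u = ecdf(height[train], height[test])
dbh_pred = np.quantile(dbh[train], np.clip(u, 0, 1))

rmse = np.sqrt(np.mean((dbh_pred - dbh[test]) ** 2))
print("quantile-matching RMSE (cm):", round(rmse, 2))
```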
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests, we analyze and compare the calibration parameters using Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed with the Sobol method, which is based on variance decomposition, and with the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
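As an illustration of variance-based (Sobol) sensitivity analysis in general, and not of the paper's calibration model, the sketch below estimates first-order indices for the standard Ishigami test function with a simple pick-freeze Monte Carlo estimator; the sample size and test function are assumptions.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Classic 3-input test function for sensitivity analysis benchmarks.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))     # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))

yA, yB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order index S_i: replace column i of A by column i of B and compare.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S_i = np.mean(yB * (ishigami(ABi) - yA)) / var_y   # Saltelli-style estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")
# Analytical first-order indices for a=7, b=0.1 are roughly 0.31, 0.44 and 0.
```

In the calibration setting, the test function would be replaced by the model that maps the uncertain calibration inputs to the reconstruction error of interest.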
Forcada, J.; Malone, D.; Royle, J. Andrew; Staniland, I.J.
2009-01-01
Correctly quantifying the impacts of rare apex marine predators is essential to ecosystem-based approaches to fisheries management, where harvesting must be sustainable for targeted species and their dependent predators. This requires modelling the uncertainty in such processes as predator life history, seasonal abundance and movement, size-based predation, energetic requirements, and prey vulnerability. We combined these uncertainties to evaluate the predatory impact of transient leopard seals on a community of mesopredators (seals and penguins) and their prey at South Georgia, and assess the implications for an ecosystem-based management. The mesopredators are highly dependent on Antarctic krill and icefish, which are targeted by regional fisheries. We used a state-space formulation to combine (1) a mark-recapture open-population model and individual identification data to assess seasonally variable leopard seal arrival and departure dates, numbers, and residency times; (2) a size-based bioenergetic model; and (3) a size-based prey choice model from a diet analysis. Our models indicated that prey choice and consumption reflected seasonal changes in leopard seal population size and structure, size-selective predation and prey vulnerability. A population of 104 (90-125) leopard seals, of which 64% were juveniles, consumed less than 2% of the Antarctic fur seal pup production of the area (50% of total ingested energy, IE), but ca. 12-16% of the local gentoo penguin population (20% IE). Antarctic krill (28% IE) were the only observed food of leopard seal pups and supplemented the diet of older individuals. Direct impacts on krill and fish were negligible, but the "escapement" due to leopard seal predation on fur seal pups and penguins could be significant for the mackerel icefish fishery at South Georgia. These results suggest that: (1) rare apex predators like leopard seals may control, and may depend on, populations of mesopredators dependent on prey species targeted by fisheries; and (2) predatory impacts and community control may vary throughout the predator's geographic range, and differ across ecosystems and management areas, depending on the seasonal abundance of the prey and the predator's dispersal movements. This understanding is important to integrate the predator needs as natural mortality of its prey in models to set prey catch limits for fisheries. Reliable estimates of the variability of these needs are essential for a precautionary interpretation in the context of an ecosystem-based management.
A model of urban scaling laws based on distance dependent interactions
NASA Astrophysics Data System (ADS)
Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.; Neto, Camilo Rodrigues
2017-03-01
Socio-economic properties of a city grow faster than linearly with its population (the so-called superlinear scaling, which appears as a power law with exponent greater than one in a log-log plot). Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to sublinear scaling of these variables. In this work, we addressed a simple explanation for those scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential which captured the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We addressed how public policies could take advantage of these properties to improve urban development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence.
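To make the scaling-exponent language concrete, a small sketch estimating beta in Y ~ c * N^beta by ordinary least squares on log-transformed synthetic city data, and checking whether a superlinear and a sublinear exponent sum to about 2; the generated exponents (1.15 and 0.85) and scatter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10 ** rng.uniform(4, 7, 300)                    # synthetic city populations

def synth(beta, scatter=0.15):
    # Y = c * N^beta with log-normal scatter (illustrative only).
    return 2.0 * N ** beta * rng.lognormal(0.0, scatter, N.size)

def fit_exponent(Y):
    beta, _ = np.polyfit(np.log(N), np.log(Y), 1)   # slope of the log-log OLS fit
    return beta

gdp = synth(1.15)      # socio-economic indicator (superlinear)
roads = synth(0.85)    # infrastructure indicator (sublinear)

b_soc, b_infra = fit_exponent(gdp), fit_exponent(roads)
print(f"beta_socioeconomic ~ {b_soc:.2f}, beta_infrastructure ~ {b_infra:.2f}, "
      f"sum ~ {b_soc + b_infra:.2f}")
```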
Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.
Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J
2016-05-18
Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision was recently associated with neural codes particularly in early sensory cortex, we have so far no understanding of the neural bases underlying the variability of WM precision, and how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Although stronger load-related BOLD increase in superior IPS was related to lower increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance, in human participants. It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and within individuals (Fougnie et al., 2012; van den Berg et al., 2012) under increased memory load, possibly in cooperation with perceptual systems of the occipital cortex. These findings substantially extend our understanding of the nature of capacity limitations in visual working memory and their neural bases. Our work underlines the importance of integrating cognitive modeling with univariate and multivariate methods in fMRI research, thus improving our knowledge of brain-behavior relationships. Copyright © 2016 the authors 0270-6474/16/365623-13$15.00/0.
Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing
2017-08-05
Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares can extract latent variables from both the independent variables and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as the noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. Firstly, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted by the maximum error of the recovery rate in the EWPLS method and by the maximum variance of the recovery rate in the VWPLS method. Three organic compounds with severely overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the iteration number and the sample amount in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets.
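A bare-bones sketch of the underlying PLS workflow (not the EWPLS/VWPLS weighting schemes themselves, which the abstract only outlines): leave-one-out residuals flag calibration outliers, a crude set of column weights is applied to X, and a PLS model is refit. The simulated spectra and the weighting rule are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_channels = 40, 120
conc = rng.uniform(0.1, 1.0, (n_samples, 3))             # 3 analyte concentrations
# Overlapping Gaussian "spectra" for the 3 components (simulated).
wl = np.linspace(0, 1, n_channels)
profiles = np.exp(-((wl[None, :] - np.array([[0.4], [0.5], [0.6]])) / 0.12) ** 2)
X = conc @ profiles + rng.normal(0, 0.02, (n_samples, n_channels))

pls = PLSRegression(n_components=5)

# 1. Leave-one-out predictions to screen calibration outliers.
pred = cross_val_predict(pls, X, conc, cv=LeaveOneOut())
err = np.linalg.norm(pred - conc, axis=1)
keep = err < err.mean() + 2 * err.std()                   # simple outlier rule

# 2. Per-channel weights: crude stand-in emphasizing high-variance channels;
#    EWPLS/VWPLS instead derive weights from recovery-rate error or variance.
w = X[keep].std(axis=0)
Xw = X[keep] * w

# 3. Refit PLS on the cleaned, weighted calibration set.
pls.fit(Xw, conc[keep])
print("calibration R^2:", round(pls.score(Xw, conc[keep]), 3))
```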
District nursing workforce planning: a review of the methods.
Reid, Bernie; Kane, Kay; Curran, Carol
2008-11-01
District nursing services in Northern Ireland face increasing demands and challenges which may be responded to by effective and efficient workforce planning and development. The aim of this paper is to critically analyse district nursing workforce planning and development methods, in an attempt to find a suitable method for Northern Ireland. A systematic analysis of the literature reveals four methods: professional judgement; population-based health needs; caseload analysis and dependency-acuity. Each method has strengths and weaknesses. Professional judgement offers a 'belt and braces' approach but lacks sensitivity to fluctuating patient numbers. Population-based health needs methods develop staffing algorithms that reflect deprivation and geographical spread, but are poorly understood by district nurses. Caseload analysis promotes equitable workloads but poorly performing district nursing localities may continue if benchmarking processes only consider local data. Dependency-acuity methods provide a means of equalizing and prioritizing workload but are prone to district nurses overstating factors in patient dependency or understating carers' capability. In summary a mixed method approach is advocated to evaluate and adjust the size and mix of district nursing teams using empirically determined patient dependency and activity-based variables based on the population's health needs.
Pöysä, Hannu; Rintala, Jukka; Johnson, Douglas H.; Kauppinen, Jukka; Lammi, Esa; Nudds, Thomas D.; Väänänen, Veli-Matti
2016-01-01
Density dependence, population regulation, and variability in population size are fundamental population processes, the manifestation and interrelationships of which are affected by environmental variability. However, there are surprisingly few empirical studies that distinguish the effect of environmental variability from the effects of population processes. We took advantage of a unique system, in which populations of the same duck species or close ecological counterparts live in highly variable (north American prairies) and in stable (north European lakes) environments, to distinguish the relative contributions of environmental variability (measured as between-year fluctuations in wetland numbers) and intraspecific interactions (density dependence) in driving population dynamics. We tested whether populations living in stable environments (in northern Europe) were more strongly governed by density dependence than populations living in variable environments (in North America). We also addressed whether relative population dynamical responses to environmental variability versus density corresponded to differences in life history strategies between dabbling (relatively “fast species” and governed by environmental variability) and diving (relatively “slow species” and governed by density) ducks. As expected, the variance component of population fluctuations caused by changes in breeding environments was greater in North America than in Europe. Contrary to expectations, however, populations in more stable environments were not less variable nor clearly more strongly density dependent than populations in highly variable environments. Also, contrary to expectations, populations of diving ducks were neither more stable nor stronger density dependent than populations of dabbling ducks, and the effect of environmental variability on population dynamics was greater in diving than in dabbling ducks. In general, irrespective of continent and species life history, environmental variability contributed more to variation in species abundances than did density. Our findings underscore the need for more studies on populations of the same species in different environments to verify the generality of current explanations about population dynamics and its association with species life history.
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
ERIC Educational Resources Information Center
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
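For reference, a minimal statement of the univariate effect size discussed, together with one common multivariate generalization based on Wilks' Lambda (given here as a standard textbook form, not necessarily the exact estimator the article proposes):

```latex
\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}, \qquad
\eta^2_{\text{mult}} = 1 - \Lambda^{1/s}, \quad s = \min(p,\, k-1)
```

where Lambda is Wilks' Lambda from the MANOVA, p is the number of dependent variables and k is the number of groups.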
Examining Student-Adult Relationships during K-12 School Age Years
ERIC Educational Resources Information Center
Lappi, Shelly J.
2012-01-01
This study examined the relationship between dependent and independent variables and the effects relationships have on K-12 students as they struggle through life stressors. Thus, the research study was based upon this over arching question: How does having positive student-adult relationships impact a student's ability to cope with life…
Blame Attribution as a Moderator of Perceptions of Sexual Orientation-Based Hate Crimes
ERIC Educational Resources Information Center
Cramer, Robert J.; Chandler, Joseph F.; Wakeman, Emily E.
2010-01-01
Blame attribution is a valuable mechanism explaining decision making. However, present literature mainly employs blame attribution as a dependent variable. The shortcoming of this fact is that blame attribution offers a potentially valuable explanatory mechanism for decision making. The authors designed two studies to investigate blame attribution…
Determination of the High School Students' Attitudes towards Their Teachers
ERIC Educational Resources Information Center
Gelisli, Yücel; Baidrahmanov, Dossym Kh.; Beisenbaeva, Lyazzat; Sultanbek, Malik
2017-01-01
In the current study, the aim is to determine the high school students' attitudes towards their teachers depending on some variables and the relationship between their attitudes and achievements. Thus, the study was designed according to relational survey model. The population of the study, which was specified based on the purposive sampling…
ERIC Educational Resources Information Center
Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.
2011-01-01
Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…
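As one concrete example of how the formula depends on the specified model (stated here for the standard one-way random-effects case, not necessarily one of the equations in this truncated abstract): with k ratings per target, the ANOVA mean squares give

```latex
\mathrm{ICC}(1) = \frac{MS_B - MS_W}{MS_B + (k-1)\,MS_W}
```

where MS_B and MS_W are the between-target and within-target mean squares; two-way and average-measure designs lead to different expressions.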
The Experiences of Behavior Interventionists Who Work with Children with Autism in Families' Homes
ERIC Educational Resources Information Center
Elfert, Miriam; Mirenda, Pat
2006-01-01
This study examined the experiences of 65 behavior interventionists (BIs) who provide 1:1 home-based instruction to children with autism in two Canadian provinces. Dependent variables included occupational stress; the relationships among stress, strain, and coping; the relationship between stress and the characteristics of both challenging…
Geometric Implications of Maxwell's Equations
NASA Astrophysics Data System (ADS)
Smith, Felix T.
2015-03-01
Maxwell's synthesis of the varied results of the accumulated knowledge of electricity and magnetism, based largely on the searching insights of Faraday, still provides new issues to explore. A case in point is a well-recognized anomaly in the Maxwell equations: The laws of electricity and magnetism require two 3-vector and two scalar equations, but only six dependent variables are available to be their solutions, the 3-vectors E and B. This leaves an apparent redundancy of two degrees of freedom (J. Rosen, AJP 48, 1071 (1980); Jiang, Wu, Povinelli, J. Comp. Phys. 125, 104 (1996)). The observed self-consistency of the eight equations suggests that they contain additional information. This can be sought as a previously unnoticed constraint connecting the space and time variables, r and t. This constraint can be identified. It distorts the otherwise Euclidean 3-space of r with the extremely slight, time-dependent curvature k(t) = R_curv^-2(t) of the 3-space of a hypersphere whose radius has the time dependence dR_curv/dt = ±c nonrelativistically, or dR_curv,Lor/dt = ±ic relativistically. The time dependence is exactly that of the Hubble expansion. Implications of this identification will be explored.
Density dependence in demography and dispersal generates fluctuating invasion speeds
Li, Bingtuan; Miller, Tom E. X.
2017-01-01
Density dependence plays an important role in population regulation and is known to generate temporal fluctuations in population density. However, the ways in which density dependence affects spatial population processes, such as species invasions, are less understood. Although classical ecological theory suggests that invasions should advance at a constant speed, empirical work is illuminating the highly variable nature of biological invasions, which often exhibit nonconstant spreading speeds, even in simple, controlled settings. Here, we explore endogenous density dependence as a mechanism for inducing variability in biological invasions with a set of population models that incorporate density dependence in demographic and dispersal parameters. We show that density dependence in demography at low population densities—i.e., an Allee effect—combined with spatiotemporal variability in population density behind the invasion front can produce fluctuations in spreading speed. The density fluctuations behind the front can arise from either overcompensatory population growth or density-dependent dispersal, both of which are common in nature. Our results show that simple rules can generate complex spread dynamics and highlight a source of variability in biological invasions that may aid in ecological forecasting. PMID:28442569
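A toy sketch of the mechanism described: a one-dimensional integrodifference model whose growth combines overcompensation with a weak Allee effect, so the spread rate measured front-step by front-step need not settle on a constant. The growth function, kernel width and front threshold are illustrative assumptions, not the paper's models.

```python
import numpy as np

# 1-D habitat, discrete generations: local growth followed by Gaussian dispersal.
L, dx = 400, 0.1
x = np.arange(0, L, dx)
n = np.zeros_like(x)
n[x < 5] = 1.0                                   # initial founder population

kernel = np.exp(-0.5 * (np.arange(-30, 30, dx) / 1.0) ** 2)
kernel /= kernel.sum()                           # dispersal kernel, sigma = 1

def grow(n, r=2.8, allee=0.15):
    # Ricker overcompensation times a weak Allee term (illustrative choice).
    return n * np.exp(r * (1.0 - n)) * n / (n + allee)

front, speeds = [], []
for gen in range(60):
    n = grow(n)
    n = np.convolve(n, kernel, mode="same")      # redistribute offspring
    occupied = np.where(n > 0.05)[0]             # front = rightmost "occupied" cell
    front.append(x[occupied[-1]] if occupied.size else 0.0)
    if gen > 0:
        speeds.append(front[-1] - front[-2])

print("per-generation spread distances:", np.round(speeds[-15:], 2))
print("mean speed:", round(float(np.mean(speeds[10:])), 2),
      "  sd:", round(float(np.std(speeds[10:])), 2))
```

With the Allee term set to zero and a mild growth rate, the same code settles toward a near-constant speed, which is the classical constant-spread prediction the abstract contrasts against.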
NASA Astrophysics Data System (ADS)
Denli, H. H.; Koc, Z.
2015-12-01
Valuation of real properties based on fixed standards is difficult to apply consistently across time and location. Regression analysis constructs mathematical models which describe or explain relationships that may exist between variables. The problem of identifying price differences between properties in order to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. When regression analysis is applied to real estate valuation, using properties offered on the current market together with their characteristics and quantifiers, the method helps to identify the factors or variables that are effective in the formation of value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to derive a price index, based on information obtained from a real estate web page. The variables used for the analysis are age, size in m2, number of floors of the building, the floor on which the unit is located, and number of rooms. The price of the estate is the dependent variable, whereas the rest are independent variables. Prices from 60 real estates have been used for the analysis. Locations with equal price levels were identified and plotted on the map, and equivalence curves were drawn to delineate zones of equal value.
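A small sketch of the kind of hedonic multiple regression described, with made-up listings standing in for the Zeytinburnu data; the column names, sample values and coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 60                                            # 60 listings, as in the study
df = pd.DataFrame({
    "age":      rng.integers(0, 40, n),
    "size_m2":  rng.integers(45, 180, n),
    "floors":   rng.integers(3, 12, n),           # floors in the building
    "floor_no": rng.integers(0, 10, n),           # floor of the unit
    "rooms":    rng.integers(1, 5, n),
})
# Synthetic price (thousand TRY): size and rooms raise it, age lowers it.
df["price"] = (50 + 4.5 * df["size_m2"] + 30 * df["rooms"] - 2.0 * df["age"]
               + 5 * df["floor_no"] + rng.normal(0, 40, n))

model = smf.ols("price ~ age + size_m2 + floors + floor_no + rooms", data=df).fit()
print(model.params.round(2))                      # estimated hedonic coefficients
print("adj. R^2:", round(model.rsquared_adj, 3))
```

Predicted prices from such a fit can then be mapped by location to trace the equal-value zones the abstract mentions.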
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models make their own assumptions and pre-define the underlying relationship between the dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between the target (dependent) variable and the predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants of freeway accident frequency. By comparing the prediction performance of the CART and negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
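A compact sketch of the comparison described, on simulated count data rather than the Taiwanese freeway records: a regression tree and a negative binomial GLM are fit to the same predictors and compared by held-out mean absolute error. The variable names, coefficients and dispersion value are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(8)
n = 2000
adt = rng.uniform(10, 120, n)            # average daily traffic (thousands)
rain = rng.uniform(0, 300, n)            # precipitation (mm)
curve = rng.uniform(0, 1, n)             # curvature index
mu = np.exp(-1.0 + 0.02 * adt + 0.004 * rain + 0.5 * curve)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))    # overdispersed counts

X = np.column_stack([adt, rain, curve])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# CART: no functional form assumed.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50).fit(Xtr, ytr)

# Negative binomial GLM: log link, assumed dispersion alpha = 0.5.
nb = sm.GLM(ytr, sm.add_constant(Xtr),
            family=sm.families.NegativeBinomial(alpha=0.5)).fit()

print("CART MAE:", round(mean_absolute_error(yte, tree.predict(Xte)), 3))
print("NB   MAE:", round(mean_absolute_error(yte, nb.predict(sm.add_constant(Xte))), 3))
```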
A Computer Program for Preliminary Data Analysis
Dennis L. Schweitzer
1967-01-01
ABSTRACT. -- A computer program written in FORTRAN has been designed to summarize data. Class frequencies, means, and standard deviations are printed for as many as 100 independent variables. Cross-classifications of an observed dependent variable and of a dependent variable predicted by a multiple regression equation can also be generated.
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV >= 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually.
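A minimal sketch of the real-time case under the stated random-lot assumptions: each lot gets its own random intercept (value at time zero) and random slope (degradation rate), simulated here with an assumed 10% CV on the rate, and recovered with a linear mixed model. This is an illustration, not the paper's exact likelihood formulation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
lots = 8
times = np.array([0, 3, 6, 9, 12, 18, 24])                # months
true_rate = -0.5                                          # % potency per month

rows = []
for lot in range(lots):
    intercept = 100 + rng.normal(0, 0.5)                  # lot-specific value at t=0
    rate = true_rate * (1 + rng.normal(0, 0.10))          # ~10% CV on the rate
    for t in times:
        rows.append({"lot": lot, "time": t,
                     "potency": intercept + rate * t + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Linear mixed model: random intercept and random slope per lot.
model = smf.mixedlm("potency ~ time", df, groups=df["lot"], re_formula="~time")
fit = model.fit()
print("estimated mean degradation rate (%/month):", round(float(fit.fe_params["time"]), 3))
print(fit.cov_re)      # estimated lot-to-lot variances (intercept, slope)
```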
Shevlin, Mark; Houston, James E; Dorahy, Martin J; Adamson, Gary
2008-01-01
Previous research has shown that traumatic life events are associated with a diagnosis of psychosis. Rather than focus on particular events, this study aimed to estimate the effect of cumulative traumatic experiences on psychosis. The study was based on 2 large community samples (The National Comorbidity Survey [NCS], The British Psychiatric Morbidity Survey [BPMS]). All analyses were conducted using hierarchical binary logistic regression, with psychosis diagnosis as the dependent variable. Background demographic variables were included in the first block, in addition to alcohol/drug dependence and depression. A variable indicating the number of traumas experienced was entered in the second block. Experiencing 2 or more trauma types significantly predicted psychosis, and there appeared to be a dose-response type relationship. Particular traumatic experiences have been implicated in the etiology of psychosis. Consistent with previous research, molestation and physical abuse were significant predictors of psychosis using the NCS, whereas for the BPMS, serious injury or assault and violence in the home were statistically significant. This study indicated the added risk of multiple traumatic experiences.
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction which has been successfully applied on many applications such as bioinformatics, information retrieval, etc. The link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on Distance Dependent Chinese Restaurant Process (DDCRP) model which enables us to utilize the information of the topological structure of the network such as shortest path and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for link prediction problem.
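A brief sketch of the distance dependent CRP prior itself (just the generative customer-link step and the induced clustering, not the full link-prediction model or the authors' Gibbs sampler); the decay function, concentration value and Euclidean stand-in distances are assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(4)
n, alpha = 30, 1.0                       # nodes and self-link concentration
coords = rng.uniform(0, 10, (n, 2))      # stand-in for network distances (e.g. shortest paths)
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def decay(d):
    return np.exp(-d / 1.5)              # decay (window) function f(d), assumed

# ddCRP draw: customer i links to some j with prob ~ f(d_ij), or to itself ~ alpha.
links = np.empty(n, dtype=int)
for i in range(n):
    w = decay(D[i])
    w[i] = alpha
    links[i] = rng.choice(n, p=w / w.sum())

# Tables (clusters) are the connected components of the customer-link graph.
adj = csr_matrix((np.ones(n), (np.arange(n), links)), shape=(n, n))
n_clusters, labels = connected_components(adj, directed=False)
print("number of clusters:", n_clusters)
print("cluster labels:", labels)
```

In the link-prediction setting, these cluster assignments become latent variables whose posterior is sampled given the observed edges, and new edges are scored by cluster co-membership.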
NASA Astrophysics Data System (ADS)
Schlawin, E.; Burgasser, Adam J.; Karalidi, T.; Gizis, J. E.; Teske, J.
2017-11-01
L dwarfs exhibit low-level, rotationally modulated photometric variability generally associated with heterogeneous, cloud-covered atmospheres. The spectral character of these variations yields insight into the particle sizes and vertical structure of the clouds. Here, we present the results of a high-precision, ground-based, near-infrared, spectral monitoring study of two mid-type L dwarfs that have variability reported in the literature, 2MASS J08354256-0819237 and 2MASS J18212815+1414010, using the SpeX instrument on the Infrared Telescope Facility. By simultaneously observing a nearby reference star, we achieve < 0.15 % per-band sensitivity in relative brightness changes across the 0.9-2.4 μm bandwidth. We find that 2MASS J0835-0819 exhibits marginal (≲0.5% per band) variability with no clear spectral dependence, while 2MASS J1821+1414 varies by up to ±1.5% at 0.9 μm, with the variability amplitude declining toward longer wavelengths. The latter result extends the variability trend observed in prior HST/WFC3 spectral monitoring of 2MASS J1821+1414, and we show that the full 0.9-2.4 μm variability amplitude spectrum can be reproduced by Mie extinction from dust particles with a log-normal particle size distribution with a median radius of 0.24 μm. We do not detect statistically significant phase variations with wavelength. The different variability behavior of 2MASS J0835-0819 and 2MASS J1821+1414 suggests dependencies on viewing angle and/or overall cloud content, underlying factors that can be examined through a broader survey.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data on eighteen residential buildings. The resulting statistical model relates the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R2 value of 0.694, which means that the model explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable.
Age-Based Methods to Explore Time-Related Variables in Occupational Epidemiology Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janice P. Watkins, Edward L. Frome, Donna L. Cragle
2005-08-31
Although age is recognized as the strongest predictor of mortality in chronic disease epidemiology, a calendar-based approach is often employed when evaluating time-related variables. An age-based analysis file, created by determining the value of each time-dependent variable for each age that a cohort member is followed, provides a clear definition of age at exposure and allows development of diverse analytic models. To demonstrate methods, the relationship between cancer mortality and external radiation was analyzed with Poisson regression for 14,095 Oak Ridge National Laboratory workers. Based on previous analysis of this cohort, a model with ten-year lagged cumulative radiation doses partitioned by receipt before (dose-young) or after (dose-old) age 45 was examined. Dose-response estimates were similar to calendar-year-based results with elevated risk for dose-old, but not when film badge readings were weekly before 1957. Complementary results showed increasing risk with older hire ages and earlier birth cohorts, since workers hired after age 45 were born before 1915, and dose-young and dose-old were distributed differently by birth cohorts. Risks were generally higher for smoking-related than non-smoking-related cancers. It was difficult to single out specific variables associated with elevated cancer mortality because of: (1) birth cohort differences in hire age and mortality experience completeness, and (2) time-period differences in working conditions, dose potential, and exposure assessment. This research demonstrated the utility and versatility of the age-based approach.
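A schematic sketch of building such an age-based analysis file: each worker record is expanded into one row per attained age, carrying a 10-year-lagged cumulative dose split into dose received before versus after age 45. The worker records, dose histories and lag handling are simplified assumptions, not the ORNL cohort's actual construction.

```python
import pandas as pd

# Toy worker records: hire age, age at end of follow-up, constant annual dose (mSv).
workers = pd.DataFrame({
    "id":          [1, 2],
    "hire_age":    [25, 47],
    "exit_age":    [60, 70],
    "annual_dose": [2.0, 1.5],
})

LAG = 10
rows = []
for w in workers.itertuples():
    for age in range(w.hire_age, w.exit_age + 1):
        # Cumulative dose received up to (age - LAG), split at age 45.
        exposed_years = range(w.hire_age, max(w.hire_age, age - LAG + 1))
        dose_young = sum(w.annual_dose for a in exposed_years if a < 45)
        dose_old = sum(w.annual_dose for a in exposed_years if a >= 45)
        rows.append({"id": w.id, "age": age, "person_years": 1.0,
                     "dose_young_lag10": dose_young, "dose_old_lag10": dose_old})

age_file = pd.DataFrame(rows)
print(age_file.head(12))
# This person-year table can then be aggregated by age group and dose category
# and analyzed with Poisson regression, analogous to the calendar-based approach.
```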
Mediating the distal crime-drug relationship with proximal reactive criminal thinking.
Walters, Glenn D
2016-02-01
This article describes the results of a study designed to test whether reactive criminal thinking (RCT) does a better job of mediating the crime → drug relationship than it does mediating the drug → crime relationship after the direct effects of crime on drug use/dependency and of drug use/dependency on crime have been rendered nonsignificant by control variables. All 1,170 male members of the Pathways to Desistance study (Mulvey, 2012) served as participants in the current investigation. As predicted, the total (unmediated) effects of crime on substance use/dependence and of substance use/dependence on crime were nonsignificant when key demographic and third variables were controlled, although the indirect (RCT-mediated) effect of crime on drug use was significant. Proactive criminal thinking (PCT), by comparison, failed to mediate either relationship. The RCT continued to mediate the crime → drug relationship and the PCT continued to not mediate either relationship when more specific forms of offending (aggressive, income) and substance use/dependence (drug use, substance-use dependency symptoms) were analyzed. This offers preliminary support for the notion that even when the total crime-drug effect is nonsignificant the indirect path from crime to reactive criminal thinking to drugs can still be significant. Based on these results, it is concluded that mediation by proximal reactive criminal thinking is a mechanism by which distal measures of crime and drug use/dependence are connected. (c) 2016 APA, all rights reserved).
Khan, Azizuddin; Sharma, Narendra K; Dixit, Shikha
2008-09-01
Prospective memory is memory for the realization of delayed intention. Researchers distinguish 2 kinds of prospective memory: event- and time-based (G. O. Einstein & M. A. McDaniel, 1990). Taking that distinction into account, the present authors explored participants' comparative performance under event- and time-based tasks. In an experimental study of 80 participants, the authors investigated the roles of cognitive load and task condition in prospective memory. Cognitive load (low vs. high) and task condition (event- vs. time-based task) were the independent variables. Accuracy in prospective memory was the dependent variable. Results showed significant differential effects under event- and time-based tasks. However, the effect of cognitive load was more detrimental in time-based prospective memory. Results also revealed that time monitoring is critical in successful performance of time estimation and so in time-based prospective memory. Similarly, participants' better performance on the event-based prospective memory task showed that they acted on the basis of environment cues. Event-based prospective memory was environmentally cued; time-based prospective memory required self-initiation.
Correlative and multivariate analysis of increased radon concentration in underground laboratory.
Maletić, Dimitrije M; Udovičić, Vladimir I; Banjanac, Radomir M; Joković, Dejan R; Dragić, Aleksandar L; Veselinović, Nikola B; Filipović, Jelena
2014-11-01
We present the results of an analysis, using correlative and multivariate methods developed for data analysis in high-energy physics and implemented in the Toolkit for Multivariate Analysis (TMVA) software package, of the relations between the variation of increased radon concentration and climate variables in a shallow underground laboratory. Multivariate regression analysis identified a number of multivariate methods which can give a good evaluation of increased radon concentrations based on climate variables. The use of the multivariate regression methods will enable investigation of the relation of specific climate variables to increased radon concentrations, since the regression methods result in a 'mapped' underlying functional behaviour of radon concentration depending on a wide spectrum of climate variables.
Effect of land use on the spatial variability of organic matter and nutrient status in an Oxisol
NASA Astrophysics Data System (ADS)
Paz-Ferreiro, Jorge; Alves, Marlene Cristina; Vidal Vázquez, Eva
2013-04-01
Heterogeneity is now considered an inherent soil property. Spatial variability of soil attributes in natural landscapes results mainly from soil formation factors. In cultivated soils much heterogeneity can additionally occur as a result of land use, agricultural systems and management practices. Organic matter content (OMC) and the nutrients associated with the soil exchange complex are key attributes in the maintenance of a high-quality soil. Neglecting spatial heterogeneity in soil OMC and nutrient status at the field scale might result in reduced yield and in environmental damage. We analyzed the impact of land use on the pattern of spatial variability of OMC and soil macronutrients at the stand scale. The study was conducted in São Paulo state, Brazil. Land uses were pasture, mango orchard and corn field. Soil samples were taken at 0-10 cm and 10-20 cm depth at 84 points within 100 m x 100 m plots. Texture, pH, OMC, cation exchange capacity (CEC), exchangeable cations (Ca, Mg, K, H, Al) and resin-extractable phosphorus were analyzed. Statistical variability was found to be higher in parameters defining the soil nutrient status (resin-extractable P, K, Ca and Mg) than in general soil properties (OMC, CEC, base saturation and pH). Geostatistical analysis showed contrasting patterns of spatial dependence for the different soil uses, sampling depths and studied properties. Most of the studied data sets collected at the two depths exhibited spatial dependence at the sampled scale, and their semivariograms were modeled by a nugget effect plus a structure. The pattern of soil spatial variability was found to differ among the three studied soil uses and between the two sampling depths, as far as model type, nugget effect or range of spatial dependence were concerned. Both statistical and geostatistical results pointed out the importance of OMC as a driver responsible for the spatial variability of soil nutrient status.
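A small sketch of the core geostatistical step mentioned (an empirical semivariogram, to which a nugget-plus-structure model such as a spherical model would then be fitted); the simulated 84-point layout and field values are assumptions standing in for the measured soil attributes.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(9)
# 84 sample points inside a 100 m x 100 m plot, with a spatially smooth signal.
xy = rng.uniform(0, 100, (84, 2))
trend = np.sin(xy[:, 0] / 30.0) + np.cos(xy[:, 1] / 40.0)      # structured part
z = 2.0 + trend + rng.normal(0, 0.2, 84)                       # e.g. OMC (%)

h = pdist(xy)                                                  # pairwise distances
g = 0.5 * pdist(z[:, None], metric="sqeuclidean")              # 0.5 * (z_i - z_j)^2

bins = np.arange(0, 80, 10)                                    # 10 m lag classes
idx = np.digitize(h, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.any():
        print(f"lag {bins[k-1]:>2.0f}-{bins[k]:<3.0f} m:  "
              f"gamma = {g[sel].mean():.3f}   (n pairs = {sel.sum()})")
```

The nugget is read off as the semivariance extrapolated to zero lag, and the range as the lag beyond which gamma levels off.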
NASA Astrophysics Data System (ADS)
Chai, Jun; Tian, Bo; Xie, Xi-Yang; Chai, Han-Peng
2016-12-01
Investigation is given to a forced generalized variable-coefficient Korteweg-de Vries equation for the atmospheric blocking phenomenon. Applying the double-logarithmic and rational transformations, respectively, under certain variable-coefficient constraints, we get two different types of bilinear forms: (a) Based on the first type, the bilinear Bäcklund transformation (BT) is derived, the N-soliton solutions in the Wronskian form are constructed, and the (N - 1)- and N-soliton solutions are proved to satisfy the bilinear BT; (b) Based on the second type, via the Hirota method, the one- and two-soliton solutions are obtained. Those two types of solutions are different. Graphic analysis on the two types shows that the soliton velocity depends on d(t), h(t), f(t) and R(t), the soliton amplitude is merely related to f(t), and the background depends on R(t) and f(t), where d(t), h(t), q(t) and f(t) are the dissipative, dispersive, nonuniform and line-damping coefficients, respectively, and R(t) is the external-force term. We present some types of interactions between the two solitons, including the head-on and overtaking interactions, interactions between the velocity- and amplitude-unvarying two solitons, between the velocity-varying while amplitude-unvarying two solitons and between the velocity- and amplitude-varying two solitons, as well as the interactions occurring on the constant and varying backgrounds.
NASA Astrophysics Data System (ADS)
Chai, Jun; Tian, Bo; Qu, Qi-Xing; Zhen, Hui-Ling; Chai, Han-Peng
2018-07-01
In this paper, investigation is given to a forced generalized variable-coefficient Korteweg-de Vries equation for the atmospheric blocking phenomenon. Based on the Lax pair, under certain variable-coefficient-dependent constraints, we present an infinite sequence of the conservation laws. Through the Riccati equations obtained from the Lax pair, a Wahlquist-Estabrook-type Bäcklund transformation (BT) is derived, based on which the nonlinear superposition formula as well as one- and two-soliton-like solutions are obtained. Via the truncated Painlevé expansion, we give a Painlevé BT, along with the one-soliton-like solutions. With the Painlevé BT, bilinear forms are constructed, and we get a bilinear BT as well as the corresponding one-soliton-like solutions. Bell-type bright and dark soliton-like waves and kink-type soliton-like waves are observed, respectively. Graphic analysis shows that (1) the velocities of the soliton-like waves are related to h(t), d(t), f(t) and R(t), while the soliton-like wave amplitudes just depend on f(t), and (2) with the nonzero f(t) and R(t), soliton-like waves propagate on the varying backgrounds, where h(t), d(t) and f(t) are the dispersive, dissipative and line-damping coefficients, respectively, R(t) is the external-force term, and t is the scaled time coordinate.
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation, with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations in which more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It examines whether a difference persists after "controlling" for the effect of a covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied in psychometrics, the social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components that are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
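As a brief, hedged illustration of the two families described above, the sketch below applies one dependence technique (multiple linear regression) and one interdependence technique (principal component analysis) to synthetic data; it is not tied to any particular study.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                          # three numerical predictors
y = 2.0 + X @ np.array([1.5, -0.7, 0.0]) + rng.normal(scale=1.0, size=200)

# dependence technique: predict one numerical dependent variable from several predictors
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.params)                                      # intercept and three slopes

# interdependence technique: summarize the predictors themselves as fewer components
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)
```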
Xu, Haigen; Cao, Yun; Cao, Mingchang; Wu, Jun; Wu, Yi; Le, Zhifang; Cui, Peng; Li, Jiaqi; Ma, Fangzhou; Liu, Li; Hu, Feilong; Chen, Mengmeng; Tong, Wenjun
2017-11-01
Proxies are adopted to represent biodiversity patterns due to inadequate information for all taxa. Despite the wide use of proxies, their efficacy remains unclear. Previous analyses focused on overall species richness for fewer groups, affecting the generality and depth of inference. Biological taxa often exhibit very different habitat preferences. Habitat groupings may be an appropriate approach to advancing the study of richness patterns. Diverse geographical patterns of species richness and their potential mechanisms were then examined for habitat groups. We used a database of the spatial distribution of 32,824 species of mammals, birds, reptiles, amphibians and plants from 2,376 counties across China, divided the five taxa into 30 habitat groups, calculated Spearman correlations of species richness among taxa and habitat groups, and tested five hypotheses about richness patterns using multivariate models. We identified one major group [i.e., forest- and shrub-dependent (FS) groups], and some minor groups such as grassland-dependent vertebrates and desert-dependent vertebrates. There were mostly high or moderate correlations among FS groups, but mostly low or moderate correlations among other habitat groups. The prominent variables differed among habitat groups of the same taxon, such as birds and reptiles. The sets of predictors were also different within the same habitat, such as forests, grasslands, and deserts. Average correlations among the same habitat groups of vertebrates and among habitat groups of a single taxon were low or moderate, except correlations among FS groups. The sets of prominent variables of species richness differed strongly among habitat groups, although elevation range was the most important variable for most FS groups. The ecological and evolutionary processes that underpin richness patterns might be disparate among different habitat groups. Appropriate groupings based on habitats could reveal important patterns of richness gradients and valuable biodiversity components.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, W; Zaghian, M; Lim, G
2015-06-15
Purpose: The current practice for considering the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is in fact a variable depending on the dose per fraction, the linear energy transfer, tissue parameters, etc. In this study, we investigate the impact of using variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. In order to calculate variable RBE, tissue-specific parameters were obtained from the literature and dose-averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and were utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared using dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets, compared with the fixed RBE weighted dose distributions, for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plans were optimized by fRBE-OPT. For organ-at-risk sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization rather than conventional physical dose based optimization in IMPT planning may bring benefit in improved tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients. The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1991-01-01
An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.
A Primer on Logistic Regression.
ERIC Educational Resources Information Center
Woldbeck, Tanya
This paper introduces logistic regression as a viable alternative when the researcher is faced with variables that are not continuous. If one is to use simple regression, the dependent variable must be measured on a continuous scale. In the behavioral sciences, it may not always be appropriate or possible to have a measured dependent variable on a…
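To make the contrast concrete, a minimal sketch of a logistic regression fit on a dichotomous outcome is given below; the variable names (study_hours, passed) are hypothetical and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
study_hours = rng.normal(10.0, 3.0, 300)                    # hypothetical predictor
p_pass = 1.0 / (1.0 + np.exp(-(-4.0 + 0.4 * study_hours)))  # true model on the logit scale
passed = rng.binomial(1, p_pass)                            # dichotomous dependent variable

fit = sm.Logit(passed, sm.add_constant(study_hours)).fit(disp=0)
print(fit.params)                 # intercept and slope on the log-odds scale
print(np.exp(fit.params[1]))      # odds ratio per additional hour
```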
A global perspective on Glacial- to Interglacial variability change
NASA Astrophysics Data System (ADS)
Rehfeld, Kira; Münch, Thomas; Ho, Sze Ling; Laepple, Thomas
2017-04-01
Changes in climate variability are more important for society than changes in the mean state alone. While we will be facing a large-scale shift of the mean climate in the future, its implications for climate variability are not well constrained. Here we quantify changes in temperature variability as climate shifted from the Last Glacial cold period to the Holocene warm period. Greenland ice core oxygen isotope records provide evidence of this climatic shift and are used as reference datasets in many palaeoclimate studies worldwide. A striking feature in these records is pronounced millennial variability in the Glacial and a distinct reduction in variance in the Holocene. We present quantitative estimates of the change in variability on 500- to 1500-year timescales based on a global compilation of high-resolution proxy records for temperature which span both the Glacial and the Holocene. The estimates are derived from power spectral analysis and corrected using estimates of the proxy signal-to-noise ratios. We show that, on a global scale, variability at the Glacial maximum is five times higher than during the Holocene, with a possible range of 3-10 times. The spatial pattern of the variability change is latitude-dependent: while the tropics show no changes in variability, the changes at mid-latitudes are larger. A slight overall reduction in variability in the centennial to millennial range is found in Antarctica. The variability decrease in the Greenland ice core oxygen isotope records is larger than in any other proxy dataset. These results therefore contradict the view of a globally quiescent Holocene following the unstable Glacial, and imply that, in terms of centennial to millennial temperature variability, the two states may be more similar than previously thought.
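A schematic of the kind of band-limited variance comparison described above is sketched below; it uses synthetic series and a simple Welch periodogram, without the proxy signal-to-noise correction applied in the study.

```python
import numpy as np
from scipy.signal import welch

dt = 50.0                                     # hypothetical sampling step, years
rng = np.random.default_rng(3)
glacial = np.cumsum(rng.normal(size=800)) * 0.05    # synthetic high-variance segment
holocene = np.cumsum(rng.normal(size=800)) * 0.02   # synthetic low-variance segment

def band_variance(x, dt, t_lo=500.0, t_hi=1500.0):
    # integrate the Welch PSD over the 500- to 1500-year timescale band
    f, pxx = welch(x, fs=1.0 / dt, nperseg=256)
    band = (f >= 1.0 / t_hi) & (f <= 1.0 / t_lo)
    return pxx[band].sum() * (f[1] - f[0])

print(band_variance(glacial, dt) / band_variance(holocene, dt))   # Glacial/Holocene variance ratio
```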
Joas, Jacques; Vulcain, Emmanuelle; Desvignes, Claire; Morales, Emeline; Léchaudel, Mathieu
2012-04-01
Climacteric fruits are harvested at the green-mature stage and ripen during their marketing cycle. However, growing conditions introduce variability into the maturity stage of mangoes at harvest, with an impact on their final quality. Assuming that physiological age can be correctly evaluated by a criterion based on the variable chlorophyll fluorescence of the skin (F(v)), and that differences in physiological age depend on growing conditions, controlled stress experiments were carried out on mango fruit by manipulating either the leaf/fruit ratio or the light environment. Depending on stress level and harvest stage, delays of 9 to 30 days were observed before fruit reached the same F(v) value. For moderate stress, fruit composition after ripening was partially compensated for, with little or no difference in sugar, dry matter, carotenoid and aroma contents. For more pronounced stress, the major metabolites were not particularly affected, but the capacity for carotenoid and aroma synthesis was lower after maturity. The ripening ability of a fruit is acquired on the tree and defines its postharvest changes. Controlling the physiological age at harvest can minimise the variability observed under natural conditions and guarantee fruit batches whose postharvest changes will be relatively homogeneous. Copyright © 2011 Society of Chemical Industry.
Part mutual information for quantifying direct associations in networks.
Zhao, Juan; Zhou, Yiwei; Zhang, Xiujun; Chen, Luonan
2016-05-03
Quantitatively identifying direct dependencies between variables is an important task in data analysis, in particular for reconstructing various types of networks and causal relations in science and engineering. One of the most widely used criteria is partial correlation, but it can only measure linear direct associations and misses nonlinear ones. Based on conditional independence, conditional mutual information (CMI) is able to quantify nonlinear direct relationships among variables from the observed data and is superior to linear measures, but it suffers from a serious problem of underestimation, in particular for variables with tight associations in a network, which severely limits its applications. In this work, we propose a new concept, "partial independence," with a new measure, "part mutual information" (PMI), which not only overcomes the underestimation problem of CMI but also retains the quantification properties of both mutual information (MI) and CMI. Specifically, we first defined PMI to measure nonlinear direct dependencies between variables and then derived its relations with MI and CMI. Finally, we used a number of simulated data sets as benchmark examples to numerically demonstrate the features of PMI, and further used real gene expression data from Escherichia coli and yeast to reconstruct gene regulatory networks, all of which validated the advantages of PMI for accurately quantifying nonlinear direct associations in networks.
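For orientation, the sketch below gives a plain plug-in estimate of conditional mutual information I(X;Y|Z) on synthetic discrete data, the quantity whose underestimation PMI is designed to overcome; it is not the paper's PMI implementation.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(x, y, z):
    """Plug-in estimate of I(X;Y|Z) = sum p(x,y,z) log[p(x,y,z) p(z) / (p(x,z) p(y,z))]."""
    n = len(x)
    p_xyz = Counter(zip(x, y, z))
    p_xz, p_yz, p_z = Counter(zip(x, z)), Counter(zip(y, z)), Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in p_xyz.items():
        p = c / n
        cmi += p * np.log(p * (p_z[zi] / n) / ((p_xz[(xi, zi)] / n) * (p_yz[(yi, zi)] / n)))
    return cmi

rng = np.random.default_rng(4)
z = rng.integers(0, 2, 5000)
x = (z + (rng.random(5000) < 0.3).astype(int)) % 2      # X = Z flipped 30% of the time
flip = rng.random(5000) < 0.2
y = np.where(flip, 1 - x, x)                            # Y depends on X directly
print(conditional_mutual_information(x, y, z))          # clearly > 0: direct X-Y link given Z
```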
Wiedermann, Wolfgang; Li, Xintong
2018-04-16
In nonexperimental data, at least three possible explanations exist for the association of two variables x and y: (1) x is the cause of y, (2) y is the cause of x, or (3) an unmeasured confounder is present. Statistical tests that identify which of the three explanatory models fits best would be a useful adjunct to the use of theory alone. The present article introduces one such statistical method, direction dependence analysis (DDA), which assesses the relative plausibility of the three explanatory models on the basis of higher-moment information about the variables (i.e., skewness and kurtosis). DDA involves the evaluation of three properties of the data: (1) the observed distributions of the variables, (2) the residual distributions of the competing models, and (3) the independence properties of the predictors and residuals of the competing models. When the observed variables are nonnormally distributed, we show that DDA components can be used to uniquely identify each explanatory model. Statistical inference methods for model selection are presented, and macros to implement DDA in SPSS are provided. An empirical example is given to illustrate the approach. Conceptual and empirical considerations are discussed for best-practice applications in psychological data, and sample size recommendations based on previous simulation studies are provided.
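As an illustrative fragment of the idea (not the published SPSS macros), the sketch below shows one ingredient of direction dependence analysis on simulated data: when the true predictor is nonnormal, the residuals of the mis-specified direction tend to inherit more skewness than the residuals of the correctly specified one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.exponential(size=2000)                  # nonnormal "true" cause
y = 0.8 * x + rng.normal(scale=0.5, size=2000)  # effect with normal error

def residual_skewness(cause, effect):
    # fit effect ~ cause by ordinary least squares and return residual skewness
    slope, intercept, *_ = stats.linregress(cause, effect)
    return stats.skew(effect - (intercept + slope * cause))

print("skew of residuals, model x -> y:", residual_skewness(x, y))  # near zero
print("skew of residuals, model y -> x:", residual_skewness(y, x))  # clearly nonzero
```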
Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory
NASA Astrophysics Data System (ADS)
Heittmann, Arne; Noll, Tobias G.
2013-07-01
A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS devices based on the electrochemical metallization effect (ECM), this variability is, to some extent, caused by discrete events such as electrochemical reactions, which occur on the atomic scale and at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and of the programming-circuit parameters on the conductance variance. Using a current mirror as an exemplary programming circuit, an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity when exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations in a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
Constrained variability of modeled T:ET ratio across biomes
NASA Astrophysics Data System (ADS)
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and a higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.
Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns
Lau, Joseph C. Y.; Wong, Patrick C. M.
2016-01-01
We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns; and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when the listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. NEW & NOTEWORTHY Human auditory perception in dynamic listening environments requires fine-tuning of the sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our findings suggest that online experience-dependent plasticity is partly underpinned by interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in the listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. PMID:27832606
Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns.
Lau, Joseph C Y; Wong, Patrick C M; Chandrasekaran, Bharath
2017-02-01
We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns; and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when the listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. Human auditory perception in dynamic listening environments requires fine-tuning of the sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our findings suggest that online experience-dependent plasticity is partly underpinned by interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in the listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. Copyright © 2017 the American Physiological Society.
"Quenchbodies": quench-based antibody probes that show antigen-dependent fluorescence.
Abe, Ryoji; Ohashi, Hiroyuki; Iijima, Issei; Ihara, Masaki; Takagi, Hiroaki; Hohsaka, Takahiro; Ueda, Hiroshi
2011-11-02
Here, we describe a novel reagentless fluorescent biosensor strategy based on the antigen-dependent removal of a quenching effect on a fluorophore attached to antibody domains. Using a cell-free translation-mediated position-specific protein labeling system, we found that an antibody single chain variable region (scFv) that had been fluorolabeled at the N-terminal region showed a significant antigen-dependent fluorescence enhancement. Investigation of the enhancement mechanism by mutagenesis of the carboxytetramethylrhodamine (TAMRA)-labeled anti-osteocalcin scFv showed that antigen-dependency was dependent on semiconserved tryptophan residues near the V(H)/V(L) interface. This suggested that the binding of the antigen led to the interruption of a quenching effect caused by the proximity of tryptophan residues to the linker-tagged fluorophore. Using TAMRA-scFv, many targets including peptides, proteins, and haptens including morphine-related drugs could be quantified. Similar or higher sensitivities to those observed in competitive ELISA were obtained, even in human plasma. Because of its versatility, this "quenchbody" is expected to have a range of applications, from in vitro diagnostics, to imaging of various targets in situ.
Spectral variability of sea surface skylight reflectance and its effect on ocean color.
Cui, Ting-Wei; Song, Qing-Jun; Tang, Jun-Wu; Zhang, Jie
2013-10-21
In this study, sea surface skylight spectral reflectance ρ(λ) was retrieved by means of the non-linear spectral optimization method and a bio-optical model. The spectral variability of ρ(λ) was found to be mainly influenced by the uniformity of the incident skylight, and a model is proposed to predict the ρ(λ) spectral dependency based on skylight reflectance at 750 nm. It is demonstrated that using the spectrally variable ρ(λ), rather than a constant, yields an improved agreement between the above-water remote sensing reflectance R(rs)(λ) estimates and concurrent profiling ones. The findings of this study highlight the necessity to re-process the relevant historical above-water data and update ocean color retrieval algorithms accordingly.
A method of fitting the gravity model based on the Poisson distribution.
Flowerdew, R; Aitkin, M
1982-05-01
"In this paper, [the authors] suggest an alternative method for fitting the gravity model. In this method, the interaction variable is treated as the outcome of a discrete probability process, whose mean is a function of the size and distance variables. This treatment seems appropriate when the dependent variable represents a count of the number of items (people, vehicles, shipments) moving from one place to another. It would seem to have special advantages where there are some pairs of places between which few items move. The argument will be illustrated with reference to data on the numbers of migrants moving in 1970-1971 between pairs of the 126 labor market areas defined for Great Britain...." excerpt
The Gait Disorder in Downbeat Nystagmus Syndrome
Schniepp, Roman; Wuehr, Max; Huth, Sabrina; Pradhan, Cauchy; Schlick, Cornelia; Brandt, Thomas; Jahn, Klaus
2014-01-01
Background: Downbeat nystagmus (DBN) is a common form of acquired fixation nystagmus with key symptoms of oscillopsia and gait disturbance. Gait disturbance could be a result of impaired visual feedback due to the involuntary ocular oscillations. Alternatively, a malfunction of cerebellar locomotor control might be involved, since DBN is considered a vestibulocerebellar disorder. Methods: Investigation of walking in 50 DBN patients (age 72±11 years, 23 females) and 50 healthy controls (HS) (age 70±11 years, 23 females) using a pressure sensitive carpet (GAITRite). The patient cohort comprised subjects with only ocular motor signs (DBN) and subjects with an additional limb ataxia (DBNCA). Gait investigation comprised different walking speeds and walking with eyes closed. Results: In DBN, gait velocity was reduced (p<0.001) with a reduced stride length (p<0.001), increased base of support (p<0.050), and increased double support (p<0.001). Walking with eyes closed led to significant gait changes in both HS and DBN. These changes were more pronounced in DBN patients (p<0.001). Speed-dependency of gait variability revealed significant differences between the subgroups of DBN and DBNCA (p<0.050). Conclusions: (I) Impaired visual control caused by involuntary ocular oscillations cannot sufficiently explain the gait disorder. (II) The gait of patients with DBN is impaired in a speed dependent manner. (III) Analysis of gait variability allows distinguishing DBN from DBNCA: Patients with pure DBN show a speed dependency of gait variability similar to that of patients with afferent vestibular deficits. In DBNCA, gait variability resembles the pattern found in cerebellar ataxia. PMID:25140517
The gait disorder in downbeat nystagmus syndrome.
Schniepp, Roman; Wuehr, Max; Huth, Sabrina; Pradhan, Cauchy; Schlick, Cornelia; Brandt, Thomas; Jahn, Klaus
2014-01-01
Downbeat nystagmus (DBN) is a common form of acquired fixation nystagmus with key symptoms of oscillopsia and gait disturbance. Gait disturbance could be a result of impaired visual feedback due to the involuntary ocular oscillations. Alternatively, a malfunction of cerebellar locomotor control might be involved, since DBN is considered a vestibulocerebellar disorder. Investigation of walking in 50 DBN patients (age 72 ± 11 years, 23 females) and 50 healthy controls (HS) (age 70 ± 11 years, 23 females) using a pressure sensitive carpet (GAITRite). The patient cohort comprised subjects with only ocular motor signs (DBN) and subjects with an additional limb ataxia (DBNCA). Gait investigation comprised different walking speeds and walking with eyes closed. In DBN, gait velocity was reduced (p<0.001) with a reduced stride length (p<0.001), increased base of support (p<0.050), and increased double support (p<0.001). Walking with eyes closed led to significant gait changes in both HS and DBN. These changes were more pronounced in DBN patients (p<0.001). Speed-dependency of gait variability revealed significant differences between the subgroups of DBN and DBNCA (p<0.050). (I) Impaired visual control caused by involuntary ocular oscillations cannot sufficiently explain the gait disorder. (II) The gait of patients with DBN is impaired in a speed dependent manner. (III) Analysis of gait variability allows distinguishing DBN from DBNCA: Patients with pure DBN show a speed dependency of gait variability similar to that of patients with afferent vestibular deficits. In DBNCA, gait variability resembles the pattern found in cerebellar ataxia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fei, Yiyan; Landry, James P.; Zhu, X. D., E-mail: xdzhu@physics.ucdavis.edu
A biological state is equilibrium of multiple concurrent biomolecular reactions. The relative importance of these reactions depends on physiological temperature typically between 10 °C and 50 °C. Experimentally the temperature dependence of binding reaction constants reveals thermodynamics and thus details of these biomolecular processes. We developed a variable-temperature opto-fluidic system for real-time measurement of multiple (400–10 000) biomolecular binding reactions on solid supports from 10 °C to 60 °C within ±0.1 °C. We illustrate the performance of this system with investigation of binding reactions of plant lectins (carbohydrate-binding proteins) with 24 synthetic glycans (i.e., carbohydrates). We found that the lectin-glycan reactions in general can be enthalpy-driven, entropy-driven, or both, and water molecules play critical roles in the thermodynamics of these reactions.
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
NASA Astrophysics Data System (ADS)
Zamaletdinov, R. I.; Okulova, S. M.; Gavrilova, E. A.; Zakhvatova, A. A.
2018-01-01
This article examines the results of many years of research on the reproductive performance of six species of leguminous plants (Fabaceae Lind., 1836) under conditions of habitat urbanization (Kazan). The range of variability of the main reproductive indices, the potential and the actual seed productivity, is illustrated for the six main types of leguminous plants studied. The features of the variability of seed death at different stages of development are shown to depend on habitat conditions. It is established that the main regularities of change in reproductive parameters with habitat conditions are manifested both in the native species and in the introduced species Caragana arborescens Lam., 1785. Based on the results of the study, we conclude that monitoring the reproductive parameters of leguminous plants is advisable for indicating the state of the environment in a large city.
Functional Freedom: A Psychological Model of Freedom in Decision-Making.
Lau, Stephan; Hiemisch, Anette
2017-07-05
The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one's own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for the psychological benefits of functional decision freedom, and explicate the model's implications for current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, permitting an additional psychological foothold in research on freedom, and that it is compatible with a deterministic worldview.
NASA Astrophysics Data System (ADS)
Fei, Yiyan; Landry, James P.; Li, Yanhong; Yu, Hai; Lau, Kam; Huang, Shengshu; Chokhawala, Harshal A.; Chen, Xi; Zhu, X. D.
2013-11-01
A biological state is equilibrium of multiple concurrent biomolecular reactions. The relative importance of these reactions depends on physiological temperature typically between 10 °C and 50 °C. Experimentally the temperature dependence of binding reaction constants reveals thermodynamics and thus details of these biomolecular processes. We developed a variable-temperature opto-fluidic system for real-time measurement of multiple (400-10 000) biomolecular binding reactions on solid supports from 10 °C to 60 °C within ±0.1 °C. We illustrate the performance of this system with investigation of binding reactions of plant lectins (carbohydrate-binding proteins) with 24 synthetic glycans (i.e., carbohydrates). We found that the lectin-glycan reactions in general can be enthalpy-driven, entropy-driven, or both, and water molecules play critical roles in the thermodynamics of these reactions.
Crime Modeling using Spatial Regression Approach
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.
2018-01-01
Acts of criminality in Indonesia increase in both variety and quantity every year, including murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured here by the number of cases reported to the police: the higher the number of reports to the police, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia, is modeled with society's exposure to the risk of crime as the dependent variable. Modeling follows an area-based approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either the lag or the error terms, in South Sulawesi.
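As a small, hedged companion to the abstract above (not the authors' code), the sketch below computes Moran's I of regression residuals, the usual diagnostic for whether SAR or SEM terms are needed at all; the weights matrix and residuals are toy values.

```python
import numpy as np

def morans_i(x, w):
    """I = (n / sum(w)) * (dev' W dev) / (dev' dev), with dev the deviations from the mean."""
    x = np.asarray(x, float)
    dev = x - x.mean()
    return (len(x) / w.sum()) * (dev @ w @ dev) / (dev @ dev)

# toy 4-region rook-contiguity weights (hypothetical layout)
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
residuals = np.array([0.5, 0.4, -0.6, -0.3])     # toy OLS residuals for the four regions
print(morans_i(residuals, W))                    # near zero: little evidence of spatial dependence
```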
A framework for the study of coping, illness behaviour and outcomes.
Shaw, C
1999-05-01
This paper presents a theoretical framework for the study of coping, illness attribution, health behaviour and outcomes. It is based upon models developed within health psychology and aims to provide a theoretical basis for nurse researchers to utilize psychosocial variables. It is an interactionist model which views outcomes as dependent upon both situation and person variables. The situation is viewed as the health threat or illness symptoms as well as the psychosocial context within which the person is operating. This context includes socio-economic factors, social support, social norms, and external factors such as the mass media. The experience of health threat is dependent upon individual appraisal, and the framework incorporates Folkman and Lazarus' transactional model of stress, as well as Leventhal's illness representation model. Behaviour and the perception of threat are also dependent upon outcome expectancies and the appraisal of one's own coping resources, and so the concepts of locus of control and self-efficacy are also incorporated. This framework allows one to identify determinants of behaviour and outcome, and will aid nurses in identifying areas for psycho-social intervention.
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
Piper, Megan E.; Bolt, Daniel M.; Kim, Su-Young; Japuntich, Sandra J.; Smith, Stevens S.; Niederdeppe, Jeff; Cannon, Dale S.; Baker, Timothy B.
2008-01-01
The construct of tobacco dependence is important from both scientific and public health perspectives, but it is poorly understood. The current research integrates person-centered analyses (e.g., latent profile analysis) and variable-centered analyses (e.g., exploratory factor analysis) to understand better the latent structure of dependence and to guide distillation of the phenotype. Using data from four samples of smokers (including treatment and non-treatment samples), latent profiles were derived using the Wisconsin Inventory of Smoking Dependence Motives (WISDM) subscale scores. Across all four samples, results revealed a unique latent profile that had relative elevations on four dependence motive subscales (Automaticity, Craving, Loss of Control, and Tolerance). Variable-centered analyses supported the uniqueness of these four subscales both as measures of a common factor distinct from that underlying the other nine subscales, and as the strongest predictors of relapse, withdrawal and other dependence criteria. Conversely, the remaining nine motives carried little unique predictive validity regarding dependence. Applications of a factor mixture model further support the presence of a unique class of smokers in relation to a common factor underlying the four subscales. The results illustrate how person-centered analyses may be useful as a supplement to variable-centered analyses for uncovering variables that are necessary and/or sufficient predictors of disorder criteria, as they may uncover small segments of a population in which the variables are uniquely distributed. The results also suggest that severe dependence is associated with a pattern of smoking that is heavy, pervasive, automatic and relatively unresponsive to instrumental contingencies. PMID:19025223
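As a hedged analogue of the person-centered analyses described above (not the authors' latent profile or factor mixture models), the sketch below fits Gaussian mixtures to synthetic subscale-like scores and uses BIC to choose the number of profiles.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
ordinary = rng.normal(3.0, 1.0, size=(300, 4))   # synthetic subscale-like scores
elevated = rng.normal(5.0, 1.0, size=(100, 4))   # a class elevated on all four motives
scores = np.vstack([ordinary, elevated])

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
       for k in (1, 2, 3)}
best_k = min(bic, key=bic.get)                   # lowest BIC wins
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(scores)
print(best_k, np.bincount(labels))               # number of profiles and their sizes
```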
NASA Astrophysics Data System (ADS)
Lee, S.; Maharani, Y. N.; Ki, S. J.
2015-12-01
The application of the Self-Organizing Map (SOM) to analyze social vulnerability and to recognize the resilience of sites is a challenging task. The aim of this study is to propose a computational method to group sites according to their similarity and to determine the most relevant variables characterizing social vulnerability in each cluster. For this purpose, SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability measure is constructed from 17 variables: 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both the independent and dependent variables, we can identify whether social vulnerability is reflected in the actual situation, in this case the 2010 Merapi eruption. However, social vulnerability analysis in local communities involves a number of variables representing their socio-economic condition, and some of the variables employed in this study might be more or less redundant. Therefore, SOM is used to reduce the redundant variable(s) by selecting representative variables, using the component planes and the correlation coefficients between variables, in order to find an effective sample size. The selected dataset is then clustered according to similarity. This approach can produce reliable clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders and decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering Potential Volcanic Risk around Korea' [MPSS-NH-2015-81] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea. Keywords: Self-organizing map, Component Planes, Correlation coefficient, Cluster analysis, Sites identification, Social vulnerability, Merapi eruption 2010
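A compact from-scratch SOM training loop is sketched below to make the mapping step concrete; the 17-dimensional input vectors are synthetic stand-ins for the study's vulnerability variables, and the grid size and learning schedule are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows, cols, data.shape[1]))        # codebook (prototype) vectors
    gy, gx = np.mgrid[0:rows, 0:cols]
    for t in range(iters):
        x = data[rng.integers(len(data))]                   # pick a random sample
        d = np.linalg.norm(w - x, axis=-1)
        by, bx = np.unravel_index(d.argmin(), d.shape)      # best-matching unit
        lr = lr0 * np.exp(-t / iters)                       # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                 # shrinking neighbourhood
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)                    # pull the neighbourhood toward x
    return w

data = np.random.default_rng(1).normal(size=(120, 17))      # synthetic 17-variable site vectors
codebook = train_som(data)
print(codebook.shape)        # (5, 5, 17): one prototype per map unit
```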
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
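To illustrate the basic building block shared by these schemes, the sketch below trains a VQ codebook with k-means and quantizes 4x4 blocks of a synthetic upper subband; the block size, codebook size and data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
subband = rng.laplace(scale=2.0, size=(256, 256))       # synthetic upper-subband coefficients

# tile into 4x4 spatial blocks, flattened to 16-dimensional training vectors
blocks = subband.reshape(64, 4, 64, 4).swapaxes(1, 2).reshape(-1, 16)

km = KMeans(n_clusters=64, n_init=4, random_state=0).fit(blocks)   # 64-entry codebook
indices = km.predict(blocks)                             # indices to transmit (6 bits per block)
rec_blocks = km.cluster_centers_[indices]
reconstruction = rec_blocks.reshape(64, 64, 4, 4).swapaxes(1, 2).reshape(256, 256)
print("MSE distortion:", np.mean((subband - reconstruction) ** 2))
```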
ERIC Educational Resources Information Center
Morningstar, Mary E.; Frey, Bruce B.; Noonan, Patricia M.; Ng, Jennifer; Clavenna-Deane, Beth; Graves, Perry; Kellems, Ryan; McCall, Zach; Pearson, Mary; Wade, Diana Bjorkman; Williams-Diehm, Kendra
2010-01-01
This study examined the relationship between high school transition preparation (school and family based) and self-determination among postsecondary students with disabilities. Seventy-six participants from 4-year universities completed a two-part online survey. The first part of the survey measured three dependent variables: psychological…
The Findings of an Assessment Audit: An NTFS Project Report
ERIC Educational Resources Information Center
Hughes, Ian
2006-01-01
An Assessment Audit is described consisting of 47 questions, each being scored 0 to 4, by the module team depending on the extent to which the audit point was satisfied. Scores of 2 or less indicated unsatisfactory provision. Audits were carried out on 14 bioscience- or medicine-based modules in 13 universities. There was great variability between…
Bradly A. Trumbo; Keith H. Nislow; Jonathan Stallings; Mark Hudy; Eric P. Smith; Dong-Yun Kim; Bruce Wiggins; Charles A. Dolloff
2014-01-01
Models based on simple air temperature-water temperature relationships have been useful in highlighting potential threats to coldwater-dependent species such as Brook Trout Salvelinus fontinalis by predicting major losses of habitat and substantial reductions in geographic distribution. However, spatial variability in the relationship between changes...
Sexual Attraction and Romantic Love: Forgotten Variables in Marital Therapy.
ERIC Educational Resources Information Center
Roberts, Thomas W.
1992-01-01
Addresses lack of attention in marriage therapy literature to romantic love and sexual attraction. Notes that few guidelines are available to therapists concerning how to deal with love as an issue in therapy. Presents model based on assumption that marriage problems are emotional in nature and that success of marital therapists depends upon skill…
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
ERIC Educational Resources Information Center
Li, Yi; Wang, Qiu; Campbell, John
2015-01-01
This study focused on learning equity in colleges and universities where teaching and learning depends heavily on computer technologies. The study used the Structural Equation Modeling (SEM) to investigate gender and racial/ethnic heterogeneity in the use of a computer based course management system (CMS). Two latent variables (CMS usage and…
Gregory L. Finstad; Knut Kielland
2011-01-01
Productivity of a managed grazing system is dependent upon both the grazing strategy of ungulates and decisions made by humans. Herds of domestic reindeer (Rangifer tarandus tarandus) graze on discrete ranges of the Seward Peninsula, Alaska with variable production rates. We show that the 15N natural abundance of reindeer...
ERIC Educational Resources Information Center
Nonnenmacher, Alexandra; Friedrichs, Jurgen
2013-01-01
To explain country differences in an analytical or structural dependent variable, the application of a macro-micro-model containing contextual hypotheses is necessary. Our methodological study examines whether empirical studies apply such a model. We propose that a theoretical base for country differences is well described in multilevel studies,…
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
Multiplicative Forests for Continuous-Time Processes.
Weiss, Jeremy C; Natarajan, Sriraam; Page, David
2012-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
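A minimal sketch of the copula idea for two nonnormal variables follows: simulate from a Gaussian copula with skewed margins, then recover the copula correlation from the rank-based Kendall's tau via rho = sin(pi*tau/2). This is illustrative only and not tied to any sociological dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)                        # copula observations on the unit square
x = stats.expon.ppf(u[:, 0])                 # skewed (exponential) margin
y = stats.lognorm.ppf(u[:, 1], s=1.0)        # skewed (lognormal) margin

tau, _ = stats.kendalltau(x, y)
rho_hat = np.sin(np.pi * tau / 2)            # rank-based, so unaffected by the margins
print(tau, rho_hat)                          # rho_hat should be close to 0.6
```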
NASA Technical Reports Server (NTRS)
Markowitz, A.; Turner, T. J.; Papadakis, I.; Arevalo, P.; Reeves, J. N.; Miller, L.
2007-01-01
We present the energy-dependent power spectral density (PSD) and cross-spectral properties of Mkn 766 obtained from a six-revolution XMM-Newton observation in 2005. The resulting PSDs, which have the highest temporal frequency resolution for an AGN PSD to date, show breaks that increase in temporal frequency as photon energy increases; break frequencies differ by an average of approx. 0.4 in the log between the softest and hardest bands. The consistency of the variability properties of the 2001 and 2005 observations, namely the PSD shapes and the linear rms-flux relation, suggests the 2005 observation is simply a low-flux extension of the 2001 observation. The coherence function is measured to be approx. 0.6-0.9 at temporal frequencies below the PSD break, and is lower for relatively larger energy band separation; coherence also drops significantly towards zero above the PSD break frequency. Temporal frequency-dependent soft-to-hard time lags are detected in this object for the first time: lags increase towards longer time scales and as energy separation increases. The cross-spectral properties are thus consistent with previous measurements for Mkn 766 (Vaughan & Fabian 2003) and other accreting black hole systems. The results are discussed in the context of several variability models, including those based on inwardly-propagating viscosity variations in the accretion disk.
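The sketch below is a schematic of the cross-spectral quantities discussed (coherence and frequency-dependent time lags), computed on synthetic light curves with a built-in 50 s delay; it is not the Mkn 766 analysis pipeline, and the sign of the lag follows scipy's csd convention.

```python
import numpy as np
from scipy.signal import coherence, csd

dt, n, lag_bins = 10.0, 4096, 5              # 10 s bins; 5 bins = 50 s built-in delay
rng = np.random.default_rng(10)
soft = np.convolve(rng.normal(size=n + 50), np.ones(50) / 50, mode="same")[:n]
hard = np.roll(soft, lag_bins) + 0.3 * rng.normal(size=n)   # delayed copy plus noise

f, coh = coherence(soft, hard, fs=1.0 / dt, nperseg=512)
_, pxy = csd(soft, hard, fs=1.0 / dt, nperseg=512)
time_lag = np.angle(pxy[1:]) / (2 * np.pi * f[1:])   # phase converted to seconds (csd = conj(X)*Y)
print(coh[1:6])                  # high coherence at low temporal frequencies
print(time_lag[:5])              # lags of magnitude ~50 s at low frequencies
```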
Demirjian's method in the estimation of age: A study on human third molars
Lewis, Amitha J.; Boaz, Karen; Nagesh, K. R; Srikant, N; Gupta, Neha; Nandita, K. P; Manaktala, Nidhi
2015-01-01
Aim: The primary aim of the following study is to estimate the chronological age based on the stages of third molar development following the eight-stage (A to H) method of Demirjian et al. (along with two modifications by Orhan); the secondary aim is to compare third molar development with sex and age. Materials and Methods: The sample consisted of 115 orthopantomograms from South Indian subjects with known chronological age and gender. Multiple regression analysis was performed with chronological age as the dependent variable and third molar root development as the independent variable. All statistical analyses were performed using the SPSS 11.0 package (IBM® Corporation). Results: No statistically significant differences were found in third molar development between males and females. Depending on the available number of wisdom teeth in an individual, R2 varied for males from 0.21 to 0.48 and for females from 0.16 to 0.38. New equations were derived for estimating the chronological age. Conclusion: The chronological age of a South Indian individual between 14 and 22 years may be estimated based on the regression formulae. However, additional studies with a larger study population must be conducted to meet the need for population-based information on third molar development. PMID:26005306
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Subhasish; Barua, Bipul; Soppet, William K.
This report provides an update of an earlier assessment of environmentally assisted fatigue for components in light water reactors. This report is a deliverable in September 2016 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In an April 2016 report, we presented a detailed thermal-mechanical stress analysis model for simulating the stress-strain state of a reactor pressure vessel and its nozzles under grid-load-following conditions. In this report, we provide stress-controlled fatigue test data for 508 LAS base metal alloy under different loading amplitudes (constant, variable, and random grid-load-following) and environmental conditions (in air or pressurized water reactor coolant water at 300°C). Also presented is a cyclic plasticity-based analytical model that can simultaneously capture the amplitude and time dependency of the component behavior under fatigue loading. Results related to both amplitude-dependent and amplitude-independent parameters are presented. The validation results for the analytical/mechanistic model are discussed. This report provides guidance for estimating time-dependent, amplitude-independent parameters related to material behavior under different service conditions. The developed mechanistic models and the reported material parameters can be used to conduct more accurate fatigue and ratcheting evaluation of reactor components.
Mating tactics determine patterns of condition dependence in a dimorphic horned beetle.
Knell, Robert J; Simmons, Leigh W
2010-08-07
The persistence of genetic variability in performance traits such as strength is surprising given the directional selection that such traits experience, which should cause the fixation of the best genetic variants. One possible explanation is 'genic capture' which is usually considered as a candidate mechanism for the maintenance of high genetic variability in sexual signalling traits. This states that if a trait is 'condition dependent', with expression being strongly influenced by the bearer's overall viability, then genetic variability can be maintained via mutation-selection balance. Using a species of dimorphic beetle with males that gain matings either by fighting or by 'sneaking', we tested the prediction of strong condition dependence for strength, walking speed and testes mass. Strength was strongly condition dependent only in those beetles that fight for access to females. Walking speed, with less of an obvious selective advantage, showed no condition dependence, and testes mass was more condition dependent in sneaks, which engage in higher levels of sperm competition. Within a species, therefore, condition dependent expression varies between morphs, and corresponds to the specific selection pressures experienced by that morph. These results support genic capture as a general explanation for the maintenance of genetic variability in traits under directional selection.
Risk-adjusted antibiotic consumption in 34 public acute hospitals in Ireland, 2006 to 2014
Oza, Ajay; Donohue, Fionnuala; Johnson, Howard; Cunney, Robert
2016-01-01
As antibiotic consumption rates between hospitals can vary depending on the characteristics of the patients treated, risk adjustment that compensates for the patient-based variation is required to assess the impact of any stewardship measures. The aim of this study was to investigate the usefulness of patient-based administrative data variables for adjusting aggregate hospital antibiotic consumption rates. Data on total inpatient antibiotics and six broad subclasses were sourced from 34 acute hospitals from 2006 to 2014. Aggregate annual patient administration data were divided into explanatory variables, including major diagnostic categories, for each hospital. Multivariable regression models were used to identify factors affecting antibiotic consumption. The coefficient of variation of the root mean squared errors (CV-RMSE) for the total antibiotic usage model was very good (11%); however, the value for two of the models was poor (> 30%). Overall inpatient antibiotic consumption increased from 82.5 defined daily doses (DDD)/100 bed-days used in 2006 to 89.2 DDD/100 bed-days used in 2014; the increase was not significant after risk adjustment. During the same period, consumption of carbapenems increased significantly, while usage of fluoroquinolones decreased. In conclusion, patient-based administrative data variables are useful for adjusting hospital antibiotic consumption rates, although additional variables should also be employed. PMID:27541730
Song, Ruiguang; Hall, H Irene; Harrison, Kathleen McDavid; Sharpe, Tanya Telfair; Lin, Lillian S; Dean, Hazel D
2011-01-01
We developed a statistical tool that brings together standard, accessible, and well-understood analytic approaches and uses area-based information and other publicly available data to identify social determinants of health (SDH) that significantly affect the morbidity of a specific disease. We specified AIDS as the disease of interest and used data from the American Community Survey and the National HIV Surveillance System. Morbidity and socioeconomic variables in the two data systems were linked through geographic areas that can be identified in both systems. Correlation and partial correlation coefficients were used to measure the impact of socioeconomic factors on AIDS diagnosis rates in certain geographic areas. We developed an easily explained approach that can be used by a data analyst with access to publicly available datasets and standard statistical software to identify the impact of SDH. We found that the AIDS diagnosis rate was highly correlated with the distribution of race/ethnicity, population density, and marital status in an area. The impact of poverty, education level, and unemployment depended on other SDH variables. Area-based measures of socioeconomic variables can be used to identify risk factors associated with a disease of interest. When correlation analysis is used to identify risk factors, potential confounding from other variables must be taken into account.
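A hedged sketch of the kind of partial-correlation calculation described (synthetic area-level data with hypothetical variable names; not the authors' datasets or code): the partial correlation between a socioeconomic factor and the diagnosis rate is the correlation of the two variables after each has been regressed on the other covariates.

```python
import numpy as np
import pandas as pd

def partial_corr(df, x, y, controls):
    """Correlation between x and y after regressing out the control variables."""
    X = np.column_stack([np.ones(len(df))] + [df[c].to_numpy() for c in controls])
    rx = df[x] - X @ np.linalg.lstsq(X, df[x].to_numpy(), rcond=None)[0]
    ry = df[y] - X @ np.linalg.lstsq(X, df[y].to_numpy(), rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical area-level data: a diagnosis rate plus census-style covariates.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "poverty": rng.uniform(5, 40, n),
    "pop_density": rng.uniform(50, 5000, n),
})
df["aids_rate"] = 0.3 * df["poverty"] + 0.002 * df["pop_density"] + rng.normal(0, 3, n)

print(np.corrcoef(df["poverty"], df["aids_rate"])[0, 1])          # simple correlation
print(partial_corr(df, "poverty", "aids_rate", ["pop_density"]))  # controlling for density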
NASA Technical Reports Server (NTRS)
Poulain, Pierre-Marie; Luther, Douglas S.; Patzert, William C.
1992-01-01
Two techniques were developed for estimating statistics of inertial oscillations from satellite-tracked drifters that overcome the difficulties inherent in estimating such statistics from data dependent upon space coordinates that are a function of time. Application of these techniques to tropical surface drifter data collected during the NORPAX, EPOCS, and TOGA programs reveals a latitude-dependent, statistically significant 'blue shift' of inertial wave frequency. The latitudinal dependence of the blue shift is similar to predictions based on 'global' internal-wave spectral models, with a superposition of frequency shifting due to modification of the effective local inertial frequency by the presence of strongly sheared zonal mean currents within 12 deg of the equator.
ERIC Educational Resources Information Center
Sendhil, Geetha R.
2012-01-01
The purpose of this national study was to utilize quantitative methods to examine institutional characteristics, financial resource variables, personnel variables, and customer variables of public and private institutions that have and have not implemented enterprise resource planning (ERP) systems, from a resource dependence perspective.…
Context dependent prediction and category encoding for DPCM image compression
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.
1989-01-01
Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one and two dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bits-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
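A minimal one-dimensional DPCM sketch for orientation (previous-pixel predictor and a uniform quantizer; the context-dependent predictors, nonlinear encoding, and 2x2 block coding in the paper are more elaborate):

```python
import numpy as np

def dpcm_encode(row, step=4):
    """1-D DPCM: predict each pixel from the previous reconstructed pixel,
    quantize the prediction error, and emit the quantized residuals."""
    residuals, prev = [], 0
    for x in row:
        e = int(x) - prev
        q = int(round(e / step))          # uniform quantizer
        residuals.append(q)
        prev = prev + q * step            # track the decoder-side reconstruction
    return residuals

def dpcm_decode(residuals, step=4):
    out, prev = [], 0
    for q in residuals:
        prev = prev + q * step
        out.append(prev)
    return np.array(out)

row = np.array([100, 102, 101, 105, 110, 110, 108], dtype=np.uint8)
rec = dpcm_decode(dpcm_encode(row))
print(rec, np.max(np.abs(rec - row.astype(int))))  # reconstruction error bounded by step/2
```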
Relationship between wind speed and gas exchange over the ocean
NASA Technical Reports Server (NTRS)
Wanninkhof, Rik
1992-01-01
A quadratic dependence of gas exchange on wind speed is employed to analyze the relationship between gas transfer and wind speed, with particular emphasis on variable and/or low wind speeds. The quadratic dependence is fit through gas-transfer velocities over the ocean determined by methods based on the natural C-14 disequilibrium and the bomb C-14 inventory. The variation in the CO2 levels is related to these mechanisms, but the results show that other causes play significant roles. A weaker dependence of gas transfer on wind is suggested for steady winds, whereas long-term averaged winds demonstrate a stronger dependence in the present model. The chemical enhancement of CO2 exchange is also shown to play a role by increasing CO2 fluxes at low wind speeds.
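For orientation, a quadratic parameterization of this kind can be written as a one-line function. The 0.31 and 0.39 coefficients below are the values commonly quoted from Wanninkhof (1992) for steady and long-term averaged winds; treat them as assumptions to be checked against the original paper.

```python
def gas_transfer_velocity(u10, schmidt, long_term_avg=False):
    """Transfer velocity k (cm/h) from 10-m wind speed u10 (m/s) and Schmidt number.
    Quadratic wind-speed dependence, normalized to Sc = 660 (CO2 in seawater at 20 C)."""
    a = 0.39 if long_term_avg else 0.31   # assumed coefficients; verify against the paper
    return a * u10**2 * (schmidt / 660.0) ** -0.5

print(gas_transfer_velocity(7.0, 660))        # steady winds
print(gas_transfer_velocity(7.0, 660, True))  # climatologically averaged winds
```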
NASA Astrophysics Data System (ADS)
Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.
2012-04-01
Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important leverage for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (the WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data focusing on developing countries (Celine et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or being influenced by, water supply and sanitation access levels. Bayesian network models are suitable for mapping the conditional dependencies between variables and also allow ordering variables by level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated not only against statistical criteria but also against scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for interpretation later on. A global model is then built based on the significant indicators of each previously modelled cluster. The structure of the relationships between variables is set a priori according to the literature and/or experience in the field (expert knowledge). The statistical validation is verified according to the error rate of classification and the significance of the variables. Sensitivity analysis has also been performed to characterise the relative influence of every single variable in the model. Once validated, the models allow the estimation of the impact of each variable on the behaviour of water supply or sanitation, providing an interesting means to test scenarios and predict variable behaviour. The choices made, the methods, and descriptions of the various models, for each cluster as well as the global models for water supply and sanitation, will be presented. Key results and interpretation of the relationships depicted by the models will be detailed during the conference.
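A toy sketch of the kind of local dependency a Bayesian-network node encodes, using hypothetical discretized indicators and plain pandas rather than the WatSan4Dev data or a dedicated Bayesian-network library: the conditional probability table P(water_access | governance, gdp) is estimated from data, which is the building block the clustered models combine.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500

# Hypothetical discretized country-level indicators (illustration only).
gov = rng.choice(["low", "high"], size=n, p=[0.6, 0.4])
gdp = rng.choice(["low", "high"], size=n, p=[0.5, 0.5])
p_access = 0.3 + 0.3 * (gov == "high") + 0.2 * (gdp == "high")
water = np.where(rng.random(n) < p_access, "good", "poor")

df = pd.DataFrame({"governance": gov, "gdp": gdp, "water_access": water})

# Conditional probability table P(water_access | governance, gdp).
counts = df.groupby(["governance", "gdp", "water_access"]).size().unstack(fill_value=0)
cpt = counts.div(counts.sum(axis=1), axis=0)
print(cpt)
```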
Urbina, Mauricio A
2016-12-15
The impacts of any activity on marine ecosystems will depend on the characteristics of the receptor medium and its resilience to external pressures. The salmon farming industry develops along a constant gradient of hydrodynamic conditions in the south of Chile. However, the influence of the hydrodynamic characteristics (weak or strong) on the impacts of intensive salmon farming is still poorly understood. This one-year study evaluates the impacts of salmon farming on the marine sediments of both protected and exposed marine zones differing in their hydrodynamic characteristics. Six physico-chemical variables, five biological variables and seven indexes of marine sediment status were evaluated under the salmon farming cages and at control sites. Our results identified a few key variables and indexes necessary to accurately evaluate salmon farming impacts on both protected and exposed zones. Interestingly, the ranking of importance of the variables and the temporality of the observed changes varied depending on the hydrodynamic characteristics. Biological variables (nematode abundance) and environmental indexes (Simpson's dominance, Shannon's diversity and Pielou evenness) are the first to reflect detrimental impacts under the salmon farming cages. Then the physico-chemical variables, such as redox, sulphides and phosphorus, also show detrimental impacts in both zones. Based on the present results we propose that the hydrodynamic regime is an important driver of the magnitude and temporality of the effects of salmon farming on marine sediments. The variables and indexes that best reflect the effects of salmon farming, in both protected and exposed zones, are also described. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Raju, C. S. K.; Sekhar, K. R.; Ibrahim, S. M.; Lorenzini, G.; Viswanatha Reddy, G.; Lorenzini, E.
2017-05-01
In this study, we propose a theoretical investigation of the temperature-dependent viscosity effect on a magnetohydrodynamic dissipative nanofluid over a truncated cone with a heat source/sink. The governing set of nonlinear partial differential equations is transformed into a set of nonlinear ordinary differential equations using self-similarity solutions. The transformed governing equations are solved numerically using a Runge-Kutta-based Newton technique. The effects of various dimensionless parameters on the skin friction coefficient and the local Nusselt number profiles are discussed and presented with the support of graphs. We also validate the current solutions against existing solutions for some special cases. The water-based titanium alloy has a lower friction factor coefficient compared with the kerosene-based titanium alloy, whereas the rate of heat transfer is higher in the water-based titanium alloy than in the kerosene-based titanium alloy. From this we can highlight that, depending on whether industrial needs call for cooling or heating, water- or kerosene-based titanium alloys can be chosen accordingly.
EMG-based speech recognition using hidden markov models with global control variables.
Lee, Ki-Seung
2008-03-01
It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.
NASA Astrophysics Data System (ADS)
Petropavlovskikh, I.; Ahn, Changwoo; Bhartia, P. K.; Flynn, L. E.
2005-03-01
This analysis presents comparisons of upper-stratosphere ozone information observed by two independent systems: the Solar Backscatter UltraViolet (SBUV and SBUV/2) satellite instruments, and ground-based Dobson spectrophotometers. Both the new SBUV Version 8 and the new UMK04 profile retrieval algorithms are optimized for studying long-term variability and trends in ozone. Trend analyses of the ozone time series from the SBUV(/2) data set are complex because of the multiple instruments involved, changes in the instruments' geo-location, and short periods of overlaps for inter-calibrations among different instruments. Three northern middle latitudes Dobson ground stations (Arosa, Boulder, and Tateno) are used in this analysis to validate the trend quality of the combined 25-year SBUV/2 time series, 1979 to 2003. Generally, differences between the satellite and ground-based data do not suggest any significant time-dependent shifts or trends. The shared features confirm the value of these data sets for studies of ozone variability.
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...
2017-11-26
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
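A hedged illustration of the rate-summation idea underlying such models (hypothetical linear temperature-rate function and lognormal rate multipliers; not the authors' mountain pine beetle parameterization): development accumulates as the sum of a temperature-dependent rate over time, an individual completes a stage when its accumulated fraction reaches 1, and phenotypic rate variability spreads the emergence dates.

```python
import numpy as np

def dev_rate(temp_c):
    """Hypothetical development rate (1/day), linear above a 5 C threshold."""
    return np.maximum(0.0, 0.01 * (temp_c - 5.0))

rng = np.random.default_rng(3)
days = np.arange(120)
temps = 12 + 8 * np.sin(2 * np.pi * days / 120) + rng.normal(0, 2, days.size)

# Rate summation: all individuals share the thermal history, but each scales
# its rate by a random multiplier representing phenotypic rate variability.
multipliers = rng.lognormal(mean=0.0, sigma=0.2, size=1000)
cumdev = np.cumsum(dev_rate(temps))
emergence_day = np.array([np.searchsorted(m * cumdev, 1.0) for m in multipliers])
print(np.percentile(emergence_day, [10, 50, 90]))   # spread in emergence timing
```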
The nature and use of prediction skills in a biological computer simulation
NASA Astrophysics Data System (ADS)
Lavoie, Derrick R.; Good, Ron
The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performances of seven formal and seven concrete operational high-school biology students were videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon dependent variables through time. Subjects were identified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.
Variables Associated With Tic Exacerbation in Children With Chronic Tic Disorders
Himle, Michael B.; Capriotti, Matthew R.; Hayes, Loran P.; Ramanujam, Krishnapriya; Scahill, Lawrence; Sukhodolsky, Denis G.; Wilhelm, Sabine; Deckersbach, Thilo; Peterson, Alan L.; Specht, Matt W.; Walkup, John T.; Chang, Susanna; Piacentini, John
2014-01-01
Research has shown that motor and vocal tics fluctuate in frequency, intensity, and form in response to environmental and contextual cues. Behavioral models have proposed that some of the variation in tics may reflect context-dependent interactive learning processes such that once tics are performed, they are influenced by environmental contingencies. The current study describes the results of a function-based assessment of tics (FBAT) from a recently completed study comparing Comprehensive Behavioral Intervention for Tics (CBIT) with supportive psychotherapy. The current study describes the frequency with which antecedent and consequence variables were reported to exacerbate tics and the relationships between these functional variables and sample baseline characteristics, comorbidities, and measures of tic severity. Results showed that tic-exacerbating antecedents and consequences were nearly ubiquitous in a sample of children with chronic tic disorder. In addition, functional variables were related to baseline measures of comorbid internalizing symptoms and specific measures of tic severity. PMID:24778433
Estimating the signal-to-noise ratio of AVIRIS data
NASA Technical Reports Server (NTRS)
Curran, Paul J.; Dungan, Jennifer L.
1988-01-01
To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
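A minimal sketch of estimating an empirical semi-variogram on a 1-D transect (synthetic data, not the AVIRIS processing chain): the semi-variance at increasing lags is computed from pixel differences, and extrapolating it back to zero lag gives the nugget, interpreted here as sensor noise plus intrapixel variability.

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semi-variance gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] for each lag h."""
    gam = []
    for h in range(1, max_lag + 1):
        d = values[h:] - values[:-h]
        gam.append(0.5 * np.mean(d ** 2))
    return np.array(gam)

rng = np.random.default_rng(4)
n = 500
signal = np.cumsum(rng.normal(0, 1, n))   # spatially correlated "scene"
noise = rng.normal(0, 2, n)               # uncorrelated sensor noise (variance 4)
gamma = semivariogram(signal + noise, max_lag=20)

# gamma(h) rises with lag; its extrapolation to h = 0 (the nugget) estimates the
# noise variance, which here should sit near 4.
print(gamma[:5])
```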
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
NASA Astrophysics Data System (ADS)
López-Estrada, F. R.; Astorga-Zaragoza, C. M.; Theilliol, D.; Ponsart, J. C.; Valencia-Palomo, G.; Torres, L.
2017-12-01
This paper proposes a methodology to design a Takagi-Sugeno (TS) descriptor observer for a class of TS descriptor systems. Unlike the popular approach that considers measurable premise variables, this paper considers the premise variables depending on unmeasurable vectors, e.g. the system states. This consideration covers a large class of nonlinear systems and represents a real challenge for the observer synthesis. Sufficient conditions to guarantee robustness against the unmeasurable premise variables and asymptotic convergence of the TS descriptor observer are obtained based on the H∞ approach together with the Lyapunov method. As a result, the designing conditions are given in terms of linear matrix inequalities (LMIs). In addition, sensor fault detection and isolation are performed by means of a generalised observer bank. Two numerical experiments, an electrical circuit and a rolling disc system, are presented in order to illustrate the effectiveness of the proposed method.
A model of urban scaling laws based on distance dependent interactions
Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.
2017-01-01
Socio-economic properties of a city grow faster than linearly with population, the so-called superlinear scaling seen in a log–log plot. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we propose a simple explanation for those scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential which captures the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We discuss how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence. PMID:28405381
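A short sketch of how a scaling exponent β in Y ≈ c·N^β is estimated by ordinary least squares in log–log space, using synthetic city data with an assumed superlinear exponent of 1.15 (illustration only, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(5)
pop = 10 ** rng.uniform(4, 7, 300)                      # city populations, 1e4 to 1e7
gdp = 50.0 * pop ** 1.15 * rng.lognormal(0, 0.2, 300)   # superlinear socio-economic output

# Fit log Y = log c + beta * log N; beta > 1 indicates superlinear scaling.
beta, log_c = np.polyfit(np.log10(pop), np.log10(gdp), 1)
print(beta)   # recovers roughly 1.15

# The paper's prediction beta_social + beta_infrastructure = 2 (e.g. 1.15 + 0.85)
# can be checked empirically with fits of this kind on the two variable groups.
```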
Inferring Cirrus Size Distributions Through Satellite Remote Sensing and Microphysical Databases
NASA Technical Reports Server (NTRS)
Mitchell, David; D'Entremont, Robert P.; Lawson, R. Paul
2010-01-01
Since cirrus clouds have a substantial influence on the global energy balance that depends on their microphysical properties, climate models should strive to realistically characterize the cirrus ice particle size distribution (PSD), at least in a climatological sense. To date, the airborne in situ measurements of the cirrus PSD have contained large uncertainties due to errors in measuring small ice crystals (D < 60 μm). This paper presents a method to remotely estimate the concentration of the small ice crystals relative to the larger ones using the 11- and 12-μm channels aboard several satellites. By understanding the underlying physics producing the emissivity difference between these channels, this emissivity difference can be used to infer the relative concentration of small ice crystals. This is facilitated by enlisting temperature-dependent characterizations of the PSD (i.e., PSD schemes) based on in situ measurements. An average cirrus emissivity relationship between 12 and 11 μm is developed here using the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument and is used to retrieve the PSD based on six different PSD schemes. The PSDs from the measurement-based PSD schemes are compared with corresponding retrieved PSDs to evaluate differences in small ice crystal concentrations. The retrieved PSDs generally had lower concentrations of small ice particles, with total number concentration independent of temperature. In addition, the temperature dependence of the PSD effective diameter De and fall speed Vf for these retrieved PSD schemes exhibited less variability relative to the unmodified PSD schemes. The reduced variability in the retrieved De and Vf was attributed to the lower concentrations of small ice crystals in the retrieved PSD.
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrate that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem to be groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. 
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
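A hedged scikit-learn sketch of the random-forest-versus-single-tree comparison described above, with synthetic predictors standing in for the geospatial covariates (not the study's actual feature set or data): the ensemble typically yields a lower true prediction error than one tree, and its feature importances correspond to the variable-importance estimates discussed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
n = 3000
X = rng.uniform(size=(n, 5))   # stand-ins for elevation, aridity, NDVI, etc.
wtd = 50 * X[:, 0] - 30 * X[:, 1] * X[:, 2] + rng.normal(0, 3, n)   # synthetic water table depth

X_tr, X_te, y_tr, y_te = train_test_split(X, wtd, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# True prediction error on held-out data, plus variable-importance estimates.
print(mean_squared_error(y_te, tree.predict(X_te)))
print(mean_squared_error(y_te, forest.predict(X_te)))
print(forest.feature_importances_)
```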
Rodríguez-Díaz, M Teresa; Pérez-Marfil, M Nieves; Cruz-Quintana, Francisco
2016-12-01
The objective of this study is to design and implement an intervention program centered on preventing functional dependence. A pre/post quasi-experimental (typical case) design study with a control group was conducted on a group of 75-90-year-old individuals with functional dependence (n = 59) at three nursing homes in Madrid (Spain). The intervention program consists of two types of activities developed simultaneously. Some focused on emotional well-being (nine 90-minute sessions, once per week), whereas others focused on improving participants' physical condition (two 30-minute sessions per week). Simple randomization yielded 59 elderly participants (Intervention Group = 30, Control Group = 29; mean age 86.80, SD = 5.19). Fifty-nine participants were analyzed. The results indicate that the program is effective in improving mood, lowering anxiety levels (d = 0.81), and increasing both self-esteem (d = 0.65) and the perception of self-efficacy (d = 1.04). There are improvements in systolic pressure, and functional dependence levels are maintained. Simple linear regression (independent variable: pre-Barthel) shows that the pre-intervention dependence level can predict self-esteem after the intervention. We have demonstrated that the program is innovative with regard to bio-psychosocial care in elderly individuals, is based on actual practice, and is effective in increasing both self-esteem and self-efficacy. These variables positively affect functional capabilities and delay functional dependence.
Talley, Amelia E; Brown, Jennifer L; Stevens, Angela K; Littlefield, Andrew K
2014-01-01
Objective: The current study examines the relation between peer descriptive norms for alcohol involvement and alcohol-dependence symptomatology and whether this relation differs as a function of sexual self-concept ambiguity (SSA). This study also examines the associations among peer descriptive norms for alcohol involvement, alcohol-dependence symptomatology, and lifetime HIV risk-taking behavior and how these relations are influenced by SSA. Method: Women between ages 18 and 30 years (N = 351; M = 20.96, SD = 2.92) completed an online survey assessing sexual self-concept, peer descriptive norms, alcohol-dependence symptomatology, and HIV risk-taking behaviors. Structural equation modeling was used to test hypotheses of interest. Results: There was a significant latent variable interaction between SSA and descriptive norms for peer alcohol use. There was a stronger positive relationship between peer descriptive norms for alcohol and alcohol-dependence symptomatology when SSA was higher compared with when SSA was lower. Both latent variables exhibited positive simple associations with alcohol-dependence symptoms. Peer descriptive norms for alcohol involvement directly and indirectly influenced HIV risk-taking behaviors, and the indirect influence was conditional based on SSA. Conclusions: The current findings illustrate complex, nuanced associations between perceived norms, identity-related self-concepts, and risky health behaviors from various domains. Future intervention efforts may be warranted to address both problem alcohol use and HIV-risk engagement among individuals with greater sexual self-concept ambiguity. PMID:25343661
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied on temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives of improvements are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for temporal properties adjustment while including more physics in the adjustment procedures.
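A drastically simplified two-variable illustration of the general idea, not the R2D2 algorithm itself: each marginal is corrected by univariate quantile mapping, and the corrected series are then reordered so their time ranks copy the reference ranks (a Schaake-shuffle-like step), which transfers the reference's inter-variable rank dependence onto the corrected output. All data and names below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Reference observations for two variables with genuine inter-variable dependence.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=n)
ref = np.column_stack([z[:, 0], np.exp(z[:, 1])])          # second marginal is skewed

# Biased model output with the wrong marginals and (here) no dependence at all.
model = np.column_stack([rng.normal(2, 1, n), rng.gamma(2.0, 2.0, n)])

def quantile_map(x, target):
    """Univariate bias correction: give x the empirical distribution of target."""
    ranks = np.argsort(np.argsort(x))
    return np.sort(target)[ranks]

corrected = np.column_stack([quantile_map(model[:, j], ref[:, j]) for j in range(2)])

def reorder_to_reference_ranks(col, ref_col):
    """Reorder col so that its rank at each time step equals ref_col's rank."""
    return np.sort(col)[np.argsort(np.argsort(ref_col))]

shuffled = np.column_stack([reorder_to_reference_ranks(corrected[:, j], ref[:, j])
                            for j in range(2)])

spearman = lambda a, b: np.corrcoef(np.argsort(np.argsort(a)),
                                    np.argsort(np.argsort(b)))[0, 1]
print(spearman(model[:, 0], model[:, 1]))      # ~0: biased model lacks dependence
print(spearman(shuffled[:, 0], shuffled[:, 1]))  # now matches the reference dependence
print(spearman(ref[:, 0], ref[:, 1]))
```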
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
Focal length hysteresis of a double-liquid lens based on electrowetting
NASA Astrophysics Data System (ADS)
Peng, Runling; Wang, Dazhen; Hu, Zhiwei; Chen, Jiabi; Zhuang, Songlin
2013-02-01
In this paper, an extended Young equation especially suited for an ideal cylindrical double-liquid variable-focus lens is derived by means of an energy minimization method. Based on the extended Young equation, a kind of focal length hysteresis effect is introduced into the double-liquid variable-focus lens. Such an effect can be explained theoretically by adding a force of friction to the tri-phase contact line. Theoretical analysis shows that the focal length at a particular voltage can be different depending on whether the applied voltage is increasing or decreasing, that is, there is a focal length hysteresis effect. Moreover, the focal length at a particular voltage must be larger when the voltage is rising than when it is dropping. These conclusions are also verified by experiments.
Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression
Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia
2016-01-01
Background: In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than being randomly allocated, which can result in spurious associations between these treatments and outcomes in observational studies. Objectives: To identify factors recorded in electronic medical chart progress notes predictive of antidepressant selection among patients who had received a depression diagnosis. Methods: This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic medical chart notes in the year prior to the start of the antidepressant in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression analysis was used to assess the predictors associated with different antidepressant prescribing, and adjusted relative risk ratios (RRR) are reported. Results: Of the administrative data-based variables, gender, age, illicit drug abuse or dependence, and number of psychiatric medications in the prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; however, no other chart-based variables showed a significant association or an association of large magnitude. Conclusion: Some chart data-based variables were predictive of antidepressant selection, but we neither found many nor found them highly predictive of antidepressant selection in patients treated for depression. PMID:25943003
Localization Decisions of Entrepreneurs: The Role of Path Dependency and Market Forces
NASA Astrophysics Data System (ADS)
Pylak, Korneliusz; Majerek, Dariusz
2017-10-01
The purpose of this research is to determine the role of path dependency and market forces in the localization decisions of entrepreneurs from different industries. We hypothesize that most industries develop new entities based on the number of companies from the same industry that already exist in a region. We also hypothesize that entrepreneurs create new entities based on related industries operating within the same knowledge pool. To test these hypotheses, we used the machine learning decision tree method. The input variables are the number of companies from 86 industries located in 2,531 communities in Poland in 2009. The target values are the number of new companies from these industries created in the years 2009-2015. The principal results show that localization decisions are mostly based on demand and supply industries, in which manufacturing industries play a crucial role. Path dependency appears in less than half of the industries' models and thus is not the main factor influencing decisions regarding the creation of new companies. The highest share of path-dependent industries is in the manufacturing sector, but the degree of the dependence is lower than in the service sector. The service sector seems to be the least path-dependent, as services usually serve other industries. Competition in industries is a rare factor in new company creation; however, if it appears, it usually shrinks the industry.
NASA Astrophysics Data System (ADS)
Carisi, Francesca; Domeneghetti, Alessio; Kreibich, Heidi; Schröter, Kai; Castellarin, Attilio
2017-04-01
Flood risk is a function of flood hazard and vulnerability; therefore its accurate assessment depends on a reliable quantification of both factors. The scientific literature proposes a number of objective and reliable methods for assessing flood hazard, yet it highlights a limited understanding of the fundamental damage processes. Loss modelling is associated with large uncertainty which is, among other factors, due to a lack of standard procedures; for instance, flood losses are often estimated based on damage models derived in completely different contexts (i.e. different countries or geographical regions) without checking their applicability, or by considering only one explanatory variable (typically water depth). We consider the Secchia river flood event of January 2014, when a sudden levee breach caused the inundation of nearly 200 km2 in Northern Italy. In the aftermath of this event, local authorities collected flood loss data, together with additional information on affected private households and industrial activities (e.g. building surface and economic value, number of a company's employees, and others). Based on these data we implemented and compared a quadratic-regression damage function, with water depth as the only explanatory variable, and a multi-variable model that combines multiple regression trees and considers several explanatory variables (i.e. bagging decision trees). Our results show the importance of data collection, revealing that (1) a simple quadratic regression damage function based on empirical data from the study area can be significantly more accurate than literature damage models derived for a different context, and (2) multi-variable modelling may outperform the uni-variable approach, yet it is more difficult to develop and apply due to a much higher demand for detailed data.
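A hedged sketch contrasting the two model classes on synthetic loss data (water depth as the single predictor for the quadratic fit; depth plus hypothetical building variables for the bagged trees); this is not the authors' data or exact setup.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(8)
n = 2000
depth = rng.uniform(0, 3, n)        # water depth in metres
area = rng.uniform(50, 400, n)      # building footprint, m^2
value = rng.uniform(1e5, 1e6, n)    # building value, EUR
loss = 0.05 * value * np.tanh(depth) * (area / 200) + rng.normal(0, 5000, n)

X_uni = depth.reshape(-1, 1)
X_multi = np.column_stack([depth, area, value])
Xu_tr, Xu_te, Xm_tr, Xm_te, y_tr, y_te = train_test_split(X_uni, X_multi, loss, random_state=0)

# Uni-variable quadratic damage function: loss ~ a + b*depth + c*depth^2.
coef = np.polyfit(Xu_tr[:, 0], y_tr, 2)
quad_pred = np.polyval(coef, Xu_te[:, 0])

# Multi-variable model: bagged regression trees on depth plus building variables.
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
bag.fit(Xm_tr, y_tr)

print(mean_absolute_error(y_te, quad_pred))
print(mean_absolute_error(y_te, bag.predict(Xm_te)))
```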
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and will consequently yield misleading results if not taken into account properly. In this study we modelled the spatial dependence structure between climate variables, including maximum and minimum temperature and precipitation, in the Monsoon-dominated region of Pakistan. For temperature, six meteorological stations were considered, and for precipitation, four. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine and Student t-copula using the observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. It was observed that the important statistics of the observed data were closely approximated, except for the maximum values of temperature and the minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy with very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1--3%) due to self compensating RC effects associated with dense layouts and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model with focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. 
Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) in { 0,1} based on the second moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal functions of scaling for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r to ∞ , where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition with the critical exponents β = 1 and ν|| = 2.
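As a concrete illustration of the mixture process described above, the sketch below simulates X(t) in Python and estimates its autocorrelation C(t). The mixing weight lam (the probability of acting as the independent Bernoulli(q) voter), the numerical parameter values, and the strict inequality used for the digital response f(z) = θ(z - 1/2) are assumptions made for illustration; the abstract does not fix them.

```python
import numpy as np

def simulate(T, r, q, lam, digital=False, seed=0):
    """Binary process X(t): with probability lam the agent chooses 1 independently
    with probability q; otherwise it responds to the fraction z of ones among the
    last r values through f(z) = z (analog) or f(z) = theta(z - 1/2) (digital)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T, dtype=int)
    x[:r] = rng.random(r) < q                 # seed the history independently
    for t in range(r, T):
        z = x[t - r:t].mean()
        f = float(z > 0.5) if digital else z
        x[t] = rng.random() < lam * q + (1 - lam) * f
    return x

def autocorr(x, max_lag):
    """Sample autocorrelation C(t) for lags 1..max_lag."""
    xc = x - x.mean()
    var = xc @ xc / len(xc)
    return np.array([(xc[:-k] @ xc[k:]) / ((len(xc) - k) * var)
                     for k in range(1, max_lag + 1)])

C = autocorr(simulate(T=200_000, r=50, q=0.3, lam=0.5), max_lag=200)
```

For finite r, the estimated C(t) from such a run should decay roughly exponentially, consistent with the single stable RG fixed point described above.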
Does Nationality Matter in the B2C Environment? Results from a Two Nation Study
NASA Astrophysics Data System (ADS)
Peikari, Hamid Reza
Different studies have explored the relations between different dimensions of e-commerce transactions, and many models and findings have been proposed to the academic and business worlds. However, there is doubt about the applicability and generalization of such models and findings across different countries and nations. In other words, this study argues that the relations among the variables of a model may differ across countries, which raises questions about the findings of researchers who collect data in one country to test their hypotheses. This study examines whether different nations have different perceptions of the elements of Website interface, security and purchase intention on the Internet. Moreover, a simple model was developed to investigate whether the independent variables of the model are equally important in different nations and significantly influence the dependent variable in those nations. Since the majority of studies in the context of e-commerce have focused on developed countries with high e-readiness indices and overall ranks, two developing countries with different e-readiness indices and ranks were selected for data collection. The results showed that the samples had significantly different perceptions of security and of some of the Website interface factors. Moreover, it was found that the significance of the relations between the independent variables and the dependent variable differs between the samples, which questions the findings of researchers testing their models and hypotheses based only on data collected in one country.
Shandra, John M; Nobles, Jenna; London, Bruce; Williamson, John B
2004-07-01
This study presents quantitative, sociological models designed to account for cross-national variation in infant mortality rates. We consider variables linked to four different theoretical perspectives: the economic modernization, social modernization, political modernization, and dependency perspectives. The study is based on a panel regression analysis of a sample of 59 developing countries. Our preliminary analysis based on additive models replicates prior studies to the extent that we find that indicators linked to economic and social modernization have beneficial effects on infant mortality. We also find support for hypotheses derived from the dependency perspective suggesting that multinational corporate penetration fosters higher levels of infant mortality. Subsequent analysis incorporating interaction effects suggests that the level of political democracy conditions the effects of dependency relationships based upon exports, investments from multinational corporations, and international lending institutions. Transnational economic linkages associated with exports, multinational corporations, and international lending institutions adversely affect infant mortality more strongly at lower levels of democracy than at higher levels of democracy: intranational political factors interact with international economic forces to affect infant mortality. We conclude with some brief policy recommendations and suggestions for the direction of future research.
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that deals with two missing-data mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1,190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, the dependent variable was not missing at random (NMAR) in 2.52% of cases and missing at random (MAR) in 7.14% of cases. First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model of the influencing factors was established. Based on 1,000 resampling runs, the best scheme for filling the randomly missing values at the observed missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the factors influencing annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively deal with non-response bias and selection bias in the dependent variable of survey data.
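The first stage of the procedure, predictive mean matching, is easy to illustrate. The sketch below implements a single PMM draw in Python on a partially observed outcome; the function name, the donor count k, and the use of ordinary least squares are assumptions for illustration, and the second stage (the NMAR correction via a sample selection model) is not shown.

```python
import numpy as np

def pmm_impute(y, X, k=5, seed=0):
    """Single predictive-mean-matching draw for a partially observed outcome y
    (np.nan marks missing values). Fits OLS on observed cases, then fills each
    missing y with the observed value of one of the k donors whose predicted
    means are closest to the missing case's prediction."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
    yhat = Xd @ beta
    y_filled = y.copy()
    for i in np.where(~obs)[0]:
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]   # nearest predicted means
        y_filled[i] = y[obs][rng.choice(donors)]               # borrow an observed value
    return y_filled
```

Proper multiple imputation would repeat this draw several times (with parameter uncertainty) and pool the resulting estimates.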
Goswami, Prashant; Murty, Upadhayula Suryanarayana; Mutheneni, Srinivasa Rao; Krishnan, Swathi Trithala
2014-01-01
Background Pro-active and effective control as well as quantitative assessment of impact of climate change on malaria requires identification of the major drivers of the epidemic. Malaria depends on vector abundance which, in turn, depends on a combination of weather variables. However, there remain several gaps in our understanding and assessment of malaria in a changing climate. Most of the studies have considered weekly or even monthly mean values of weather variables, while the malaria vector is sensitive to daily variations. Secondly, rarely all the relevant meteorological variables have been considered together. An important question is the relative roles of weather variables (vector abundance) and change in host (human) population, in the change in disease load. Method We consider the 28 states of India, characterized by diverse climatic zones and changing population as well as complex variability in malaria, as a natural test bed. An annual vector load for each of the 28 states is defined based on the number of vector genesis days computed using daily values of temperature, rainfall and humidity from NCEP daily Reanalysis; a prediction of potential malaria load is defined by taking into consideration changes in the human population and compared with the reported number of malaria cases. Results For most states, the number of malaria cases is very well correlated with the vector load calculated with the combined conditions of daily values of temperature, rainfall and humidity; no single weather variable has any significant association with the observed disease prevalence. Conclusion The association between vector-load and daily values of weather variables is robust and holds for different climatic regions (states of India). Thus use of all the three weather variables provides a reliable means of pro-active and efficient vector sanitation and control as well as assessment of impact of climate change on malaria. PMID:24971510
On the comparison of the strength of morphological integration across morphometric datasets.
Adams, Dean C; Collyer, Michael L
2016-11-01
Evolutionary morphologists frequently wish to understand the extent to which organisms are integrated, and whether the strength of morphological integration among subsets of phenotypic variables differ among taxa or other groups. However, comparisons of the strength of integration across datasets are difficult, in part because the summary measures that characterize these patterns (RV coefficient and r PLS ) are dependent both on sample size and on the number of variables. As a solution to this issue, we propose a standardized test statistic (a z-score) for measuring the degree of morphological integration between sets of variables. The approach is based on a partial least squares analysis of trait covariation, and its permutation-based sampling distribution. Under the null hypothesis of a random association of variables, the method displays a constant expected value and confidence intervals for datasets of differing sample sizes and variable number, thereby providing a consistent measure of integration suitable for comparisons across datasets. A two-sample test is also proposed to statistically determine whether levels of integration differ between datasets, and an empirical example examining cranial shape integration in Mediterranean wall lizards illustrates its use. Some extensions of the procedure are also discussed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
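The core of the proposed statistic is a permutation distribution of the partial least squares correlation between two trait blocks, converted into an effect size. The sketch below shows that logic with NumPy; the exact standardization used in the paper (for example, whether the correlations are Fisher-transformed before the z-score is formed) may differ, so treat this as an illustration rather than a reimplementation.

```python
import numpy as np

def pls_corr(X, Y):
    """Correlation between the first pair of PLS axes of two trait blocks."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    u, s, vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    return np.corrcoef(Xc @ u[:, 0], Yc @ vt[0])[0, 1]

def integration_z(X, Y, n_perm=999, seed=0):
    """Permutation-based effect size (z-score) for the strength of integration:
    the observed r_PLS relative to the null distribution obtained by permuting
    the rows of one block."""
    rng = np.random.default_rng(seed)
    obs = pls_corr(X, Y)
    perm = np.array([pls_corr(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)])
    return (obs - perm.mean()) / perm.std(ddof=1), obs
```

Because the effect size is standardized against its own permutation distribution, two datasets of different sample size and variable number can be compared on the z scale, which is the property emphasized above.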
Zubillaga, María; Skewes, Oscar; Soto, Nicolás; Rabinovich, Jorge E.; Colchero, Fernando
2014-01-01
Understanding the mechanisms that drive population dynamics is fundamental for management of wild populations. The guanaco (Lama guanicoe) is one of two wild camelid species in South America. We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km2, with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting that it might have reached its carrying capacity. We used a Bayesian state-space framework and model selection to determine the effect of density and environmental variables on guanaco population dynamics. Our results show that the population is under density dependent regulation and that it is currently fluctuating around an average carrying capacity of 45,000 guanacos. We also found a significant positive effect of previous winter temperature while sheep density has a strong negative effect on the guanaco population growth. We conclude that there are significant density dependent processes and that climate as well as competition with domestic species have important effects determining the population size of guanacos, with important implications for management and conservation. PMID:25514510
Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In the smart grid paradigm, consumer demands are random and time-dependent, owing to stochastic probabilities. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic-stochastic demand models. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demand data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
Dependence of drivers affects risks associated with compound events
NASA Astrophysics Data System (ADS)
Zscheischler, Jakob; Seneviratne, Sonia I.
2017-04-01
Compound climate extremes are receiving increasing attention because of their disproportionate impacts on humans and ecosystems. Risk assessments, however, generally focus on univariate statistics even when multiple stressors are considered. Concurrent extreme droughts and heatwaves have been observed to cause a suite of extreme impacts on natural and human systems alike. For example, they can substantially affect vegetation health, prompting tree mortality, and thereby facilitating insect outbreaks and fires. In addition, hot droughts have the potential to trigger and intensify fires and can cause severe economic damage. By promoting disease spread, extremely hot and dry conditions also strongly affect human health. We analyse the co-occurrence of dry and hot summers and show that these are strongly correlated for many regions, inducing a much higher frequency of concurrent hot and dry summers than what would be assumed from the independent combination of the univariate statistics. Our results demonstrate how the dependence structure between variables affects the occurrence frequency of multivariate extremes. Assessments based on univariate statistics can thus strongly underestimate risks associated with given extremes, if impacts depend on multiple (dependent) variables. We conclude that a multivariate perspective is necessary in order to appropriately assess changes in climate extremes and their impacts, and to design adaptation strategies.
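To make the dependence argument concrete, the sketch below draws synthetic, correlated summer temperature and dryness anomalies (the correlation value is an arbitrary assumption, not a number from the paper) and compares the empirical frequency of jointly hot and dry summers with the 1% frequency that independence of the two 90th-percentile exceedances would imply.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rho = 0.6                                   # assumed temperature-dryness correlation
temp, dryness = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n).T

hot = temp > np.quantile(temp, 0.9)         # hottest 10% of summers
dry = dryness > np.quantile(dryness, 0.9)   # driest 10% of summers
print(f"joint frequency: {np.mean(hot & dry):.3f}  "
      f"independence would predict: {hot.mean() * dry.mean():.3f}")
```

With a positive correlation the observed co-occurrence frequency is several times the product of the marginal probabilities, which is exactly the underestimation risk described above.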
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozaki, Toshiro, E-mail: ganronbun@amail.plala.or.jp; Seki, Hiroshi; Shiina, Makoto
2009-09-15
The purpose of the present study was to elucidate a method for predicting the intrahepatic arteriovenous shunt rate from computed tomography (CT) images and biochemical data, instead of from arterial perfusion scintigraphy, because adverse exacerbated systemic effects may be induced in cases where a high shunt rate exists. CT and arterial perfusion scintigraphy were performed in patients with liver metastases from gastric or colorectal cancer. Biochemical data and tumor marker levels of 33 enrolled patients were measured. The results were statistically verified by multiple regression analysis. The total metastatic hepatic tumor volume (V_metastasized), residual hepatic parenchyma volume (V_residual; calculated from CT images), and biochemical data were treated as independent variables; the intrahepatic arteriovenous (IHAV) shunt rate (calculated from scintigraphy) was treated as a dependent variable. The IHAV shunt rate was 15.1 ± 11.9%. Based on the correlation matrixes, the best correlation coefficient of 0.84 was established between the IHAV shunt rate and V_metastasized (p < 0.01). In the multiple regression analysis with the IHAV shunt rate as the dependent variable, the coefficient of determination (R²) was 0.75, which was significant at the 0.1% level with two significant independent variables (V_metastasized and V_residual). The standardized regression coefficients (β) of V_metastasized and V_residual were significant at the 0.1 and 5% levels, respectively. Based on this result, we can obtain a predicted value of IHAV shunt rate (p < 0.001) using CT images. When a high shunt rate was predicted, beneficial and consistent clinical monitoring can be initiated in, for example, hepatic arterial infusion chemotherapy.
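The analysis above is an ordinary multiple regression with two volumetric predictors. The sketch below reproduces that design on synthetic data (the numbers are invented, not the study's measurements) and shows how a coefficient of determination and standardized regression coefficients of the kind reported above are obtained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 33                                      # same sample size as the study
V_met = rng.uniform(10, 600, n)             # synthetic metastatic tumor volumes
V_res = rng.uniform(800, 1600, n)           # synthetic residual parenchyma volumes
shunt = 15 + 0.03 * V_met - 0.004 * V_res + rng.normal(0, 4, n)   # synthetic shunt rate (%)

X = np.column_stack([np.ones(n), V_met, V_res])
beta, *_ = np.linalg.lstsq(X, shunt, rcond=None)
resid = shunt - X @ beta
r2 = 1 - resid @ resid / np.sum((shunt - shunt.mean()) ** 2)
std_beta = beta[1:] * X[:, 1:].std(axis=0) / shunt.std()    # standardized (beta) coefficients
print(f"R^2 = {r2:.2f}, standardized betas = {std_beta.round(2)}")
```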
Babinski, Paul J
2016-01-01
This cross-sectional quantitative study was undertaken to determine the extent to which individuals who have differing health care leadership roles perceived the importance of selected leadership competencies in their specific roles based on their experience. A total of 313 participants responded to the health care questionnaire. Principal component analysis identified factor structure and Cronbach α at .96 supported the reliability of the factor analysis. Multivariate analysis of variance tested the 4 health care leadership roles to determine if an effect was present among the competencies. A subsequent analysis of variance test was conducted on the competencies to confirm an effect was present, and a Games-Howell post hoc test followed. These tests indicated that there was a significant difference in rating the perceived importance of specific leadership competencies by the health care leaders in each competency domain. The participants included in this study consisted of the chief executive officer (CEO), director of nursing (DON), operating room director (ORD), and director of radiology (DOR). Based on the Games-Howell post hoc test, a commonality existed between the leaders. The CEOs and DONs often indicated no significant difference in competency perception to one another in relation to the dependent variables, yet indicated a significant difference in competency perception when compared with the ORDs and DORs. Similarly, the ORD and DOR variables often indicated no significant difference in competency perception to one another in relation to the dependent variables, yet indicated a significant difference in competency perception compared with the CEO and DON variables. This study positively indicated that health care leadership's perception of competencies does differ between the various leadership roles.
Global growth and stability of agricultural yield decrease with pollinator dependence
Garibaldi, Lucas A.; Aizen, Marcelo A.; Klein, Alexandra M.; Cunningham, Saul A.; Harder, Lawrence D.
2011-01-01
Human welfare depends on the amount and stability of agricultural production, as determined by crop yield and cultivated area. Yield increases asymptotically with the resources provided by farmers’ inputs and environmentally sensitive ecosystem services. Declining yield growth with increased inputs prompts conversion of more land to cultivation, but at the risk of eroding ecosystem services. To explore the interdependence of agricultural production and its stability on ecosystem services, we present and test a general graphical model, based on Jensen's inequality, of yield–resource relations and consider implications for land conversion. For the case of animal pollination as a resource influencing crop yield, this model predicts that incomplete and variable pollen delivery reduces yield mean and stability (inverse of variability) more for crops with greater dependence on pollinators. Data collected by the Food and Agriculture Organization of the United Nations during 1961–2008 support these predictions. Specifically, crops with greater pollinator dependence had lower mean and stability in relative yield and yield growth, despite global yield increases for most crops. Lower yield growth was compensated by increased land cultivation to enhance production of pollinator-dependent crops. Area stability also decreased with pollinator dependence, as it correlated positively with yield stability among crops. These results reveal that pollen limitation hinders yield growth of pollinator-dependent crops, decreasing temporal stability of global agricultural production, while promoting compensatory land conversion to agriculture. Although we examined crop pollination, our model applies to other ecosystem services for which the benefits to human welfare decelerate as the maximum is approached. PMID:21422295
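The graphical model rests on Jensen's inequality: if yield is a concave (saturating) function of pollen delivery, then incomplete and variable delivery lowers mean yield below the yield at the mean delivery, and more so for crops that depend more on pollinators. The sketch below illustrates that argument with an invented saturating yield curve; the functional form and all parameter values are assumptions for illustration only.

```python
import numpy as np

def relative_yield(pollen, dependence):
    """Invented saturating yield curve: a fraction `dependence` of maximum yield
    requires pollen delivery; the remainder is pollinator-independent."""
    return (1 - dependence) + dependence * (1 - np.exp(-3 * pollen))

rng = np.random.default_rng(0)
pollen = np.clip(rng.normal(0.6, 0.3, 100_000), 0, 1)    # incomplete, variable delivery

for d in (0.05, 0.95):                                    # weakly vs highly dependent crop
    y = relative_yield(pollen, d)
    print(f"dependence={d}: mean yield {y.mean():.3f} "
          f"(yield at mean pollen {relative_yield(pollen.mean(), d):.3f}), "
          f"CV {(y.std() / y.mean()):.3f}")
```

Because the curve is concave, the mean yield under variable delivery falls below the yield at mean delivery (Jensen's inequality), and both the shortfall and the coefficient of variation grow with pollinator dependence, mirroring the lower mean and stability reported above.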
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified against three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty about the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, allowing the possible errors in the calculation of these measures to be quantified in numerical terms. Based on the study, it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
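A minimal sketch of the nonparametric bootstrap step is given below. Resampling whole rows of a table of historical deposit-parameter samples preserves their empirical co-variation, which plays a role similar to the empirical copula mentioned above; the copula construction itself, the cash-flow model and the discount rate are not reproduced here, and `cashflow_model` is a placeholder for the project-specific techno-economic model.

```python
import numpy as np

def npv(cashflows, rate):
    """Net present value of a cash-flow vector (year 0 first)."""
    t = np.arange(len(cashflows))
    return np.sum(cashflows / (1.0 + rate) ** t)

def bootstrap_npv(param_table, cashflow_model, rate, n_boot=2500, seed=0):
    """Nonparametric bootstrap of NPV: resample rows of the deposit-parameter
    samples with replacement (keeping parameters jointly, so their dependence is
    retained) and push each resample through the placeholder cash-flow model."""
    rng = np.random.default_rng(seed)
    n = len(param_table)
    return np.array([
        npv(cashflow_model(param_table[rng.integers(0, n, n)]), rate)
        for _ in range(n_boot)
    ])
```

The resulting NPV distribution can then be summarized by its mean and dispersion, which is the kind of probabilistic value-and-risk measure discussed above.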
Computational implications of activity-dependent neuronal processes
NASA Astrophysics Data System (ADS)
Goldman, Mark Steven
Synapses, the connections between neurons, often fail to transmit a large percentage of the action potentials that they receive. I describe several models of synaptic transmission at a single stochastic synapse with an activity-dependent probability of transmission and demonstrate how synaptic transmission failures may increase the efficiency with which a synapse transmits information. Spike trains in the visual cortex of freely viewing monkeys have positive autocorrelations that are indicative of a redundant representation of the information they contain. I show how a synapse with activity-dependent transmission failures modeled after those occurring in visual cortical synapses can remove this redundancy by transmitting a decorrelated subset of the spike trains it receives. I suggest that redundancy reduction at individual synapses saves synaptic resources while increasing the sensitivity of the postsynaptic neuron to information arriving along many inputs. For a neuron receiving input from many decorrelating synapses, my analysis leads to a prediction of the number of visual inputs to a neuron and the cross-correlations between these inputs and suggests that the time scale of synaptic dynamics observed in sensory areas corresponds to a fundamental time scale for processing sensory information. Systems with activity-dependent changes in their parameters, or plasticity, often display a wide variability in their individual components that belies the stability of their function. Motivated by experiments demonstrating that identified neurons with stereotyped function can have a large variability in the densities of their ion channels, or ionic conductances, I build a conductance-based model of a single neuron. The neuron's firing activity is relatively insensitive to changes in certain combinations of conductances, but markedly sensitive to changes in other combinations. Using a combined modeling and experimental approach, I show that neuromodulators and regulatory processes target sensitive combinations of conductances. I suggest that the variability observed in conductance measurements occurs along insensitive combinations of conductances and could result from homeostatic processes that allow the neuron's conductances to drift without triggering activity-dependent feedback mechanisms. These results together suggest that plastic systems may have a high degree of flexibility and variability in their components without a loss of robustness in their response properties.
Simulation of an enzyme-based glucose sensor
NASA Astrophysics Data System (ADS)
Sha, Xianzheng; Jablecki, Michael; Gough, David A.
2001-09-01
An important biosensor application is the continuous monitoring of blood or tissue fluid glucose concentration in people with diabetes. Our research focuses on the development of a glucose sensor based on potentiostatic oxygen electrodes and immobilized glucose oxidase for long-term application as an implant in tissues. As the sensor signal depends on many design variables, a trial-and-error approach to sensor optimization can be time-consuming. Here, the properties of an implantable glucose sensor are optimized by a systematic computational simulation approach.
ERIC Educational Resources Information Center
Pittman, Jeremy; Wittrock, Virginia; Kulshreshtha, Surendra; Wheaton, Elaine
2011-01-01
With the likelihood of future changes in climate and climate variability, it is important to understand how human systems may be vulnerable. Rural communities in Saskatchewan having agricultural-based economies are particularly dependent on climate and could be among the most vulnerable human systems in Canada. Future changes in climate are likely…
ERIC Educational Resources Information Center
Teo, Timothy; Lee, Chwee Beng; Chai, Ching Sing; Choy, Doris
2009-01-01
The purpose of this study is to examine the factors that influence pre-service teachers' perceived usefulness of an Information and Communication Technology (ICT) course that was conducted using the student-centred learning (SCL) approach. In this study, perceived usefulness was used as the dependent variable and perceived competence, course…
The Development of Hyper-MNP: Hyper-Media Navigational Performance Scale
ERIC Educational Resources Information Center
Firat, Mehmet; Yurdakul, Isil Kabakci
2016-01-01
The present study aimed at developing a scale to evaluate navigational performance as a whole, which is one of the factors influencing learning in hyper media. In line with this purpose, depending on the related literature, an item pool of 15 factors was prepared, and these variables were decreased to 5 based on the views of 38 field experts. In…
Use of a Technology-Enhanced Version of the Good Behavior Game in an Elementary School Setting
ERIC Educational Resources Information Center
Lynne, Shauna; Radley, Keith C.; Dart, Evan H.; Tingstrom, Daniel H.; Barry, Christopher T.; Lum, John D. K.
2017-01-01
The purpose of this study was to investigate the effectiveness of a variation of the Good Behavior Game (GBG) in which teachers used ClassDojo to manage each team's progress. ClassDojo is a computer-based program that enables teachers to award students with points for demonstrating target behaviors. Dependent variables included class-wide…
ERIC Educational Resources Information Center
Guymon, Ronald E.
An innovative classroom-based approach to reading instruction in the context of Spanish instruction was proposed. The effects of this instruction on the pronunciation ability of students were analyzed. The subjects were 30 adult missionary trainees who had no previous exposure to Spanish. The dependent variable was measured using two instruments.…
Safeguarding End-User Military Software
2014-12-04
product lines using compositional symbolic execution [17]. Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse ... feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically ...
The Effects of Age, Years of Experience, and Type of Experience in the Teacher Selection Process
ERIC Educational Resources Information Center
Place, A. William; Vail, David S.
2013-01-01
Paper screening in the pre-selection process of hiring teachers has been an established line of research starting with Young and Allison (1982). Administrators were asked to rate hypothetical candidates based on the information provided by the researcher. The dependent variable in several of these studies (e.g. Young & Fox, 2002; Young & Schmidt,…
Are Some Pre-Cataclysmic Variables also Post-Cataclysmic Variables?
NASA Astrophysics Data System (ADS)
Sarna, M. J.; Marks, P. B.; Smith, R. C.
1995-10-01
We propose an evolutionary scenario in which post-common-envelope binaries (PCEBs) with secondary component masses between 0.8 Msun and 1.2 Msun start semi-detached evolution almost immediately after the common-envelope (CE) phase. These systems detach due to unstable mass transfer when the secondary develops a thick convective envelope. The duration of the detached phase is a few times 10^8 yr, depending on the efficiency of magnetic braking and gravitational radiation. We suggest that some of the systems that have been classified as PCEBs may be in this stage of evolution and hence would be more realistically classified as pre-cataclysmic variables (PreCVs). We also propose an observational test based on measurements of the carbon and oxygen isotopic ratios from the infrared CO bands.
Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Nottorf, Eric W.
1989-01-01
A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E(x) and Poisson's ratio nu(xy) of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement to experimental results obtained for graphite/epoxy laminates.
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
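The sketch below shows the two ingredients described above: a Monte Carlo Filtering pass that ranks inputs by how differently they are distributed in runs satisfying an output criterion versus the remaining runs, and an augmentation of the raw inputs with principal components to soften the independence assumption. The splitting criterion, the Kolmogorov-Smirnov ranking and the number of retained components are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

def monte_carlo_filter(inputs, outputs, criterion):
    """Rank input dimensions by the KS distance between their distributions in
    'behavioral' runs (criterion(outputs) is True) and all other runs."""
    behavioral = criterion(outputs)
    scores = []
    for j in range(inputs.shape[1]):
        res = ks_2samp(inputs[behavioral, j], inputs[~behavioral, j])
        scores.append((j, res.statistic, res.pvalue))
    return sorted(scores, key=lambda s: -s[1])      # largest KS distance first

def with_principal_components(inputs, k=3):
    """Append the first k principal components as derived input variables."""
    X = inputs - inputs.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return np.hstack([inputs, X @ vt[:k].T])

# Example criterion: flag runs whose first output exceeds a tolerance.
# ranked = monte_carlo_filter(with_principal_components(inputs), outputs,
#                             criterion=lambda y: y[:, 0] > 1.0)
```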
Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality.
Erreygers, Guido; Kessels, Roselinde
2017-06-23
We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4.
Human phoneme recognition depending on speech-intrinsic variability.
Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger
2010-11-01
The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Burrows, R. H.; Ratkiewicz, R. E.
2011-02-01
Multi-dimensional Alfvén simple waves in magnetohydrodynamics (MHD) are investigated using Boillat's formalism. For simple wave solutions, all physical variables (the gas density, pressure, fluid velocity, entropy, and magnetic field induction in the MHD case) depend on a single phase function ϕ, which is a function of the space and time variables. The simple wave ansatz requires that the wave normal and the normal speed of the wave front depend only on the phase function ϕ. This leads to an implicit equation for the phase function and a generalization of the concept of a plane wave. We obtain examples of Alfvén simple waves, based on the right eigenvector solutions for the Alfvén mode. The Alfvén mode solutions have six integrals, namely that the entropy, density, magnetic pressure, and the group velocity (the sum of the Alfvén and fluid velocity) are constant throughout the wave. The eigenequations require that the rate of change of the magnetic induction B with ϕ throughout the wave is perpendicular to both the wave normal n and B. Methods to construct simple wave solutions based on specifying either a solution ansatz for n(ϕ) or B(ϕ) are developed.
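For readers who want the constraints in equation form, the display below restates the simple-wave ansatz and the Alfvén-mode integrals listed above, where U collects the physical variables, n is the wave normal, u the fluid velocity and V_A the Alfvén velocity; the SI factor 1/(2μ0) in the magnetic pressure is an assumption about the choice of units.

```latex
% Simple-wave ansatz: every physical variable depends on a single phase function.
\mathbf{U} = \mathbf{U}(\phi), \qquad \mathbf{n} = \mathbf{n}(\phi), \qquad \phi = \phi(\mathbf{x},t).
% Alfven-mode integrals (entropy, density, magnetic pressure, group velocity):
S = \mathrm{const}, \quad \rho = \mathrm{const}, \quad \frac{B^{2}}{2\mu_{0}} = \mathrm{const}, \quad \mathbf{u} + \mathbf{V}_{A} = \mathrm{const}.
% Eigenvector constraints on the variation of the magnetic induction:
\frac{d\mathbf{B}}{d\phi}\cdot\mathbf{n} = 0, \qquad \frac{d\mathbf{B}}{d\phi}\cdot\mathbf{B} = 0.
```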
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Burrows, R.
2009-12-01
Multi-dimensional Alfvén simple waves in magnetohydrodynamics (MHD) are investigated using Boillat's formalism. For simple wave solutions, all physical variables (the gas density, pressure, fluid velocity, entropy, and magnetic field induction in the MHD case) depend on a single phase function ϕ, which is a function of the space and time variables. The simple wave ansatz requires that the wave normal and the normal speed of the wave front depend only on the phase function ϕ. This leads to an implicit equation for the phase function, and a generalisation of the concept of a plane wave. We obtain examples of Alfvén simple waves, based on the right eigenvector solutions for the Alfvén mode. The Alfvén mode solutions have six integrals, namely that the entropy, density, magnetic pressure and the group velocity (the sum of the Alfvén and fluid velocity) are constant throughout the wave. The eigen-equations require that the rate of change of the magnetic induction B with ϕ throughout the wave is perpendicular to both the wave normal n and B. Methods to construct simple wave solutions based on specifying either a solution ansatz for n(ϕ) or B(ϕ) are developed.
NASA Astrophysics Data System (ADS)
Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire
2015-09-01
We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. First, a tool for process control is presented. It shows the overall assay variability, allows changes due to assay adjustments to be followed, and can additionally highlight other, potentially unexpected variations. Second, we define the minimum discriminatory difference/ratio, which helps projects understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3-month periods and followed over time for each assay. Again, assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.
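As a minimal sketch of how such control-compound results could feed a process-control check and a minimum discriminatory ratio, the code below works on log10-transformed assay values. The 3-sigma rule, the log10 scale and the discriminatory-ratio formula (which assumes two independent, normally distributed log measurements) are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def control_flags(values, center=None, sigma=None):
    """Flag control-compound results outside center ± 3 sigma on the log10 scale."""
    x = np.log10(values)
    center = x.mean() if center is None else center
    sigma = x.std(ddof=1) if sigma is None else sigma
    return np.abs(x - center) > 3 * sigma

def min_discriminatory_ratio(sigma_log10, z=1.96):
    """Smallest fold-difference between two single measurements that is
    statistically distinguishable, given the assay SD on the log10 scale."""
    return 10 ** (z * np.sqrt(2) * sigma_log10)
```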
Multivariate analysis in thoracic research.
Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego
2015-03-01
Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since modeling is the best way to represent our knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each should be employed according to the type of variables to be analyzed: dependence, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use.
Silva, Denize Francisca da; Barros, Warley Rocha; Almeida, Maria da Conceição Chagas de; Rêgo, Marco Antônio Vasconcelos
2015-10-01
The aim of this study was to investigate the association between exposure to non-ionizing electromagnetic radiation from mobile phone base stations and psychiatric symptoms. In a cross-sectional study in Salvador, Bahia State, Brazil, 440 individuals were interviewed. Psychiatric complaints and diagnoses were the dependent variables and distance from the individual's residence to the base station was considered the main independent variable. Hierarchical logistic regression analysis was conducted to assess confounding. An association was observed between psychiatric symptoms and residential proximity to the base station and different forms of mobile phone use (making calls with weak signal coverage, keeping the mobile phone close to the body, having two or more chips, and never turning off the phone while sleeping), and with the use of other electronic devices. The study concluded that exposure to electromagnetic radiation from mobile phone base stations and other electronic devices was associated with psychiatric symptoms, independently of gender, schooling, and smoking status. The adoption of precautionary measures to reduce such exposure is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafiq ul Hassan, M; Zhang, G; Moros, E
2016-06-15
Purpose: A simple approach to investigate inter-scanner variability of radiomics features in computed tomography (CT) using a standard ACR phantom. Methods: The standard ACR phantom was scanned on CT scanners from three different manufacturers. Scanning parameters of 120 kVp and 200 mA were used, with a slice thickness of 3.0 mm on two scanners and 3.27 mm on the third scanner. Three spherical regions of interest (ROI) from the water, medium-density and high-density inserts were contoured. Ninety-four radiomics features were extracted using an in-house program. These features include shape (11), intensity (22), GLCM (26), GLZSM (11), RLM (11) and NGTDM (5) features, plus 8 fractal dimension features. To evaluate the inter-scanner variability across the three scanners, a coefficient of variation (COV) was calculated for each feature group. Each group was further classified according to the COV by calculating the percentage of features in each of the following categories: COV less than 2%, between 2 and 10%, and greater than 10%. Results: For all feature groups, a similar trend was observed for the three different inserts. Shape features were the most robust for all scanners, as expected: 70% of the shape features had COV <2%. For the intensity feature group, the percentage of features with COV <2% varied from 9 to 32% across the three scanners. All features in the four groups GLCM, GLZSM, RLM and NGTDM were found to have inter-scanner variability ≥2%. The fractal dimension dependence for the medium- and high-density inserts was similar, while it was different for the water insert. Conclusion: We concluded that even for similar scanning conditions, inter-scanner variability across different scanners was significant. The texture features based on GLCM, GLZSM, RLM and NGTDM are highly scanner dependent. Since the inserts of the ACR phantom are not heterogeneous in HU values, this suggests that matrix-based second-order features are highly affected by variation in noise. Research partly funded by NIH/NCI R01CA190105-01.
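The COV bookkeeping described in the Methods is straightforward to reproduce. The sketch below assumes one value per scanner and feature for a given insert (a hypothetical 3 x 94 array) and bins the resulting coefficients of variation into the <2%, 2-10% and >10% categories used above.

```python
import numpy as np

def cov_categories(features_by_scanner):
    """features_by_scanner: array of shape (n_scanners, n_features) for one insert.
    Returns the per-feature COV (%) and the percentage of features per COV bin."""
    mean = features_by_scanner.mean(axis=0)
    cov = 100.0 * features_by_scanner.std(axis=0, ddof=1) / np.abs(mean)
    bins = {
        "COV < 2%": 100.0 * np.mean(cov < 2),
        "2% <= COV <= 10%": 100.0 * np.mean((cov >= 2) & (cov <= 10)),
        "COV > 10%": 100.0 * np.mean(cov > 10),
    }
    return cov, bins
```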
Novak, Dario; Štefan, Lovro; Prosoli, Rebeka; Emeljanovas, Arunas; Mieziene, Brigita; Milanović, Ivana; Radisavljević-Janić, Snežana
2017-01-01
Little is known about the factors which might influence the adherence to a Mediterranean diet in non-Mediterranean European countries. Thus, the main purpose of this study was to determine the associations between socioeconomic, psychological, and physical factors on a Mediterranean diet. In this cross-sectional study, participants were 14–18-year-old adolescents (N = 3071) from two non-Mediterranean countries: Lithuania (N = 1863) and Serbia (N = 1208). The dependent variable was Mediterranean diet, and was assessed with the Mediterranean Diet Quality Index for children and adolescents questionnaire. Independent variables were gender, body-mass index, self-rated health, socioeconomic status, psychological distress, physical activity, and sedentary behavior. The associations between dependent and independent variables were analyzed by using logistic regression. Results showed that higher adherence to a Mediterranean diet was associated with higher self-rated health, socioeconomic status, and physical activity, yet low adherence to a Mediterranean diet was associated with being female, having higher body-mass index, psychological distress, and sedentary behavior. Our findings suggest that future studies need to explore associations between lifestyle habits—especially in target populations, such as primary and secondary school students. PMID:28241432
Input-independent, Scalable and Fast String Matching on the Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Chavarría-Miranda, Daniel; Maschhoff, Kristyn J
2009-05-25
String searching is at the core of many security and network applications like search engines, intrusion detection systems, virus scanners and spam filters. The growing size of on-line content and the increasing wire speeds push the need for fast, and often real-time, string searching solutions. For these conditions, many software implementations (if not all) targeting conventional cache-based microprocessors do not perform well. They either exhibit overall low performance or exhibit highly variable performance depending on the types of inputs. For this reason, real-time state-of-the-art solutions rely on the use of either custom hardware or Field-Programmable Gate Arrays (FPGAs) at the expense of overall system flexibility and programmability. This paper presents a software-based implementation of the Aho-Corasick string searching algorithm on the Cray XMT multithreaded shared memory machine. Our solution relies on the particular features of the XMT architecture and on several algorithmic strategies: it is fast, scalable and its performance is virtually content-independent. On a 128-processor Cray XMT, it reaches a scanning speed of ≈28 Gbps with a performance variability below 10%. In the 10 Gbps performance range, variability is below 2.5%. By comparison, an Intel dual-socket, 8-core system running at 2.66 GHz achieves a peak performance which varies from 500 Mbps to 10 Gbps depending on the type of input and dictionary size.
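The multithreaded Cray XMT implementation itself is not reproduced here, but the algorithm it parallelizes is standard. The sketch below is a plain, sequential Python version of the Aho-Corasick automaton (trie plus failure links plus a scan), included only to make the underlying algorithm concrete; it says nothing about the XMT-specific data layout or synchronization.

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto, fail and output tables for a list of patterns."""
    goto, output = [{}], [set()]
    for pat in patterns:                        # phase 1: trie of all patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pat)
    fail = [0] * len(goto)                      # phase 2: failure links via BFS
    queue = deque(goto[0].values())
    while queue:
        r = queue.popleft()
        for ch, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            output[s] |= output[fail[s]]        # inherit matches from the fail state
    return goto, fail, output

def scan(text, goto, fail, output):
    """Return (end_position, pattern) for every match of any pattern in text."""
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        hits.extend((i, pat) for pat in output[state])
    return hits

tables = build_automaton(["he", "she", "his", "hers"])
print(scan("ushers", *tables))   # finds "she", "he" and "hers"
```

Because every input character advances the automaton by a bounded amount of work regardless of the dictionary, the scan cost is largely input-independent, which is the property the paper exploits at scale.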
Nelson, Gregory C; Gruca, Thomas S
2017-06-01
States are seeking ways to retain primary care physicians trained within their borders. We analyzed the 5-year retention and rural Iowa location decisions for 1,645 graduates of the Iowa Family Medicine Training Network (IFMTN)-eight residency programs (in seven different cities) that are affiliated with the Carver College of Medicine (University of Iowa). Data from 1977-2014 includes 98.5% of active graduates. Location in Iowa 5 years after graduation was the dependent variable in a binary logistic regression. A second model used rural location in Iowa as the dependent variable. Independent variables included graduation year cohort, IMG status, sex, undergraduate medical training in Iowa, medical degree, and residency location. Undergraduate medical training in Iowa was strongly related to retention. Compared to graduates of the AMC residency, graduates of six of the seven community-based programs were significantly more likely to be practicing in Iowa. While the overall proportion of graduates practicing in rural Iowa was high (47.3%), women and IMGs were significantly less likely to practice in rural areas. Graduates of the Mason City program were significantly more likely to practice in a rural area after graduation. The experience of the IFMTN suggests that educating family physicians in community-based programs contributes significantly to in-state retention even 5 years after graduation. While all programs contribute to the rural FM workforce in Iowa, the residency program located in a rural community (Mason City) has a disproportionately positive impact.
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
Rupture Propagation for Stochastic Fault Models
NASA Astrophysics Data System (ADS)
Favreau, P.; Lavallee, D.; Archuleta, R.
2003-12-01
The inversion of strong motion data of large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate when constrained by the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables, which control the pre-stress spatial variability, condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based respectively on Cauchy and Gaussian random variables. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is slip-dependent. The simulations show that the propagation of the rupture front is more complex, incoherent or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.
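The qualitative difference between the two choices of random variable can be previewed with a toy calculation (purely illustrative; the paper's results come from full 3-D spontaneous-rupture simulations): Cauchy deviates are heavy-tailed and therefore produce many more extreme, asperity-like amplitudes than Gaussian deviates of comparable scale.

```python
# Toy comparison of heavy-tailed Cauchy vs. Gaussian "pre-stress" fields.
import numpy as np

rng = np.random.default_rng(0)
n = 256 * 256                                   # hypothetical fault grid
fields = {"Gauss": rng.normal(0.0, 1.0, n),
          "Cauchy": rng.standard_cauchy(n)}

for name, field in fields.items():
    frac_extreme = np.mean(np.abs(field) > 3.0)   # crude "asperity" fraction
    print(f"{name:6s}  fraction with |x| > 3: {frac_extreme:.4f}")
```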
A computer graphics display and data compression technique
NASA Technical Reports Server (NTRS)
Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)
1974-01-01
The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.
Armato, Samuel G.; Roberts, Rachael Y.; Kocherginsky, Masha; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; McLennan, Geoffrey; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; Caligiuri, Philip; Quint, Leslie E.; Sundaram, Baskaran; Croft, Barbara Y.; Clarke, Laurence P.
2008-01-01
Rationale and Objectives Studies that evaluate the lung-nodule-detection performance of radiologists or computerized methods depend on an initial inventory of the nodules within the thoracic images (the “truth”). The purpose of this study was to analyze (1) variability in the “truth” defined by different combinations of experienced thoracic radiologists and (2) variability in the performance of other experienced thoracic radiologists based on these definitions of “truth” in the context of lung nodule detection on computed tomography (CT) scans. Materials and Methods Twenty-five thoracic CT scans were reviewed by four thoracic radiologists, who independently marked lesions they considered to be nodules ≥ 3 mm in maximum diameter. Panel “truth” sets of nodules then were derived from the nodules marked by different combinations of two and three of these four radiologists. The nodule-detection performance of the other radiologists was evaluated based on these panel “truth” sets. Results The number of “true” nodules in the different panel “truth” sets ranged from 15–89 (mean: 49.8±25.6). The mean radiologist nodule-detection sensitivities across radiologists and panel “truth” sets for different panel “truth” conditions ranged from 51.0–83.2%; mean false-positive rates ranged from 0.33–1.39 per case. Conclusion Substantial variability exists across radiologists in the task of lung nodule identification in CT scans. The definition of “truth” on which lung nodule detection studies are based must be carefully considered, since even experienced thoracic radiologists may not perform well when measured against the “truth” established by other experienced thoracic radiologists. PMID:19064209
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
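A hedged sketch of how such a Sobol' analysis can be set up with the SALib package is given below; the toy linear snow-response function, the error names, and the bounds are placeholders and are not taken from the study.

```python
# Hedged sketch with SALib: Sobol' sensitivity of a toy peak-SWE response to
# hypothetical forcing biases (not the Utah Energy Balance model).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["precip_bias", "temp_bias", "sw_bias"],      # hypothetical error terms
    "bounds": [[-0.5, 0.5], [-2.0, 2.0], [-50.0, 50.0]],
}

def toy_peak_swe(x):
    precip_bias, temp_bias, sw_bias = x
    # crude linear stand-in: more precipitation raises peak SWE, warm/solar biases lower it
    return 500.0 * (1.0 + precip_bias) - 30.0 * temp_bias - 0.5 * sw_bias

X = saltelli.sample(problem, 1024)               # Saltelli sampling of the error space
Y = np.apply_along_axis(toy_peak_swe, 1, X)
Si = sobol.analyze(problem, Y)                   # first-order and total-order indices
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
```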
Simple and Double Alfven Waves: Hamiltonian Aspects
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Hu, Q.; le Roux, J. A.; Dasgupta, B.
2011-12-01
We discuss the nature of simple and double Alfvén waves. Simple waves depend on a single phase variable φ, but double waves depend on two independent phase variables φ1 and φ2. The phase variables depend on the space and time coordinates x and t. Simple and double Alfvén waves have the same integrals, namely, the entropy, density, magnetic pressure, and group velocity (the sum of the Alfvén and fluid velocities) are constant throughout the flow. We present examples of both simple and double Alfvén waves, and discuss Hamiltonian formulations of the waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilar, Kartik; Rua, Armando; Suarez, Sophia N.
A comprehensive variable temperature, pressure and frequency multinuclear (1H, 2H, and 19F) magnetic resonance study was undertaken on selectively deuterated 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)amide (BMIM TFSA) ionic liquid isotopologues. This study builds on our earlier investigation of the effects of increasing alkyl chain length on diffusion and dynamics in imidazolium-based TFSA ionic liquids. Fast field cycling 1H T1 data revealed multiple modes of motion. Through calculation of diffusion coefficient (D) values and activation energies, the low- and high-field regimes were assigned to the translational and reorientational dynamics, respectively. Variable-pressure 2H T1 measurements reveal site-dependent interactions in the cation with strengths in the order MD3 > CD3 > CD2, indicating dissimilarities in the electric field gradients along the alkyl chain, with the CD2 sites having the largest gradient. Additionally, the α saturation effect in T1 vs. P was observed for all three sites, suggesting significant reduction of the short-range rapid reorientational dynamics. This reduction was also deduced from the variable pressure 1H T1 data, which showed an approach to saturation for both the methyl and butyl group terminal methyl sites. Pressure-dependent D measurements show independent motions for both cations and anions, with the cations having greater D values over the entire pressure range.
Addiction treatment dropout: exploring patients' characteristics.
López-Goñi, José J; Fernández-Montalvo, Javier; Arteaga, Alfonso
2012-01-01
This study explored the characteristics associated with treatment dropout in substance dependence patients. A sample of 122 addicted patients (84 treatment completers and 38 treatment dropouts) who sought outpatient treatment was assessed to collect information on sociodemographic, consumption (assessed by EuropASI), psychopathological (assessed by SCL-90-R), and personality variables (assessed by MCMI-II). Completers and dropouts were compared on all studied variables. According to the results, dropouts scored significantly higher on the EuropASI variables measuring employment/support, alcohol consumption, and family/social problems, as well as on the schizotypal scale of the MCMI-II. Because most of the significant differences were found in EuropASI variables, three cluster analyses (with 2, 3, and 4 groups) based on EuropASI mean scores were carried out to determine clinically relevant information predicting dropout. The most relevant results were obtained when four groups were used. Comparisons between the four groups derived from cluster analysis showed statistically significant differences in the rate of dropout, with one group exhibiting the highest dropout rate. The distinctive characteristics of the group with the highest dropout rate included the presence of an increased labor problem combined with high alcohol consumption. Furthermore, this group had the highest scores on three scales of the MCMI-II: phobic, dependent, and schizotypal. The implications of these results for further research and clinical practice are discussed. Copyright © American Academy of Addiction Psychiatry.
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-12-01
Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead the measurement of a variable such as blood glucose may depend on its prior values as well as that of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships as well as multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and the Fourier transform. This enables imputation of missing values even when all data at a time point is missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring) the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values are a significant fraction of the overall time series length. Copyright © 2015 Elsevier Inc. All rights reserved.
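The sketch below is not the authors' FLk-NN code; it only illustrates the lagged k-NN ingredient using scikit-learn's KNNImputer on a series augmented with lagged copies of itself (synthetic data, hypothetical variable name).

```python
# Simplified lagged k-NN imputation sketch: lagged copies of each series let
# temporal neighbours inform the imputation of missing values.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

def lagged_knn_impute(df, lags=(1, 2, 3), k=5):
    feats = [df] + [df.shift(lag).add_suffix(f"_lag{lag}") for lag in lags]
    wide = pd.concat(feats, axis=1)
    filled = KNNImputer(n_neighbors=k).fit_transform(wide)
    return pd.DataFrame(filled[:, : df.shape[1]], index=df.index, columns=df.columns)

# hypothetical demo series with a short gap
t = pd.date_range("2015-01-01", periods=200, freq="5min")
demo = pd.DataFrame(
    {"glucose": 100 + np.random.default_rng(0).normal(size=200).cumsum()}, index=t)
demo.iloc[50:53] = np.nan
print(lagged_knn_impute(demo).iloc[48:55])
```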
Increasing importance of precipitation variability on global livestock grazing lands
NASA Astrophysics Data System (ADS)
Sloat, Lindsey L.; Gerber, James S.; Samberg, Leah H.; Smith, William K.; Herrero, Mario; Ferreira, Laerte G.; Godde, Cécile M.; West, Paul C.
2018-03-01
Pastures and rangelands underpin global meat and milk production and are a critical resource for millions of people dependent on livestock for food security [1,2]. Forage growth, which is highly climate dependent [3,4], is potentially vulnerable to climate change, although precisely where and to what extent remains relatively unexplored. In this study, we assess climate-based threats to global pastures, with a specific focus on changes in within- and between-year precipitation variability (precipitation concentration index (PCI) and coefficient of variation of precipitation (CVP), respectively). Relating global satellite measures of vegetation greenness (such as the Normalized Difference Vegetation Index; NDVI) to key climatic factors reveals that CVP is a significant, yet often overlooked, constraint on vegetation productivity across global pastures. Using independent stocking data, we found that areas with high CVP support lower livestock densities than less-variable regions. Globally, pastures experience about a 25% greater year-to-year precipitation variation (CVP = 0.27) than the average global land surface area (0.21). Over the past century, CVP has generally increased across pasture areas, although both positive (49% of pasture area) and negative (31% of pasture area) trends exist. We identify regions in which livestock grazing is important for local food access and economies, and discuss the potential for pasture intensification in the context of long-term regional trends in precipitation variability.
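Computing the between-year variability measure used here (CVP, the coefficient of variation of annual precipitation) is straightforward; the numbers below are made up for illustration.

```python
# CVP = sample standard deviation / mean of annual precipitation totals.
import numpy as np

annual_precip_mm = np.array([612., 488., 705., 540., 630., 402., 688.,
                             575., 519., 660.])           # hypothetical grid cell
cvp = annual_precip_mm.std(ddof=1) / annual_precip_mm.mean()
print(f"CVP = {cvp:.2f}")   # the study reports ~0.27 for pastures vs ~0.21 for all land
```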
Yousefpour, Rasoul; Temperli, Christian; Bugmann, Harald; Elkin, Che; Hanewinkel, Marc; Meilby, Henrik; Jacobsen, Jette Bredahl; Thorsen, Bo Jellesmark
2013-06-15
We study climate uncertainty and how managers' beliefs about climate change develop and influence their decisions. We develop an approach for updating knowledge and beliefs based on the observation of forest and climate variables and illustrate its application for the adaptive management of an even-aged Norway spruce (Picea abies L. Karst) forest in the Black Forest, Germany. We simulated forest development under a range of climate change scenarios and forest management alternatives. Our analysis used Bayesian updating and Dempster's rule of combination to simulate how observations of climate and forest variables may influence a decision maker's beliefs about climate development and thereby management decisions. While forest managers may be inclined to rely on observed forest variables to infer climate change and impacts, we found that observation of climate state, e.g. temperature or precipitation is superior for updating beliefs and supporting decision-making. However, with little conflict among information sources, the strongest evidence would be offered by a combination of at least two informative variables, e.g., temperature and precipitation. The success of adaptive forest management depends on when managers switch to forward-looking management schemes. Thus, robust climate adaptation policies may depend crucially on a better understanding of what factors influence managers' belief in climate change. Copyright © 2013 Elsevier Ltd. All rights reserved.
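A minimal sketch of the Bayesian-updating idea, not the paper's implementation: beliefs over a few discrete climate scenarios are revised from an observed temperature anomaly assumed Gaussian around each scenario's trend (all numbers hypothetical).

```python
# Bayesian updating of beliefs over discrete climate scenarios from temperature observations.
import numpy as np
from scipy.stats import norm

scenarios = {"no_change": 0.0, "moderate": 0.15, "strong": 0.35}   # hypothetical decadal trends (K)
belief = {s: 1.0 / len(scenarios) for s in scenarios}              # uniform prior

def update(belief, observed_anomaly, decade, sigma=0.3):
    # posterior proportional to prior times Gaussian likelihood around each scenario's trend
    post = {s: belief[s] * norm.pdf(observed_anomaly, loc=trend * decade, scale=sigma)
            for s, trend in scenarios.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

for decade, obs in enumerate([0.2, 0.5, 0.9], start=1):            # hypothetical observations
    belief = update(belief, obs, decade)
    print(decade, {s: round(p, 3) for s, p in belief.items()})
```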
Complexity in relational processing predicts changes in functional brain network dynamics.
Cocchi, Luca; Halford, Graeme S; Zalesky, Andrew; Harding, Ian H; Ramm, Brentyn J; Cutmore, Tim; Shum, David H K; Mattingley, Jason B
2014-09-01
The ability to link variables is critical to many high-order cognitive functions, including reasoning. It has been proposed that limits in relating variables depend critically on relational complexity, defined formally as the number of variables to be related in solving a problem. In humans, the prefrontal cortex is known to be important for reasoning, but recent studies have suggested that such processes are likely to involve widespread functional brain networks. To test this hypothesis, we used functional magnetic resonance imaging and a classic measure of deductive reasoning to examine changes in brain networks as a function of relational complexity. As expected, behavioral performance declined as the number of variables to be related increased. Likewise, increments in relational complexity were associated with proportional enhancements in brain activity and task-based connectivity within and between 2 cognitive control networks: A cingulo-opercular network for maintaining task set, and a fronto-parietal network for implementing trial-by-trial control. Changes in effective connectivity as a function of increased relational complexity suggested a key role for the left dorsolateral prefrontal cortex in integrating and implementing task set in a trial-by-trial manner. Our findings show that limits in relational processing are manifested in the brain as complexity-dependent modulations of large-scale networks. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
MULTIVARIATE ANALYSIS OF DRINKING BEHAVIOUR IN A RURAL POPULATION
Mathrubootham, N.; Bashyam, V.S.P.; Shahjahan
1997-01-01
This study was carried out to find out the drinking pattern in a rural population, using multivariate techniques. 386 current users identified in a community were assessed with regard to their drinking behaviours using a structured interview. For purposes of the study the questions were condensed into 46 meaningful variables. In bivariate analysis, 14 variables, including dependent variables such as dependence, MAST & CAGE (measuring alcoholic status), Q.F. Index and troubled drinking, were found to be significant. Using these variables, multivariate techniques such as ANOVA, correlation, regression analysis and factor analysis were carried out using both SPSS PC+ and an HCL Magnum mainframe computer with the FOCUS package under UNIX. Results revealed that a number of factors such as drinking style, duration of drinking, pattern of abuse, Q.F. Index and various problems influenced drinking, and some of them set up a vicious circle. Factor analysis revealed mainly 3 factors: abuse, dependence and social drinking. Dependence could be divided into low/moderate dependence. The implications and practical applications of these tests are also discussed. PMID:21584077
NASA Astrophysics Data System (ADS)
Poulain, Pierre-Marie; Luther, Douglas S.; Patzert, William C.
1992-11-01
Two techniques have been developed for estimating statistics of inertial oscillations from satellite-tracked drifters. These techniques overcome the difficulties inherent in estimating such statistics from data dependent upon space coordinates that are a function of time. Application of these techniques to tropical surface drifter data collected during the NORPAX, EPOCS, and TOGA programs reveals a latitude-dependent, statistically significant "blue shift" of inertial wave frequency. The latitudinal dependence of the blue shift is similar to predictions based on "global" internal wave spectral models, with a superposition of frequency shifting due to modification of the effective local inertial frequency by the presence of strongly sheared zonal mean currents within 12° of the equator.
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as the generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of estimations and levels of association under different hybrid progressive censoring schemes (HPCSs).
Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.
2009-12-01
Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
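The Gaussian equivalence can be checked numerically in a few lines: for a bivariate autoregressive system, Granger causality computed as the log-ratio of residual variances equals twice the transfer entropy (in nats). The simulation below is an illustrative sketch, not code from the paper.

```python
# Numerical check: for Gaussian AR dynamics, transfer entropy = Granger causality / 2.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):                           # Y drives X with coefficient 0.4
    y[t] = 0.5 * y[t - 1] + rng.normal()
    x[t] = 0.6 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

def resid_var(target, predictors):
    A = np.column_stack(predictors + [np.ones(len(target))])
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ beta)

full = resid_var(x[1:], [x[:-1], y[:-1]])       # regress X on past X and past Y
reduced = resid_var(x[1:], [x[:-1]])            # regress X on past X only
F = np.log(reduced / full)                      # Granger causality (nats)
print(f"F = {F:.4f}, transfer entropy = F/2 = {F / 2:.4f}")
```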
Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P
2017-11-25
Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.
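A decision-tree sketch of the same general flavour is shown below; it is illustrative only, with simulated data and hypothetical item names, and is not the published Personal Support Algorithm.

```python
# Depth-limited regression tree relating hypothetical interRAI-style items to
# weekly hours of billed personal support (simulated data).
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "adl_hierarchy": rng.integers(0, 7, n),       # hypothetical item names and scales
    "iadl_difficulty": rng.integers(0, 7, n),
    "lives_alone": rng.integers(0, 2, n),
})
df["weekly_ps_hours"] = (1.5 * df["adl_hierarchy"] + 0.8 * df["iadl_difficulty"]
                         + 2.0 * df["lives_alone"] + rng.normal(0, 2, n)).clip(0)

features = ["adl_hierarchy", "iadl_difficulty", "lives_alone"]
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200).fit(
    df[features], df["weekly_ps_hours"])
print(export_text(tree, feature_names=features))   # readable split rules for coordinators
```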
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Retest reliability of force-time variables of neck muscles under isometric conditions.
Almosnino, Sivan; Pelland, Lucie; Stevenson, Joan M
2010-01-01
Proper conditioning of the neck muscles may play a role in reducing the risk of neck injury and, possibly, concussions in contact sports. However, the ability to reliably measure the force-time-based variables that might be relevant for this purpose has not been addressed. To assess the between-days reliability of discrete force-time-based variables of neck muscles during maximal voluntary isometric contractions in 5 directions. Cohort study. University research center. Twenty-six highly physically active men (age = 21.6 ± 2.1 years, height = 1.85 ± 0.09 m, mass = 81.6 ± 9.9 kg, head circumference = 0.58 ± 0.01 m, neck circumference = 0.39 ± 0.02 m). We used a custom-built testing apparatus to measure maximal voluntary isometric contractions of the neck muscles in 5 directions (extension, flexion, protraction, left lateral bending, and right lateral bending) on 2 separate occasions separated by 7 to 8 days. Variables measured were peak force (PF), rate of force development (RFD), and time to 50% of PF (T(50)PF). Reliability indices calculated for each variable comprised the difference in scores between the testing sessions, with corresponding 95% confidence intervals, the coefficient of variation of the typical error of measurement (CV(TE)), and intraclass correlation coefficients (ICC [3,3]). No evidence of systematic bias was detected for the dependent measures across any movement direction; retest differences in measurements were between 1.8% and 2.7%, with corresponding 95% confidence interval ranges of less than 10% and overlapping zero. The CV(TE) was lowest for PF (range, 2.4%-6.3%) across all testing directions, followed by RFD (range, 4.8%-9.0%) and T(50)PF (range, 7.1%-9.3%). The ICC score range for all dependent measures was 0.90 to 0.99. Discrete variables representative of the force-generating capacity of neck muscles under isometric conditions can be measured with an acceptable degree of reliability. This finding has possible applications for investigating the role of neck muscle strength-training programs in reducing the risk of injuries in sport settings.
NASA Astrophysics Data System (ADS)
Adirosi, Elisa; Tokay, Ali; Roberto, Nicoletta; Gorgucci, Eugenio; Montopoli, Mario; Baldini, Luca
2017-04-01
Ground-based weather radars are widely used to generate rainfall products for meteorological and hydrological applications. However, weather radar quantitative rainfall estimation is obtained at a certain altitude that depends mainly on the radar elevation angle and on the distance from the radar. Therefore, depending on the vertical variability of rainfall, a time-height ambiguity between the radar measurement and rainfall at the ground can affect the rainfall products. Vertically pointing radars (such as the Micro Rain Radar, MRR) are a great tool to investigate the vertical variability of rainfall and its characteristics and, ultimately, to fill the gap between the ground level and the first available radar elevation. Furthermore, the knowledge of rain Drop Size Distribution (DSD) variability is linked to the well-known problem of non-uniform beam filling, which is one of the main uncertainties of the Global Precipitation Measurement (GPM) mission Dual-frequency Precipitation Radar (DPR). During the GPM Ground Validation Iowa Flood Studies (IFloodS) field experiment, data collected with 2D video disdrometers (2DVD), Autonomous OTT Parsivel2 Units (APU), and MRR profilers at different sites were available. At three different sites, co-located APU, 2DVD and MRR measurements are available and covered by the S-band Dual Polarimetric Doppler radar (NPOL). The first elevation height of the radar beam varies, among the three sites, between 70 m and 1100 m. The IFloodS set-up has been used to compare disdrometer, MRR and NPOL data and to evaluate the uncertainties of those measurements. First, the performance of the disdrometers and MRR in determining different rainfall parameters at the ground has been evaluated, and then the MRR-based parameters have been compared with the ones obtained from NPOL data at the lowest elevations. Furthermore, the vertical variability of the DSD and integral rainfall parameters within the MRR bins (from the ground to 1085 m, every 35 m) has been investigated in order to provide some insight on the variability of the rainfall microphysical characteristics within about 1 km above the ground.
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from different disorders such as Prader-Willi, DiGeorge or Autism, showing the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity as the examples with real data have revealed. PMID:18522760
Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo
2017-11-01
[Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
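The reported single-regression formula can be applied directly; the example value below is only for illustration.

```python
# Direct use of the reported single-regression formula (1RM in kg from kgf).
def estimate_1rm(isometric_kgf: float) -> float:
    """Estimated knee-extensor 1RM from hand-held-dynamometer isometric strength."""
    return 0.714 + 0.783 * isometric_kgf

print(estimate_1rm(40.0))   # e.g. 40 kgf isometric strength -> ~32.0 kg
```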
NASA Astrophysics Data System (ADS)
Koshigai, Masaru; Marui, Atsunao
The water table provides important information for the evaluation of groundwater resources. Recently, estimation of the water table over wide areas has been required for effective evaluation of groundwater resources. However, this evaluation is met with difficulties due to technical and economic constraints. Regression analysis for the prediction of groundwater levels based on geomorphologic and geologic conditions is considered a reliable tool for estimating the water table over wide areas. Data on groundwater levels were extracted from a public database of geotechnical information. It was observed that changes in groundwater level depend on climate conditions. It was also observed and confirmed that groundwater levels vary according to geomorphologic and geologic conditions. The objective variable of the regression analysis was groundwater level, and the explanatory variables were elevation and a dummy variable consisting of the group number. The constructed regression formula was significant according to the determination coefficients and analysis of variance. Therefore, by combining the regression formula and a mesh map, a statistical method to estimate the water table based on geomorphologic and geologic conditions for the whole country could be established.
DOT National Transportation Integrated Search
2015-12-01
We develop an econometric framework for incorporating spatial dependence in integrated model systems of latent variables and multidimensional mixed data outcomes. The framework combines Bhat's Generalized Heterogeneous Data Model (GHDM) with a spat...
Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.
ERIC Educational Resources Information Center
Hayton, William L.
1990-01-01
Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…
Groundwater level responses to precipitation variability in Mediterranean insular aquifers
NASA Astrophysics Data System (ADS)
Lorenzo-Lacruz, Jorge; Garcia, Celso; Morán-Tejeda, Enrique
2017-09-01
Groundwater is one of the largest and most important sources of fresh water in many regions under Mediterranean climate conditions, which are exposed to large precipitation variability that includes frequent meteorological drought episodes, and present high evapotranspiration rates and water demand during the dry season. The dependence on groundwater increases in those areas with predominantly permeable lithologies, contributing to aquifer recharge and the abundance of ephemeral streams. The increasing pressure of tourism on water resources in many Mediterranean coastal areas, and uncertainty related to future precipitation and water availability, make it urgent to understand the spatio-temporal response of groundwater bodies to precipitation variability, if sustainable use of the resource is to be achieved. We present an assessment of the response of aquifers to precipitation variability based on correlations between the Standardized Precipitation Index (SPI) at various time scales and the Standardized Groundwater Index (SGI) across a Mediterranean island. We detected three main responses of aquifers to accumulated precipitation anomalies: (i) at short time scales of the SPI (<6 months); (ii) at medium time scales (6-24 months); and (iii) at long time scales (>24 months). The differing responses were mainly explained by differences in lithology and the percentage of highly permeable rock strata in the aquifer recharge areas. We also identified differences in the months and seasons when aquifer storages are more dependent on precipitation; these were related to climate seasonality and the degree of aquifer exploitation or underground water extraction. The recharge of some aquifers, especially in mountainous areas, is related to precipitation variability within a limited spatial extent, whereas for aquifers located in the plains, precipitation variability influences much larger areas; the topography and geological structure of the island explain these differences. Results indicate large spatial variability in the response of aquifers to precipitation in a very small area, highlighting the importance of having high spatial resolution hydro-climatic databases available to enable full understanding of the effects of climate variability on scarce water resources.
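A simplified sketch of the SPI-versus-SGI correlation analysis is given below; for brevity it replaces the distribution-fitting used in the published indices with plain z-scores of rolling precipitation sums, and the monthly series are simulated rather than observed.

```python
# z-score approximation of SPI at several accumulation scales, correlated with an SGI-like index.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
months = pd.date_range("1980-01", periods=480, freq="MS")
precip = pd.Series(rng.gamma(2.0, 30.0, len(months)), index=months)
# groundwater responding sluggishly to accumulated precipitation, plus noise
gw = precip.rolling(12).sum().fillna(precip.mean() * 12) + rng.normal(0, 60, len(months))

def zscore(s):
    return (s - s.mean()) / s.std(ddof=1)

sgi = zscore(gw)
for window in (3, 6, 12, 24, 36):                 # SPI accumulation scales (months)
    spi_like = zscore(precip.rolling(window).sum())
    print(f"scale {window:>2d} months: r = {spi_like.corr(sgi):.2f}")
```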
Hammerstein system representation of financial volatility processes
NASA Astrophysics Data System (ADS)
Capobianco, E.
2002-05-01
We show new modeling aspects of stock return volatility processes, by first representing them through Hammerstein systems, and by then approximating the observed and transformed dynamics with wavelet-based atomic dictionaries. We thus propose a hybrid statistical methodology for volatility approximation and non-parametric estimation, and aim to use the information embedded in a bank of volatility sources obtained by decomposing the observed signal with multiresolution techniques. Scale-dependent information refers both to market activity inherent to different temporally aggregated trading horizons, and to a variable degree of sparsity in representing the signal. A decomposition of the expansion coefficients in least dependent coordinates is then implemented through Independent Component Analysis. Based on the described steps, the features of volatility can be more effectively detected through global and greedy algorithms.
Constitutive behavior and fracture toughness properties of the F82H ferritic/martensitic steel
NASA Astrophysics Data System (ADS)
Spätig, P.; Odette, G. R.; Donahue, E.; Lucas, G. E.
2000-12-01
A detailed investigation of the constitutive behavior of the International Energy Agency (IEA) program heat of 8 Cr unirradiated F82H ferritic-martensitic steel has been undertaken in the temperature range of 80-723 K. The overall tensile flow stress is decomposed into temperature-dependent and athermal yield stress contributions plus a mildly temperature-dependent strain-hardening component. The fitting forms are based on a phenomenological dislocation mechanics model. This formulation provides a more accurate and physically based representation of the flow stress as a function of the key variables of test temperature, strain and strain rate compared to simple power law treatments. Fracture toughness measurements from small compact tension specimens are also reported and analyzed in terms of a critical stress-critical area local fracture model.
Amphetamine modulates brain signal variability and working memory in younger and older adults.
Garrett, Douglas D; Nagel, Irene E; Preuschhof, Claudia; Burzynska, Agnieszka Z; Marchner, Janina; Wiegert, Steffen; Jungehülsing, Gerhard J; Nyberg, Lars; Villringer, Arno; Li, Shu-Chen; Heekeren, Hauke R; Bäckman, Lars; Lindenberger, Ulman
2015-06-16
Better-performing younger adults typically express greater brain signal variability relative to older, poorer performers. Mechanisms for age and performance-graded differences in brain dynamics have, however, not yet been uncovered. Given the age-related decline of the dopamine (DA) system in normal cognitive aging, DA neuromodulation is one plausible mechanism. Hence, agents that boost systemic DA [such as d-amphetamine (AMPH)] may help to restore deficient signal variability levels. Furthermore, despite the standard practice of counterbalancing drug session order (AMPH first vs. placebo first), it remains understudied how AMPH may interact with practice effects, possibly influencing whether DA up-regulation is functional. We examined the effects of AMPH on functional-MRI-based blood oxygen level-dependent (BOLD) signal variability (SD(BOLD)) in younger and older adults during a working memory task (letter n-back). Older adults expressed lower brain signal variability at placebo, but met or exceeded young adult SD(BOLD) levels in the presence of AMPH. Drug session order greatly moderated change-change relations between AMPH-driven SD(BOLD) and reaction time means (RT(mean)) and SDs (RT(SD)). Older adults who received AMPH in the first session tended to improve in RT(mean) and RT(SD) when SD(BOLD) was boosted on AMPH, whereas younger and older adults who received AMPH in the second session showed either a performance improvement when SD(BOLD) decreased (for RT(mean)) or no effect at all (for RT(SD)). The present findings support the hypothesis that age differences in brain signal variability reflect aging-induced changes in dopaminergic neuromodulation. The observed interactions among AMPH, age, and session order highlight the state- and practice-dependent neurochemical basis of human brain dynamics.
Morphometric abnormalities of the lateral ventricles in methamphetamine-dependent subjects
Jeong, Hyeonseok S.; Lee, Sunho; Yoon, Sujung; Jung, Jiyoung J.; Cho, Han Byul; Kim, Binna N.; Ma, Jiyoung; Ko, Eun; Im, Jooyeon Jamie; Ban, Soonhyun; Renshaw, Perry F.; Lyoo, In Kyoon
2017-01-01
Background The presence of morphometric abnormalities of the lateral ventricles, which can reflect focal or diffuse atrophic changes of nearby brain structures, is not well characterized in methamphetamine dependence. The current study was aimed to examine the size and shape alterations of the lateral ventricles in methamphetamine-dependent subjects. Methods High-resolution brain structural images were obtained from 37 methamphetamine-dependent subjects and 25 demographically matched healthy individuals. Using a combined volumetric and surface-based morphometric approach, the structural variability of the lateral ventricles, with respect to extent and location, was examined. Results Methamphetamine-dependent subjects had an enlarged right lateral ventricle compared with healthy individuals. Morphometric analysis revealed a region-specific pattern of lateral ventricular expansion associated with methamphetamine dependence, which was mainly distributed in the areas adjacent to the ventral striatum, medial prefrontal cortex, and thalamus. Conclusions Patterns of shape decomposition in the lateral ventricles may have relevance to the structural vulnerability of the prefrontal-ventral striatal-thalamic circuit to methamphetamine-induced neurotoxicity. PMID:23769159
Copula-based model for rainfall and El Niño in Banyuwangi, Indonesia
NASA Astrophysics Data System (ADS)
Caraka, R. E.; Supari; Tahmid, M.
2018-04-01
Modelling, describing and measuring the structure of dependences between different random events is at the very heart of statistics. Therefore, a broad variety of dependence concepts has been developed in the past. Most often, practitioners rely only on the linear correlation to describe the degree of dependence between two or more variables, an approach that can lead to quite misleading conclusions as this measure is only capable of capturing linear relationships. Copulas go beyond dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a set of rainfall and El Niño data for Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in rainfall modelling and in capturing El Niño phenomena in Banyuwangi, East Java, Indonesia. It was also found that the SSTs of the Niño 3, Niño 4, and Niño 3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
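One simple way to connect an Archimedean copula to data of this kind is through Kendall's tau: for the Gumbel copula, tau = 1 - 1/theta, so theta can be estimated by inverting that relation. The sketch below uses synthetic data and is not the paper's analysis.

```python
# Gumbel (Archimedean) copula parameter estimated from Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
nino34 = rng.normal(size=120)                                   # hypothetical SST anomalies
rainfall = 150 - 40 * nino34 + rng.normal(scale=30, size=120)   # drier when the index is positive

# pair SST with the rainfall deficit so tau > 0 (the Gumbel copula needs positive dependence)
tau, _ = kendalltau(nino34, -rainfall)
theta = 1.0 / (1.0 - tau)                 # Gumbel copula: tau = 1 - 1/theta
print(f"Kendall tau = {tau:.2f}, Gumbel theta = {theta:.2f}")
```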
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Hong, Siyu
2018-07-01
In this paper, a generalized Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy in inhomogeneities of media described by variable coefficients is investigated, which includes some important nonlinear evolution equations as special cases, for example, the celebrated Korteweg-de Vries equation modeling waves on shallow water surfaces. To be specific, the known AKNS spectral problem and its time evolution equation are first generalized by embedding a finite number of differentiable and time-dependent functions. Starting from the generalized AKNS spectral problem and its generalized time evolution equation, a generalized AKNS hierarchy with variable coefficients is then derived. Furthermore, based on a systematic analysis on the time dependence of related scattering data of the generalized AKNS spectral problem, exact solutions of the generalized AKNS hierarchy are formulated through the inverse scattering transform method. In the case of reflectionless potentials, the obtained exact solutions are reduced to n-soliton solutions. It is graphically shown that the dynamical evolutions of such soliton solutions are influenced by not only the time-dependent coefficients but also the related scattering data in the process of propagations.
A consistent framework for Horton regression statistics that leads to a modified Hack's law
Furey, P.R.; Troutman, B.M.
2008-01-01
A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
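A modified Hack's-law fit of this general form (ln L regressed on ln A and Strahler order) can be sketched with ordinary least squares; the simulated basin data and coefficients below are placeholders, not values from the paper.

```python
# OLS fit of ln L = b0 + b1*ln(A) + b2*omega on simulated basin data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
A = rng.lognormal(mean=4.0, sigma=1.0, size=n)          # drainage areas (km^2), hypothetical
omega = rng.integers(1, 7, n)                           # Strahler orders, hypothetical
lnL = 0.2 + 0.55 * np.log(A) + 0.05 * omega + rng.normal(0, 0.15, n)

X = sm.add_constant(pd.DataFrame({"lnA": np.log(A), "omega": omega}))
fit = sm.OLS(lnL, X).fit()
print(fit.params)                    # a significantly nonzero omega term mirrors the paper's finding
print(fit.pvalues["omega"])
```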
Multifractal Properties of Process Control Variables
NASA Astrophysics Data System (ADS)
Domański, Paweł D.
2017-06-01
A control system is an inevitable element of any industrial installation, and its quality affects overall process performance significantly. Assessing whether a control system needs any improvement requires relevant and constructive measures. There are various methods, such as time-domain based measures, minimum variance, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena. But process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that the signals originating from industrial installations have multifractal properties and such an analysis may extend the standard approach to further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process. It helps to discover internal dependencies and human factors, which are hardly detectable.
NASA Astrophysics Data System (ADS)
Zuhdi, Shaifudin; Saputro, Dewi Retno Sari
2017-03-01
The GWOLR model represents the relationship between a dependent variable that is categorical on an ordinal scale and independent variables whose influence depends on the geographical location of the observation site. Parameter estimation of the GWOLR model by maximum likelihood leads to a system of nonlinear equations whose solution is hard to obtain analytically. Solving it means finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is therefore solved by numerical approximation, in this case the Newton-Raphson method. The purpose of this research is to construct a Newton-Raphson iteration algorithm and a program in the R software to estimate the GWOLR model. The research shows that such a program can be written in R to estimate the parameters of the GWOLR model, built around the "while" command.
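The flavour of such a Newton-Raphson maximum-likelihood iteration, wrapped in a "while" loop, can be sketched for a plain (non-geographically-weighted) binary logit; the GWOLR score and Hessian are considerably more involved, so this is only a stand-in.

```python
# Newton-Raphson maximum-likelihood iteration for a binary logit (stand-in sketch).
import numpy as np

def newton_logit(X, y, tol=1e-8, max_iter=50):
    beta = np.zeros(X.shape[1])
    step = np.full(X.shape[1], np.inf)
    it = 0
    while np.linalg.norm(step) > tol and it < max_iter:
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                          # score vector
        hess = -(X * (p * (1 - p))[:, None]).T @ X    # Hessian of the log-likelihood
        step = np.linalg.solve(hess, -grad)           # Newton step: -H^{-1} g
        beta += step
        it += 1
    return beta

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true = np.array([-0.5, 1.0, -2.0])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ true))).astype(float)
print(newton_logit(X, y))     # estimates should be close to the true coefficients
```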
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
NASA Technical Reports Server (NTRS)
Barrett, C. A.
1985-01-01
Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni-base cast turbine alloys. The U transform (i.e., sin⁻¹[(% A/100)^(1/2)]) was shown to give the best estimate of the dependent variable, y. A complete second-degree equation is described for the "centered" weight chemistries for the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added for a basic 47-term equation. The best reduced equation was determined by the stepwise selection method with essentially 13 terms. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
Variable aperture-based ptychographical iterative engine method
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied to a wide range of scientific research.
A biodynamic feedthrough model based on neuromuscular principles.
Venrooij, Joost; Abbink, David A; Mulder, Mark; van Paassen, Marinus M; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H
2014-07-01
A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and thus involuntary control inputs. BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability), but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model-describing limb dynamics-which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily to increase understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well-documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domains. Results provide strong evidence that the BDFT model and the proposed method of parameter estimation allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability).
Redlberger, S; Fischer, S; Köhler, H; Diller, R; Reinhold, P
2017-11-01
There is a paucity of published data reporting acid-base equilibrium in goats, and no information is available on how acid-base complexity changes when suckling goat kids become ruminants. The aims of this study were to evaluate young healthy goats for age-related changes in serum proteins, metabolites, and electrolytes; differences in results obtained with the Henderson-Hasselbalch equation and with strong ion approaches were also assessed. To assess biological variability and reproducibility, two consecutive long-term studies, each lasting from the 6th to the 56th week of life, were performed in 15 (Study 1) and 10 (Study 2) animals. Blood gas analysis, serum biochemical analysis, and electrophoresis were performed on venous blood, and acid-base information was obtained using the traditional Henderson-Hasselbalch approach, Stewart's strong ion model, and Constable's simplified strong ion model. In all goats within the first 4-5 months, serum concentrations of glucose, L-lactate, and inorganic phosphate decreased significantly, while serum concentrations of total protein, albumin, and gamma globulin increased. Consequently, the nonvolatile weak acid concentrations (Atot(Alb) and Atot(TP)) increased. At the end of this 'adaptation period', i.e. when milk was replaced by purely plant-based food, significantly lower bicarbonate and base excess values were accompanied by a blood pH shift towards acidosis. Electrolytes (Na+, K+, Ca2+, and Cl-), anion gap, strong ion difference, and strong ion gap did not show age-dependent trends. In conclusion, somatic growth and the development of gastro-intestinal fermentation in growing goats act as complex sources of physiological variability in acid-base equilibrium that were not reflected by the Henderson-Hasselbalch equation alone.
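The abstract above compares the Henderson-Hasselbalch equation with strong ion approaches. As a small illustration (not the study's calculations), the sketch below evaluates the classic Henderson-Hasselbalch relation and an apparent strong ion difference for hypothetical venous values; the constants are the conventional textbook ones.

```python
import math

def henderson_hasselbalch_ph(hco3_mmol_l, pco2_mmhg, pka=6.1, s=0.0307):
    """Classic Henderson-Hasselbalch equation for plasma pH.

    s is the CO2 solubility coefficient (mmol/L per mmHg); pKa and s are the
    conventional constants, used here only for illustration."""
    return pka + math.log10(hco3_mmol_l / (s * pco2_mmhg))

def apparent_sid(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference: strong cations minus strong anions (mEq/L)."""
    return (na + k + ca + mg) - (cl + lactate)

# Illustrative (hypothetical) venous values, not data from the study
print(round(henderson_hasselbalch_ph(hco3_mmol_l=26.0, pco2_mmhg=45.0), 2))
print(apparent_sid(na=145, k=4.5, ca=5.0, mg=2.0, cl=110, lactate=1.5))
```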
Kari, Jaana T.; Pehkonen, Jaakko; Hirvensalo, Mirja; Yang, Xiaolin; Hutri-Kähönen, Nina; Raitakari, Olli T.; Tammelin, Tuija H.
2015-01-01
This study examined the relationship between income and physical activity by using three measures to illustrate daily physical activity: the self-reported physical activity index for leisure-time physical activity, pedometer-based total steps for overall daily physical activity, and pedometer-based aerobic steps that reflect continuous steps for more than 10 min at a time. The study population consisted of 753 adults from Finland (mean age 41.7 years; 64% women) who participated in 2011 in the follow-up of the ongoing Young Finns study. Ordinary least squares models were used to evaluate the associations between income and physical activity. The consistency of the results was explored by using register-based income information from Statistics Finland, employing the instrumental variable approach, and dividing the pedometer-based physical activity according to weekdays and weekend days. The results indicated that higher income was associated with higher self-reported physical activity for both genders. The results were robust to the inclusion of the control variables and the use of register-based income information. However, the pedometer-based results were gender-specific and depended on the measurement day (weekday vs. weekend day). In more detail, the association was positive for women and negative or non-existent for men. Among women, income was positively associated with aerobic steps regardless of the measurement day and with total steps measured on the weekend. Among men, income was negatively associated with aerobic steps measured on weekdays. The results indicate that there is an association between income and physical activity, but the association is gender-specific and depends on the measurement type of physical activity. PMID:26317865
Tredennick, Andrew T; Adler, Peter B; Adler, Frederick R
2017-08-01
Theory relating species richness to ecosystem variability typically ignores the potential for environmental variability to promote species coexistence. Failure to account for fluctuation-dependent coexistence may explain deviations from the expected negative diversity-ecosystem variability relationship, and limits our ability to predict the consequences of increases in environmental variability. We use a consumer-resource model to explore how coexistence via the temporal storage effect and relative nonlinearity affects ecosystem variability. We show that a positive, rather than negative, diversity-ecosystem variability relationship is possible when ecosystem function is sampled across a natural gradient in environmental variability and diversity. We also show how fluctuation-dependent coexistence can buffer ecosystem functioning against increasing environmental variability by promoting species richness and portfolio effects. Our work provides a general explanation for variation in observed diversity-ecosystem variability relationships and highlights the importance of conserving regional species pools to help buffer ecosystems against predicted increases in environmental variability. © 2017 John Wiley & Sons Ltd/CNRS.
NASA Astrophysics Data System (ADS)
Alpert, P. A.; Knopf, D. A.
2015-05-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature (T) and relative humidity (RH) at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling rate dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nuclei (IN) all have the same IN surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses physically observable parameters including the total number of droplets (Ntot) and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address whether (i) a time and ISA dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time dependent isothermal frozen fractions exhibiting non-exponential behavior with time can be readily explained by this model considering varying ISA. An apparent cooling rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. In an idealized cloud parcel model applying variability in ISAs for each droplet, the model predicts enhanced immersion freezing temperatures and greater ice crystal production compared to a case when ISAs are uniform in each droplet. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
NASA Astrophysics Data System (ADS)
Alpert, Peter A.; Knopf, Daniel A.
2016-02-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
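Both versions of this abstract describe droplets freezing as a stochastic (Poisson) process whose rate is Jhet(T) times the droplet's INP surface area. The sketch below is an illustrative simulation, with hypothetical Jhet and ISA values, showing why identical ISA gives an exponential isothermal frozen fraction while a lognormal ISA distribution gives the non-exponential behavior discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_fraction(t, j_het, isa_per_droplet):
    """Ensemble-mean frozen fraction for independent droplets, each freezing
    as a Poisson process with rate j_het * ISA (classical nucleation theory)."""
    return 1.0 - np.mean(np.exp(-j_het * np.outer(isa_per_droplet, t)), axis=0)

t = np.linspace(0, 600, 200)               # s, isothermal experiment
j_het = 5e2                                 # cm^-2 s^-1, hypothetical value
n_drops = 1000

uniform_isa = np.full(n_drops, 1e-5)        # cm^2, identical ISA per droplet
variable_isa = rng.lognormal(mean=np.log(1e-5), sigma=1.5, size=n_drops)

f_uniform = frozen_fraction(t, j_het, uniform_isa)    # exponential in time
f_variable = frozen_fraction(t, j_het, variable_isa)  # non-exponential tail
print(f_uniform[-1], f_variable[-1])
```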
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos
2014-05-01
Riverbank erosion affects the river morphology and the local habitat and results in riparian land loss, damage to property and infrastructure, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. A statistical model for evaluating the probability of erosion based on a series of independent local variables and using logistic regression is developed in this work. The main variables affecting erosion are the vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, riverbank slope, riverbed slope, cross-section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by using maximum likelihood estimation. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested (Atkinson et al. 2003). The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. The aim is to determine the probability of erosion along the Koiliaris riverbanks considering a series of independent geomorphological and/or hydrological variables. Data for the riverbank slope and for the river cross-section width are available at ten locations along the river. The riverbank shows indications of erosion at six of the ten locations, while the other four have remained stable. Based on a recent work, measurements for the two independent variables and data regarding bank stability are available at eight different locations along the river. These locations were used as validation points for the proposed statistical model. The results show a very close agreement between the observed erosion indications and the statistical model, as the probability of erosion was accurately predicted at seven out of the eight locations. The next step is to apply the model at more locations along the riverbanks. In November 2013, stakes were inserted at selected locations in order to identify the presence or absence of erosion after the winter period. In April 2014 the presence or absence of erosion will be identified and the model results will be compared to the field data. Our intent is to extend the model by increasing the number of independent variables in order to identify the key factors favouring erosion along the Koiliaris River.
We aim at developing an easy to use statistical tool that will provide a quantified measure of the erosion probability along the riverbanks, which could consequently be used to prevent erosion and flooding events. Atkinson, P. M., German, S. E., Sear, D. A. and Clark, M. J. 2003. Exploring the relations between riverbank erosion and geomorphological controls using geographically weighted logistic regression. Geographical Analysis, 35 (1), 58-82. Luppi, L., Rinaldi, M., Teruggi, L. B., Darby, S. E. and Nardi, L. 2009. Monitoring and numerical modelling of riverbank erosion processes: A case study along the Cecina River (central Italy). Earth Surface Processes and Landforms, 34 (4), 530-546. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
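The record above fits a binary logistic regression for the probability of riverbank erosion from geomorphological and hydrological predictors. The sketch below is a minimal stand-in with two hypothetical predictors (bank slope and cross-section width) and synthetic labels, not the Koiliaris data; it only illustrates the fit-then-predict-probability workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 60
# Hypothetical predictors standing in for the study's variables
bank_slope = rng.uniform(10, 70, n)            # degrees
cross_section_width = rng.uniform(2, 25, n)    # m

logit = 0.08 * bank_slope - 0.15 * cross_section_width - 1.0
erosion = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = erosion observed

X = np.column_stack([bank_slope, cross_section_width])
model = LogisticRegression().fit(X, erosion)

# Predicted probability of erosion at a new (hypothetical) river location
new_site = np.array([[45.0, 8.0]])
print(model.predict_proba(new_site)[0, 1])
```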
ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES
van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.
2014-01-01
SUMMARY In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
Masevicius, Fabio D; Dubin, Arnaldo
2015-02-04
The Stewart approach-the application of basic physical-chemical principles of aqueous solutions to blood-is an appealing method for analyzing acid-base disorders. These principles mainly dictate that pH is determined by three independent variables, which change primarily and independently of one another. In blood plasma in vivo these variables are: (1) the PCO2; (2) the strong ion difference (SID)-the difference between the sums of all the strong (i.e., fully dissociated, chemically nonreacting) cations and all the strong anions; and (3) the nonvolatile weak acids (Atot). Accordingly, the pH and the bicarbonate levels (dependent variables) are only altered when one or more of the independent variables change. Moreover, the source of H(+) is the dissociation of water to maintain electroneutrality when the independent variables are modified. The basic principles of the Stewart approach in blood, however, have been challenged in different ways. First, the presumed independent variables are actually interdependent, as occurs in situations such as: (1) the Hamburger effect (a chloride shift when CO2 is added to venous blood from the tissues); (2) the loss of Donnan equilibrium (a chloride shift from the interstitium to the intravascular compartment to balance the decrease of Atot secondary to capillary leak); and (3) the compensatory response to a primary disturbance in either independent variable. Second, the concept of water dissociation in response to changes in SID is controversial and lacks experimental evidence. In addition, the Stewart approach is not better than the conventional method for understanding acid-base disorders such as hyperchloremic metabolic acidosis secondary to a chloride-rich fluid load. Finally, several attempts were made to demonstrate the clinical superiority of the Stewart approach. These studies, however, have severe methodological drawbacks. In contrast, the largest study on this issue indicated the interchangeability of the Stewart and conventional methods. Although the introduction of the Stewart approach offered new insight into acid-base physiology, the method has not significantly improved our ability to understand, diagnose, and treat acid-base alterations in critically ill patients.
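In the Stewart framework summarized above, pH and bicarbonate are dependent variables determined by SID, PCO2 and Atot. The sketch below solves a deliberately simplified plasma electroneutrality balance (carbonate, OH- and H+ terms neglected; all constants illustrative rather than taken from this article) to show how lowering SID alone shifts the computed pH toward acidosis.

```python
from scipy.optimize import brentq

def stewart_ph(sid_meq_l, pco2_mmhg, atot_mmol_l,
               pka_co2=6.1, s=0.0307, ka_weak=10 ** -6.8):
    """Solve a simplified plasma electroneutrality balance for pH.

    SID, PCO2 and Atot are treated as independent variables and pH (hence
    bicarbonate) as dependent. Carbonate, OH- and H+ terms are neglected and
    all constants are illustrative, not values from the article."""
    def residual(ph):
        h = 10 ** -ph
        hco3 = s * pco2_mmhg * 10 ** (ph - pka_co2)        # mmol/L
        a_minus = atot_mmol_l * ka_weak / (ka_weak + h)     # dissociated weak acid
        return sid_meq_l - hco3 - a_minus
    return brentq(residual, 6.0, 8.0)

# Lowering SID at constant PCO2 and Atot moves pH toward acidosis
print(round(stewart_ph(40.0, 40.0, 17.0), 2))
print(round(stewart_ph(30.0, 40.0, 17.0), 2))
```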
ERIC Educational Resources Information Center
Güzeller, Cem Oktay; Eser, Mehmet Taha; Aksu, Gökhan
2016-01-01
This study attempts to determine the factors affecting the mathematics achievement of students in Turkey based on data from the Programme for International Student Assessment 2012 and the correct classification ratio of the established model. The study used mathematics achievement as a dependent variable while sex, having a study room, preparation…
D2 dopamine receptor gene and behavioral characteristics in nicotine dependence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, E.P.; Fitch, R.J.; Syndulko, K.
1994-09-01
The D2 dopamine receptor (DRD2) A1 allele has recently been associated with nicotine dependence. In the present study, TaqI A alleles (the minor A1 and the major A2 allele) of the DRD2 were determined in medically-ill subjects. The sample was composed of 41 non-smokers (N), 69 ex-smokers (X) and 63 active smokers (A). The relationships of DRD2 alleles to personality (Eysenck's Addictive Personality [AP]), depression and nicotine dependence (Fagerstroem) scores were ascertained. A significant (P = 0.002) group effect prevailed in the AP scores, with the A group having the highest scores. Moreover, a significant (P = 0.025) allele by group interaction was found, with A1 allelic subjects in group A showing the highest AP scores. Significant group effects were also found in both the depression (P = 0.0004) and the nicotine dependence (P = 0.0003) scores, with the A group again showing the highest scores. However, in contrast to the AP scores, no significant allele by group interaction was found in either the depression or the nicotine dependence scores. In conclusion, the present findings suggest a role for the DRD2 gene in the personality of smokers. However, a relationship of the DRD2 gene to the degree of depression or nicotine dependence was not found. The data indicate the importance of using behavioral and genetic variables in dissecting the complex set of variables associated with the smoking habit, and thus in achieving a better understanding of the biobehavioral bases of this addiction.
DENSITY-DEPENDENT FLOW IN ONE-DIMENSIONAL VARIABLY-SATURATED MEDIA
A one-dimensional finite element is developed to simulate density-dependent flow of saltwater in variably saturated media. The flow and solute equations were solved in a coupled mode (iterative), in a partially coupled mode (non-iterative), and in a completely decoupled mode. P...
NASA Astrophysics Data System (ADS)
Wei, Jun; Jiang, Guo-Qing; Liu, Xin
2017-09-01
This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and oceans in the future time. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for the simulation of typhoon to assess their effectiveness, and the results show significant improvement in simulated storm intensities by including ocean cooling feedback. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtained a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure, in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in prognosing the typhoon-induced ocean cooling and in predicting typhoon intensity than those algorithms based on linear regression methods.
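The ML-based algorithm above is described only as a neural network mapping atmospheric and oceanic variables to the typhoon-induced SST cooling. The sketch below is a generic stand-in: a small multilayer perceptron trained on synthetic data with hypothetical predictors (wind speed, translation speed, mixed-layer depth), not the authors' network architecture or training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 2000
# Hypothetical predictors: 10-m wind speed, storm translation speed, mixed-layer depth
wind = rng.uniform(15, 60, n)          # m/s
trans_speed = rng.uniform(1, 10, n)    # m/s
mld = rng.uniform(10, 120, n)          # m

# Synthetic, nonlinear "SST cooling" target (deg C); stands in for model/obs data
cooling = 0.002 * wind**2 / (0.5 + 0.1 * trans_speed) * np.exp(-mld / 80.0)
cooling += rng.normal(scale=0.2, size=n)

X = np.column_stack([wind, trans_speed, mld])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                                   random_state=0))
model.fit(X, cooling)
print(model.predict([[45.0, 3.0, 40.0]]))  # predicted cooling for a hypothetical storm
```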
Leathers, Marvin L; Olson, Carl R
2017-04-01
Neurons in the lateral intraparietal (LIP) area of macaque monkey parietal cortex respond to cues predicting rewards and penalties of variable size in a manner that depends on the motivational salience of the predicted outcome (strong for both large reward and large penalty) rather than on its value (positive for large reward and negative for large penalty). This finding suggests that LIP mediates the capture of attention by salient events and does not encode value in the service of value-based decision making. It leaves open the question whether neurons elsewhere in the brain encode value in the identical task. To resolve this issue, we recorded neuronal activity in the amygdala in the context of the task employed in the LIP study. We found that responses to reward-predicting cues were similar between areas, with the majority of reward-sensitive neurons responding more strongly to cues that predicted large reward than to those that predicted small reward. Responses to penalty-predicting cues were, however, markedly different. In the amygdala, unlike LIP, few neurons were sensitive to penalty size, few penalty-sensitive neurons favored large over small penalty, and the dependence of firing rate on penalty size was negatively correlated with its dependence on reward size. These results indicate that amygdala neurons encoded cue value under circumstances in which LIP neurons exhibited sensitivity to motivational salience. However, the representation of negative value, as reflected in sensitivity to penalty size, was weaker than the representation of positive value, as reflected in sensitivity to reward size. NEW & NOTEWORTHY This is the first study to characterize amygdala neuronal responses to cues predicting rewards and penalties of variable size in monkeys making value-based choices. Manipulating reward and penalty size allowed distinguishing activity dependent on motivational salience from activity dependent on value. This approach revealed in a previous study that neurons of the lateral intraparietal (LIP) area encode motivational salience. Here, it reveals that amygdala neurons encode value. The results establish a sharp functional distinction between the two areas. Copyright © 2017 the American Physiological Society.
A Preliminary Investigation of the Predictors of Tanning Dependence
Heckman, Carolyn J.; Egleston, Brian L.; Wilson, Diane B.; Ingersoll, Karen S.
2014-01-01
Objectives To investigate possible predictors of tanning dependence including demographic variables, exposure and protective behaviors, and other health-related behaviors. Methods This study consisted of an online survey of 400 students and other volunteers from a university community. Results Twenty-seven percent of the sample was classified as tanning dependent. Tanning dependence was predicted by ethnicity and skin type, indoor and outdoor tanning and burning, and lower skin protective behaviors, as well as smoking and body mass index. Conclusions Young adults are at risk for tanning dependence, which can be predicted by specific demographic and behavioral variables. PMID:18241130
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade
2015-12-01
The main aim of this research work is to build a new practical hybrid regression model to predict milling tool wear in a regular cut as well as the entry cut and exit cut of a milling tool. The model is based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). The optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. The experimental dataset represents runs on a milling machine under various operating conditions, in which data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on milling tool flank wear, with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a coefficient of determination of 0.95 was obtained. The agreement of this model with experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
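The abstract above couples particle swarm optimization with SVM regression to tune kernel hyperparameters. The sketch below is a minimal, self-contained stand-in: a small particle swarm searches (log10 C, log10 gamma) for an RBF-kernel SVR by cross-validated R^2 on synthetic "tool wear" data; swarm sizes, bounds and the wear model are all hypothetical.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 300
# Hypothetical inputs standing in for cutting time, depth of cut and feed
time_min = rng.uniform(0, 30, n)
depth = rng.uniform(0.5, 2.0, n)
feed = rng.uniform(0.1, 0.6, n)
flank_wear = 0.02 * time_min * depth + 0.3 * feed + rng.normal(scale=0.02, size=n)
X = np.column_stack([time_min, depth, feed])

def cv_score(log_c, log_gamma):
    """5-fold R^2 for an RBF-kernel SVR at the given (log10 C, log10 gamma)."""
    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10 ** log_c, gamma=10 ** log_gamma))
    return cross_val_score(model, X, flank_wear, cv=5, scoring="r2").mean()

# Minimal particle swarm over (log10 C, log10 gamma)
n_particles, n_iter = 12, 15
lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cv_score(*p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([cv_score(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best log10(C), log10(gamma):", gbest, "R^2:", pbest_val.max())
```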
Perception of Organic Food Consumption in Romania.
Petrescu, Anca Gabriela; Oncioiu, Ionica; Petrescu, Marius
2017-05-30
This study provides insight into the attitude of Romanian consumers towards organic food. Furthermore, it examines the sustainable food production system in Romania from the perspective of consumer behavior. This study used a mathematical model of linear regression with the main purpose being to determine the best prediction for the dependent variable when given a number of new values for the independent variable. This empirical research is based on a survey with a sample of 672 consumers, which uses a questionnaire to analyze their intentions towards sustainable food products. The results indicate that a more positive attitude of consumers towards organic food products will further strengthen their purchasing intentions, while the status of the consumption of organic consumers will not affect their willingness to purchase organic food products. Statistics have shown that sustainable food consumption is beneficial for health, so it can also become a profitable business in Romania. Furthermore, food sustainability in Romania depends on the ability of an organic food business to adapt to the new requirements of green consumption.
Variability of adjacency effects in sky reflectance measurements.
Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M
2017-09-01
Sky reflectance Rsky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in Rsky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous Rsky(λ) observations that were recorded in two azimuth directions. Adjacency effects contributed to the dependence of Rsky(λ) on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. The reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and by 21.0±19.8% for Rayleigh- and aerosol-scattered Rsky(λ) in the NIR. The conditions under which in situ water reflectance observations require dedicated correction for adjacency effects are discussed. We provide an open source implementation of our method to aid identification of such conditions.
Functional Freedom: A Psychological Model of Freedom in Decision-Making
Lau, Stephan; Hiemisch, Anette
2017-01-01
The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one’s own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for psychological benefits of functional decision freedom, and explicate the model’s implications on current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, permitting an additional psychological foothold in research on freedom, and that is compatible with a deterministic worldview. PMID:28678165
Optimization of Turbine Blade Design for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Shyy, Wei
1998-01-01
To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations for flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performance of two different design functions for radial-basis networks is compared: one based on the accuracy requirement and the other based on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the number of input data, the radial-basis network requires fewer iterations than the back-propagation network.
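The report above compares radial-basis networks with polynomial response surfaces as surrogates for aerodynamic data. The sketch below reproduces that comparison in miniature on a synthetic one-variable response: a hand-rolled Gaussian radial-basis surrogate versus a cubic polynomial fit; the centers, widths and test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic "aerodynamic response" vs a single design variable (illustrative only)
x_train = np.linspace(-1, 1, 25)
y_train = np.sin(3 * x_train) + 0.1 * x_train**2 + rng.normal(scale=0.02, size=25)

def rbf_fit(x, y, centers, width):
    """Least-squares weights for a Gaussian radial-basis surrogate."""
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rbf_predict(x, centers, width, w):
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    return phi @ w

centers, width = np.linspace(-1, 1, 10), 0.3
w = rbf_fit(x_train, y_train, centers, width)
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)  # response surface

x_test = np.linspace(-1, 1, 200)
y_true = np.sin(3 * x_test) + 0.1 * x_test**2
rbf_err = np.sqrt(np.mean((rbf_predict(x_test, centers, width, w) - y_true) ** 2))
poly_err = np.sqrt(np.mean((poly(x_test) - y_true) ** 2))
print(f"RBF RMSE: {rbf_err:.4f}  cubic polynomial RMSE: {poly_err:.4f}")
```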
Therrien, Amanda S; Wolpert, Daniel M; Bastian, Amy J
2016-01-01
Reinforcement and error-based processes are essential for motor learning, with the cerebellum thought to be required only for the error-based mechanism. Here we examined learning and retention of a reaching skill under both processes. Control subjects learned similarly from reinforcement and error-based feedback, but showed much better retention under reinforcement. To apply reinforcement to cerebellar patients, we developed a closed-loop reinforcement schedule in which task difficulty was controlled based on recent performance. This schedule produced substantial learning in cerebellar patients and controls. Cerebellar patients varied in their learning under reinforcement but fully retained what was learned. In contrast, they showed complete lack of retention in error-based learning. We developed a mechanistic model of the reinforcement task and found that learning depended on a balance between exploration variability and motor noise. While the cerebellar and control groups had similar exploration variability, the patients had greater motor noise and hence learned less. Our results suggest that cerebellar damage indirectly impairs reinforcement learning by increasing motor noise, but does not interfere with the reinforcement mechanism itself. Therefore, reinforcement can be used to learn and retain novel skills, but optimal reinforcement learning requires a balance between exploration variability and motor noise. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele
2016-06-20
Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these over time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.
Lee, Kyu Ha; Tadesse, Mahlet G; Baccarelli, Andrea A; Schwartz, Joel; Coull, Brent A
2017-03-01
The analysis of multiple outcomes is becoming increasingly common in modern biomedical studies. It is well-known that joint statistical models for multiple outcomes are more flexible and more powerful than fitting a separate model for each outcome; they yield more powerful tests of exposure or treatment effects by taking into account the dependence among outcomes and pooling evidence across outcomes. It is, however, unlikely that all outcomes are related to the same subset of covariates. Therefore, there is interest in identifying exposures or treatments associated with particular outcomes, which we term outcome-specific variable selection. In this work, we propose a variable selection approach for multivariate normal responses that incorporates not only information on the mean model, but also information on the variance-covariance structure of the outcomes. The approach effectively leverages evidence from all correlated outcomes to estimate the effect of a particular covariate on a given outcome. To implement this strategy, we develop a Bayesian method that builds a multivariate prior for the variable selection indicators based on the variance-covariance of the outcomes. We show via simulation that the proposed variable selection strategy can boost power to detect subtle effects without increasing the probability of false discoveries. We apply the approach to the Normative Aging Study (NAS) epigenetic data and identify a subset of five genes in the asthma pathway for which gene-specific DNA methylations are associated with exposures to either black carbon, a marker of traffic pollution, or sulfate, a marker of particles generated by power plants. © 2016, The International Biometric Society.
Family burden in opioid dependence syndrome in tertiary care centre.
Shyangwa, P M; Tripathi, B M; Lal, R
2008-01-01
This is a cross-sectional, hospital-based study conducted at the De-Addiction Centre of the Department of Psychiatry, AIIMS, New Delhi, India. Patients and their spouses fulfilling the inclusion criteria were enrolled in the study after giving informed consent. A diagnosis of Opioid Dependence Syndrome (ODS) was made based on ICD-10 criteria, and the severity of ODS was assessed with the Addiction Severity Index (Hindi version). Subsequently, the family burden perceived by spouses was assessed using the Family Burden Interview Schedule (FBIS). Most of the subjects were from urban or semi-urban areas, mostly from around the service facility. Most subjects were in the 31-40 year age group, and the majority had below high-school-level education. Both subjective and objective family burden was perceived as "severe" by the subjects' spouses. Spouses' perceived burden was not correlated with socio-demographic variables, including duration of substance abuse. Hence, it was found that opioid-dependent subjects cause a considerable amount of distress to their care providers.
Changing crops in response to climate: virtual Nang Rong, Thailand in an agent based simulation
Malanson, George P.; Verdery, Ashton M.; Walsh, Stephen J.; Sawangdee, Yothin; Heumann, Benjamin W.; McDaniel, Philip M.; Frizzelle, Brian G.; Williams, Nathalie E.; Yao, Xiaozheng; Entwisle, Barbara; Rindfuss, Ronald R.
2014-01-01
The effects of extended climatic variability on agricultural land use were explored for the type of system found in villages of northeastern Thailand. An agent based model developed for the Nang Rong district was used to simulate land allotted to jasmine rice, heavy rice, cassava, and sugar cane. The land use choices in the model depended on likely economic outcomes, but included elements of bounded rationality in dependence on household demography. The socioeconomic dynamics are endogenous in the system, and climate changes were added as exogenous drivers. Villages changed their agricultural effort in many different ways. Most villages reduced the amount of land under cultivation, primarily with reduction in jasmine rice, but others did not. The variation in responses to climate change indicates potential sensitivity to initial conditions and path dependence for this type of system. The differences between our virtual villages and the real villages of the region indicate effects of bounded rationality and limits on model applications. PMID:25061240
Photometric variability in earthshine observations.
Langford, Sally V; Wyithe, J Stuart B; Turner, Edwin L
2009-04-01
The identification of an extrasolar planet as Earth-like will depend on the detection of atmospheric signatures or surface non-uniformities. In this paper we present spatially unresolved flux light curves of Earth for the purpose of studying a prototype extrasolar terrestrial planet. Our monitoring of the photometric variability of earthshine revealed changes of up to 23% per hour in the brightness of Earth's scattered light at around 600 nm, due to the removal of specular reflection from the view of the Moon. This variability is accompanied by reddening of the spectrum and results from a change in surface properties across the continental boundary between the Indian Ocean and Africa's east coast. Our results based on earthshine monitoring indicate that specular reflection should provide a useful tool in determining the presence of liquid water on extrasolar planets via photometric observations.
Socioeconomic Status and Health: A New Approach to the Measurement of Bivariate Inequality
Kessels, Roselinde
2017-01-01
We suggest an alternative way to construct a family of indices of socioeconomic inequality of health. Our indices belong to the broad category of linear indices. In contrast to rank-dependent indices, which are defined in terms of the ranks of the socioeconomic variable and the levels of the health variable, our indices are based on the levels of both the socioeconomic and the health variable. We also indicate how the indices can be modified in order to introduce sensitivity to inequality in the socioeconomic distribution and to inequality in the health distribution. As an empirical illustration, we make a comparative study of the relation between income and well-being in 16 European countries using data from the Survey of Health, Ageing and Retirement in Europe (SHARE) Wave 4. PMID:28644405
Zhang, Kai; Li, Yun; Schwartz, Joel D.; O'Neill, Marie S.
2014-01-01
Hot weather increases the risk of mortality. Previous studies used different sets of weather variables to characterize heat stress, resulting in variation in heat-mortality associations depending on the metric used. We employed a statistical learning method – random forests – to examine which of various weather variables had the greatest impact on heat-related mortality. We compiled a summertime daily weather and mortality counts dataset from four U.S. cities (Chicago, IL; Detroit, MI; Philadelphia, PA; and Phoenix, AZ) from 1998 to 2006. A variety of weather variables were ranked in predicting deviation from typical daily all-cause and cause-specific death counts. Ranks of weather variables varied with city and health outcome. Apparent temperature appeared to be the most important predictor of heat-related all-cause mortality. Absolute humidity was, on average, most frequently selected as one of the top variables for all-cause mortality and seven cause-specific mortality categories. Our analysis affirms that apparent temperature is a reasonable variable for activating heat alerts and warnings, which are commonly based on predictions of total mortality in the next few days. Additionally, absolute humidity should be included in future heat-health studies. Finally, random forests can be used to guide the choice of weather variables in heat epidemiology studies. PMID:24834832
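The study above ranks weather variables with random forests by their importance for predicting deviations in daily death counts. The sketch below shows the mechanics on synthetic data: hypothetical correlated weather variables drive a synthetic excess-mortality signal, and the forest's impurity-based feature importances are printed in ranked order. The variable definitions and the humidity proxy are assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(13)
n = 1500
# Hypothetical summertime weather variables (stand-ins for the study's predictors)
air_temp = rng.normal(28, 5, n)                 # deg C
dew_point = air_temp - rng.uniform(2, 15, n)    # deg C
abs_humidity = 6.1 * np.exp(0.06 * dew_point)   # crude proxy, g/m^3
apparent_temp = air_temp + 0.3 * (abs_humidity - 10)

# Synthetic deviation from typical daily deaths, driven mostly by apparent temperature
excess_deaths = np.maximum(apparent_temp - 32, 0) * 1.5 + rng.normal(scale=2, size=n)

X = np.column_stack([air_temp, dew_point, abs_humidity, apparent_temp])
names = ["air_temp", "dew_point", "abs_humidity", "apparent_temp"]

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, excess_deaths)
for name, imp in sorted(zip(names, forest.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```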
Zilcha-Mano, Sigal; Keefe, John R; Chui, Harold; Rubin, Avinadav; Barrett, Marna S; Barber, Jacques P
2016-12-01
Premature discontinuation of therapy is a widespread problem that hampers the delivery of mental health treatment. A high degree of variability has been found among rates of premature treatment discontinuation, suggesting that rates may differ depending on potential moderators. In the current study, our aim was to identify demographic and interpersonal variables that moderate the association between treatment assignment and dropout. Data from a randomized controlled trial conducted from November 2001 through June 2007 (N = 156) comparing supportive-expressive therapy, antidepressant medication, and placebo for the treatment of depression (based on DSM-IV criteria) were used. Twenty prerandomization variables were chosen based on previous literature. These variables were subjected to exploratory bootstrapped variable selection and included in the logistic regression models if they passed variable selection. Three variables were found to moderate the association between treatment assignment and dropout: age, pretreatment therapeutic alliance expectations, and the presence of vindictive tendencies in interpersonal relationships. When patients were divided into those randomly assigned to their optimal treatment and those assigned to their least optimal treatment, dropout rates in the optimal treatment group (24.4%) were significantly lower than those in the least optimal treatment group (47.4%; P = .03). Present findings suggest that a patient's age and pretreatment interpersonal characteristics predict the association between common depression treatments and dropout rate. If validated by further studies, these characteristics can assist in reducing dropout through targeted treatment assignment. Secondary analysis of data from ClinicalTrials.gov identifier: NCT00043550. © Copyright 2016 Physicians Postgraduate Press, Inc.
Problems Identifying Independent and Dependent Variables
ERIC Educational Resources Information Center
Leatham, Keith R.
2012-01-01
This paper discusses one step from the scientific method--that of identifying independent and dependent variables--from both scientific and mathematical perspectives. It begins by analyzing an episode from a middle school mathematics classroom that illustrates the need for students and teachers alike to develop a robust understanding of…
Some Methodological Considerations in Researching the Family Career.
ERIC Educational Resources Information Center
White, James
Methodological issues which confront researchers using the concept of the family career include the selection of appropriate dependent variables; the efficacy of historical versus immediate effects; and scaling the family career (a proposed replacement for the "family life cycle"). The issue of which dependent variables should be…
Variable Order and Distributed Order Fractional Operators
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
2002-01-01
Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be considered as a variable. This paper develops the concept of variable and distributed order fractional operators. Definitions based on the Riemann-Liouville definitions are introduced and the behavior of the operators is studied. Several time domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced. For each of these definitions various characteristics are determined. These include: time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) for memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order dynamic behavior are investigated and applications are shown. The concept of distributed order operators, where the order of the time-based operator depends on an additional independent (spatial) variable, is also forwarded. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples are shown. Finally, operators of multivariable and distributed order are defined and their various applications are outlined.
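As one illustration of the kind of definition being compared (a standard form consistent with the description above, not an equation quoted from the paper), a variable-order Riemann-Liouville fractional integral can be written as

\[
{}_{0}D_t^{-q(\cdot)} f(t) \;=\; \int_0^t \frac{(t-\tau)^{\,q(\cdot)-1}}{\Gamma\!\big(q(\cdot)\big)}\, f(\tau)\, d\tau ,
\]

where the order argument q(·) may be evaluated at t, at τ, or at t−τ; the three choices yield operators that weight the history of the order differently and therefore have different memory behavior.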
Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia
Wiley, Jeffrey B.
2008-01-01
Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed by ordinary least-squares regression, with statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.
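A minimal sketch of the regional-regression step, fitting one low-flow statistic to basin characteristics by ordinary least squares; the log-linear form, file name, and variable names are assumptions for illustration rather than the report's published equations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical table of gaging stations for one region, with a 7-day,
# 10-year low-flow statistic and candidate basin characteristics.
stations = pd.read_csv("north_region_stations.csv")    # placeholder file

y = np.log10(stations["q7_10"])
X = sm.add_constant(np.log10(stations[["drainage_area", "precip"]]))

fit = sm.OLS(y, X).fit()
print(fit.params)       # coefficients of the regional equation
print(fit.rsquared)
```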
A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil
Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo
2014-01-01
This article discusses the dynamics of a diameter distribution in stands of black wattle throughout its growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a diameter-distribution model whose coefficients are related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function are dependent on rainfall. Such a function can have important applications, such as simulating growth potential in regions where historical growth data are lacking, as well as simulating stand behavior under different environmental conditions. The model can also be used to project growth in diameter, based on the rainfall affecting the forest over a certain time period. PMID:24932909
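A brief sketch of the two steps combined above: fit a Weibull density to the diameters measured on each plot, then relate the fitted shape and scale estimators to accumulated rainfall. The data file, column names, and the simple linear relation are illustrative assumptions.

```python
import pandas as pd
from scipy import stats

trees = pd.read_csv("black_wattle_trees.csv")    # placeholder: one row per tree
fits = []
for (plot_id, rainfall), grp in trees.groupby(["plot", "rainfall"]):
    # Two-parameter Weibull fit to the plot's diameter distribution.
    shape, loc, scale = stats.weibull_min.fit(grp["dbh"], floc=0)
    fits.append({"rainfall": rainfall, "shape": shape, "scale": scale})

fits = pd.DataFrame(fits)
# Relate each Weibull estimator to accumulated rainfall.
for par in ("shape", "scale"):
    res = stats.linregress(fits["rainfall"], fits[par])
    print(par, res.slope, res.intercept, res.rvalue ** 2)
```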
Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal
2011-01-01
Variability in source dynamics across the sources in an activated network may be indicative of how information is processed within the network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected during a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, which quantifies the rate at which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support previous attempts to characterize the functional organization of the activated brain based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
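A compact, generic sketch of multiscale sample entropy, the variability measure referred to above: the signal is coarse-grained at successive scales and sample entropy is computed on each coarse-grained series. This is a plain illustration with conventional parameter defaults, not the authors' analysis pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal: template length m, tolerance r
    expressed as a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1      # exclude the self-match
        return count

    return -np.log(match_count(m + 1) / match_count(m))

def multiscale_entropy(x, scales=range(1, 6)):
    """Coarse-grain by non-overlapping averaging, then compute sample
    entropy at each scale (the multiscale entropy curve)."""
    x = np.asarray(x, dtype=float)
    values = []
    for s in scales:
        coarse = x[: (len(x) // s) * s].reshape(-1, s).mean(axis=1)
        values.append(sample_entropy(coarse))
    return np.array(values)
```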
Time-frequency dynamics of resting-state brain connectivity measured with fMRI.
Chang, Catie; Glover, Gary H
2010-03-01
Most studies of resting-state functional connectivity using fMRI employ methods that assume temporal stationarity, such as correlation and data-driven decompositions computed across the duration of the scan. However, evidence from both task-based fMRI studies and animal electrophysiology suggests that functional connectivity may exhibit dynamic changes within time scales of seconds to minutes. In the present study, we investigated the dynamic behavior of resting-state connectivity across the course of a single scan, performing a time-frequency coherence analysis based on the wavelet transform. We focused on the connectivity of the posterior cingulate cortex (PCC), a primary node of the default-mode network, examining its relationship with both the "anticorrelated" ("task-positive") network and other nodes of the default-mode network. It was observed that coherence and phase between the PCC and the anticorrelated network were variable in time and frequency, and statistical testing based on Monte Carlo simulations revealed the presence of significant scale-dependent temporal variability. In addition, a sliding-window correlation procedure identified other regions across the brain that exhibited variable connectivity with the PCC across the scan, which included areas previously implicated in attention and salience processing. Although it is unclear whether the observed coherence and phase variability can be attributed to residual noise or modulation of cognitive state, the present results illustrate that resting-state functional connectivity is not static, and it may therefore prove valuable to consider measures of variability, in addition to average quantities, when characterizing resting-state networks. Copyright (c) 2009 Elsevier Inc. All rights reserved.
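A minimal sketch of the sliding-window correlation procedure mentioned above, applied to two region-of-interest time series; the window length and the placeholder signals are illustrative assumptions.

```python
import numpy as np

def sliding_window_corr(x, y, win=60, step=1):
    """Pearson correlation between two time series in successive windows,
    a simple probe of time-varying functional connectivity."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    starts = range(0, len(x) - win + 1, step)
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in starts])

# Placeholder signals standing in for the PCC and another region.
rng = np.random.default_rng(0)
t = np.arange(600)
pcc = np.sin(0.05 * t) + 0.5 * rng.standard_normal(600)
roi = np.sin(0.05 * t + np.pi / 3) + 0.5 * rng.standard_normal(600)
print(sliding_window_corr(pcc, roi)[:10].round(2))
```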
Variability of isotope and major ion chemistry in the Allequash Basin, Wisconsin
Walker, John F.; Hunt, Randall J.; Bullen, Thomas D.; Krabbenhoft, David P.; Kendall, Carol
2003-01-01
As part of ongoing research conducted at one of the U.S. Geological Survey's Water, Energy, and Biogeochemical Budgets sites, work was undertaken to describe the spatial and temporal variability of stream and ground water isotopic composition and cation chemistry in the Trout Lake watershed, to relate the variability to the watershed flow system, and to identify the linkages of geochemical evolution and source of water in the watershed. The results are based on periodic sampling of sites at two scales along Allequash Creek, a small headwater stream in northern Wisconsin. Based on this sampling, there are distinct water isotopic and geochemical differences observed at a smaller hillslope scale and the larger Allequash Creek scale. The variability was larger than expected for this simple watershed, and is likely to be seen in more complex basins. Based on evidence from multiple isotopes and stream chemistry, the flow system arises from three main source waters (terrestrial-, lake-, or wetland-derived recharge) that can be identified along any flowpath using water isotopes together with geochemical characteristics such as iron concentrations. The ground water chemistry demonstrates considerable spatial variability that depends mainly on the flow-path length and water mobility through the aquifer. Calcium concentrations increase with increasing flowpath length, whereas strontium isotope ratios increase with increasing extent of stagnation in either the unsaturated or saturated zones as waters move from source to sink. The flowpath distribution we identify provides important constraints on the calibration of ground water flow models such as that undertaken by Pint et al. (this issue).
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
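As a brief, standard illustration of why observables periodic in position and momentum can be compatible (background consistent with, but not quoted from, the paper): exponentials of position and momentum with matched periods commute,

\[
\left[\, e^{\,i\,2\pi \hat{x}/L},\; e^{\,i\,\hat{p}L/\hbar} \,\right] = 0 ,
\]

since \(e^{i\hat{p}L/\hbar}\) shifts \(\hat{x}\) by \(L\), and any function of \(\hat{x}\) with period \(L\) is unchanged by that shift; a mismatched periodicity breaks this compatibility, consistent with the signaling in time observed when a different periodicity is employed.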
Controls on the variability of net infiltration to desert sandstone
Heilweil, Victor M.; McKinney, Tim S.; Zhdanov, Michael S.; Watt, Dennis E.
2007-01-01
As populations grow in arid climates and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration becomes critically important for accurately inventorying water resources and mapping contamination vulnerability. This paper presents a conceptual model of net infiltration to desert sandstone and then develops an empirical equation for its spatial quantification at the watershed scale using linear least squares inversion methods for evaluating controlling parameters (independent variables) based on estimated net infiltration rates (dependent variables). Net infiltration rates used for this regression analysis were calculated from environmental tracers in boreholes and more than 3000 linear meters of vadose zone excavations in an upland basin in southwestern Utah underlain by Navajo sandstone. Soil coarseness, distance to upgradient outcrop, and topographic slope were shown to be the primary physical parameters controlling the spatial variability of net infiltration. Although the method should be transferable to other desert sandstone settings for determining the relative spatial distribution of net infiltration, further study is needed to evaluate the effects of other potential parameters such as slope aspect, outcrop parameters, and climate on absolute net infiltration rates.
Variables Associated With Tic Exacerbation in Children With Chronic Tic Disorders.
Himle, Michael B; Capriotti, Matthew R; Hayes, Loran P; Ramanujam, Krishnapriya; Scahill, Lawrence; Sukhodolsky, Denis G; Wilhelm, Sabine; Deckersbach, Thilo; Peterson, Alan L; Specht, Matt W; Walkup, John T; Chang, Susanna; Piacentini, John
2014-03-01
Research has shown that motor and vocal tics fluctuate in frequency, intensity, and form in response to environmental and contextual cues. Behavioral models have proposed that some of the variation in tics may reflect context-dependent interactive learning processes such that once tics are performed, they are influenced by environmental contingencies. The current study describes the results of a function-based assessment of tics (FBAT) from a recently completed study comparing Comprehensive Behavioral Intervention for Tics (CBIT) with supportive psychotherapy. The current study describes the frequency with which antecedent and consequence variables were reported to exacerbate tics and the relationships between these functional variables and sample baseline characteristics, comorbidities, and measures of tic severity. Results showed that tic-exacerbating antecedents and consequences were nearly ubiquitous in a sample of children with chronic tic disorder. In addition, functional variables were related to baseline measures of comorbid internalizing symptoms and specific measures of tic severity. © The Author(s) 2014.
The QSAR study of flavonoid-metal complexes scavenging ·OH free radical
NASA Astrophysics Data System (ADS)
Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun
2014-10-01
Flavonoid-metal complexes have antioxidant activities. However, the quantitative structure-activity relationships (QSAR) between flavonoid-metal complexes and their antioxidant activities have still not been tackled. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the ·OH free radical, we optimised their structures using the Gaussian 03 software package and subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge, and energy. We then chose, by stepwise linear regression, several quantum chemistry descriptors that are very important to the IC50 of flavonoid-metal complexes for scavenging the ·OH free radical; meanwhile, we obtained 4 new variables through principal component analysis. Finally, we built the QSAR models, with those important quantum chemistry descriptors and the 4 new variables as the independent variables and the IC50 as the dependent variable, using an Artificial Neural Network (ANN), and we validated the two models using experimental data. These results show that the two models in this paper are reliable and predictive.
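A rough sketch of the modeling chain sketched above, with stepwise selection simplified to a univariate screen and the ANN represented by a small multilayer perceptron; descriptor names, the data file, and hyperparameters are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("flavonoid_complex_descriptors.csv")   # placeholder file
X = data.drop(columns=["IC50"])        # the 18 quantum-chemistry descriptors
y = data["IC50"]

# Univariate screen as a stand-in for stepwise selection, plus 4 principal
# components summarizing the full descriptor set.
selected = SelectKBest(f_regression, k=5).fit_transform(X, y)
components = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))

inputs = np.hstack([selected, components])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
ann.fit(inputs, y)
print(ann.score(inputs, y))   # in-sample R^2; real validation needs held-out data
```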
Directional Dependence in Developmental Research
ERIC Educational Resources Information Center
von Eye, Alexander; DeShon, Richard P.
2012-01-01
In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
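A toy sketch of the integral-projection idea: a cohort's distribution over a development index is advanced by a temperature-dependent mean increment, with Gaussian spread standing in for phenotypic rate variability. The kernel, rate function, and grid are illustrative, not the authors' mountain pine beetle model.

```python
import numpy as np

def development_rate(temp_c):
    """Illustrative degree-day style development rate above a 10 C threshold."""
    return max(temp_c - 10.0, 0.0) / 100.0

def ipm_step(n, z, temp_c, sigma=0.02):
    """Advance the density n(z) over development stage z by one day using a
    Gaussian redistribution kernel (midpoint-rule integral projection)."""
    dz = z[1] - z[0]
    mean_new = z + development_rate(temp_c)          # where each stage moves to
    K = np.exp(-(z[:, None] - mean_new[None, :])**2 / (2 * sigma**2))
    K /= np.sqrt(2 * np.pi) * sigma
    return (K @ n) * dz

z = np.linspace(0.0, 1.2, 241)             # development index (1 = emergence)
n = np.exp(-(z - 0.1)**2 / (2 * 0.01**2))  # initial cohort near the egg stage
for temp in [12.0, 15.0, 20.0, 18.0]:      # a short, made-up temperature series
    n = ipm_step(n, z, temp)
print("mean development index:", float(np.sum(z * n) / np.sum(n)))
```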
Affective dependence and aggression: an exploratory study.
Petruccelli, Filippo; Diotaiuti, Pierluigi; Verrastro, Valeria; Petruccelli, Irene; Federico, Roberta; Martinotti, Giovanni; Fossati, Andrea; Di Giannantonio, Massimo; Janiri, Luigi
2014-01-01
Emotionally dependent subjects may engage in controlling, restrictive, and aggressive behaviours, which limit their partner's autonomy. Such behaviours are not driven solely by levels of aggression but also act as a means of maintaining the subject's own sense of self-worth, identity, and general functioning. The aim of the paper is to explore the correlation between affective dependency and reactive/proactive aggression and to evaluate individual differences as predisposing factors for aggressive behaviour and emotional dependency. The Spouse-Specific Dependency Scale (SSDS) and the Reactive Proactive Questionnaire (RPQ) were administered to a sample of 3375 subjects. In the whole sample, a positive correlation between emotional dependency and proactive aggression was identified. Differences by sex, age group, and geographical distribution were observed in the scores on the different scales. A fundamental distinction between reactive and proactive aggression was observed, anchoring proactive aggression more strictly to emotional dependency. Sociocultural and demographic variables, together with previously structured attachment styles, help to determine the scope, frequency, and intensity of the demands made on the partner, as well as feeding fears of loss, abandonment, or betrayal.
Dependence of vestibular reactions on frequency of action of sign-variable accelerations
NASA Technical Reports Server (NTRS)
Lapayev, E. V.; Vorobyev, O. A.; Ivanov, V. V.
1980-01-01
Tests with continuously acting sign-variable Coriolis accelerations revealed that the development of kinetosis was proportional to the duration of head inclinations in the range of 1 to 4 seconds, while illusions of rocking in the sagittal plane were more pronounced for fast inclinations. The data provide evidence of a substantial dependence of vestibulovegetative and vestibulosensory reactions on the repetition period of the sign-variable Coriolis acceleration.
Occupant perception of indoor air and comfort in four hospitality environments.
Moschandreas, D J; Chu, P
2002-01-01
This article reports on a survey of customer and staff perceptions of indoor air quality at two restaurants, a billiard hall, and a casino. The survey was conducted at each environment for 8 days: 2 weekend days on 2 consecutive weekends and 4 weekdays. Before and during the survey, each hospitality environment satisfied the ventilation requirements set in ASHRAE Standard 62-1999, Ventilation for Acceptable Indoor Air Quality. An objective of this study was to test the hypothesis that if a hospitality environment satisfies ASHRAE ventilation requirements, then the indoor air is acceptable, that is, fewer than 20% of the exposed occupants perceive the environment as unacceptable. A second objective was to develop a multiple regression model that predicts the dependent variable, acceptability of the environment, as a function of a number of independent perception variables. Occupant perception of environmental, comfort, and physical variables was measured using a questionnaire. The instrument was designed to be efficient and unobtrusive; subjects could complete it within 3 min. Significant differences in occupant perception of the environment were identified between customers and staff. The dependent variable, acceptability of the environment, is affected by temperature, occupant density, occupant smoking status, odor perception, health conditions, sensitivity to chemicals, and enjoyment of activities. Depending on the hospitality environment, variation of the independent variables explains as much as 77% of the variation in the dependent variable.
Salt-dependent properties of proteins from extremely halophilic bacteria
NASA Technical Reports Server (NTRS)
Lanyi, J. K.
1974-01-01
Based on information concerning the interaction of salts and macromolecules, the literature on the enzymes of halophilic bacteria and their constituents is examined. Although the salt requirement for enzyme activity varies among halophilic systems, the enzymes investigated show a time-dependent inactivation at lower salt concentrations, especially in the absence of salt. The studies described show that in some halophilic systems the effect of salt may be restricted to a small region on the protein molecule. The concept of the hydrophobic bond is introduced to account for certain solvent-dependent phenomena. It is shown that some halophilic enzymes are unable to maintain their structure without the involvement of hydrophobic interactions that are usually not supported by water. A table lists indices of hydrophobicity and polarity for various halophilic and nonhalophilic proteins.
Bayesian effect estimation accounting for adjustment uncertainty.
Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca
2012-09-01
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome, while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC, with ω > 1, estimates the exposure effect with smaller bias than traditional BMA, and improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
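A deliberately simplified, BIC-based caricature of the BAC idea (not the authors' fully Bayesian implementation): potential confounders that also predict the exposure receive prior odds ω of entering the outcome model, and the exposure effect is averaged over all candidate outcome models. The function names, screening rule, and evidence approximation are illustrative assumptions.

```python
import itertools
import numpy as np
import statsmodels.api as sm

def outcome_fit(y, exposure, confounders):
    """Fit the outcome model and return a rough log-evidence (-BIC/2)
    together with the estimated exposure coefficient."""
    X = np.column_stack([exposure] + confounders) if confounders else exposure[:, None]
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    return -fit.bic / 2.0, fit.params[1]          # params[1]: exposure effect

def bac_sketch(y, exposure, U, omega=10.0, alpha=0.05):
    """Average the exposure effect over all outcome models, upweighting
    (by prior odds omega) confounders that also predict the exposure."""
    n, p = U.shape
    predicts_exposure = np.array([
        sm.OLS(exposure, sm.add_constant(U[:, [j]])).fit().pvalues[1] < alpha
        for j in range(p)])                       # crude exposure-model screen
    log_w, effects = [], []
    for include in itertools.product([0, 1], repeat=p):
        confs = [U[:, j] for j in range(p) if include[j]]
        log_ev, beta = outcome_fit(y, exposure, confs)
        log_prior = np.log(omega) * sum(
            1 for j in range(p) if include[j] and predicts_exposure[j])
        log_w.append(log_ev + log_prior)
        effects.append(beta)
    w = np.exp(np.array(log_w) - max(log_w))
    w /= w.sum()
    return float(np.dot(w, effects))
```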
2013-01-01
Background People with visual disabilities have increased health needs but face worse inequity to preventive health examinations. To date, only a few nationwide studies have analyzed the utilization of preventive adult health examinations by the visually disabled population. The aim of this study was to investigate the utilization of health examinations by the visually disabled population, and analyze the factors associated with the utilization. Methods Visual disability was certified by ophthalmologists and authenticated by the Ministry of the Interior (MOI), Taiwan. We linked data from three different nationwide datasets (from the MOI, Bureau of Health Promotion, and National Health Research Institutes) between 2006 and 2008 as the data sources. Independent variables included demographic characteristics, income status, health status, and severity of disability; health examination utilization status was the dependent variable. The chi-square test was used to check statistical differences between variables, and a multivariate logistic regression model was used to examine the associated factors with health examination utilization. Results In total, 47,812 visually disabled subjects aged 40 years and over were included in this study, only 16.6% of whom received a health examination. Lower utilization was more likely in male subjects, in those aged 65 years and above, insured dependents and those with a top-ranked premium-based salary, catastrophic illness/injury, chronic diseases of the genitourinary system, and severe or very severe disabilities. Conclusion The overall health examination utilization in the visually disabled population was very low. Lower utilization occurred mainly in males, the elderly, and those with severe disabilities. PMID:24313981